The Ethics of Mind-Reading

Can policy catch up with brain-computer interfaces, like Neuralink?

Ebani Dhawan
Nov 11, 2022
Art by Olivia Kehoe

I wrote this article without typing. No, not even with an old-fashioned pen and paper.

Okay, not really. But it’s not too distant a reality, thanks to brain-computer interfaces (BCIs).

You may have heard of BCIs in August 2020, when Elon Musk’s Neuralink live-streamed its progress update and stirred public conversation about incorporating BCIs into today’s society. With the potential to restore independence to people with paralysis and other conditions, BCIs are seen as a way to enhance lives. However, this evolving neurotechnology blurs the fine line separating biology and technology, raising an important question among neuroethicists and regulators: will BCIs undermine what it means to be an autonomous human?

BCIs are computer-based systems that interact with the brain and central nervous system to record the electrical signals produced by neural activity. The technology is either invasive, connecting directly to neural tissue via implanted electrodes, or non-invasive, such as electroencephalography (EEG)-based systems that record signals from outside the skull. BCIs can also stimulate neurons in various cortical regions, modulating the activity of specific brain circuits.
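To make “recording electrical signals” a little more concrete, here is a minimal, purely illustrative Python sketch, not Neuralink’s or any real device’s pipeline: it band-pass filters a simulated EEG-like signal and treats a burst of alpha-band power as a crude “command”. The sampling rate, frequency band and threshold are all assumptions chosen for illustration.

```python
# Toy sketch (not a real BCI): filter a simulated EEG-like signal and
# use band power as a crude "intent" signal. Requires numpy and scipy.
import numpy as np
from scipy.signal import butter, filtfilt

fs = 256  # assumed sampling rate in Hz, common for consumer EEG headsets
t = np.arange(0, 2.0, 1 / fs)

# Simulated signal: background noise plus a burst of 10 Hz (alpha-band) activity
eeg = 0.5 * np.random.randn(t.size)
eeg[t > 1.0] += np.sin(2 * np.pi * 10 * t[t > 1.0])

# Band-pass filter to the 8-12 Hz alpha band
b, a = butter(4, [8, 12], btype="bandpass", fs=fs)
alpha = filtfilt(b, a, eeg)

# Crude decision rule: if alpha-band power crosses a threshold, emit a "command"
power = np.mean(alpha[t > 1.0] ** 2)
if power > 0.1:  # threshold chosen arbitrarily for this toy example
    print("alpha burst detected -> trigger command")
```

Real systems are vastly more sophisticated, but the basic loop is the same: measure neural signals, extract a feature, and map it to an output.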

As the range of potential applications of BCIs grows, so does the pressure on regulatory bodies. Although neurotechnology is in its infancy, these groups need to think about policy now; postponing that work until BCIs are socially entrenched would be too late. Legal clarity is imperative, because neurotechnology is rapidly outpacing science policy, as technology so often outpaces the law.

The major ethical issue policymakers will need to explore is the potential for BCIs to disrupt our sense of agency, the feeling that we are the primary cause of an action. Psychological evidence shows that this sense of control depends on sensory feedback, some sort of ‘confirmation’ that the action we committed has had the expected consequences. However, there is a delay between the generation of brain activity and the output of the BCI, which weakens that confirmation; as a result, a diluted sense of agency can emerge.

This raises a regulatory question: who is accountable for the action? An altered sense of agency, whether diminished or shared, should arguably mean altered accountability. BCIs open the possibility of external, technological influence over personal agency, which conflicts with the values of autonomy the legal system aims to protect. Western jurisprudence requires voluntary control (i.e. agency) to establish legal liability. Yet most BCI-mediated actions are ‘disembodied’: they do not require a bodily movement. Legally, then, they may not count as actions at all, because they do not fit the notion of actus reus, the requirement that a guilty act be committed for an individual to be liable, and many common-law codes define an act as a bodily movement.

Consider next an area closely entwined with our sense of agency: self-identity. Neuroimaging research highlights the medial prefrontal cortex and the medial posterior parietal cortex as the hubs of self-knowledge, of knowing who we are. Acting autonomously means acting authentically, which can only happen when we are aware of our personal identity. BCIs can change this self-awareness. A 2019 study by Frederic Gilbert at the University of Tasmania found that one participant felt transformed into a “new person” after a BCI was implanted. Another, known as ‘Patient 6’, experienced such a “radical symbiosis” with the BCI that losing the device caused her deep grief; she felt as though she had lost herself. With such radical identity changes, which one is the true self? And our earlier question returns: who is accountable?

Another significant ethical concern is privacy. Studies led by Martinovic and by Rosenfeld have shown that using BCIs to breach a user’s privacy is not science fiction: sensitive information, such as debit card numbers, has been successfully extracted from EEG recordings. Access to a person’s brain signals can be seen as a form of mind-reading. Although this may be empowering for certain users, such as those who have lost their brain-body connection, it exposes them to a new kind of vulnerability. BCIs collect vast amounts of brain signals, some of our most intimate biodata. The brain is the last refuge of personal freedom and self-determination, and BCIs threaten to make it public. Losing the ability to keep information confidential undermines what it means to live in a free, democratic society that respects human rights.

It is crucial that policy on neurotechnology such as BCIs is proactive and precautionary. Cognitively diverse expert groups, together with university and industry researchers, need sufficient time to gather high-quality evidence to support the policy recommendations they make. With applications in medicine, user authentication, entertainment and smartphone technology, BCIs are humanity’s next step toward transhumanism, blurring the line between what is human and what is machine. BCI policymaking is still not a frequent point of discussion, and it is imperative that society decides how to treat neurotechnology before it is too late.

Originally published by UCL Kinesis Magazine on June 27, 2021.
Editor: Altay Shaw
