Oct. 12, 2022 – Ever since his mid-30s, Greg had lived in a nursing home. An assault 6 years earlier had left him barely conscious, unable to talk or eat. Two years of rehab did little to help him. Most people in Greg’s condition would have remained nonverbal and cut off from the world for the rest of their lives. But at age 38, Greg received a brain implant through a clinical trial.
Surgeons implanted an electrode on either side of his thalamus, the brain’s main relay station.
“People who are in the minimally conscious state have intact brain circuitry, but those circuits are under-activated,” explains Joseph Fins, MD, chief of the Division of Medical Ethics at Weill Cornell Medicine in New York City. Delivering electrical impulses to affected regions can revive those circuits, restoring lost or weakened function.
“These devices are like pacemakers for the brain,” says Fins, who co-authored a study in Nature about Greg’s surgery.
The researchers switched Greg’s device off and on every 30 days for 6 months, observing how the electrical stimulation (or lack thereof) altered his abilities. They saw remarkable things.
“With the deep brain stimulator, he was able to say six- or seven-word sentences, the first 16 words of the Pledge of Allegiance. Tell his mother he loved her. Go shopping at Old Navy and voice a preference for the kind of clothing his mother was buying,” recalls Fins, who shared Greg’s journey in his book, Rights Come to Mind: Brain Injury, Ethics and the Struggle for Consciousness.
After 6 years of silence, Greg regained his voice.
Yet success stories like his aren’t without controversy, as the technology has raised many ethical questions: Can a minimally conscious person consent to brain surgery? What happens to the people being studied when clinical trials are over? How can people’s neural data be responsibly used – and protected?
“I think that motto, ‘Move fast and break things,’ is a really bad approach,” says Veljko Dubljevic, PhD, an associate professor of science, technology, and society at North Carolina State University. He’s referring to the unofficial tagline of Silicon Valley, home of Elon Musk’s neurotechnology company, Neuralink.
Neuralink was founded in 2016, nearly a decade after the study about Greg’s brain implant was published. Yet it has been Musk’s company that has most visibly thrust neurotechnology into public consciousness, owing in part to its founder’s often overstated promises. (In 2019, Musk claimed his brain-computer interface would be implanted in humans in 2020. He has since moved that target to 2022.) Musk has called his device “a Fitbit in your skull,” though it’s officially named the “Link.”
Brain-computer interfaces, or BCIs, are already implanted in 36 people around the world, according to Blackrock Neurotech, a leading maker of these devices. What makes Neuralink different is its ambitious goal to implant more than 1,000 thinner-than-hair electrodes. If the Link works as intended – monitoring a person’s brain activity and commanding a computer to do what the person wants – people with paralyzing conditions, such as quadriplegia, could regain a great deal of independence.
The History Behind Brain Implants
BCIs – brain implants that communicate with an external device, typically a computer – are often framed as a science-fiction dream that geniuses like Musk are making a reality. But they’re deeply indebted to a technology that’s been used for decades: deep brain stimulation (DBS). In 1948, a neurosurgeon at Columbia University implanted an electrode into the brain of a woman diagnosed with depression and anorexia. The patient improved – until the wire broke a few weeks later. Still, the stage was set for longer-term neuromodulation.
It would be movement disorders, not depression, that ultimately catapulted DBS into the medical mainstream. In the late 1980s, French researchers published a study suggesting the devices could improve essential tremor and the tremor associated with Parkinson’s. The FDA approved DBS for essential tremor in 1997; approval for Parkinson’s followed in 2002. DBS is now the most common surgical treatment for Parkinson’s disease.
Since then, deep brain stimulation has been used, often experimentally, to treat a variety of conditions, ranging from obsessive-compulsive disorder to Tourette’s to addiction. The advancements are staggering: Newer closed-loop devices can directly respond to the brain’s activity, detecting, for example, when a seizure in someone with epilepsy is about to happen, then sending an electrical impulse to stop it. Implanted electrodes recently enabled a blind woman to decipher lines, shapes, and letters.
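To make the closed-loop idea concrete, here is a minimal sketch in Python of the sense-then-respond cycle such a device performs. The signal score, threshold, and stimulation command are all made-up placeholders; this illustrates the general detect-and-stimulate pattern, not any manufacturer’s actual software.

```python
import random
import time

# Hypothetical threshold: a 0-to-1 score above which activity looks pre-seizure.
SEIZURE_THRESHOLD = 0.8

def read_neural_activity() -> float:
    """Stand-in for the implant's recording electrodes (random noise here)."""
    return random.random()

def deliver_stimulation_pulse() -> None:
    """Stand-in for commanding the implant to emit a brief counter-impulse."""
    print("stimulation pulse delivered")

# What makes the device "closed-loop": it monitors its own recordings and
# stimulates only when activity crosses the threshold, rather than firing
# on a fixed schedule the way an open-loop stimulator does.
for _ in range(1000):  # a real device would run indefinitely
    if read_neural_activity() > SEIZURE_THRESHOLD:
        deliver_stimulation_pulse()
    time.sleep(0.01)  # sampling interval chosen arbitrarily for this sketch
```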
In clinical trials, BCIs have helped people with paralysis move prosthetic limbs. In July, Synchron – widely considered Neuralink’s chief competitor – implanted its Stentrode device into its first human subject in the U.S. That surgery launched an unprecedented FDA-approved trial and put Synchron ahead of Neuralink (which is still in the animal-testing phase). Australian research has already shown that people with Lou Gehrig’s disease (also called amyotrophic lateral sclerosis, or ALS) can shop and bank online using the Stentrode.
With breakthroughs like these, it’s hard to envision any downsides to brain implants. But neuroethicists warn that if we don’t act proactively – if companies fail to build ethical concerns into the very fabric of neurotechnology – there could be serious downstream consequences.
The Ethics of Safety and Durability
It’s tempting to dismiss these concerns as premature. But neurotechnology has already gained a firm foothold, with deep brain stimulators implanted in 200,000 people worldwide. And it’s still not clear who is responsible for the care of those who received their devices through clinical trials.
Even if recipients report benefits, that could change over time as the brain encapsulates the implant in glial tissue. This “scarification” interferes with the electrical signal, says Dubljevic, reducing the implant’s ability to communicate. But removing the device could pose a significant risk, such as bleeding in the brain. Although cutting-edge designs aim to resolve this – the Stentrode, for example, is inserted into a blood vessel, rather than through open brain surgery – many devices are still implanted, probe-like, deep into the brain.
Although device removal is usually offered at the end of studies, the cost is often not covered as part of the trial. Researchers typically ask the individual’s insurance to pay for the procedure, according to a study in the journal Neuron. But insurers have no obligation to remove a brain implant without a medically necessary reason. A patient’s dislike for the device generally isn’t sufficient.
Acceptance among recipients is hardly uniform. Patient interviews suggest these devices can alter identity, making people feel less like themselves, especially if they’re already prone to poor self-image.
“Some feel like they’re controlled by the device,” says Dubljevic, obligated to obey the implant’s warnings. If the device signals that a seizure may be imminent, for example, they may feel forced to skip a walk or put the day’s plans on hold.
“The more common thing is that they feel like they have more control and greater sense of self,” says Paul Ford, PhD, director of the NeuroEthics Program at the Cleveland Clinic. But even those who like and want to keep their devices may find a dearth of post-trial support – especially if the implant wasn’t statistically proven to be helpful.
Eventually, when the device’s battery dies, the person will need surgery to replace it.
“Who’s gonna pay for that? It’s not part of the clinical trial,” Fins says. “This is kind of like giving people Teslas and not having charging stations where they’re going.”
As neurotechnology advances, it’s critical that health care systems invest in the infrastructure to maintain brain implants – in much the same way that someone with a pacemaker can walk into any hospital and have a cardiologist adjust their device, Fins says.
“If we’re serious about developing this technology, we should be serious about our responsibilities longitudinally to these participants.”
The Ethics of Privacy
It’s not just the medical aspects of brain implants that raise concerns, but also the glut of personal data they record. Dubljevic compares neural data today to blood samples 50 years ago, before scientists could extract genetic information. Fast-forward to the present, when those same samples can easily be linked to individuals.
“Technology may progress so that more personal information can be gleaned from recordings of brain data,” he says. “It’s currently not mind-reading in any way, shape, or form. But it may become mind-reading in something like 20 or 30 years.”
That term – mind-reading – is thrown around a lot in this field.
“It’s kind of the science-fiction version of where the technology is today,” says Fins. (Brain implants are not currently able to read minds.)
But as device signals become clearer, data will become more precise. Eventually, says Dubljevic, scientists may be able to figure out attitudes or psychological states.
“Someone could be labeled as less attentive or less intelligent” based on neural patterns, he says.
Brain data could also expose unknown medical conditions – for example, a history of stroke – that may be used to raise an individual’s insurance premiums or deny coverage altogether. Hackers could potentially seize control of brain implants, shutting them off or sending rogue signals to the user’s brain.
Some researchers, including Fins, say that storing brain data is no riskier than keeping medical records on your phone.
“It’s about cybersecurity writ large,” he says.
But others see brain data as uniquely personal.
“These are the only data that reveal a person’s mental processes,” argues a report from UNESCO’s International Bioethics Committee (IBC). “If the assumption is that ‘I am defined by my brain,’ then neural data may be considered as the origin of the self and require special definition and protection.”
“The brain is such a key part of who we are – what makes us us,” says Laura Cabrera, PhD, the chair of neuroethics at Penn State University. “Who owns the data? Is it the medical system? Is it you, as a patient or user? I think that hasn’t really been resolved.”
Many of the measures put in place to regulate what Google or Facebook gathers and shares could also be applied to brain data. Some insist that the industry default should be to keep neural data private, rather than requiring people to opt out of sharing. But Dubljevic takes a more nuanced view, since the sharing of raw data among researchers is essential for technological advancement and accountability.
What’s clear is that forestalling research isn’t the solution – transparency is. As part of the consent process, patients should be told where their data is being stored, for how long, and for what purpose, says Cabrera. In 2008, the U.S. passed the Genetic Information Nondiscrimination Act, which prohibits discrimination in health care coverage and employment based on genetic information. That law could serve as a helpful precedent, she says.
The Legal Question
Around the globe, legislators are studying the question of neural data. A few years ago, a visit from a Columbia University neurobiologist prompted Chile’s Senate to draft a bill regulating how neurotechnology could be used and how neural data would be safeguarded.
“Scientific and technological development will be at the service of people,” the amendment promised, “and will be carried out with respect for life and physical and mental integrity.”
Chile’s new Constitution was voted down in September, effectively killing the neuro-rights bill. But other countries are considering similar legislation. In 2021, France amended its bioethics law to prohibit discrimination due to brain data, while also building in the right to ban devices that modify brain activity.
Fins isn’t convinced this type of legislation is wholly good. He points to people like Greg – the 38-year-old who regained his ability to communicate through a brain implant. If it’s illegal to alter or investigate the brain’s state, “then you couldn’t find out if there was covert consciousness”– mental awareness that isn’t outwardly apparent – “thereby destining people to profound isolation,” he says.
Access to neurotechnology needs protecting too, especially for those who need it to communicate.
“It’s one thing to do something over somebody’s objection. That’s a violation of consent – a violation of personhood,” says Fins. “It’s quite another thing to intervene to promote agency.”
In cases of minimal consciousness, a medical surrogate, such as a family member, can often be called upon to provide consent. Overly restrictive laws could prevent the implantation of neural devices in these people.
“It’s a very complicated area,” says Fins.
The Future of Brain Implants
Currently, brain implants are strictly therapeutic. But, in some corners, “enhancement is an aspiration,” says Dubljevic. Animal studies suggest the potential is there. In a 2013 study, researchers recorded the brain activity of rats as they navigated a maze, then used electrical stimulation to transmit that neural data to rats in another lab. The second group of rodents navigated the maze as if they’d seen it before, suggesting that the transfer of memories may eventually become a reality. Possibilities like this raise the specter of social inequity, since only the wealthiest may be able to afford cognitive enhancement.
They could also lead to ethically questionable military programs.
“We have heard staff at DARPA and the U.S. Intelligence Advanced Research Projects Activity discuss plans to provide soldiers and analysts with enhanced mental abilities (‘super-intelligent agents’),” a group of researchers wrote in a 2017 paper in Nature. Brain implants could even become a requirement for soldiers, who may be obligated to take part in trials; some researchers advise stringent international regulations for military use of the technology, like the Geneva Protocol for chemical and biological weapons.
The temptation to explore every application of neurotechnology will likely prove irresistible for entrepreneurs and scientists alike. That makes precautions essential.
“While it’s not surprising to see many potential ethical issues and questions arising from use of a novel technology,” a team of researchers, including Dubljevic, wrote in a 2020 paper in Philosophies, “what is surprising is the lack of suggestions to resolve them.”
It’s critical that the industry proceed with the right mindset, he says, emphasizing collaboration and making ethics a priority at every stage.
“How do we avoid problems that may arise and find solutions prior to those problems even arising?” Dubljevic asks. “Some proactive thinking goes a long way.”