Summary: Researchers enabled a silent person to produce speech using only thoughts. Depth electrodes in the participant’s brain sent electrical signals to a computer, which then spoke the imagined syllables.
This technology offers hope for paralyzed people to regain their ability to speak. The study marks an important step towards brain-computer interfaces for voluntary communication.
Key Facts:
- Technology: Depth electrodes send brain signals to a computer for speech.
- Participant: The experiment involved an epilepsy patient with implanted depth electrodes.
- Future impact: Could enable paralyzed people to communicate through thoughts.
Source: Tel Aviv University
A scientific breakthrough by researchers from Tel Aviv University and Tel Aviv Sourasky Medical Center (Ichilov Hospital) has shown that a silent person can speak using only the power of his thoughts.
In one experiment, a silent participant imagined saying one of two syllables. Depth electrodes implanted in his brain sent the electrical signals to a computer, which then pronounced the syllables.
The study was led by Dr. Ariel Tankus of Tel Aviv University’s Faculty of Medicine and Health Sciences and Tel Aviv Sourasky Medical Center (Ichilov Hospital), together with Dr. Ido Strauss of Tel Aviv University’s Faculty of Medicine and Health Sciences and Director of the Functional Neurosurgery Department at Ichilov Hospital.
The results of this research have been published in the journal Neurosurgery.
These findings offer hope for enabling people who are completely paralyzed (due to conditions such as ALS, brain stem stroke, or brain injury) to speak voluntarily again.
“The patient in the study is an epilepsy patient who has been admitted to the hospital to undergo a resection of the epileptic focus in his brain,” Dr. Tankus explains. “To do this, of course, you have to locate the focus, which is the source of the ‘short circuit’ that sends powerful electrical waves through the brain.
“This applies only to the small subgroup of epilepsy patients who do not respond well to medication and require neurosurgical intervention, and within it to the even smaller subgroup whose suspected focus lies deep in the brain rather than on the surface of the cortex.
“To identify the exact location, electrodes must be implanted into deep structures of the brain. Patients are then hospitalized, waiting for the next seizure.
“When a seizure occurs, the electrodes tell neurologists and neurosurgeons where the focus is so they can operate with precision. From a scientific perspective, this offers a rare opportunity to glimpse the depths of a living human brain.
“Fortunately, the epilepsy patient admitted to the Ichilov Hospital agreed to take part in the experiment, which may eventually help completely paralyzed people to express themselves again through artificial speech.”
In the first phase of the experiment, with the depth electrodes already implanted in the patient’s brain, the Tel Aviv University researchers asked the patient to say two syllables out loud: /a/ and /e/.
They recorded brain activity as he articulated these sounds. Using machine learning and deep learning, the researchers trained artificial intelligence models to identify the specific brain cells whose electrical activity indicated he wanted to say /a/ or /e/.
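The paper’s abstract describes the decoder as working on high-frequency activity recorded by the depth electrodes. As a rough illustrative sketch (not the study’s actual pipeline), extracting such a feature from a raw electrode signal might look like the following; the 70-150 Hz band, the sampling rate, and the synthetic data are assumptions made for illustration.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

FS = 2000  # Hz; assumed sampling rate of the depth-electrode recording

def high_gamma_envelope(signal, fs=FS, band=(70.0, 150.0)):
    """Band-pass the signal into an assumed high-gamma range and take
    the Hilbert (analytic) amplitude as a high-frequency-activity feature."""
    nyq = fs / 2.0
    b, a = butter(4, [band[0] / nyq, band[1] / nyq], btype="bandpass")
    filtered = filtfilt(b, a, signal)
    return np.abs(hilbert(filtered))

# Synthetic stand-in for one second of raw data from a single contact.
rng = np.random.default_rng(0)
raw = rng.standard_normal(FS)
features = high_gamma_envelope(raw)
print(features.shape)  # one envelope value per raw sample: (2000,)
```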
Once the computer had learned to recognize the pattern of electrical activity associated with these two syllables in the patient’s brain, he was asked only to imagine saying /a/ or /e/, without vocalizing. The computer then translated the electrical signals and played the prerecorded sound of /a/ or /e/ accordingly.
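Here is a minimal sketch of that two-phase paradigm, assuming per-trial feature vectors have already been extracted (for example, the average high-frequency envelope on each contact). The logistic-regression classifier, the array shapes, and the playback stub are illustrative assumptions; the study itself used deep-learning models trained on the patient’s real recordings.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Phase 1: overt speech. Each row is one trial's synthetic feature
# vector (hypothetically, mean high-frequency power on each of 16
# contacts); each label marks which syllable was spoken aloud.
X_overt = rng.standard_normal((80, 16))
y_overt = rng.integers(0, 2, size=80)  # 0 -> /a/, 1 -> /e/

decoder = LogisticRegression().fit(X_overt, y_overt)

# Phase 2: imagined speech. The same decoder, trained only on overt
# trials, is applied to a silent trial; the prediction triggers a
# prerecorded sound.
X_imagined = rng.standard_normal((1, 16))
syllable = "/a/" if decoder.predict(X_imagined)[0] == 0 else "/e/"
print(f"playing prerecorded {syllable}")  # stand-in for audio playback
```

The two phases above mirror the study’s key result: a decoder trained while the patient could still speak aloud was then driven by imagined speech alone.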
“My area of research is concerned with speech coding and decoding, that is, how individual brain cells participate in the speech process – producing speech, hearing speech, and imagining speech, or ‘silent speaking,’” says Dr. Tankus.
“In this experiment, for the first time in history, we were able to link the word types to the activity of individual cells from the brain areas from which we had recorded data.
“This allowed us to distinguish between the electrical signals that represent the sounds /a/ and /e/. At the moment, our research involves two building blocks of speech, two syllables.
“Of course, our ambition is to achieve full speech, but even two different syllables can enable a completely paralyzed person to signal ‘yes’ and ‘no’. So in the future it will be possible to train a computer for an ALS patient in the early stages of the disease, while he or she can still speak.
“The computer would learn to recognize the electrical signals in the patient’s brain, allowing it to interpret those signals even after the patient loses the ability to move his muscles. And that’s just one example.
“Our research is an important step toward developing a brain-computer interface that can replace the brain’s control pathways for speech production, allowing completely paralyzed people to voluntarily interact with their environment again.”
About this BCI and neurotechnology research news
Author: Ariel Tankus
Source: Tel Aviv University
Contact: Ariel Tankus – Tel Aviv University
Image: The image is attributed to Neuroscience News
Original research: Closed access.
“A frontal lobe-hippocampal speech neuroprosthesis: decoding high-frequency activity into phonemes” by Ariel Tankus et al. Neurosurgery
Abstract
A frontal lobe-hippocampal speech neuroprosthesis: decoding high-frequency activity into phonemes
BACKGROUND AND OBJECTIVES:
Loss of speech due to injury or disease is devastating. Here we report a novel speech neuroprosthesis that artificially articulates the building blocks of speech based on high-frequency activity in brain regions never before used for a neuroprosthesis: the anterior cingulate and orbitofrontal cortex and the hippocampus.
METHODS:
A 37-year-old male neurosurgical epilepsy patient with intact speech, implanted with depth electrodes for clinical reasons only, operated the neuroprosthesis almost immediately and naturally to voluntarily produce 2 vowel sounds.
RESULTS:
During the first set of trials, the participant made the neuroprosthesis artificially produce the two vowel sounds with 85% accuracy. In subsequent trials, performance improved consistently, which may be attributable to neuroplasticity. We show that a neuroprosthesis trained on overt speech data can be controlled silently.
CONCLUSION:
This may open the way for a new strategy of neuroprosthesis implantation in earlier stages of the disease (e.g. amyotrophic lateral sclerosis), while speech is intact, for improved training that still allows silent control in later stages. The results demonstrate the clinical feasibility of direct decoding of high-frequency activity involving spiking activity in the above-mentioned areas for silent production of phonemes that could serve as part of a neuroprosthesis to replace lost speech control pathways.