Decoded Brain Signals Could Give Voiceless People A Way To Talk

Apr 24, 2019
Originally published on April 25, 2019 3:59 pm

Scientists have found a way to transform brain signals into spoken words and sentences.

The approach could someday help people who have lost the ability to speak or gesture, a team from the University of California, San Francisco reported Wednesday in the journal Nature.

"Finding a way to restore speech is one of the great challenges in neurosciences," says Dr. Leigh Hochberg, a professor of engineering at Brown University who wasn't associated with the study. "This is a really exciting new contribution to the field."

Right now, people who are paralyzed and can't speak or gesture often rely on eye movements or a brain-controlled computer cursor to communicate. These methods allow them to spell out words one letter at a time.

But spelling out letters "is not the most efficient way to communicate," says Dr. Edward Chang, a neurosurgeon at UCSF and an author of the study. That approach allows a person to type fewer than 10 words a minute, compared with speaking about 150 words per minute with natural speech.

So Chang and a team of scientists have been looking for a way to let paralyzed patients produce entire words and sentences as if they were talking.

"The main goal that we had was really trying to figure out if we could actually decode brain activity into audible speech," Chang says.

The team studied five volunteers with severe epilepsy. As part of their treatment, these patients had electrodes temporarily placed on the surface of their brains.

The electrodes allowed doctors to locate brain areas causing seizures. And the electrodes also gave Chang's team a way to study the brain activity associated with speaking.

The volunteers read hundreds of sentences out loud while the scientists recorded signals from the brain's speech centers, which control muscles in the tongue, lips, jaw and larynx.

Next, a computer learned how to decode those signals and use them to synthesize speech.

Chang was "shocked" at how intelligible and natural the simulated speech was.

And a test on volunteers found that they could understand what the computer was saying most of the time.

The technology doesn't try to decode a person's thoughts. Instead it decodes the brain signals produced when a person actually tries to speak.

A similar approach has allowed people who are paralyzed to control a robotic arm by pretending they are moving their own arm.

"Instead of moving a robotic arm, this is really more focused on thinking about how to control a robotic vocal tract," Chang says.

Chang hopes to try the approach soon in patients who have lost the ability to speak. The National Institutes of Health's BRAIN Initiative was the primary funder for the research.

In the meantime, the study represents a major advance in turning brain activity into speech, scientists say.

The experiments provide "compelling proof-of-concept demonstrations," wrote Chethan Pandarinath and Yahia H. Ali of Emory University and Georgia Tech in a commentary accompanying the study.

Previous efforts to transform brain signals into speech produced less intelligible results and required much more computing power, the commentary said.

Hochberg, who also has affiliations with Massachusetts General Hospital and the Providence VA Medical Center, was especially impressed by the quality of recordings of synthesized speech from Chang's lab.

"I pressed play, I listened to it with my eyes closed and what I heard was something that was recognizable as speech," he says.

As a doctor, Hochberg often encounters patients who would benefit from a device based on this technology.

"I see people who may yesterday have been walking and talking and today as a result of a brain stem stroke are suddenly unable to move and unable to speak," he says.

People with ALS, or Lou Gehrig's disease, also lose those abilities, he says. And some people born with severe cerebral palsy have great difficulty speaking.

The study adds to the evidence that restoring fluent speech to these people will be possible someday.

"We're not there yet," Hochberg says. "There's still a lot of research, and clinical research in particular, that needs to happen."

Even so, the field is advancing with remarkable speed, Hochberg says.

Just a few years ago, he says, scientists expected it would take decades to turn brain signals into intelligible speech. "Now that interval can be measured in years," he says.

Correction: 4/24/19

In a previous Web version of this story, Chethan Pandarinath's surname was misspelled as Pandarinth and Yahia H. Ali's first name was misspelled as Yahio. Additionally, we said Pandarinath was affiliated with Georgia Tech and Ali with Emory. In fact, they each are affiliated with both institutions.

Copyright 2019 NPR. To see more, visit https://www.npr.org.

AUDIE CORNISH, HOST:

Scientists have used a computer system to transform signals from a person's brain into spoken words and sentences. NPR's Jon Hamilton reports that the approach might someday help people with injuries or diseases that have left them unable to move or speak.

JON HAMILTON, BYLINE: Right now people who can't talk or even gesture often rely on eye movements to communicate. This allows them to spell out words one letter at a time. But Dr. Edward Chang, a neurosurgeon at the University of California, San Francisco, thought there must be a better option.

EDWARD CHANG: Spelling out letters is not the most efficient way to communicate.

HAMILTON: So Chang and a team of scientists have been looking for a way to let these people produce entire words and sentences quickly.

CHANG: The main goal that we had was really trying to figure out if we could actually decode brain activity into audible speech.

HAMILTON: The team studied people with severe epilepsy. As part of their treatment, the volunteers had electrodes placed temporarily on the surface of their brains. That allowed doctors to locate brain areas causing seizures, and it gave Chang's team a way to study the brain activity associated with speaking.

CHANG: Five volunteers read out hundreds of sentences and stories while we simultaneously recorded activity from the brain's speech centers.

HAMILTON: The volunteers read sentences like...

UNIDENTIFIED PERSON: Ship building is a most fascinating process.

HAMILTON: ...While the scientists recorded the brain signals sent to the muscles that control the tongue, lips, jaw and larynx. Next, a computer learned how to decode those signals and use them to synthesize speech. The result sounded like this.

COMPUTER-GENERATED VOICE: Ship building is a most fascinating process.

HAMILTON: Chang says the simulated speech was much better than he expected.

CHANG: I was pretty shocked, to be honest with you.

HAMILTON: And a test with volunteers found that they could understand what the computer was saying most of the time. Chang says this technology doesn't try to decode a person's thoughts. Instead, it focuses on the brain signals produced when a person actually tries to speak. He says a similar approach has allowed people who are paralyzed to control a robotic arm by trying to move their own arm.

CHANG: Instead of moving a robotic arm, this is really more focused on thinking about how to control a robotic vocal tract.

HAMILTON: Chang says he hopes to try the approach soon in patients. In the meantime, scientists say Chang's study represents a major advance. Dr. Leigh Hochberg, who studies brain-computer interfaces at Brown University, says he was especially impressed by a recording of the synthesized voice from Chang's lab.

LEIGH HOCHBERG: I pressed play. I listened to it with my eyes closed. And what I heard was something that was recognizable as speech.

HAMILTON: And Hochberg says as a doctor, he often encounters patients who would benefit from a device based on this technology.

HOCHBERG: I see people who may yesterday have been walking and talking and today, as the result of a brain stem stroke, are suddenly unable to move and unable to speak.

HAMILTON: People with ALS, or Lou Gehrig's disease, also lose those abilities, but more gradually. Hochberg says the new study adds to the evidence that restoring speech will be possible someday.

HOCHBERG: We're not there yet. We want to be there, and there's still a lot of research and clinical research in particular that needs to happen.

HAMILTON: Even so, Hochberg says the field is advancing with remarkable speed.

HOCHBERG: A few years ago, the right answer was probably measured in decades. Now that interval can be measured in years.

HAMILTON: The new research appears in the journal Nature. Jon Hamilton, NPR News. Transcript provided by NPR, Copyright NPR.