In a groundbreaking advancement, researchers have developed a brain-computer interface that lets a man with ALS produce natural, real-time speech—transforming thoughts into voice almost instantly.
Key Points at a Glance
- New BCI technology translates brain signals into speech in just 1/40th of a second
- First-of-its-kind system allows users to modulate tone and intonation, and even to sing
- Participant with ALS used the BCI to “talk” in real-time with family
- System outperforms current text-based speech interfaces in speed and expression
For people who’ve lost the ability to speak due to neurological disorders, communication has long been frustratingly slow. But a pioneering new technology from UC Davis offers a dramatic shift—enabling near-instantaneous, expressive speech via a brain-computer interface (BCI). The system, developed for people with conditions like amyotrophic lateral sclerosis (ALS), is the first to synthesize natural voice from brain activity in real time.
In a clinical trial published in Nature, a participant with ALS successfully used the system to converse with family, ask questions, interject, and even sing short melodies—all with his thoughts.
“Our previous BCI translated thoughts into text. This new system is more like a phone call—fast, fluid, and expressive,” explained senior author Sergey Stavisky, co-director of the UC Davis Neuroprosthetics Lab. The breakthrough stems from microelectrode arrays surgically implanted in the brain’s speech motor cortex. These arrays record neural activity, which AI algorithms decode into speech sounds and play through a speaker in just 25 milliseconds.
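To make that 25-millisecond figure concrete, here is a minimal Python sketch of a streaming decode loop of the kind the article describes. Everything in it is illustrative: the channel count, 10 ms bin size, and linear decoder stand in for the team’s actual trained network, and `read_neural_bin` and `play_frame` are hypothetical placeholders for the recording hardware and vocoder.

```python
import numpy as np

# Hypothetical parameters: the article only states a ~25 ms brain-to-voice
# latency. Channel count, bin size, and acoustic dimensionality are
# illustrative assumptions, not figures from the study.
N_CHANNELS = 256   # microelectrode channels in speech motor cortex
N_ACOUSTIC = 40    # acoustic features per frame (e.g., mel bins)

rng = np.random.default_rng(0)

# Stand-in decoder: a fixed linear map from binned firing rates to
# acoustic features. The real system uses a trained neural network.
W = rng.normal(scale=0.01, size=(N_CHANNELS, N_ACOUSTIC))

def read_neural_bin():
    """Hypothetical stand-in for one 10 ms bin of per-channel features."""
    return rng.poisson(lam=2.0, size=N_CHANNELS).astype(float)

def play_frame(acoustic_frame):
    """Hypothetical stand-in for the vocoder and speaker output."""
    pass

# Streaming loop: each pass handles one 10 ms bin, so acquisition,
# decoding, and playback must all fit inside the ~25 ms budget for
# speech to feel instantaneous.
for _ in range(100):                 # ~1 second of simulated streaming
    features = read_neural_bin()     # acquire the latest neural bin
    frame = features @ W             # decode: neural bin -> acoustic frame
    play_frame(frame)                # synthesize audio for this frame
```

The key design point is that the system streams sound frame by frame rather than waiting for a full sentence, which is what separates this approach from text-based interfaces.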
For users, that means no more robotic delay. The participant could control tone and cadence: asking questions, emphasizing words, and even shaping melodies through intonation. “It lets people be part of conversations again, not just reply,” said Stavisky.
Lead researcher Maitreyee Wairagkar noted a major innovation: “Our system knows precisely when the person is trying to speak and what they’re trying to say, in that exact moment. That’s the leap—true real-time control.”
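Wairagkar’s point about detecting exactly when the person is trying to speak can be pictured with a toy sketch: a decoder emits a per-bin speak-probability, and a hysteresis threshold gates synthesis on and off with intent rather than on a fixed cue. The probabilities, thresholds, and bin counts below are invented for illustration, not taken from the study.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy speak-probability trace: low baseline "noise", with one burst of
# attempted speech injected between bins 100 and 180 (all invented).
p_speak = np.clip(rng.normal(0.15, 0.10, size=300), 0.0, 1.0)
p_speak[100:180] = np.clip(p_speak[100:180] + 0.6, 0.0, 1.0)

# Hysteresis gate: start synthesis only when intent is clearly present,
# stop only when it clearly ends, so brief flickers don't toggle the voice.
ON, OFF = 0.6, 0.4
speaking = False
for t, p in enumerate(p_speak):
    if not speaking and p > ON:
        speaking = True
        print(f"speech onset detected at bin {t}")
    elif speaking and p < OFF:
        speaking = False
        print(f"speech offset detected at bin {t}")
```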
The results were striking: listeners understood nearly 60% of the synthesized words, up from just 4% without the BCI. What’s more, the system could handle new words that were not in its training data, showing the decoder can generalize beyond its original vocabulary.
Behind the scenes, the AI model was trained by aligning the participant’s neural firing patterns with displayed words and sentences, learning to map specific thoughts to specific sounds. It’s a feat of bioengineering, neuroscience, and artificial intelligence working in synchrony.
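As a rough sketch of that training setup, the snippet below fits a simple ridge-regression map from binned neural features to time-aligned acoustic targets. The real decoder is reportedly a deep network rather than a linear map, and all array shapes and the synthetic data here are assumptions made purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical training data: binned neural features recorded while the
# participant attempted to speak sentences shown on screen (X), paired
# with time-aligned acoustic targets derived from those sentences (Y).
# All shapes and values here are invented for illustration.
N_FRAMES, N_CHANNELS, N_ACOUSTIC = 5000, 256, 40
X = rng.poisson(lam=2.0, size=(N_FRAMES, N_CHANNELS)).astype(float)
Y = rng.normal(size=(N_FRAMES, N_ACOUSTIC))

# Minimal supervised fit: ridge-regularized least squares mapping each
# neural bin to its aligned acoustic frame. The actual decoder is a deep
# network, but the training signal is the same kind of paired alignment.
lam = 1.0
W = np.linalg.solve(X.T @ X + lam * np.eye(N_CHANNELS), X.T @ Y)

# At inference time, fresh neural bins stream through the learned map.
decoded = X[:10] @ W
print(decoded.shape)   # (10, 40): ten decoded acoustic frames
```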
“This is transformative,” said co-author David Brandman, who performed the implant surgery. “It’s a voice restored—not just technically, but emotionally. Our voice is our identity.”
While the technology is still in early testing, researchers hope to expand the BrainGate2 trial to more participants, including people with other conditions such as stroke. If successful, the system could mark a seismic shift in assistive communication, restoring one of humanity’s most powerful tools: speech.
Source: UC Davis Health