A research team at the University of California, San Francisco (UCSF) has developed a method to decode human thoughts into speech in real time.
Voice control is rapidly becoming a standard way to interact with devices, but it is impractical in public settings and unusable for people who cannot speak or whose speech impediments current technologies cannot process.
The UCSF innovation is intended to let people who are mute or speech-impaired communicate through a "neural speech prosthesis" that produces relatively natural-sounding speech from decoded brain activity.
“Real-time processing of brain activity has been used to decode simple speech sounds, but this is the first time this approach has been used to identify spoken words and phrases,” says lead researcher David Moses. “It’s important to keep in mind that we achieved this using a very limited vocabulary, but in future studies we hope to increase the flexibility as well as the accuracy of what we can translate from brain activity.”
Facebook provided sponsored funding for the research. While the UCSF team is developing the technology to help people who are mute, or who have ALS and other conditions, speak through their thoughts, Facebook may be seeking a way to develop brain-controlled augmented reality glasses.
Source: Big News Network