Source: https://tech.fb.com/bci-milestone-new-research-from-ucsf-with-support-from-facebook-shows-the-potential-of-brain-computer-interfaces-for-restoring-speech-communication/
Researchers at the University of California, San Francisco (UCSF) have achieved encouraging results in their brain-computer interface (BCI) research, showing that a person with severe speech loss could write out what he wanted to say in near real time simply by attempting to speak.
The project began at Facebook Reality Labs (FRL) in 2017 with the goal of creating a silent, non-invasive speech interface that would let users type merely by imagining the words they wished to say.
Project Steno is the final phase of this effort, and it is the first demonstration of attempted speech combined with language models to drive a BCI. The system restored a person’s capacity to communicate by decoding the brain signals that the motor cortex sends to the vocal tract. The study marks a watershed moment in neuroscience and brings Facebook’s years-long collaboration with UCSF’s Chang Lab to a close.
Facebook’s funding allowed UCSF to dramatically expand its server capacity, letting the team test more models concurrently and obtain more accurate results.
UCSF researchers had previously succeeded in decoding a small set of full spoken words and sentences from real-time brain activity, and later Chang Lab work showed that the system could recognize a much larger vocabulary with a very low word-error rate. The researchers note, however, that these results were achieved while participants were speaking aloud. It was therefore unclear whether the approach could decode words in real time from someone who was merely attempting to speak.
The study’s most recent findings show that attempted conversational speech can be decoded in real time. They also demonstrate how language models can be used to improve the accuracy of brain-to-text communication.
The subject of this research lost the ability to speak intelligibly after a series of strokes. As part of an elective procedure, electrodes were placed on the surface of his brain. Throughout the study, he worked closely with the UCSF team to record dozens of hours of BCI-assisted speech attempts, which UCSF then used to build machine-learning models for speech detection and word classification.
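The study does not publish its model code, but the general shape of such a pipeline can be sketched. The snippet below is a minimal, hypothetical illustration in Python: synthetic feature vectors stand in for the cortical recordings, the five-word vocabulary is invented, and a simple linear classifier stands in for UCSF’s actual detection and decoding models.

```python
# Illustrative sketch only, not the study's code: classify attempted words
# from neural features, using synthetic data and a linear classifier.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical setup: 50 recorded attempts per word, each summarized as a
# 128-dimensional feature vector derived from cortical electrode activity.
VOCAB = ["hello", "thirsty", "family", "good", "nurse"]
n_per_word, n_features = 50, 128

# Synthetic stand-in for hours of BCI-assisted speech recordings: each word
# gets its own randomly chosen mean activity pattern plus trial-to-trial noise.
X = np.vstack([
    rng.normal(loc=rng.normal(size=n_features), scale=1.0,
               size=(n_per_word, n_features))
    for _ in VOCAB
])
y = np.repeat(np.arange(len(VOCAB)), n_per_word)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0, stratify=y)

# A linear classifier maps neural features to word probabilities.
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"held-out accuracy: {clf.score(X_test, y_test):.2f}")
```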
The study also demonstrates how the statistical structure of language can be used to improve the BCI’s accuracy. Just as a phone auto-corrects and auto-completes text to make typing more precise, the researchers use language models to refine the word predictions produced by the BCI.
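As a rough illustration of that idea (not the study’s actual algorithm), the sketch below combines a neural decoder’s per-word probabilities with a small bigram language model via a Viterbi search, so plausible word sequences are preferred over the decoder’s raw top guesses. The vocabulary, all probabilities, and the `lm_weight` parameter are invented for the example.

```python
# Minimal sketch: rescore a neural decoder's per-word probabilities with a
# hypothetical bigram language model, analogous to a keyboard's auto-correct.
import numpy as np

VOCAB = ["I", "am", "thirsty", "family", "good"]

# Hypothetical decoder output: one probability row per attempted word.
decoder_probs = np.array([
    [0.40, 0.30, 0.10, 0.10, 0.10],   # attempt 1
    [0.20, 0.25, 0.15, 0.30, 0.10],   # attempt 2 (raw argmax: "family")
    [0.15, 0.15, 0.40, 0.15, 0.15],   # attempt 3
])

# Hypothetical bigram model: P(next word | previous word).
bigram = np.full((len(VOCAB), len(VOCAB)), 0.1)
bigram[0, 1] = 0.6   # "I" -> "am" is common
bigram[1, 2] = 0.6   # "am" -> "thirsty" is common

def decode(decoder_probs, bigram, lm_weight=1.0):
    """Viterbi search over words, mixing decoder and language-model log-probs."""
    T, V = decoder_probs.shape
    log_p = np.log(decoder_probs)
    log_lm = lm_weight * np.log(bigram)
    score = log_p[0].copy()                  # best score ending in each word
    back = np.zeros((T, V), dtype=int)       # backpointers for the best path
    for t in range(1, T):
        cand = score[:, None] + log_lm + log_p[t][None, :]
        back[t] = cand.argmax(axis=0)
        score = cand.max(axis=0)
    path = [int(score.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t][path[-1]]))
    return [VOCAB[i] for i in reversed(path)]

greedy = [VOCAB[i] for i in decoder_probs.argmax(axis=1)]
print("decoder alone:", " ".join(greedy))                          # I family thirsty
print("with LM prior:", " ".join(decode(decoder_probs, bigram)))   # I am thirsty
```

In this toy example the decoder alone picks the implausible sequence “I family thirsty”, while the bigram prior nudges the second word to “am”, yielding “I am thirsty”, which is the kind of correction the researchers describe.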