In a breakthrough study published in the journal Scientific Reports, engineers from Columbia University (CU) in New York describe a system capable of monitoring a person’s brain activity and translating relevant bits into clear, easily recognisable speech.
“Our voices help connect us to our friends, family and the world around us, which is why losing the power of one’s voice due to injury or disease is so devastating,” said Nima Mesgarani, PhD, senior author on the paper and a principal investigator at CU’s Mortimer B. Zuckerman Mind Brain Behavior Institute.
With some further tweaks and updates, the system – built using currently available speech synthesisers and artificial intelligence – could soon become a lifeline to people who have lost their ability to speak due to stroke or diseases like amyotrophic lateral sclerosis (ALS).
Furthermore, the ability to decode the brain patterns that correlate with speech, both heard and produced, could prove tremendously useful to researchers working on brain-computer interface (BCI) applications.
For the study, Mesgarani and his colleagues recruited a group of epilepsy patients who were already undergoing brain surgery and asked them to listen to sentences spoken by different people while the research team measured their brain activity.
Next, the participants listened to speakers reciting digits between 0 and 9 while their brain signals were recorded. Those signals were then fed into a vocoder – a type of computer algorithm capable of synthesising speech after being trained on recordings of people talking (the same technology used by Amazon’s Echo and Apple’s Siri).
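To make the idea concrete, here is a minimal, hypothetical sketch of this kind of decoding pipeline. It is not the authors’ actual method: it assumes time-aligned neural features and heard audio, stands in an off-the-shelf ridge regression for the study’s deep-learning models, and resynthesises sound with Griffin-Lim phase estimation rather than a trained vocoder; all names, shapes and parameters are illustrative assumptions.

```python
# Illustrative sketch only (assumed data shapes; not the authors' pipeline):
# learn a mapping from neural features to speech-spectrogram frames, then
# turn predicted spectrograms back into audible (robot-sounding) audio.
import numpy as np
import librosa
from sklearn.linear_model import Ridge

N_FFT = 512   # STFT window size for the target spectrograms (assumed)
HOP = 128     # STFT hop length (assumed)

def fit_decoder(neural_feats: np.ndarray, heard_audio: np.ndarray) -> Ridge:
    """neural_feats: (n_frames, n_electrodes) features recorded while the
    participant listened; heard_audio: the 1-D waveform they heard."""
    # Target representation: magnitude spectrogram of the heard speech.
    target = np.abs(librosa.stft(heard_audio, n_fft=N_FFT, hop_length=HOP)).T
    n = min(len(neural_feats), len(target))       # crude time alignment
    return Ridge(alpha=1.0).fit(neural_feats[:n], target[:n])

def decode_to_audio(model: Ridge, neural_feats: np.ndarray) -> np.ndarray:
    """Predict spectrogram frames from new neural data and invert to audio."""
    spec = np.clip(model.predict(neural_feats), 0.0, None).T  # (freq, frames)
    # Griffin-Lim estimates phases for the predicted magnitudes so the result
    # can be played back; the study itself used a trained vocoder here.
    return librosa.griffinlim(spec, n_iter=32, hop_length=HOP, n_fft=N_FFT)
```

In the study, replacing the simple regression with deep neural networks and the phase-estimation step with a speech vocoder is what produced the more intelligible output described below.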
After some cleaning up, the robot-sounding audio was correctly recognised about 75% of the time – well beyond the accuracy of any previous attempt.
The team now plans to conduct further experiments using more complex sentences, with the hope of eventually building the system into an implant – similar to those worn by some people with epilepsy – that would translate thoughts directly into speech.
“In this scenario, if the wearer thinks ‘I need a glass of water,’ our system could take the brain signals generated by that thought, and turn them into synthesized, verbal speech,” said Dr. Mesgarani. “This would be a game changer. It would give anyone who has lost their ability to speak, whether through injury or disease, the renewed chance to connect to the world around them.”
Source: zuckermaninstitute.columbia.edu.