Researchers read language from brainwaves

"Brain-to-text" method reconstructs spoken words and sentences from brain signals

Spoken words can be recognized from the brain's activity patterns (blue/yellow). © CSL / KIT

A step closer to mind reading: new software recognizes which words or sentences a test subject is currently speaking, based on brain waves alone. The "brain-to-text" method uses previously learned speech patterns to evaluate the brain signals and, for the first time, continuously reconstructs spoken sounds, words and sentences, rendering them as text on a computer, as researchers report in the journal "Frontiers in Neuroscience".

Mind reading is an old human dream. The idea of communicating directly through thoughts, without speaking, fascinates and frightens in equal measure. Thanks to modern computer technology and learning algorithms, this ability is coming ever closer. Back in 2012, researchers succeeded in reconstructing individual words that subjects had heard from their brain waves.

From brain signals to text

Christian Herff from the Karlsruhe Institute of Technology (KIT) and his colleagues have now gone a step further. They have developed a method that recognizes not only individual words but also continuously spoken language and transforms it into text. The brain-to-text system combines signals recorded from the subject's cortex with linguistic knowledge and machine-learning algorithms to extract the most likely word sequence.
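The core idea of combining neural evidence with linguistic knowledge can be illustrated with a toy Viterbi search. The vocabulary, the per-segment neural scores and the bigram language model below are all invented for illustration; the actual system uses far richer models, but the principle of maximizing neural likelihood plus language-model prior is the same.

```python
import math

# Hypothetical inputs: for each speech segment, a neural model has scored
# how well each candidate word matches the recorded brain signals.
neural_scores = [
    {"we": 0.7, "hold": 0.2, "these": 0.1},
    {"we": 0.1, "hold": 0.8, "these": 0.1},
    {"we": 0.1, "hold": 0.2, "these": 0.7},
]

# Hypothetical bigram language model: P(word | previous word).
bigram = {
    ("<s>", "we"): 0.6, ("<s>", "hold"): 0.2, ("<s>", "these"): 0.2,
    ("we", "hold"): 0.7, ("we", "we"): 0.1, ("we", "these"): 0.2,
    ("hold", "these"): 0.8, ("hold", "we"): 0.1, ("hold", "hold"): 0.1,
    ("these", "we"): 0.4, ("these", "hold"): 0.3, ("these", "these"): 0.3,
}

def decode(neural_scores, bigram):
    """Viterbi search: find the word sequence maximizing
    log P(brain signals | words) + log P(words)."""
    vocab = list(neural_scores[0])
    # best[w] = (log-prob of best path ending in word w, that path)
    best = {w: (math.log(bigram[("<s>", w)]) + math.log(neural_scores[0][w]), [w])
            for w in vocab}
    for scores in neural_scores[1:]:
        new_best = {}
        for w in vocab:
            # Pick the best predecessor for word w.
            prev, (lp, path) = max(
                ((v, best[v]) for v in vocab),
                key=lambda item: item[1][0] + math.log(bigram[(item[0], w)]))
            new_best[w] = (lp + math.log(bigram[(prev, w)]) + math.log(scores[w]),
                           path + [w])
        best = new_best
    return max(best.values())[1]

print(decode(neural_scores, bigram))  # → ['we', 'hold', 'these']
```

Here the language model breaks ties and suppresses implausible word orders that the noisy neural scores alone might suggest.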

The researchers tested their system on seven epilepsy patients who already had electrode grids implanted on the brain as part of their seizure treatment. "These electrocorticographic grids provide us with electrical potentials at high spatial and temporal resolution, without the signals being distorted by the skull," explain Herff and his colleagues.

Syllables as a basis

To enable the brain-to-text system to learn the typical brain activity patterns, the researchers first recorded the signals while the subjects read various phrases and texts aloud. From the wealth of recorded brain waves, they then isolated those closely related to speech and mouth movements.

Among the reconstructed sentences were passages from the US Declaration of Independence. © CSL / KIT

Based on these speech patterns, the software learned which signals correspond to which words. The special feature: the brain-to-text system also learns individual syllables and parts of words, and can later recognize even unknown words composed of these units. "Even with a limited set of words in its lexicon, brain-to-text can reconstruct spoken phrases from neural data," the researchers write.
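Why sub-word units make unseen words decodable can be sketched in a few lines. The phone names, scores and pronunciation lexicon below are invented for illustration; the point is only that a word never seen in training can still be scored from the phones it is built from.

```python
# Hypothetical phone-level log-likelihoods produced by a model trained
# on the subject's brain signals (names and values are invented).
phone_loglik = {"f": -0.1, "ao": -0.2, "r": -0.15, "m": -0.3, "s": -0.4}

# Pronunciation lexicon mapping words to phone sequences. "forms" never
# occurred in the training data, but all of its phones did.
lexicon = {
    "for": ["f", "ao", "r"],
    "form": ["f", "ao", "r", "m"],
    "forms": ["f", "ao", "r", "m", "s"],
}

def word_loglik(word):
    """Score a word as the sum of its phone log-likelihoods, so unseen
    words built from known phones remain decodable."""
    return sum(phone_loglik[p] for p in lexicon[word])

for word in lexicon:
    print(word, round(word_loglik(word), 2))
```

A real decoder also has to align each phone with a stretch of the signal, which this additive sketch glosses over.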

Word error rates below 25 percent

A test with the seven subjects showed that this works: they read further text passages aloud while their brain signals were recorded. Brain-to-text reproduced the spoken words remarkably well: "Our results demonstrate that the system can achieve word error rates of less than 25 percent," say Herff and his colleagues. Even when the software had to reconstruct previously unseen words from phones alone, it was correct in about half of the cases.
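The word error rate quoted above is the standard speech-recognition metric: the minimum number of word substitutions, insertions and deletions needed to turn the recognized text into the reference, divided by the reference length. A minimal edit-distance implementation (the example sentences are chosen here only to match the Declaration of Independence passages mentioned above):

```python
def word_error_rate(reference, hypothesis):
    """WER = (substitutions + insertions + deletions) / reference length,
    computed with the standard edit-distance dynamic program."""
    ref, hyp = reference.split(), hypothesis.split()
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution or match
    return d[len(ref)][len(hyp)] / len(ref)

ref = "we hold these truths to be self evident"
hyp = "we hold these truths to be evident"
print(word_error_rate(ref, hyp))  # one deletion out of 8 words → 0.125
```

A WER below 25 percent thus means fewer than one word in four had to be corrected.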

So far, the brain-to-text system works only with spoken language. According to the researchers, however, it could be an important first step toward reading out imagined speech later on. Such systems could, for example, make it possible to communicate with locked-in patients, who are conscious but cannot communicate with the outside world through movement or speech. (Frontiers in Neuroscience, 2015; doi: 10.3389/fnins.2015.00217)

(Karlsruhe Institute of Technology, 15.06.2015 - NPO)