Brain-computer interfaces could give people with severe paralysis a way to reconnect with their environment: a computer converts specific patterns of brain activity into speech. Previous devices mostly relied on imagined hand or arm movements. Researchers have now tested a system that does without this detour: it recognizes silently attempted letters directly. Combined with a comprehensive built-in dictionary, this makes communication considerably easier and faster.

Severe neurological damage, such as that from a stroke or the progressive disease amyotrophic lateral sclerosis (ALS), can leave people without control over their body's muscles. People who live with so-called locked-in syndrome are in full possession of their mental faculties, but can no longer make themselves understood because they can neither speak nor move. With the help of brain-computer interfaces, researchers are trying to reconnect them with the outside world. However, previous systems have a drawback: operating them is usually very counter-intuitive, and each individual entry is time-consuming.

Enabling natural communication

A team led by Sean Metzger of the University of California, San Francisco, has developed a system that is said to be faster and more user-friendly than previous models while maintaining a low error rate. “Current brain-computer interfaces for communication typically rely on decoding imagined hand and arm movements into letters to enable the spelling of intended sentences,” the researchers explain. “Although this approach has already yielded promising results, direct decoding of attempted speech can be more natural and faster.”

To make this possible, Metzger and his colleagues trained a system to recognize which letter a person is thinking of. The test subject was a 36-year-old man who had been left paralyzed and unable to speak after suffering a stroke. Since he is still able to move his head, he communicates in everyday life with the help of a speech computer controlled by head movements. For experiments with brain-computer interfaces, the researchers had, with his consent, implanted electrodes into areas of his brain associated with language. In an earlier study, he had already used these to test a system in which a computer could decode up to 50 words when he tried to say them out loud. Because of his paralysis, however, this required great effort, and his vocabulary remained limited.


Imagined messages

The new system, by contrast, is capable of recognizing silently attempted letters. Metzger and his colleagues had the subject use the NATO spelling alphabet, for example “Alpha” for A, “Charlie” for C, and “November” for N. They recorded his brain activity while he silently attempted to say these code words and used the recordings to train a self-learning AI. In the actual experiment, they presented the test person with 75 different sentences, which he was to spell out one by one. They also asked him questions that he was to answer using the brain-computer interface.
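The spelling protocol described above can be illustrated with a short sketch. This is purely for illustration and is not the authors' actual pipeline; the function name and structure are assumptions. It shows how each intended letter corresponds to a distinct NATO code word, which gives the decoder longer, more acoustically distinct targets than single letters:

```python
# Mapping of letters to NATO code words, as used in the study's
# spelling protocol. The helper function is an illustrative assumption.
NATO_ALPHABET = {
    "A": "Alpha", "B": "Bravo", "C": "Charlie", "D": "Delta",
    "E": "Echo", "F": "Foxtrot", "G": "Golf", "H": "Hotel",
    "I": "India", "J": "Juliett", "K": "Kilo", "L": "Lima",
    "M": "Mike", "N": "November", "O": "Oscar", "P": "Papa",
    "Q": "Quebec", "R": "Romeo", "S": "Sierra", "T": "Tango",
    "U": "Uniform", "V": "Victor", "W": "Whiskey", "X": "X-ray",
    "Y": "Yankee", "Z": "Zulu",
}

def spell_with_code_words(word: str) -> list[str]:
    """Return the sequence of code words the user would silently attempt
    in order to spell the given word, letter by letter."""
    return [NATO_ALPHABET[ch] for ch in word.upper() if ch in NATO_ALPHABET]
```

For example, spelling the word “can” would correspond to silently attempting the code words “Charlie”, “Alpha”, “November” in sequence.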

The software evaluated his brain signals in real time and compared them against an integrated 1,152-word dictionary to determine the most likely letter and word. In this way, the system achieved a relatively low character error rate of 6.13 percent. It was also noticeably faster than the speech computer he uses in everyday life, with which the test person can enter about 17 characters per minute: on average, the new device managed 29.4 characters per minute. To begin spelling, it was enough for him to attempt to speak; he could end the program with an imagined hand movement.
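The dictionary comparison can be sketched as follows. This is a minimal illustration, not the study's actual decoder: it assumes the letter classifier outputs a probability distribution per typed position, and it scores each same-length dictionary word by the joint probability of its letters, returning the most likely match:

```python
import math

def decode_word(char_probs: list[dict], vocabulary: list[str]) -> str:
    """Pick the most likely dictionary word given per-position letter
    probabilities (a stand-in for the classifier's output).

    char_probs: one dict per typed position, mapping letter -> probability.
    vocabulary: the integrated dictionary of allowed words.
    """
    best_word, best_score = None, -math.inf
    for word in vocabulary:
        if len(word) != len(char_probs):
            continue  # only words matching the number of typed letters
        # Sum of log-probabilities; unseen letters get a tiny floor value.
        score = sum(math.log(probs.get(ch, 1e-9))
                    for ch, probs in zip(word, char_probs))
        if score > best_score:
            best_word, best_score = word, score
    return best_word
```

Constraining the output to dictionary words in this way lets the system recover from individual letter misclassifications, which is one reason the character error rate can stay low even with noisy neural signals.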

Expanded dictionary

In further experiments, in which the researchers tested the software's ability to recognize sentences without prescribed prompts, they expanded the integrated dictionary to more than 9,000 words. The character error rate rose only slightly, to 8.23 percent. The authors summarize: “These findings demonstrate the clinical utility of a speech prosthesis for generating sentences from a large vocabulary through a spelling-based approach, complementing previous demonstrations of direct whole-word decoding.” In future studies, they want to validate the approach with other subjects.


Source: Sean Metzger (University of California, San Francisco) et al., Nature Communications, doi: 10.1038/s41467-022-33611-3
