Stanford brain-computer interface translates imagined words into speech

Researchers at Stanford University have developed a brain-computer interface that is the first in neurotechnology to translate imagined words directly from neural activity into speech. Unlike previous systems, which relied on detecting the brain signals produced when people attempted to move their mouths or vocal cords, the new method works even when a person only thinks about speaking.

The trial included four participants with severe paralysis caused by conditions such as brainstem stroke and amyotrophic lateral sclerosis. One participant could respond only by moving his eyes: up for "yes" and side-to-side for "no." As part of the study, published this week in Cell, doctors placed tiny electrode arrays into each participant's motor cortex, the area of the brain that normally controls speech-related movements.

The technology was developed by the BrainGate consortium, a long-standing academic partnership in brain-computer interface research. Once in place, the electrodes recorded activity in the speech motor cortex as participants completed two different tasks: silently imagining particular words and attempting to speak them aloud.

Machine learning models were trained to identify and classify the distinct patterns of brain activity associated with phonemes, the smallest individual sound units in spoken language. The system then reassembled these phonemes into complete phrases and sentences in real time. The researchers found that imagined speech produced a weaker but still detectable neural signature than attempted speech; even so, the decoding method achieved accuracy rates of up to 74%.
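The study's actual decoder is not published in this article; the sketch below is only a toy illustration of the general idea described above, in which short windows of neural activity are classified into phonemes and the predicted phoneme stream is then stitched back into a word. The synthetic "neural features," the five-phoneme inventory, and the scikit-learn classifier are all assumptions made for illustration, not the researchers' method.

```python
# Toy stand-in for a phoneme-level speech decoder: classify feature windows
# into phonemes, then reassemble the phoneme stream. Entirely synthetic data;
# not the Stanford/BrainGate model.
import numpy as np
from sklearn.linear_model import LogisticRegression

PHONEMES = ["HH", "EH", "L", "OW", "_"]  # "_" marks silence; toy inventory

rng = np.random.default_rng(0)

def synth_window(phoneme_idx: int) -> np.ndarray:
    """Fake 64-dimensional feature vector for one short window of activity,
    with a different mean pattern per phoneme so the toy task is learnable."""
    base = np.zeros(64)
    base[phoneme_idx * 8 : phoneme_idx * 8 + 8] = 1.0
    return base + rng.normal(scale=0.3, size=64)

# Small labelled training set: 200 windows per phoneme.
X_train = np.array([synth_window(i) for i in range(len(PHONEMES)) for _ in range(200)])
y_train = np.array([i for i in range(len(PHONEMES)) for _ in range(200)])

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Simulate one trial: the phoneme sequence for "hello" followed by silence.
trial = np.array([synth_window(i) for i in [0, 0, 1, 1, 2, 2, 3, 3, 4, 4]])
pred = clf.predict(trial)

# Reassemble: collapse repeated predictions and drop silence.
decoded = []
for p in pred:
    label = PHONEMES[p]
    if label != "_" and (not decoded or decoded[-1] != label):
        decoded.append(label)

print("decoded phonemes:", decoded)  # expected on this easy data: ['HH', 'EH', 'L', 'OW']
```

A real decoder would operate on multichannel electrode recordings and use far more sophisticated sequence models, but the two-stage structure, per-window phoneme classification followed by assembly into words and sentences, mirrors the pipeline the article describes.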

The Financial Times quoted Stanford neuroscientist Erin Kunz, the study's lead author, as saying, "It's like when you just think about speaking." According to Kunz, BCIs that can understand inner speech could make communication "easier and more natural" for people with severe speech and motor disabilities.