Researchers at Stanford University have developed a brain-computer interface that is the first to translate imagined words directly from neural activity into speech. Unlike previous systems, which relied on detecting the brain signals produced when people attempted to move their mouths or vocal cords, the new method works even when a person merely thinks about speaking.
The trial included four participants with severe paralysis caused by conditions such as brainstem stroke and amyotrophic lateral sclerosis. One participant could respond only by moving his eyes: up for "yes," side to side for "no." As part of the study, published this week in Cell, doctors implanted tiny electrode arrays in each participant's motor cortex, the brain region that normally controls speech-related movements.
The Financial Times quoted Stanford neuroscientist Erin Kunz, the study's lead author, as saying, "It's like when you just think about speaking." She said BCIs that can decode inner speech could make communication "easier and more natural" for people with severe speech and motor disabilities.
