Scientists use AI to build 'semantic decoder' that can read minds
Scientists have used artificial intelligence (AI) to decode people's thoughts from a brain scan.
The new system, called a semantic decoder, can translate the brain activity of a person listening to a story, or silently imagining telling one, into a continuous stream of text.
It was developed by researchers at The University of Texas at Austin, and it may one day help people who are mentally conscious yet unable to physically speak, such as those debilitated by strokes, communicate intelligibly again.
The work relies in part on a transformer model, similar to the ones that power OpenAI's ChatGPT and Google's Bard. Unlike other language-decoding systems in development, this one does not require subjects to have surgical implants, making the process noninvasive. Participants also are not restricted to words from a prescribed list.
The decoder first requires extensive training, during which the individual listens to hours of podcasts while their brain activity is measured in an fMRI scanner. Later, provided that the participant is open to having their thoughts decoded, listening to a new story or imagining telling a story allows the machine to generate the corresponding text from brain activity alone.
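As a rough illustration of how a decoder of this kind can work, the sketch below pairs a language model that proposes candidate word sequences with an encoding model (the part fit during the podcast-listening sessions) that predicts the brain response each candidate should evoke; a beam search then keeps whichever candidates best match the recorded scan. Every name, shape, and toy model here is an illustrative assumption, not the researchers' actual code.

```python
import numpy as np

# Illustrative sketch only. A language model proposes candidate word
# sequences; an encoding model predicts the fMRI response each candidate
# should evoke; beam search keeps the candidates whose predictions best
# match the recorded scan. All names and shapes are assumptions.

RNG = np.random.default_rng(0)
VOCAB = ["the", "dog", "ran", "home", "she", "said", "hello", "again"]
N_VOXELS = 50      # number of brain "pixels" the scanner reports
BEAM_WIDTH = 3     # how many candidate sequences to keep alive

def language_model_continuations(prefix):
    """Toy LM: propose a few plausible next words for a candidate sequence."""
    return RNG.choice(VOCAB, size=4, replace=False)

def encoding_model_predict(words):
    """Toy encoding model: map a word sequence to a predicted fMRI pattern."""
    feats = np.array([hash(w) % 997 for w in words], dtype=float)
    return np.tile(feats.mean(), N_VOXELS) + RNG.normal(0, 0.1, N_VOXELS)

def score(predicted, recorded):
    """Higher score = predicted brain response better matches the recording."""
    return -np.linalg.norm(predicted - recorded)

def decode(recorded_response, n_words=6):
    beams = [([], 0.0)]
    for _ in range(n_words):
        candidates = []
        for words, _ in beams:
            for w in language_model_continuations(words):
                seq = words + [w]
                s = score(encoding_model_predict(seq), recorded_response)
                candidates.append((seq, s))
        # keep only the best-matching candidate sequences
        beams = sorted(candidates, key=lambda c: c[1], reverse=True)[:BEAM_WIDTH]
    return " ".join(beams[0][0])

recorded = RNG.normal(0, 1, N_VOXELS)  # stand-in for a real fMRI recording
print(decode(recorded))
```

The design point the sketch tries to capture is that decoding is framed as a search: the language model keeps candidate text fluent, while the encoding model ties the text back to the measured brain signal.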
"For a noninvasive method, this is a real leap forward compared to what's been done before, which is typically single words or short sentences," said Alex Huth, an assistant professor of neuroscience and computer science at UT Austin. "We're getting the model to decode continuous language for extended periods of time with complicated ideas."
The result is not a word-for-word transcript. Instead, the researchers designed it to capture the gist of what is being said or thought, albeit imperfectly. About half the time, for a participant the decoder has been trained on, the machine produces text that closely (and sometimes precisely) matches the intended meaning of the original words.
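Since success is judged by meaning rather than exact wording, one simple way to picture that comparison is to measure similarity between embeddings of the original and decoded text. The crude bag-of-words embedding below is a stand-in assumption, and toy_embed and gist_similarity are hypothetical names; a real evaluation would use a learned sentence encoder that also recognizes paraphrases.

```python
import numpy as np

def toy_embed(sentence):
    """Crude embedding: hash each word into a fixed-size vector, then
    normalize. A learned encoder would also capture synonyms/paraphrases."""
    vec = np.zeros(64)
    for word in sentence.lower().split():
        vec[hash(word) % 64] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec

def gist_similarity(original, decoded):
    """Cosine similarity between embeddings: higher means closer in gist."""
    return float(toy_embed(original) @ toy_embed(decoded))

original = "the dog ran back home through the rain"
decoded = "the dog hurried back to the home"
print(f"gist similarity: {gist_similarity(original, decoded):.2f}")
```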