Paris, 2 May 2023 (GNP): On Monday, scientists said they had developed a technique that combines brain scans with artificial intelligence (AI) modeling to decode "the gist" of what people are thinking.
The US scientists acknowledged that the device raises concerns about "mental privacy," even though the language decoder's primary aim is to help people who have lost the ability to speak.
In an effort to ease those concerns, they ran experiments showing that the decoder could not be used on anyone who had not first allowed it to be trained on their brain activity over long sessions inside a functional magnetic resonance imaging (fMRI) scanner.
Earlier studies have shown that a brain implant can let people who are unable to talk or type spell out words or even phrases. Those "brain-computer interfaces" focus on the part of the brain that controls the mouth as it tries to form words.
Alexander Huth, a neuroscientist at the University of Texas at Austin and co-author of the study, said his team's language decoder works on a very different level. "Our system really works at the level of ideas, of semantics, of meaning," Huth told an online press conference.
The research, published in the journal Nature Neuroscience, describes what is said to be the first AI system able to reconstruct continuous language without an invasive brain implant.
For the study, three participants spent a total of 16 hours inside an fMRI scanner listening to spoken narrative stories. This allowed the researchers to map how words, phrases, and meanings prompted responses in the regions of the brain known to process language. They fed this data into a neural network language model that uses GPT-1, a forerunner of the AI technology later deployed in the hugely popular ChatGPT.
The model was trained to predict how each person's brain would respond to speech; the decoder then generated candidate word sequences and progressively eliminated them, keeping whichever best matched the recorded brain activity. To test the model's accuracy, each participant then listened to a new story in the scanner.
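The elimination process described above resembles a beam search: candidate word sequences are scored by how closely an encoding model's predicted brain response matches the scan, and only the best survive each round. The toy Python sketch below illustrates that idea under stated assumptions; it is not the study's actual code, and every function, number, and phrase in it is invented for illustration (the "encoding model" here is just a deterministic hash, standing in for a trained predictor of fMRI responses).

```python
import hashlib
import numpy as np

N_VOXELS = 50  # toy stand-in for the number of recorded brain features

def encode(text):
    """Stand-in encoding model: deterministically map a word sequence
    to a predicted 'brain response' vector (hash-seeded noise)."""
    seed = int(hashlib.md5(text.encode()).hexdigest()[:8], 16)
    return np.random.default_rng(seed).normal(size=N_VOXELS)

def score(candidate, observed):
    """Correlation between the predicted and observed responses."""
    return float(np.corrcoef(encode(candidate), observed)[0, 1])

def decode_step(beam, next_words, observed, beam_width=4):
    """Extend every candidate, score it against the scan, keep the best."""
    extended = [c + " " + w for c in beam for w in next_words]
    extended.sort(key=lambda c: score(c, observed), reverse=True)
    return extended[:beam_width]

# Toy usage: pretend the observed scan was evoked by one target phrase.
observed = encode("the dog ran home")
beam = ["the"]
for options in (["dog", "cat"], ["ran", "sat"], ["home", "away"]):
    beam = decode_step(beam, options, observed)
print(beam[0])  # → the dog ran home
```

In the real system the encoding model is learned from the 16 hours of scans, and the candidates come from a language model, which is why the decoder recovers a paraphrased "gist" rather than exact wording.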
The study's first author, Jerry Tang, said the decoder could "recover the gist of what the user was hearing."
Our language decoding paper (@AmandaLeBel3 @shaileeejain @alex_ander) is out! We found that it is possible to use functional MRI scans to predict the words that a user was hearing or imagining when the scans were collected https://t.co/hpYSGAHNQi
— Jerry Tang (@jerryptang) May 1, 2023
The research goes beyond what had previously been achieved by brain-computer interfaces, according to David Rodriguez-Arias Vailhen, a bioethics professor at Spain's Granada University who was not involved in the study.
It brings us closer to a future in which machines are "able to read minds and transcribe thoughts," he said, warning that this could happen against people's will, such as while they are sleeping.