Artificial intelligence (AI) has recently led to a breakthrough in brain scanning, with a decoder program at the University of Texas successfully translating brain activity into a stream of text. By analyzing only fMRI images, the program was able to reconstruct the gist of speech and stories heard by a person with astonishing accuracy.

The concept of mind-reading technology is contentious; however, researchers hope that this unexpectedly accurate innovation could soon be used to help patients who struggle to communicate, such as survivors of strokes and motor neuron disease.

fMRI scans have long been used to diagnose brain injuries and diseases. Being able to interpret these scans and convert them into meaning would be a huge leap forward in helping brain injury victims communicate.

While the technology is able to map out the brain in comprehensive detail, there are fundamental limits to scanning and deciphering brain activity in real time, because fMRI tracks the blood flow that follows neural activity, and these changes in blood flow lag seconds behind the firing of the neurons themselves.

By using large language models, the AI program was able to match patterns of neural activity with conceptual meanings, allowing scientists to map brain responses onto whole phrases and sentences. For example, when participants listened to the words "I don't have my driver's license yet", the AI program translated this as "she has not even started to learn to drive yet".
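For readers curious about the mechanics: broadly, such a decoder works by having a language model propose candidate word sequences while a second, separately trained "encoding model" predicts the fMRI activity each candidate should evoke; the candidate whose predicted activity best matches the real scan is kept. The toy Python sketch below illustrates only that matching loop; every function, word list, and number in it is a hypothetical stand-in, not the researchers' actual code.

```python
import numpy as np

VOXELS = 50  # hypothetical number of fMRI voxels in a toy "scan"

def language_model_continuations(prefix):
    # Stand-in for a large language model: propose plausible
    # next words given the words decoded so far.
    return [prefix + [w] for w in ["drive", "learn", "license", "yet"]]

def encoding_model(words):
    # Stand-in for a trained encoding model: predict the brain
    # activity a word sequence should evoke. Here each word simply
    # gets a fixed pseudo-random activity pattern.
    pred = np.zeros(VOXELS)
    for w in words:
        seed = abs(hash(w)) % (2**32)
        pred += np.random.default_rng(seed).standard_normal(VOXELS)
    return pred / max(len(words), 1)

def decode_step(candidates, observed):
    # Keep the candidate whose predicted activity correlates
    # best with the observed fMRI activity.
    return max(candidates,
               key=lambda c: np.corrcoef(encoding_model(c), observed)[0, 1])

# Toy "observed" scan: activity resembling the word "learn", plus noise.
rng = np.random.default_rng(0)
observed = encoding_model(["learn"]) + 0.1 * rng.standard_normal(VOXELS)

best = decode_step(language_model_continuations(["started", "to"]), observed)
print("decoded so far:", " ".join(best))
```

Because the match is made at the level of predicted brain activity rather than exact words, this kind of decoder naturally recovers paraphrases of the original sentence rather than a word-for-word transcript, which is consistent with the "driver's license" example above.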

Eerily, the program may offer an insight into our most primal, emotional, and brutally honest internal dialogues, with streaks of judgement and impatience laid bare in the brain's processing of a seemingly innocuous statement.

Our brains work in gists and concepts, and often generalize. In another brain study, when subjects were asked to picture the president of the United States, read his name, or hear him speak, the same area of the brain would light up each time, suggesting that each concept has a dedicated neural pathway that recognizes and responds to it.

In our brains, billions of neurons are wired together by trillions of synaptic connections, and somewhere within that network is activity that lights up in an fMRI scan for your favorite food, the comedian you watched on television last night, the exam you sat more than 10 years ago, and whatever else you have processed as your reality in life.

Currently, this semantic reconstruction of continuous language from non-invasive brain scans has difficulty distinguishing between pronouns such as "he" and "she", as well as between singular and plural terms. This may be because the brain operates so loosely and conceptually, but it may also be that the AI decoder has not yet managed to identify the subtle differences in brain activity that distinguish such terms.

A future in which paralysed patients or stroke victims are able to communicate with their loved ones without speech, writing, or signs now looks achievable as this technology develops. To be able to quantify a continuous stream of thought and extract it is a huge step forward. As AI brain-wave decoders become more prominent and reach mainstream attention in the coming years, regulators will have to keep a close eye on the technology to ensure that it is not abused for nefarious purposes.


Barry He is a London-based columnist for China Daily.