AI generates images by examining brain activity, ignites ethical debate

Neuroscientist Yu Takagi believes his team's research has been misunderstood

By Web Desk
A representational image showing the human brain. — Unsplash/File

Yu Takagi, a neuroscientist and assistant professor at Osaka University, was amazed to watch artificial intelligence (AI) decode a subject's brain activity and recreate on screen the images the subject had been looking at, Al Jazeera reported.

Takagi, 34, said: "I still remember when I saw the first [AI-generated] images."

The results were so striking, he added, that he briefly wondered whether he was going crazy.

He said his team used Stable Diffusion (SD), a deep learning AI model developed in Germany, to analyse the brain scans of subjects who were shown up to 10,000 images while inside an MRI machine.

After Takagi and his research partner Shinji Nishimoto built a simple model to translate that brain activity into a readable format, SD generated images that closely resembled the originals.

The AI was able to produce these images even though it had not been shown the pictures beforehand or specifically trained to generate them.
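At a high level, the approach maps measured brain activity into Stable Diffusion's internal image representation and lets the model's pretrained decoder render the picture. The sketch below is a minimal illustration of that idea rather than the authors' actual code: it uses hypothetical placeholder arrays for the fMRI voxel data and image latents, fits a simple ridge regression from voxel activity to SD's latent space, and decodes the predicted latent to pixels with the open-source diffusers library. (The published method additionally predicts text-conditioning embeddings and runs the diffusion process.)

# Minimal sketch (illustration only, not the authors' code): fit a linear
# map from fMRI voxel activity to Stable Diffusion's VAE latent space,
# then decode a predicted latent back into an image.
# All dataset sizes and arrays below are hypothetical placeholders.
import numpy as np
import torch
from sklearn.linear_model import Ridge
from diffusers import AutoencoderKL

n_train, n_voxels = 500, 2000        # placeholder dataset sizes
latent_shape = (4, 64, 64)           # Stable Diffusion v1 latent shape
latent_dim = int(np.prod(latent_shape))

# Stand-ins for real data: voxel responses to viewed images, and the
# VAE latents of those same images.
rng = np.random.default_rng(0)
X_train = rng.standard_normal((n_train, n_voxels)).astype(np.float32)
Z_train = rng.standard_normal((n_train, latent_dim)).astype(np.float32)

# One regularised linear readout per latent dimension.
readout = Ridge(alpha=100.0)
readout.fit(X_train, Z_train)

# Predict a latent from a new brain scan and decode it to pixels.
x_new = rng.standard_normal((1, n_voxels)).astype(np.float32)
z_pred = readout.predict(x_new).reshape(1, *latent_shape)

vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse")
with torch.no_grad():
    # 0.18215 is Stable Diffusion's standard latent scaling factor.
    image = vae.decode(torch.from_numpy(z_pred) / 0.18215).sample
print(image.shape)  # torch.Size([1, 3, 512, 512])

The appeal of this kind of design is that the brain-side model can stay as simple as a linear readout, while the heavy generative lifting is delegated to the pretrained Stable Diffusion components.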

Takagi said they had not expected these results. He stressed, however, that the AI can only generate images a person has actually seen, so the work does not amount to a breakthrough in mind-reading.

He also believes the research has been widely misunderstood.

"We can’t decode imaginations or dreams; we think this is too optimistic. But, of course, there is potential in the future”, he added.

Rapid advancements in AI technology have ignited debate over its future use and the ethical concerns it raises.

Is AI a risk to humans?

Recently, technology leaders including Tesla and Twitter CEO Elon Musk called for a pause on AI development in an open letter citing "profound risks to society and humanity."

Takagi acknowledged serious concerns about the misuse of the technology but said it is not without benefits.

He stated: "For us, privacy issues are the most important thing. If a government or institution can read people's minds, it's a very sensitive issue. There need to be high-level discussions to make sure this can't happen."

Their research has added to a debate already fuelled by rapid advancements in AI, such as OpenAI's ChatGPT, which is capable of giving human-like responses to questions.

The data company Altmetric said the study ranks in the top 1% for engagement among the more than 23 million research outputs tracked to date.

Al Jazeera reported that the study has also been accepted to the Conference on Computer Vision and Pattern Recognition (CVPR), set for June 2023, a common route for legitimising significant breakthroughs in neuroscience.

Takagi believes there are two primary bottlenecks to genuine mind reading: brain-scanning technology and AI itself.

A representational image showing a woman going into an MRI machine. — Unsplash/File

Notwithstanding these developments, scientists maintain that the world is still far from being able to reliably decode visual experiences.

According to the research, subjects need to sit in an fMRI scanner for around 40 hours.

Limitations of mind-reading AI

In a paper published in 2021, researchers at the Korea Advanced Institute of Science and Technology found that “conventional neural interfaces lack chronic recording stability due to the soft and complex nature of neural tissue, which reacts in unusual ways when brought into contact with synthetic interfaces.”

The researchers also noted that current recording techniques generally rely on electrical pathways to transfer signals, which are susceptible to electrical noise from the surroundings. Because this noise significantly disturbs sensitivity, "achieving fine signals from the target region with high sensitivity is not yet an easy feat."

Despite these limitations, Takagi remains hopeful about the pace of AI. “I’m optimistic for AI but I’m not optimistic for brain technology,” he said. “I think this is the consensus among neuroscientists.”

The framework used in the research could also be applied to brain-scanning devices other than MRI, such as EEG, or to brain-computer implants like those being developed by Elon Musk’s Neuralink.

Takagi also noted that apart from clinical uses, the technology could be used for entertainment purposes.

Ethical concerns about AI

Ricardo Silva, a professor of computational neuroscience at University College London and a research fellow at the Alan Turing Institute, told Al Jazeera that "it's hard to predict what a successful clinical application might be at this stage, as it is still very exploratory research."

He added: "This may turn out to be one extra way of developing a marker for Alzheimer's detection and progression evaluation by assessing in which ways one could spot persistent anomalies in images of visual navigation tasks reconstructed from a patient’s brain activity."

He also voiced concerns that the technology could eventually be used for actual mind reading.

"The most pressing issue is to which extent the data collector should be forced to disclose in full detail the uses of the data collected," Silva noted.

"It's one thing to sign up as a way of taking a snapshot of your younger self for, maybe, future clinical use… It's yet another completely different thing to have it used in secondary tasks such as marketing, or worse, used in legal cases against someone's own interests."

Still, Takagi and his partner have no intention of slowing down their research. They are already planning version two of their project, which will focus on improving the technology and applying it to other modalities.

"We are now developing a much better [image] reconstructing technique," Takagi noted while mentioning "and it's happening at a very rapid pace."