Earlier studies used AI to analyze brain scans and reconstruct images of things people had just seen, such as faces or landscapes. This new study used Stable Diffusion, an AI program developed by a group in Germany and released publicly in 2022, which works much like other AI systems that generate images from text.
For this study, scientists in Japan gave Stable Diffusion additional training. They linked descriptive words to thousands of photos and recorded participants' brain activity as they viewed those photos inside a brain scanner. This extra training improved Stable Diffusion's ability to interpret brain activity.
AI can Access Different Parts of the Brain to Recreate Dreams
The AI program draws on information from the brain regions that process visual information, such as the occipital and temporal lobes. It analyzes brain scans obtained with functional MRI (fMRI), a machine that shows which parts of the brain are active.
When a person looks at a picture, the temporal lobes mainly register what is in the image (such as people or objects), while the occipital lobe mainly registers its spatial layout, such as where things are positioned. Both kinds of information are captured in the brain scan, and the AI uses them to generate an image resembling what the person saw.
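To make this idea concrete, the sketch below shows one way such a decoder could be set up: two separate linear (ridge) regression models, one mapping occipital activity to an image-layout representation and one mapping temporal activity to a semantic representation. The data, dimensions, and variable names are synthetic placeholders for illustration, not the study's actual pipeline.

```python
# Hypothetical sketch: decode two kinds of information from fMRI voxels
# with simple linear (ridge) regression, one model per signal type.
# All data here is synthetic; names and shapes are illustrative only.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

n_images = 500           # training photos the participant viewed
occipital_voxels = 800   # voxels assumed to carry layout information
temporal_voxels = 1200   # voxels assumed to carry content information
layout_dim = 256         # size of the image-layout representation (assumed)
semantic_dim = 768       # size of the caption-embedding representation (assumed)

# Synthetic stand-ins for recorded brain activity and image representations
X_occ = rng.normal(size=(n_images, occipital_voxels))
X_temp = rng.normal(size=(n_images, temporal_voxels))
y_layout = rng.normal(size=(n_images, layout_dim))
y_semantics = rng.normal(size=(n_images, semantic_dim))

# One linear decoder per brain-region / representation pair
layout_decoder = Ridge(alpha=100.0).fit(X_occ, y_layout)
semantic_decoder = Ridge(alpha=100.0).fit(X_temp, y_semantics)

# At test time, a held-out scan is turned into the two conditioning signals
new_scan_occ = rng.normal(size=(1, occipital_voxels))
new_scan_temp = rng.normal(size=(1, temporal_voxels))
predicted_layout = layout_decoder.predict(new_scan_occ)
predicted_semantics = semantic_decoder.predict(new_scan_temp)
```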
In this study, the scientists trained the system on brain scans from four individuals recorded as they viewed a variety of photos, setting some scans aside for later testing rather than using them in training. The AI starts from random noise, like TV static, and gradually refines it into a clear image by comparing the person's brain activity with the patterns learned during training. The result is an image that reproduces both the content of the original photo and its spatial arrangement.
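The denoising step can be pictured as repeatedly nudging a noisy image toward whatever the decoded brain signals indicate. The toy loop below only mimics that refinement idea; a real diffusion model uses a trained neural network at each step, and the decoded target here is a made-up placeholder.

```python
# Toy illustration of the "start from noise, refine step by step" idea.
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for the layout decoded from a held-out brain scan (hypothetical)
decoded_layout = rng.normal(size=(64, 64))

image = rng.normal(size=(64, 64))   # start from pure noise, like TV static
steps = 50
for t in range(1, steps + 1):
    blend = t / steps
    # Each pass keeps less of the noise and more of what the decoded
    # brain activity says the picture should look like.
    image = (1 - blend) * image + blend * decoded_layout

# After the final step the noise is gone and only the decoded content remains
print(np.allclose(image, decoded_layout))  # True
```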
AI might be Good but not the Best
The AI represented spatial arrangements well but struggled with specific objects such as a clock tower, producing something abstract instead. To address this, the scientists incorporated keywords from the photo captions: if a training photo featured a clock tower, that brain activity pattern was associated with the keyword, so when a similar pattern appeared during testing, the AI included a clock tower in the generated image. This refinement produced images that closely resembled the originals (1).
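One simple way to picture the keyword step is to compare the semantics decoded from a new scan against the embeddings of the training captions and pass the best match to the image generator as a text prompt. The sketch below uses made-up embeddings and captions purely for illustration; it is not the study's method.

```python
# Hypothetical sketch of the keyword refinement: pick the training caption
# whose embedding best matches the semantics decoded from a new scan.
import numpy as np

rng = np.random.default_rng(2)
captions = ["a clock tower", "a toy bear", "an airplane on a runway"]
caption_embeddings = rng.normal(size=(len(captions), 768))  # stand-ins

predicted_semantics = rng.normal(size=(768,))  # from the decoder sketch above

def cosine(a, b):
    """Cosine similarity between two vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

scores = [cosine(predicted_semantics, e) for e in caption_embeddings]
best = captions[int(np.argmax(scores))]
print("prompt passed to the image generator:", best)
```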
The AI was then tested with additional brain scans from the same individuals and produced accurate images of new subjects, such as a toy bear or an airplane. At present, however, it only works with scans from the people whose data it was trained on; applying it to other individuals would require further training. Even so, this could mark a significant step toward understanding how our brains work and toward generating images from our thoughts and dreams.
“With further refinements, this technology could unlock a deeper understanding of our thoughts and dreams, and even shed light on how different species perceive the world.”
Reference:
- Brain Recording, Mind-Reading, and Neurotechnology: Ethical Issues from Consumer Devices to Brain-Based Speech Decoding – (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7417394/)
Source: Medindia