From enabling human-like conversations through advanced chatbots to streamlining many aspects of our daily work, AI is making new strides all over the world. While the clamor over AI replacing humans in many jobs refuses to abate, its incredible potential is also sparking renewed optimism about how the technology could help humanity.

Dreams are an important part of the human experience. Not only are they exciting, but they are sometimes surprising in their strangeness. However, not everyone can interpret their dreams correctly. Much of what we dream is lost on waking, leaving many people wondering whether those images, thoughts, and feelings could ever be captured in physical form.

As neuroscientists around the world tackle the daunting task of turning mental images into something tangible, AI seems to be leading the way. Recent studies have shown that AI can read brain scans and produce accurate reconstructions of the mental images behind them.

Researchers Shinji Nishimoto and Yu Takagi of Osaka University in Japan generated high-resolution images by analyzing brain activity. Technologies like theirs could enable many applications, including discovering how animals perceive the world around them, recording human dreams, and even helping paralyzed people communicate.

Dream interpretation

This is not the first time something of this magnitude has been attempted. Previous studies have reported using AI to read brain scans and create images of landscapes and faces. This, however, is the first time the image-generating algorithm Stable Diffusion has been used. As part of the study, the researchers gave the standard Stable Diffusion system additional training, essentially pairing the textual descriptions of thousands of photos with the brain patterns recorded when participants in brain-imaging studies viewed those same images.
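To make the pairing idea concrete, here is a minimal sketch of how such a mapping could look in code: a simple model learns to translate fMRI voxel patterns into the text-embedding space that a diffusion model is conditioned on. Everything in it is an assumption for illustration (the array shapes, the random placeholder data, the choice of ridge regression); it is not the authors' implementation.

```python
# Illustrative sketch only (hypothetical shapes and placeholder data,
# not the study's code): learn a linear map from fMRI voxel patterns
# to the text-embedding space a diffusion model is conditioned on.
import numpy as np
from sklearn.linear_model import Ridge

n_scans, n_voxels, embed_dim = 1000, 800, 768
rng = np.random.default_rng(0)

# X: one voxel activity pattern per viewed photo (placeholder data)
X = rng.standard_normal((n_scans, n_voxels))
# Y: an embedding of each photo's caption, e.g. from a CLIP-style text
# encoder (placeholder data standing in for real embeddings)
Y = rng.standard_normal((n_scans, embed_dim))

# Ridge regression keeps the fit stable when voxels are numerous and noisy
decoder = Ridge(alpha=100.0)
decoder.fit(X, Y)

# A new scan can now be projected into the conditioning space; the
# predicted embedding would then steer the diffusion model's generator
new_scan = rng.standard_normal((1, n_voxels))
predicted_embedding = decoder.predict(new_scan)
print(predicted_embedding.shape)  # (1, 768)
```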

While previous AI algorithms had to be trained on large datasets to decode brain scans, Stable Diffusion can achieve the task with minimal training, essentially by incorporating photo captions into its algorithm. Ariel Goldstein, a neuroscientist at Princeton University who was not involved in the project, called it a novel method that combines textual and visual information to decode the brain.

Decoding brain activity

The study suggests that the AI algorithm draws on information from different brain regions involved in image perception, such as the occipital and temporal lobes. The system interprets this information from functional magnetic resonance imaging, or fMRI, scans of the brain.

According to the researchers, when a person views an image, the temporal lobes register information about its contents, while the occipital lobe registers information about layout and perspective. All of this information is recorded with fMRI, which detects changes in blood flow in the active areas of the brain. The recorded information, the researchers say, can then be converted by AI into an imitation of the image.
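As a rough illustration of that two-region description, the sketch below decodes each lobe's voxels into a different kind of target. The region labels, array shapes, and data are hypothetical placeholders, not the study's actual format.

```python
# Rough sketch of the two-stream idea above (made-up region labels and
# targets, not the study's data): each lobe's voxels are decoded into
# a different kind of information.
import numpy as np
from sklearn.linear_model import Ridge

n_scans, n_voxels = 1000, 800
rng = np.random.default_rng(0)
scans = rng.standard_normal((n_scans, n_voxels))        # placeholder fMRI data
region = rng.choice(["occipital", "temporal"], size=n_voxels)

# Occipital voxels -> layout/perspective features (e.g. an image latent);
# temporal voxels -> content features (e.g. a caption embedding)
layout_targets = rng.standard_normal((n_scans, 256))    # placeholder targets
content_targets = rng.standard_normal((n_scans, 768))

layout_decoder = Ridge(alpha=100.0).fit(scans[:, region == "occipital"], layout_targets)
content_decoder = Ridge(alpha=100.0).fit(scans[:, region == "temporal"], content_targets)
```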

The researchers trained the add-on Stable Diffusion algorithm using an online dataset provided by the University of Minnesota. The dataset consists of brain scans from four participants who each viewed 10,000 photos. A portion of each participant's brain scans was withheld from training and used afterward to test the AI system.
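The held-out evaluation described here follows a standard train/test split. A minimal sketch, assuming hypothetical array shapes rather than the dataset's real layout:

```python
# Minimal sketch of the held-out evaluation described above: part of
# each participant's scans is kept aside so the system is tested on
# brain activity it never trained on. Shapes are illustrative.
import numpy as np

rng = np.random.default_rng(0)
# Four participants, each with 10,000 scan patterns (placeholder data)
participants = {f"subject_{i}": rng.standard_normal((10_000, 200))
                for i in range(1, 5)}

train, test = {}, {}
for name, scans in participants.items():
    order = rng.permutation(len(scans))
    cut = int(0.9 * len(scans))      # e.g. 90% for training, 10% held out
    train[name] = scans[order[:cut]]
    test[name] = scans[order[cut:]]
    print(name, train[name].shape, test[name].shape)
```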