Check this out: the above video purports to show the images and “movies” that play inside our heads.
For now, the technique can only reconstruct what subjects see while watching video clips. But imagine it one day allowing visualization of one’s thoughts or dreams…
Here’s how it works, according to the Berkeley researchers who ran the experiment:
The brain activity recorded while subjects viewed the first set of clips was fed into a computer program that learned, second by second, to associate visual patterns in the movie with the corresponding brain activity. Brain activity evoked by the second set of clips was used to test the movie reconstruction algorithm. This was done by feeding 18 million seconds of random YouTube videos into the computer program so that it could predict the brain activity that each film clip would most likely evoke in each subject. Finally, the 100 clips that the computer program decided were most similar to the clip that the subject had probably seen were merged to produce a blurry yet continuous reconstruction of the original movie.
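The final step described above — ranking a large library of candidate clips by how well their predicted brain activity matches the observed activity, then averaging the best matches — can be sketched in code. This is a toy illustration with random stand-in data, not the researchers' actual pipeline; the array sizes, the correlation-based scoring, and the 8×8 “frames” are all assumptions for the sake of a runnable example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: 500 candidate clips, 200 voxels, tiny 8x8 "frames".
n_clips, n_voxels = 500, 200
# predicted_activity[i]: the response the encoding model predicts clip i evokes
predicted_activity = rng.standard_normal((n_clips, n_voxels))
# clip_frames[i]: a stand-in frame for candidate clip i
clip_frames = rng.random((n_clips, 8, 8))
# observed fMRI response to the clip the subject actually watched
observed = rng.standard_normal(n_voxels)

def reconstruct(observed, predicted_activity, clip_frames, k=100):
    """Rank candidate clips by correlation between their predicted brain
    activity and the observed activity, then average the top k frames
    into a blurry composite reconstruction."""
    obs = (observed - observed.mean()) / observed.std()
    pred = predicted_activity - predicted_activity.mean(axis=1, keepdims=True)
    pred /= pred.std(axis=1, keepdims=True)
    scores = pred @ obs / len(obs)          # per-clip correlation with the scan
    top_k = np.argsort(scores)[::-1][:k]    # indices of the k best matches
    return clip_frames[top_k].mean(axis=0)  # "blurry yet continuous" average

blurry = reconstruct(observed, predicted_activity, clip_frames)
print(blurry.shape)  # (8, 8)
```

Averaging the best matches is why the published reconstructions look smeared: no single YouTube clip matches the stimulus exactly, so the composite captures the rough shapes and motion rather than fine detail.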
Researchers at UC Berkeley used functional magnetic resonance imaging (fMRI) and some seriously complex computational models to figure out what images our minds create when presented with movie and TV clips. So far, the process can only reconstruct things people have already seen, but eventually it might be possible to reconstruct the images people see in dreams and memories.
This could also open up new ways to communicate with those whose speech is severely impaired, such as stroke victims, patients with neurological diseases, and even people in comas. It’s probably worth stressing that we’re decades away from using this tech to read people’s thoughts and intentions, just in case that’s something you’re worried about.
The researchers developed this technique by showing study participants a series of black-and-white photographs while scanning their brains. By comparing the photographs with the scans, they were able to build a model that could identify which image a person was looking at from the brain’s response alone. With that basic principle in place, it was then a question of building a sufficiently complex computer model to decode moving, color images like those in the video above.
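That basic principle — learn a mapping from images to brain responses, then identify a new image by checking which candidate’s predicted response best matches the scan — can be illustrated with simulated data. Everything here is a toy assumption (fake voxel responses generated from a linear model, least-squares fitting); it only demonstrates the identification idea, not the researchers’ actual models.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup: 300 training photos (16x16 grayscale, flattened to 256
# pixels) and 100 simulated voxels responding linearly to pixels, plus noise.
n_train, n_pix, n_vox = 300, 256, 100
images = rng.random((n_train, n_pix))
true_weights = rng.standard_normal((n_pix, n_vox))
responses = images @ true_weights + 0.1 * rng.standard_normal((n_train, n_vox))

# "Compare the photographs with the scans": fit a least-squares encoding
# model that maps image pixels to voxel responses.
weights, *_ = np.linalg.lstsq(images, responses, rcond=None)

# Identification: the subject views one of 20 new candidate photos; pick the
# candidate whose predicted response is closest to the observed scan.
candidates = rng.random((20, n_pix))
shown = 7                                   # index of the photo actually shown
scan = candidates[shown] @ true_weights + 0.1 * rng.standard_normal(n_vox)

predicted = candidates @ weights            # predicted response per candidate
errors = np.linalg.norm(predicted - scan, axis=1)
guess = int(np.argmin(errors))              # best-matching candidate
print(guess)
```

In this low-noise toy setup the model recovers the shown photo’s index; in real fMRI data the signal is far noisier, which is part of why scaling the idea up to movies required much more elaborate models.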