The brain activity recorded while subjects viewed the first set of clips was fed into a computer program that learned, second by second, to associate visual patterns in the movie with the corresponding brain activity. Brain activity evoked by the second set of clips was used to test the movie reconstruction algorithm. This was done by feeding 18 million seconds of random YouTube videos into the computer program so that it could predict the brain activity that each film clip would most likely evoke in each subject. Finally, the 100 clips that the computer program decided were most similar to the clip that the subject had probably seen were merged to produce a blurry yet continuous reconstruction of the original movie.
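The pipeline described above, learn a mapping from movie features to brain activity, predict the activity a large library of clips would evoke, then average the best matches, can be sketched roughly in code. This is a hypothetical toy version with NumPy and synthetic data, not the researchers' actual model: the feature dimensions, ridge regression step, and correlation-based ranking are all illustrative assumptions.

```python
# Toy sketch of an encoding-model decoder, loosely modeled on the
# approach described above. All data here is synthetic.
import numpy as np

rng = np.random.default_rng(0)
n_train, n_voxels, n_feat = 200, 50, 30

# 1) Training: learn a linear encoding model that maps each clip's
#    visual features to the brain activity it evokes.
X_train = rng.standard_normal((n_train, n_feat))          # clip features
W_true = rng.standard_normal((n_feat, n_voxels))          # "ground truth" brain response
Y_train = X_train @ W_true + 0.1 * rng.standard_normal((n_train, n_voxels))

lam = 1.0  # ridge penalty (assumed regularization choice)
W = np.linalg.solve(X_train.T @ X_train + lam * np.eye(n_feat),
                    X_train.T @ Y_train)

# 2) Prediction: predict the activity each clip in a large candidate
#    library (standing in for the random YouTube videos) would evoke.
candidates = rng.standard_normal((1000, n_feat))
predicted = candidates @ W

# 3) Decoding: given observed activity from an unseen clip, rank the
#    candidates by how well their predicted activity matches it.
observed = candidates[42] @ W_true                        # activity from the "true" clip
corrs = np.array([np.corrcoef(p, observed)[0, 1] for p in predicted])
top100 = np.argsort(corrs)[::-1][:100]

# 4) Reconstruction: merge the 100 best-matching candidates (here,
#    averaging their feature vectors; the study averaged the clips
#    themselves, yielding the blurry composite video).
reconstruction = candidates[top100].mean(axis=0)
```

Averaging many near-matches rather than picking a single winner is what produces the characteristic blur: each candidate clip is only roughly similar, but their shared structure survives the average.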
Researchers at UC Berkeley used functional magnetic resonance imaging (fMRI) and some seriously complex computational models to figure out what images our minds create when presented with movie and TV clips. So far, the process is only able to reconstruct the neural equivalents of things people have already seen, but eventually it might be possible to construct the images people see in dreams and memories.
This could also open up new ways to communicate with those whose speech is severely impaired, such as stroke victims, patients with neurological diseases, and even people in comas. It’s probably worth stressing that we’re decades away from using this tech to read people’s thoughts and intentions, just in case that’s something you’re worried about.
The researchers developed this technique by showing study participants a series of black-and-white photographs while scanning their brains. By comparing the photographs with the scans, they were able to engineer a way to recognize an image from how the brain responded to it. With that basic principle in place, it was then only a question of building a sufficiently complex computer model to decode moving, color images like those in the video above.
Gizmodo has the latest installment in the ongoing end of privacy saga:
At the heart of the controversy over “body scanners” is a promise: The images of our naked bodies will never be public. U.S. Marshals in a Florida Federal courthouse saved 35,000 images on their scanner. These are those images.
We understand that it will be controversial to release these photographs. But identifying features have been eliminated. And fortunately for those who walked through the scanner in Florida last year, this mismanaged machine used the less embarrassing imaging technique.
Yet the leaking of these photographs demonstrates the security limitations of not just this particular machine, but millimeter wave and x-ray backscatter body scanners operated by federal employees in our courthouses and by TSA officers in airports across the country. That we can see these images today almost guarantees that others will be seeing similar images in the future. If you’re lucky, it might even be a picture of you or your family.
While the fidelity of the scans from this machine is surprisingly low, especially compared to the higher-resolution "naked scanners" that use the potentially harmful x-ray backscatter technology, the TSA and other government agencies have repeatedly touted the quality of "Advanced Imaging Technology" while simultaneously assuring travelers that operators "cannot store, print, transmit or save the image." According to the TSA, and of course other agencies, images from the scanners are "automatically deleted from the system after it is cleared by the remotely located security officer." Whatever the stated policy, it is clearly trivial for operators to save images and remove them for distribution if they choose to ignore the guidelines, and other employees could just as easily remove images that were inappropriately, if accidentally, stored.
Case in point: these sample images were removed from the machine in Orlando by the U.S. Marshals, in response to a FOIA request, before the machine was sent back to its manufacturer, images intact.
We look forward to seeing your next vacation photos.