Researchers at the University of California, Berkeley have developed an algorithm that can be applied to functional magnetic resonance imaging (fMRI) to reconstruct YouTube videos from viewers' brain activity.
Jack Gallant, study leader and a UC Berkeley neuroscientist, and Shinji Nishimoto, a post-doctoral researcher in Gallant's lab, were able to "read the mind" by deciphering and rebuilding the human visual experience.
However, they were also careful to point out that any technology that would allow us to read each other's thoughts and intentions is at least decades away.
The study, which appears in Current Biology this week, marks the first time that anyone has used brain imaging to determine what moving images a person is seeing. It could help researchers model the human visual system on a computer, and it raises the tantalizing prospect of one day being able to use the model to reconstruct other types of dynamic imagery, such as dreams and memories.
Gallant described the results as a "major leap toward reconstructing internal imagery," adding that "we are opening a window into the movies in our minds."
"Once we had this model built, we could read brain activity for that subject and run it backwards through the model to try to uncover what the viewer saw," added Gallant.
"If you can decode movies people saw, you might be able to decode things in the brain that are movie-like but have no real-world analog, like dreams," Gallant said.
"The brain isn't just one big blob of tissue. It actually consists of dozens, even hundreds of modules, each of which does a different thing," Gallant noted, adding that he hopes to examine more visual modules and eventually build models for every part of the visual system.
Whether the technology could also be used to watch people's dreams or memories -- even intentions -- depends on how close those abstract visual experiences are to the real thing. "We simply don't know at this point. But it's our next line of research," noted Gallant.
Nishimoto, the study's lead author, said the results shed light on how the brain understands and processes visual experiences.
"We need to know how the brain works in naturalistic conditions," Nishimoto said in a statement. "For that, we need to first understand how the brain works while we are watching movies."
"Our natural visual experience is like watching a movie," Nishimoto explained. "In order for this technology to have wide applicability, we must understand how the brain processes these dynamic visual experiences."
Ultimately, Nishimoto said, scientists need to understand how the brain processes dynamic visual events that we experience in everyday life.
Gallant's coauthors acted as study subjects, watching YouTube videos inside a magnetic resonance imaging machine for several hours at a time. The team then used the brain imaging data to develop a computer model that matched features of the videos -- like colors, shapes and movements -- with patterns of brain activity.
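The approach described above, fitting a model that maps video features to brain activity and then "running it backwards" to find which clip best explains a new brain response, can be illustrated with a toy sketch. This is a minimal, hypothetical simulation, not the study's actual pipeline: the random feature vectors stand in for the video features (colors, shapes, movements), the linear ridge-regression encoding model and nearest-match decoding are assumptions for illustration, and all dimensions and noise levels are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (invented): video clips described by feature vectors,
# and simulated voxel responses standing in for fMRI data.
n_train, n_test, n_features, n_voxels = 200, 10, 8, 30

# Hypothetical ground truth: each voxel responds to a weighted mix
# of video features, plus measurement noise.
W_true = rng.normal(size=(n_features, n_voxels))
X_train = rng.normal(size=(n_train, n_features))   # training-clip features
Y_train = X_train @ W_true + 0.1 * rng.normal(size=(n_train, n_voxels))

# Fit the encoding model (features -> voxel responses) by ridge regression.
lam = 1.0
W_hat = np.linalg.solve(
    X_train.T @ X_train + lam * np.eye(n_features),
    X_train.T @ Y_train,
)

# "Run the model backwards": given new brain activity, score a library of
# candidate clips by how closely their *predicted* responses match the
# *observed* responses, and pick the best match for each observation.
X_library = rng.normal(size=(n_test, n_features))  # candidate clips
Y_observed = X_library @ W_true + 0.1 * rng.normal(size=(n_test, n_voxels))
Y_predicted = X_library @ W_hat

# Euclidean distance between each observed response and each prediction.
dists = np.linalg.norm(
    Y_observed[:, None, :] - Y_predicted[None, :, :], axis=2
)
decoded = dists.argmin(axis=1)
accuracy = (decoded == np.arange(n_test)).mean()
print(accuracy)  # fraction of test clips correctly identified
```

With this low noise level the decoder identifies nearly every clip, which is the intuition behind the study's design: the better the encoding model predicts brain responses, the more reliably it can be inverted to identify what a viewer saw.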
The researchers say the technology could one day be used to broadcast mental imagery -- the scenes that play out inside our minds independently of vision.
The ultimate goal of this research is to create a computational version of the human brain that "sees" the world as we do. The study also demonstrates an unexpected use for an existing technology. "Everyone always thought it was impossible to recover dynamic brain activity with fMRI," says Gallant.
Other coauthors of the study are Thomas Naselaris with UC Berkeley's Helen Wills Neuroscience Institute; An T. Vu with UC Berkeley's Joint Graduate Group in Bioengineering; and Yuval Benjamini and Professor Bin Yu with the UC Berkeley Department of Statistics.
Cihan news agency. Last modified: 28 September 2011, 10:00.