Researchers decode thought
Researchers at CMU have developed a computer-based model designed to decode the patterns of neural activity within the human brain. Their model is not only able to differentiate between a person’s perceptions of different physical objects, but is also able to predict the brain activation patterns associated with new, untested objects.
For the past six years, Tom Mitchell, a computer scientist at CMU and chair of its Machine Learning department, and Marcel Just, a cognitive neuroscientist here and director of its Center for Cognitive Brain Imaging, have led a team in researching how thoughts are represented within the brain.
The knowledge acquired from this research may lead to an increased understanding of certain neurological conditions.
“With these techniques, we can ask how a person with autism, for example, represents a concept like ‘friendship,’ ‘parent,’ or ‘love,’ and compare that neural representation to those of typical people, in order to see what is different,” Just said.
“It is amazing to think that the [required] technology was not even available 10 or 15 years ago,” Mitchell said. “But now we can use it to study questions that people have been interested in for thousands of years. We are living at the right time in history to be able to study those questions experimentally, instead of just philosophically.”
Their model works by analyzing brain images provided by functional magnetic resonance imaging (fMRI) to determine the characteristic distribution of neural activity across the brain observed in conjunction with specific words.
After developing a computer-based model designed to classify brain images, the team fed it fMRI data recorded from test subjects as they were shown a set of concrete nouns, presented either as words or as pictures.
“In one of our early studies, we showed people words representing tools and words representing buildings,” Mitchell said. “We found that we could train our model so that it could successfully distinguish new tool words from new building words.”
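The article does not give the team's actual classifier, but the train-then-distinguish step it describes can be sketched with synthetic stand-in data and a simple nearest-class-mean rule (all numbers and the classifier choice here are illustrative assumptions, not the researchers' method):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for fMRI data: each "image" is a vector of voxel
# activations. Real data would come from scans; here tool-word and
# building-word patterns differ only by a small mean shift.
n_voxels = 500
tool_imgs = rng.normal(0.2, 1.0, (20, n_voxels))
building_imgs = rng.normal(-0.2, 1.0, (20, n_voxels))

# "Train" on most of the images by averaging each category's pattern.
tool_mean = tool_imgs[:15].mean(axis=0)
building_mean = building_imgs[:15].mean(axis=0)

def classify(img):
    """Assign the label whose mean activation pattern is closer."""
    d_tool = np.linalg.norm(img - tool_mean)
    d_building = np.linalg.norm(img - building_mean)
    return "tool" if d_tool < d_building else "building"

# Test on held-out images the model has never seen, mirroring the
# "new tool words vs. new building words" test described above.
held_out = ([("tool", img) for img in tool_imgs[15:]]
            + [("building", img) for img in building_imgs[15:]])
accuracy = np.mean([classify(img) == label for label, img in held_out])
```

The point of the sketch is the protocol, not the particular classifier: fit on some labeled activation patterns, then score only on patterns withheld from training.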
Having developed a model able to categorize objects correctly approximately 90 percent of the time, the team sought to determine the effect that viewing an object as a picture, as opposed to viewing an object as a word, has on a person’s brain activation patterns. The team therefore trained their model on fMRI data collected from subjects looking at pictures, and then tested the model on fMRI data collected as subjects read corresponding words.
“The accuracy was almost the same,” Mitchell said. “The fact that it doesn’t matter whether we use a word or a picture means that we are really capturing the neural activity associated with the meaning of an item, and not just the [item’s representation].”
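The picture-versus-word test amounts to fitting on one stimulus modality and scoring on the other. A minimal sketch of that idea, under the assumption (the one being tested) that both modalities evoke a shared meaning-related pattern plus noise — the data here are invented:

```python
import numpy as np

rng = np.random.default_rng(1)
n_voxels = 300

# Hypothetical shared "meaning" pattern per object; picture and word
# presentations of the same object differ only by modality noise.
meanings = rng.normal(0.0, 1.0, (10, n_voxels))   # 10 objects
picture_scans = meanings + rng.normal(0, 0.3, meanings.shape)
word_scans = meanings + rng.normal(0, 0.3, meanings.shape)

def nearest_object(scan, references):
    """Return the index of the reference pattern most correlated with scan."""
    return int(np.argmax([np.corrcoef(scan, r)[0, 1] for r in references]))

# Train on pictures (use them as reference patterns), test on words.
correct = sum(nearest_object(w, picture_scans) == i
              for i, w in enumerate(word_scans))
accuracy = correct / len(word_scans)
```

If accuracy stays high across modalities, the patterns being matched reflect the item's meaning rather than its visual form — the conclusion Mitchell draws above.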
In a research paper published in the May 30 issue of Science, the team describes a computational model that can predict the 3-D fMRI images associated with objects for which the model has never seen fMRI data.
The team tested this model by collecting fMRI data for 60 objects, and then training their model with 58 of them.
The model then predicted the brain images associated with the remaining two objects, and these predictions allowed it to identify the actual fMRI images correctly 78 percent of the time.
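Because two objects are held out at a time, the scoring step reduces to a matching question: which predicted image goes with which observed image? One plausible way to decide (a sketch, not necessarily the team's exact similarity measure) is to pick the pairing with the higher combined cosine similarity; random guessing would be right 50 percent of the time, so 78 percent shows the predictions carry real information:

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two activation vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def match_pair(pred1, pred2, obs_a, obs_b):
    """Pair two predicted images with two observed images by choosing
    the assignment with the higher total similarity."""
    straight = cosine(pred1, obs_a) + cosine(pred2, obs_b)
    swapped = cosine(pred1, obs_b) + cosine(pred2, obs_a)
    return "straight" if straight >= swapped else "swapped"

# Illustrative check with made-up vectors: predictions that resemble
# the correct observed images should be paired the right way round.
rng = np.random.default_rng(2)
obs_a, obs_b = rng.normal(size=(2, 100))
pred1 = obs_a + rng.normal(0, 0.2, 100)
pred2 = obs_b + rng.normal(0, 0.2, 100)
result = match_pair(pred1, pred2, obs_a, obs_b)
```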
Having tested the concept behind their computational model, the team began applying it to new words.
They programmed their model to search a trillion-word corpus of text and count the co-occurrences of each concrete noun with each of a set of 25 verbs, in an effort to determine which actions people typically associate with each object.
This co-occurrence data was used to predict fMRI images for new objects by estimating the level of neural activity at 20,000 different points within the brain.
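The description above — 25 verb-based features per noun, each voxel's activity predicted from them — suggests a linear encoding model. The following is a minimal sketch of that idea on synthetic data (the feature values, weights, and sizes are invented; only the 25-feature, many-voxel shape comes from the article):

```python
import numpy as np

rng = np.random.default_rng(3)
n_verbs, n_voxels, n_train = 25, 200, 58

# Hypothetical corpus-derived features: each row holds one training
# noun's co-occurrence counts with the 25 verbs, normalized.
features = rng.random((n_train, n_verbs))
features /= np.linalg.norm(features, axis=1, keepdims=True)

# Assume each voxel's activation is roughly a weighted sum of the
# features; generate synthetic "observed" images from hidden weights.
true_weights = rng.normal(size=(n_verbs, n_voxels))
observed = features @ true_weights + rng.normal(0, 0.05, (n_train, n_voxels))

# Learn one weight per (verb feature, voxel) pair by least squares.
weights, *_ = np.linalg.lstsq(features, observed, rcond=None)

# Predict the brain image for a new, unseen noun from its corpus
# features alone -- no fMRI data for that noun is ever used.
new_features = rng.random(n_verbs)
new_features /= np.linalg.norm(new_features)
predicted_image = new_features @ weights
```

The key property the sketch preserves is that, once the per-voxel weights are learned, a brain image can be predicted for any noun whose verb co-occurrence statistics are available from text.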
The team is currently exploring additional methods of collecting brain image data, such as electroencephalography (EEG).
In addition, the team has begun to collect fMRI images using multiple-word phrases to determine how the neural activation patterns for phrases differ from the neural activation patterns for their component words.
Data has also been collected for abstract concepts in an effort to expand the team’s research beyond concrete nouns.
“One of the next challenges that we are in the process of undertaking is understanding how the brain represents abstract concepts, such as ‘love,’ ‘parent,’ and ‘democracy,’ ” Just said.