Scientists have trained a computer to analyze the brain
activity of someone listening to music and, based only on those neuronal
patterns, re-create the song.
The recently published research produced a recognizable, if muffled, version of Pink Floyd’s 1979 song, “Another Brick in the Wall (Part 1).”
Before this, researchers had figured out how to use brain
activity to reconstruct music with similar features to the song someone was
listening to. Now, “you can actually listen to the brain and restore the music
that person heard,” said Gerwin Schalk, a neuroscientist who directs a research
lab in Shanghai and collected data for this study.
Researchers also found a spot in the brain’s temporal lobe
that reacted when volunteers heard the 16th notes of the song’s guitar groove.
They proposed that this particular area might be involved in our perception of
rhythm.
The findings offer a first step toward creating more
expressive devices to assist people who can’t speak. Over the past few years,
scientists have made major breakthroughs in extracting words from the
electrical signals produced by the brains of people with muscle paralysis when
they attempt to speak.
But a significant amount of the information conveyed through
speech comes from what linguists call “prosodic” elements, such as tone — “the
things that make us a lively speaker and not a robot,” Schalk said.
By better understanding how the brain processes music, scientists hope to build new “speech prosthetics” for people with neurological diseases that affect vocal production. The aim is for these devices to relay not only what someone is trying to say but also some of the musicality, rhythm, and emotion of natural speech.
To collect data for the study, researchers recorded from the brains of 29 epilepsy patients at Albany Medical Center in New York state between 2009 and 2015.
As part of their epilepsy treatment, patients had a net of nail-like electrodes implanted in their brains. This created a rare opportunity for neuroscientists to record their brain activity while they listened to music.
The team chose the Pink Floyd song partly because older
patients liked it. “If they said, ‘I can’t listen to this garbage,’” then the
data would have been terrible, Schalk said. Plus, the song features 41 seconds
of lyrics and 2 1/2 minutes of moody instrumentals, a combination that was
useful for teasing out how the brain processes words versus melody.
Robert Knight, a neuroscientist at the University of
California, Berkeley, and the leader of the team, asked one of his postdoctoral
fellows, Ludovic Bellier, to try to use the data set to reconstruct the music
“because he was in a band,” Knight said. The lab had done similar work reconstructing
words.
By analyzing data from every patient, Bellier identified which parts of the brain lit up during the song and which frequencies those areas were responding to.
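At a high level, that kind of per-electrode analysis can be pictured as a correlation search: for each electrode, find the frequency band of the song whose energy its activity tracks most closely over time. The Python sketch below is purely illustrative, with made-up array shapes standing in for the study’s data; the paper’s actual methods were more sophisticated.

```python
import numpy as np

# Placeholder data (NOT the study's recordings): rows are time points,
# columns are electrodes / frequency bands of the song's spectrogram.
rng = np.random.default_rng(0)
neural = rng.standard_normal((3000, 64))        # hypothetical electrode activity
spectrogram = rng.standard_normal((3000, 128))  # hypothetical per-band song energy

# For each electrode, find the song frequency band it tracks most closely,
# using a plain Pearson correlation across time.
n_elec, n_bands = neural.shape[1], spectrogram.shape[1]
corr = np.empty((n_elec, n_bands))
for e in range(n_elec):
    for b in range(n_bands):
        corr[e, b] = np.corrcoef(neural[:, e], spectrogram[:, b])[0, 1]

best_band = corr.argmax(axis=1)   # band each electrode responds to most
strength = corr.max(axis=1)       # how strongly it responds
for e in np.argsort(strength)[::-1][:5]:
    print(f"electrode {e}: best band {best_band[e]} (r = {strength[e]:.2f})")
```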
Much like how the resolution of an image depends on its
number of pixels, the quality of an audio recording depends on the number of
frequencies it can represent. To legibly reconstruct “Another Brick in the
Wall,” researchers used 128 frequency bands. That meant training 128 computer
models, which collectively brought the song into focus.
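To make the “128 models” idea concrete: each model can be a simple regression that predicts one frequency band’s loudness over time from the neural signals, and stacking the 128 predictions yields a spectrogram that can be turned back into audio. The sketch below is a minimal, assumed version of that setup using scikit-learn’s ridge regression and placeholder data, not the study’s actual pipeline.

```python
import numpy as np
from sklearn.linear_model import Ridge

# Placeholder data again: neural features over time -> 128-band spectrogram.
rng = np.random.default_rng(1)
neural = rng.standard_normal((3000, 64))        # hypothetical neural features
spectrogram = rng.standard_normal((3000, 128))  # target: one column per frequency band

# One regression model per frequency band, as the article describes.
models = [Ridge(alpha=1.0).fit(neural, spectrogram[:, band])
          for band in range(spectrogram.shape[1])]

# Stacking the 128 per-band predictions rebuilds a full spectrogram,
# which standard signal-processing tools can convert back into audio.
reconstructed = np.column_stack([m.predict(neural) for m in models])
print(reconstructed.shape)  # (3000, 128)
```

A single multi-output model could fit all 128 bands at once; looping per band simply mirrors the one-model-per-frequency description.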
The researchers then ran recordings from four individual brains through the models. The resulting re-creations were all recognizably the Pink Floyd song but had noticeable differences. Differences in electrode placement probably explain most of the variance, the researchers said, but personal characteristics, such as whether a patient was a musician, also mattered.
The data captured fine-grained patterns from individual
clusters of brain cells. But the approach was also limited: Scientists could
see brain activity only where doctors had placed electrodes to search for
seizures. That’s part of why the re-created songs sound like they are being
played underwater.
Other groups are doing similar experiments using noninvasive
brain scanners, such as functional magnetic resonance imaging, or fMRI, which
gives a less detailed measure of activity but scans across the entire brain.
Yu Takagi, a neuroscientist at Osaka University,
collaborated this year with scientists at Google to use fMRI data to identify
the genre of music that a volunteer was listening to while in a brain scanner.
Takagi said the new study was significant because it showed
that meaningful data could be collected from a relatively small number of
neuronal clusters. “You don’t need that many electrodes to make something
quality,” he said.
The new research also underscored what makes music different
from speech. When the study volunteers heard a song, the right side of their
brains tended to be more involved than the left, whereas the opposite happens
when people hear plain speech. This finding, replicating previous research,
helps explain why some stroke patients who can’t speak well can clearly sing
sentences.
“It’s a technical tour de force,” said Robert Zatorre, a
neuroscientist at McGill University whose lab established how the brain
separates lyrics from music using brain scans. But to play back a song from
someone’s head? “That’s a very interesting contribution,” he said.