In what seems like something out of a sci-fi movie, scientists have plucked the famous Pink Floyd song “Another Brick in the Wall” from individuals’ brains.

Previously, researchers have used electrodes, computer models and brain scans to decode and reconstruct individual words and entire thoughts from people's brain activity (SN: 6/3/23, p. 14).

The new study, published August 15 in PLOS Biology, adds music into the mix, showing that songs can also be decoded from brain activity and revealing how different brain areas pick up an array of acoustic elements. The finding may eventually help improve communication devices used by people with paralysis or other conditions that limit the ability to speak.

Neuroscientist Ludovic Bellier of the University of California, Berkeley, and colleagues decoded the song from data captured by electrodes on the brains of 29 people with epilepsy. While in the hospital being monitored for the disorder, the individuals listened to the 1979 rock song.

People’s nerve cells, particularly those in auditory areas, responded to hearing the song. The electrodes detected not only neural signals associated with words, but also rhythm, harmony and other musical aspects. With that information, the researchers developed a computer model to reconstruct sounds from the brain activity data, and found that the model could produce sounds that resemble the song.

“It’s a real tour de force,” says neuroscientist Robert Zatorre of McGill University in Montreal. “Because you’re recording the activity of neurons directly from the brain, you get very direct information about exactly what the patterns of activity are.”

The study highlights which parts of the brain respond to different elements of music. Take the superior temporal gyri, or STGs, which are located in the lower middle of each side of the brain. Activity in one area within the STGs intensified at the onset of specific sounds, such as when a guitar note played. When vocals were used, activity in another area increased and stayed elevated.

The STG on the right side of the brain, but not the left, seemed to be crucial in decoding music. Removing information from the right STG in the computer model decreased the accuracy of the song reconstruction, the researchers found.

“Music is a core part of human experience,” says Bellier, who has been playing musical instruments since he was 6 years old. “Understanding how the brain processes music can really tell us about human nature. You can go to a country and not understand the language, but be able to enjoy the music.”

Further probing musical perception will probably be difficult because the brain areas that process it are hard to access without invasive methods. And Zatorre wonders about the broader application of the computer model, which was trained on just the Pink Floyd song. In addition to other songs, “does [it] work on other kinds of sounds, like a dog barking or phone ringing?” he asks.

The goal, Bellier says, is to eventually be able to decode and generate natural sounds in addition to music. In the shorter term, incorporating the more musical elements of speech, including pitch and timbre, into brain-computer devices might help individuals with brain lesions, paralysis or other conditions communicate better.