The capacity to engage with and comprehend music spans nearly every human society. While other creatures also display musical behaviors (think bird song, humpback whale calls, or bonobo vocalizations), our musical cognition appears to be evolutionarily distinct within the animal kingdom.
A new study has given us more insight into the brain’s relationship with music, finding that singing has a distinct neural signature when compared to speech or instrumental music.
Getting a look at the brain is no easy feat, however.
To get a precise picture of what happens in the brain as people hear sounds, researchers used a technique called electrocorticography (ECoG), in which electrodes are placed inside the skull to record electrical activity directly from the brain.
The data gathered from ECoG are far more precise than those from other techniques for measuring brain activity: unlike electroencephalography (EEG), the electrodes sit directly on the brain rather than on the scalp, and unlike functional magnetic resonance imaging (fMRI), they record electrical activity itself rather than blood flow, which is only a proxy for neural activity.
Obviously, applying electrodes directly onto the brain is an invasive procedure, so researchers gathered their data over several years from epilepsy patients who were already undergoing surgeries to treat seizures.
Electrodes are typically implanted in epilepsy patients to monitor their neural activity for days before surgery. During that time, if the patients agree, they can take part in studies in which their brain activity is recorded while they perform certain tasks.
In this case, the task involved listening to 165 commonly heard sounds, ranging from the vibration of a mobile phone to pouring liquid to a man speaking to typing. Included in this mix of sounds was music with singing and instrumental music without any vocalization.
Fascinatingly, the researchers found a distinct population of neurons that responded specifically to singing, one that differed from the neural populations representing instrumental music and speech more generally.
“Our key novel finding is that one of these components responded nearly exclusively to music with singing. This finding indicates that the human brain contains a neural population specific to the analysis of song,” the authors write.
“These findings suggest that music is represented by multiple distinct neural populations, selective for different aspects of music, at least one of which responds specifically to singing,” they add.
In the paper, the researchers speculate about the characteristics of singing that make it a distinct category warranting its own neural signature.
“Singing is distinguished from speech by its melodic intonation contour and rhythmicity and from instrumental music by vocal resonances and other voice-specific structure. Thus, a natural hypothesis is that song-selective neural populations nonlinearly integrate across multiple features that differentiate singing from speech and music, such as melodic intonation and vocal resonances,” suggest the authors.
The researchers combined their ECoG data with fMRI data from a previous study that used the same methodology, giving the researchers a better idea of the location of the neural activity.
“This way of combining ECoG and fMRI is a significant methodological advance,” says Josh McDermott, an MIT cognitive neuroscientist who co-authored the study.
The research gives neuroscientists a better idea of how our brains represent the nuances of music. And although questions remain, such as how neural selectivity for music and song arose over the course of our development or evolution, the novel technique of combining ECoG and fMRI data could help future studies answer them.
This research has been published in Current Biology.