

Speech and music: two hemispheres are better than one | A study published in Science helps to better understand the brain’s strategy for processing speech and music

By: Jean Hamann, ULaval nouvelles | Translated by CERVO

Speech and music are two forms of acoustic communication processed differently by the brain. Indeed, several studies conducted over the last few decades have shown that speech is processed preferentially in the left hemisphere, whereas the processing of a musical melody takes place mainly in the right hemisphere. A study published today in the journal Science by researchers from Université Laval, McGill University and Université Aix-Marseille provides new evidence that helps us better understand the basis of this asymmetry.

The first author of the study, Philippe Albouy, from the School of Psychology and the CERVO Research Centre at Université Laval, recalls that two hypotheses have been proposed to explain the lateralization of the decoding of acoustic stimuli.

"A 'modular' view proposes that this lateralization stems from the existence of brain regions specifically dedicated to the perception of speech, in the left hemisphere, and to the perception of music, in the right hemisphere. The other hypothesis, which we tested in our study, is that distinct neural networks in each hemisphere are sensitive to particular acoustic properties (frequency, temporal structure) of speech or music."

In an attempt to get a clearer picture, the researchers commissioned a composer to create 10 sentences and 10 melodies, whose combinations yielded 100 short songs. "This approach allows us to study the decoding of both speech and melody using a single signal that carries both types of information," says Albouy.

Using filters, the researchers altered the recordings of these pieces to degrade either their temporal or their spectral dimension, two of the many properties of a sound signal, says Albouy: "Temporal degradation blurs the sound signal over time. Rather than having a signal that is very precise in time, as when we speak, we scramble it, which erases the sharpness of syllables or phonemes," he explains. To produce a spectral degradation of the songs, the researchers applied a filter that blurs the frequencies making up the acoustic signal.
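The general idea of these two degradations can be illustrated with a simple spectrogram-blurring sketch: smearing the spectrogram along the time axis degrades temporal detail (syllable edges), while smearing it along the frequency axis degrades spectral detail (pitch). This is only a minimal analogue of the concept, not the study's actual filtering pipeline; the function name, window size, and test signal are invented for illustration.

```python
import numpy as np
from scipy.signal import stft, istft
from scipy.ndimage import uniform_filter1d

def degrade(x, fs, axis, size=9, nperseg=512):
    """Blur the magnitude spectrogram of x along one axis, keeping phase.

    axis=1 smears over time (erases temporal sharpness, e.g. syllables);
    axis=0 smears over frequency (erases spectral detail, e.g. pitch).
    """
    _, _, Z = stft(x, fs=fs, nperseg=nperseg)
    mag = uniform_filter1d(np.abs(Z), size=size, axis=axis)
    _, y = istft(mag * np.exp(1j * np.angle(Z)), fs=fs, nperseg=nperseg)
    return y[: len(x)]

fs = 16000
t = np.arange(fs) / fs
# Crude "sung syllable" stand-in: a 440 Hz tone gated on and off 4 times/s.
x = np.sin(2 * np.pi * 440 * t) * (1 + np.sign(np.sin(2 * np.pi * 4 * t))) / 2

temporally_degraded = degrade(x, fs, axis=1)  # hurts speech-like (temporal) cues
spectrally_degraded = degrade(x, fs, axis=0)  # hurts melody-like (spectral) cues
```

The two outputs have the same duration as the input but discard different information, mirroring the logic of the stimulus manipulation described above.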

The researchers then recruited 49 subjects to listen to the a cappella songs, presented in pairs, and asked them to compare either the lyrics or the melodies. 

The result: "When temporal information is distorted, participants have difficulty recognizing the lyrics, but not the melodies. Conversely, when spectral information is distorted, they have difficulty recognizing the melodies, but not the lyrics. This shows that the recognition and perception of lyrics and melodies depend on different acoustic characteristics," says Albouy.

The researchers took their investigation a step further by performing a functional magnetic resonance imaging examination of the brain as participants made comparisons between a cappella songs. The images show that speech processing is observed mainly in the left auditory cortex, and melody processing mainly in the right auditory cortex.

"Strictly speaking, there is no region of the brain that specializes in music or speech processing."

– Philippe Albouy

The researchers found that the degradation of the spectral dimension only disrupted the activity of the right auditory cortex, and only for melody processing. The degradation of the temporal dimension only affected the left auditory cortex, and only for speech processing. "The differential response in each hemisphere depends on the type of acoustic information perceived," says Professor Albouy.

These results suggest that there is, strictly speaking, no region of the brain that specializes in music or speech processing. "Humans have developed complementary neural systems that allow the integration of certain acoustic properties of auditory stimuli in each hemisphere," says Albouy. These processing units are therefore sensitive to different properties of sound signals, which explains the lateralization observed in the perception of speech and music.

The other authors of the study are Lucas Benjamin and Robert Zatorre, from McGill University, and Benjamin Morillon, from the Université Aix-Marseille.

Read the article, in French, with musical excerpts on the ULaval Nouvelles website: Parole et musique: deux hémisphères valent mieux qu’un

Read the original research article in Science:

Albouy P, Benjamin L, Morillon B, Zatorre RJ. Distinct sensitivity to spectrotemporal modulation supports brain asymmetry for speech and melody. Science. 2020;367(6481):1043–1047. doi:10.1126/science.aaz3468

For more details on the study, view the video prepared by the researchers.