The Speech/Song Illusion

This post was contributed by Rosy Edey, PhD student and graduate teaching assistant in the Department of Psychological Sciences. Rosy attended a Birkbeck Science Week 2016 event on Thursday 14 April – ‘Talk: The Speech/Song Illusion’ (led by Dr Adam Tierney).


Sadly, all good things must come to an end, and the finale of Birkbeck’s 2016 Science Week was a compelling musical one, given by one of the newest members of Birkbeck’s Psychology Department, Dr Adam Tierney. In a humorous and engaging way, Adam took the audience through the scientific story of the “evolution of music”. Music seems almost completely purposeless, and, let’s face it, a little bit strange – so why do we love it so much?

What is music?

Adam dated the first known musical instrument (an intricate bone flute) to around 40,000 years ago – well before the first record of written language (5,000 years ago), but much later than a good estimate of when we first evolved to make vocalisations (400,000 years ago). The absolute origin of music is obviously very difficult to pinpoint, as it is possible (and probable) that long before we built tools like the bone flute to make music, we were singing our hearts out in the moonlight.

This questionable timing of the birth of music raises the question: which came first, speech or music? Whichever it was, if one evolved from the other we would expect music and language to share similar characteristics. Indeed, Adam presented evidence that the huge variety of spoken languages and of music from around the world share common features – which at first seemed very unlikely, given the diversity of music he demonstrated through a bizarre example of washing-machine “music” and a selection of songs from the Voyager I and II golden records.

These shared acoustic qualities included alternating beat patterns, descending melodic contours, and increases in final phrase duration. Using the very complicated-sounding “Normalised Pairwise Variability Index” (i.e. jargon for a measure of rhythmic alternation, or of paired stress in phrases), Adam showed there were also commonalities between language and music within and between specific countries (basically, English music sounds English and French music sounds French, but English music/language does not sound like French music/language). All of these beautiful subtleties hidden in the acoustics of spoken word and music provide vast amounts of data, which signal meaning to the listener. These underlying similarities do hint that music and speech are distant cousins.
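For the curious, the index itself is simpler than its name suggests: it averages how different each duration is from the next one, normalised by their mean. A minimal sketch in Python, assuming the input is a list of syllable (or note) durations – the function name and interface are my own, not from the talk:

```python
def npvi(durations):
    """Normalised Pairwise Variability Index: the mean absolute
    difference between successive durations, each normalised by
    the pair's mean, scaled by 100 (range 0-200)."""
    if len(durations) < 2:
        raise ValueError("need at least two durations")
    pairs = zip(durations, durations[1:])
    return 100 / (len(durations) - 1) * sum(
        abs(a - b) / ((a + b) / 2) for a, b in pairs)

# A perfectly regular rhythm scores 0; alternating long-short
# rhythms score higher.
print(npvi([0.3, 0.3, 0.3, 0.3]))  # 0.0
print(npvi([0.2, 0.4, 0.2, 0.4]))  # higher: durations alternate
```

Higher scores mean neighbouring durations alternate more strongly – which is why stress-timed languages such as English tend to score higher on this index than syllable-timed languages such as French.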

Music as Speech with added extras

Playing music alongside speech can change it into song; the Jazzy Sarah Palin Interview was a good example of this:

And it seems that even without music our brains can transform speech into song. Diana Deutsch discovered this phenomenon in 1995, while looping a spoken phrase.

After several iterations the phrase “sometimes behave so strangely” no longer sounded like speech, and had converted into song (I now cannot even read this phrase without hearing the tune). All the phrases in Adam’s Corpus of Illusion Stimuli turned into singing, but interestingly, the “control” sentences didn’t have the same effect. This illusion appears to be a useful tool to test further the idea of music evolution and ask more detailed questions, such as: “what is required for speech to become song?” and “what mechanisms are going on in our brains when we change speech into song?”

Testing the Science

Dr Adam Tierney

Adam has identified the acoustic elements that predict which speech phrases are heard as song. He suggests there are two main factors that induce the illusion: increased beat variability and increased pitch intervals. Remarkably, there is large variability between people’s experiences, and being a trained musician doesn’t improve your ability to detect the illusion.
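These two predictors are easy to picture in code. A minimal sketch, assuming each phrase is represented as a list of syllable durations (in seconds) and a list of pitches (in semitones) – the function and representation are illustrative, not Adam’s actual analysis:

```python
import statistics

def illusion_predictors(durations, pitches_semitones):
    """Sketch of the two factors said to induce the illusion:
    beat variability (the spread of syllable durations) and the
    mean size of pitch intervals between successive syllables."""
    beat_variability = statistics.stdev(durations)
    intervals = [abs(b - a) for a, b in
                 zip(pitches_semitones, pitches_semitones[1:])]
    mean_pitch_interval = sum(intervals) / len(intervals)
    return beat_variability, mean_pitch_interval

# An unevenly timed phrase with large pitch jumps scores high on
# both factors; a flat, evenly timed one scores zero on both.
print(illusion_predictors([0.2, 0.5, 0.2, 0.5], [60, 64, 60, 64]))
print(illusion_predictors([0.3, 0.3, 0.3, 0.3], [60, 60, 60, 60]))
```

On this toy measure, the first phrase – with its strong long–short rhythm and four-semitone leaps – would be predicted to flip into song far more readily than the second.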

So what is going on in the brain? Adam’s hunch was that these ‘musical’ phrases are processed in the same way as ordinary speech, but with a little added extra. This does in fact seem to be the case: when we listen to the ‘song’ phrases we activate a similar network to normal speech, plus extra activation in regions that are highly pitch-sensitive (e.g. Heschl’s gyrus, a very early part of the auditory system) and in motor regions (e.g. the precentral gyrus, which hosts a map of the body – here, specifically the mouth region). Interestingly, no regions were more active for the speech phrases than for the song phrases. Adam suggested that participants were imagining singing and tapping along to the beat, and processing the pitch more deeply, in the ‘song’ phrases. This evidence neatly fits the behavioural data: phrases with a strong rhythm and more of a melody are processed differently by the brain, which distorts them from speech into song.

Although it is virtually impossible to know the true origin of music, Adam managed to make quite a convincing case that song is just speech with some ribbons on, and quite possibly did evolve from speech.
