
The Speech/Song Illusion

This post was contributed by Rosy Edey, PhD student and graduate teaching assistant in the Department of Psychological Sciences. Rosy attended a Birkbeck Science Week 2016 event on Thursday 14 April – ‘Talk: The Speech/Song Illusion’ (led by Dr Adam Tierney).


Sadly all good things must come to an end, and the finale of Birkbeck’s 2016 Science Week was a compelling musical one, delivered by one of the newest members of Birkbeck’s Department of Psychological Sciences, Dr Adam Tierney. In a humorous and engaging way, Adam took the audience through the scientific story of the “evolution of music”. Music seems almost completely purposeless – and, let’s face it, a little bit strange – so why do we love it so much?

What is music?

Adam placed the first known musical instrument (an intricate bone flute) at around 40,000 years old – well before the first records of written language (5,000 years ago), but much later than (a good estimate of) when we first evolved to make vocalisations (400,000 years ago). The absolute origin of music is obviously very difficult to pinpoint, as it is possible (and probable) that long before we built tools like the bone flute to make music, we were singing our hearts out in the moonlight.

This questionable timing of the birth of music raises the question: which came first, speech or music? Whichever it was, if one evolved from the other we would expect music and language to share similar characteristics. Indeed, Adam presented evidence that the huge variety of languages spoken around the world and the world’s many musical traditions share common universals – something that at first seemed very unlikely, given the diversity of music he demonstrated with a bizarre example of washing machine “music” and a selection of songs from the golden records aboard the Voyager I and II spacecraft.

These shared acoustic qualities included alternating beat patterns, descending melodic contours, and increases in final phrase duration. Using the very complicated-sounding “Normalised Pairwise Variability Index” (jargon for a measure of rhythmic alternation, or of paired stress in phrases), Adam showed there were also commonalities between language and music within and between specific countries (basically, English music sounds English and French music sounds French, but English music/language does not sound like French music/language). All of these beautiful subtleties hidden in the acoustics of spoken word and music provide vast amounts of data, which signal meaning to the listener. These underlying similarities do hint that music and speech are distant cousins.
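
Despite its intimidating name, the nPVI is simple to compute. Here is a minimal sketch in Python of one standard formulation, scoring a sequence of interval durations (successive vowel or note lengths, say); the durations below are invented purely for illustration:

```python
def npvi(durations):
    """Normalised Pairwise Variability Index over a list of interval
    durations (any consistent unit). 0 = perfectly even rhythm; higher
    scores mean more alternation between long and short intervals."""
    pairs = zip(durations, durations[1:])
    contrasts = [abs(a - b) / ((a + b) / 2) for a, b in pairs]
    return 100 * sum(contrasts) / len(contrasts)

# Invented durations: an even sequence scores low, an alternating one high.
print(npvi([200, 210, 195, 205]))  # ~5.8
print(npvi([120, 310, 100, 290]))  # ~96
```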

Music as Speech with added extras

Playing music over speech can change it into song; the “Jazzy Sarah Palin Interview”, a video Adam played to the audience, was a good example of this.

And it seems that even without music, our brains can transform speech into song. Diana Deutsch discovered this phenomenon in 1995, while looping a snippet of recorded speech.

After several iterations the phrase “sometimes behave so strangely” no longer sounded like speech, and had converted into song (I now cannot even read this phrase without hearing the tune). All the phrases in Adam’s Corpus of Illusion Stimuli turned into singing, but interestingly, the “control” sentences didn’t have the same effect. This illusion appears to be a useful tool to test further the idea of music evolution and ask more detailed questions, such as: “what is required for speech to become song?” and “what mechanisms are going on in our brains when we change speech into song?”

Testing the Science

Dr Adam Tierney

Adam has pulled out the acoustic elements that predict which speech phrases are heard as song. He suggests there are two main factors that induce the illusion: increased beat variability and increased pitch intervals. Remarkably, there is large variability in people’s experience of the illusion, and being a trained musician doesn’t improve your ability to detect it.
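
One simple way to estimate those two predictors – not necessarily the exact measures Adam used, just an illustration of the idea – assumes a phrase has already been annotated with syllable onset times and per-syllable pitch estimates (the numbers below are invented):

```python
import numpy as np

def beat_variability(onsets):
    """Coefficient of variation of the inter-onset intervals:
    0 for a metronomic phrase, larger when the beat is irregular."""
    iois = np.diff(onsets)
    return iois.std() / iois.mean()

def mean_pitch_interval(pitches_hz):
    """Mean absolute jump between successive syllable pitches,
    in semitones (12 * log2 of the frequency ratio)."""
    jumps = 12 * np.abs(np.diff(np.log2(pitches_hz)))
    return jumps.mean()

onsets = np.array([0.00, 0.31, 0.58, 0.94, 1.20])        # seconds
pitches = np.array([180.0, 220.0, 165.0, 247.0, 196.0])  # Hz
print(beat_variability(onsets), mean_pitch_interval(pitches))
```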

So what is going on in the brain? Adam’s hunch was that these ‘musical’ phrases are processed in the same way as ordinary speech, but with a little added extra. And this does in fact seem to be the case: when we listen to the ‘song’ phrases we activate a similar network to the one engaged by normal speech, but with extra activation in regions that are highly pitch-sensitive (e.g. Heschl’s gyrus, a very early part of the auditory system) and in motor regions (e.g. the precentral gyrus, which hosts a map of the body – specifically, here, the mouth region). Interestingly, no regions were more active for the speech phrases than for the song phrases. Adam suggested that participants were imagining singing and tapping along to the beat, and processing the pitch more deeply, in the ‘song’ phrases. This evidence neatly fits the behavioural data, showing that phrases with a strong rhythm and more of a melody are processed differently by the brain, which results in their being distorted from speech into song.

Although it is virtually impossible to know the true origin of music, Adam managed to make quite a convincing case that song is just speech with some ribbons on, and quite possibly did evolve from speech.


Computational modelling of the mind

This post was contributed by Nick Sexton, PhD student in the Department of Psychological Sciences

Prof Rick Cooper

How can computer simulations help us understand the human mind? That was the main topic of Professor Rick Cooper’s inaugural lecture, in which he outlined 15 years of research on cognitive computational modelling.

Cognitive computational modelling boils down to designing computer simulations of how the mind processes information. While computers that appear to think in a human-like way (whatever that means) are increasingly commonplace in our everyday lives – driverless cars, the Google Deepmind model that learns to play Atari games, and intelligent personal assistants are all examples – the talk revealed that a more difficult challenge is not only to mimic (or improve on) human behaviour, but to produce it in the same way that humans do – using the same types of mental process.

For example, certain computer programs have succeeded in being indistinguishable from humans on Alan Turing’s classic test of artificial intelligence; however, when one digs under the surface, it is readily apparent that their responses are generated in a not remotely human-like way.

So if modelling how the human mind actually works is tricky, how does one go about doing it? Cooper’s approach is to build on theories of how the mind works from cognitive psychology, often pieced together through painstaking behavioural experiments on human participants. These theories, describing how the mind processes information, often resemble flow-chart-like schematics – but the details are frequently left vague.

This is where cognitive modelling comes in: a fully operational computational model must specify exactly the inputs, outputs, and algorithms computed at every stage of mental processing, so the modeller must fill in details that the theorist has left blank. It is a test of whether the psychological theory really is sufficient to explain what it purports to explain, and, if not, it suggests what details might be missing.

One element that makes Cooper’s research stand out is his focus, not just on abstract tasks conducted in a sterile psychology or neuroscience lab, or even on a less defined realm of behaviour, as in the Atari game player – but on distinctively human, often startlingly everyday behaviour.

For instance, a large amount of what we consider normal human behaviour is routine – habitual actions, like preparing meals or hot drinks, dressing, commuting. One particular branch of Cooper’s modelling work has been on developing a computational theory of how the mind accomplishes routine actions with minimal attentional oversight, and how this mental apparatus can be applied to non-routine situations.

One model of routine everyday actions simulated preparing drinks. It manipulated objects in its (virtual) environment, like utensils (cups, knives, juicers) and resources (such as hot water, coffee, tea, milk, sugar, oranges), to achieve an end goal – such as preparing coffee (milk, no sugar). The model needed to account for normal human behaviour: successful preparation of the drink most of the time, with occasional lapses – sometimes forgetting to put milk in the coffee, or adding sugar when it wasn’t required.
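
Cooper’s actual model of routine action (developed with Tim Shallice) is far richer than anything that fits in a blog post, but a toy sketch in Python conveys the flavour: each step of a routine is a schema competing against alternatives on a noisy activation score, and occasionally the wrong schema wins, producing exactly the kind of lapse described above. All the names and numbers here are illustrative, not taken from the real model:

```python
import random

ROUTINE = ["boil water", "add coffee", "pour water", "add milk", "stir"]
# Competing schemas that could wrongly win at each step (illustrative):
COMPETITORS = {"add coffee": ["add butter"],
               "pour water": ["skip step"],
               "add milk":   ["add sugar", "skip step"]}

def run_routine(noise=0.1):
    """The intended schema starts with the highest activation (1.0 vs
    0.7 for competitors); Gaussian noise occasionally reverses that."""
    performed = []
    for step in ROUTINE:
        options = [step] + COMPETITORS.get(step, [])
        scores = [1.0 + random.gauss(0, noise)] + \
                 [0.7 + random.gauss(0, noise) for _ in options[1:]]
        performed.append(options[scores.index(max(scores))])
    return performed

print(run_routine(noise=0.1))  # mild noise: usually correct, rare slips
print(run_routine(noise=0.6))  # high noise: errors become common
```

Turning up the noise parameter here is a crude stand-in for the kind of damage manipulations described next: with mild noise the toy model mostly succeeds, with the occasional slip; with high noise it starts producing stranger errors.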

So what is interesting about a model which prepares drinks (sometimes badly)? Well, the model was also able to explain what happens when normal mental processes break down – say, in the event of brain damage. With certain settings, the model not only simulated the lapses of neurotypical people, but also the more extreme lapses observed in patients with particular types of brain damage – putting butter in the coffee, say, or forgetting to add water.

The model was also able to simulate the behaviour of patients with specific conditions. Ideational apraxic patients struggle to retain a sense of an object’s purpose – say, trying to use a fork to cut an orange. Patients with utilisation behaviour tend to perform actions appropriate to a given object, but inappropriate to the current situation – take off your glasses and hand them to such a patient, and they are liable to put them on.

Here, a cognitive model is rather more use than the everyday artificial intelligences which perform everyday tasks, such as Siri. Because Siri might ‘think’ in a way completely different to humans, there is no reason to believe that if we deliberately damage part of the program, she will produce behaviour typical of people with brain damage. However, because Cooper’s model was based on neuropsychological theories in which routine actions depend on the correct interaction of different cognitive processes, simulating damage to specific processes in the model was able to account well for the different patterns of behaviour typical of different neurological conditions.

This approach, then, isn’t just useful for understanding what might be damaged in people unfortunate enough to suffer brain injury – it is also a powerful tool for understanding what role those cognitive processes play in the human mind when it is functioning normally, and whereabouts in the brain they might take place.

The hour-long talk gave a fascinating glimpse into how, as knowledge from the brain and mind sciences continues to grow apace, computational cognitive modelling has an important role to play in drawing together different disciplines – taking cutting-edge research in psychology, neuroscience and machine learning, and showing how the individual pieces fit together to give us a better picture of how our minds work.
