
Attention Machines: The science of cinematic perception

This post was contributed by Sofia Ciccarone, a master's student in Cognitive Neuroscience and Neuropsychology at Birkbeck, University of London.

It was exciting to be a part of this event, which took place in the Birkbeck Cinema in Gordon Square during Science Week.

The people who participated not only had the opportunity to experience the captivating cinematography of The Fountain by Darren Aronofsky; they could also be both participants and researchers in a live experimental study.

The experiment investigated how viewers' attention changes throughout a movie. To this aim, the audience's attention was measured by locating their eye position on the screen. This was done by occasionally making the film image disappear and briefly replacing it with a flashing grid, which filled the whole cinema screen and contained a series of letter and number combinations.

The audience was asked to pay attention to this grid and to report (using their smartphones) the letter-number pairs (e.g. S76) they could identify among the other pairs in the grid. This procedure, known as crowdsourced gaze data collection, was proposed by Rudoy and colleagues in 2012 as a way of collecting gaze direction from any number of participants simultaneously.
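To give a feel for how such reports become gaze positions, here is a minimal sketch (my own illustration, not the researchers' actual code): each letter-number pair marks one cell of the grid, so a viewer's report pins their gaze to that cell. The grid dimensions, screen resolution and labelling scheme below are all assumptions.

```python
# Sketch of crowdsourced gaze decoding: a reported grid label is mapped
# back to the screen cell where it was displayed.
from collections import Counter

GRID_COLS, GRID_ROWS = 10, 6          # assumed grid dimensions
SCREEN_W, SCREEN_H = 1920, 1080       # assumed screen resolution

def make_labels():
    """Assign a unique letter-number label (e.g. 'F05') to each grid cell."""
    labels, code = {}, 0
    for row in range(GRID_ROWS):
        for col in range(GRID_COLS):
            letter = chr(ord('A') + code % 26)
            labels[f"{letter}{code:02d}"] = (col, row)
            code += 1
    return labels

def gaze_estimates(reports, labels):
    """Convert reported labels into approximate screen coordinates."""
    cell_w, cell_h = SCREEN_W / GRID_COLS, SCREEN_H / GRID_ROWS
    points = []
    for label in reports:
        if label in labels:
            col, row = labels[label]
            # the centre of the reported cell is the estimated gaze position
            points.append(((col + 0.5) * cell_w, (row + 0.5) * cell_h))
    return points

labels = make_labels()
reports = ["A00", "F05", "A00"]        # hypothetical audience responses
print(Counter(gaze_estimates(reports, labels)).most_common(1))
```

Aggregating many such reports per flash gives a crowd-level gaze map for that moment of the film, which is what makes the method scale to a whole cinema audience.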

The eye movements of one volunteer from the audience were instead recorded using a portable eye tracker. The tracker was calibrated right before the start of the film; the participant then sat in the front row of the cinema and enjoyed the film while her eye movements were recorded.

After a short practice trial, the audience's eye movements were collected for the first part of the film. During the second half, while participants were allowed to watch the film without distractions, Dr Tim Smith and his team used the available time (48 minutes!) to analyse the answers submitted through the smartphones and the data recorded by the eye tracker.

After the film finished, Dr Tim Smith presented the results of the experiment. It was really surprising to find that the two eye-movement collection methods showed similar results: people mainly focused their attention on the centre of the screen. This is where the most frequently detected letter-number pairs were located, and the gaze of the volunteer wearing the portable eye tracker was also mainly focused on that area of the screen.

Why does this happen?

The composition of the shots, the camera movements, the staging and the editing of the scenes are some of the ways in which filmmakers direct viewers' attention. Compared with films shot in the past, modern TV and Hollywood cinema use a compositional style that involves rapid editing, bipolar extremes of lens length, wide-ranging camera movements and close shots.

For example, the scene in The Shop Around the Corner (Ernst Lubitsch, 1940) where the two protagonists meet in the café lasts 9 minutes and contains 20 shots, averaging 27 seconds each. The same scene in a recent remake of this film, You've Got Mail (Nora Ephron, 1998), lasts 9 minutes but contains 134 shots, averaging about 4 seconds each.
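The arithmetic behind those averages is simply the scene duration divided by the shot count, as a quick check confirms:

```python
# Average shot length (ASL) = scene duration / number of shots.
scene_seconds = 9 * 60                 # both versions of the scene run ~9 minutes
print(scene_seconds / 20)              # The Shop Around the Corner: 27.0 s per shot
print(scene_seconds / 134)             # You've Got Mail: ~4.0 s per shot
```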

This style gives the audience a unified experience of the film, as it induces spectators to focus their attention on the centre of the screen, a behaviour described as central tendency by Le Meur and others in 2007.


Infants, Down syndrome and Alzheimer's disease: A multidisciplinary approach

This post was contributed by Aline Lorandi, a visiting postdoctoral researcher under the supervision of Prof Annette Karmiloff-Smith, investigating the precursors of phonological awareness in Down syndrome. She is also a collaborator in the infant stream of the London Down Syndrome Consortium (LonDownS), which investigates the links between Down syndrome and Alzheimer's disease.

One of the premises of developmental neuroscience is that, in order to understand certain phenotypes, it is crucial to investigate their origins; that is, to track the developmental trajectory that leads to different sorts of behaviour, cognitive profiles, disorders, and diseases.

We must also acknowledge that advances in developmental neuroscience allow us to take the debate over the contributions of genes and environment to another level: such contributions can only be understood bidirectionally, with each modifying the other all the time.

With all that in mind, we can understand the curious title that Dr Esha Massand gave to her talk: 'What can infants possibly tell us about Dementia?' It seems a bit odd to think that studying babies could provide any relevant information about a condition typically related to ageing. Nevertheless, it was the study of Down syndrome that inspired the link between child development and Alzheimer's disease.

The research described by Dr Massand is part of the LonDownS Research Consortium, which involves several universities and works in five streams: genetics, mouse models, cells, adults, and infants.

The aim of the infant stream, according to Dr Massand, is to understand individual differences in infancy that may point to early signs of Alzheimer's disease. Individuals with Down syndrome have an extra copy of chromosome 21, and a gene on this chromosome, the APP gene, produces a protein that, because of the extra copy, is overexpressed in all individuals from the womb throughout development.

The protein produced by the APP gene forms the plaques found in the brains of individuals with Alzheimer's disease. As the APP gene is overexpressed in Down syndrome, it is very important to investigate its relationship with Alzheimer's disease. One interesting fact is that, although all individuals with Down syndrome present these plaques in their brains from the age of 30 onwards, not all of them will develop signs of Alzheimer's disease.

Using a varied range of methodologies (eye tracking, sleep pattern measurement, EEG/ERP, behavioural tasks), Dr Massand and colleagues aim to understand how behaviour and neural responses may shed light on whether early biomarkers pointing to the onset of the disease can be tracked developmentally. Among the cognitive and neural underpinnings, they are looking at several abilities, including memory, attention, language, sleep fragmentation, and mother/father/infant interactions. All these methodologies are very child-friendly.

Although preliminary, many interesting results already point to important individual differences, such as the relationship between language and the gap-overlap/disengagement effect (the ability to disengage from one stimulus to look at another, whether or not the two appear simultaneously).

Dr Massand's team found that the fewer words a child understands and produces, the longer he or she takes to disengage from the stimulus presented in the task. Additionally, the disengagement effect was positively correlated with aggressive behaviour: the higher the score the child reached on the behaviour questionnaire (related, among other measures, to aggressive behaviour), the longer he or she took to disengage from the stimuli.
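For readers curious what this kind of analysis looks like in practice, here is a minimal sketch with invented numbers (the real dataset and effect sizes are not reported here):

```python
# A Pearson correlation between vocabulary size and disengagement latency.
# All numbers below are made up purely for illustration.
import numpy as np

vocab_words = np.array([5, 12, 20, 35, 50, 80])            # words understood/produced
disengage_ms = np.array([900, 820, 700, 560, 480, 390])    # time to disengage (ms)

r = np.corrcoef(vocab_words, disengage_ms)[0, 1]
print(f"r = {r:.2f}")   # strongly negative: fewer words, slower disengagement
```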

They also found a positive correlation between the ability to attend to and detect novelties and the amount of sleep. Analysing several trials of a test in which children had to find the location of objects, they also discovered that children with Down syndrome may take longer to habituate to the objects and longer to complete the tasks. While typically developing children can detect a change in the location of the objects on the first trial, as shown by how long they look at the screen in the eye-tracking data, children with Down syndrome do better, or more 'typically', on a second trial, showing more variability in the first trial than typically developing children. All these findings relate to individual differences that may identify those at risk of developing Alzheimer's disease.

Exciting trends and lots more to do for Dr Esha Massand's team! There are more data to collect, especially from controls; findings from EEG/ERP to analyse, which may point to underlying neural differences related to Alzheimer's disease; and the exciting combination with data from the other streams (cells, mouse models, genetics, and adults) to explore.

As the questions from the audience showed, this is the kind of research that makes us excited and curious! Should the participants be followed longitudinally? How long do children take to get familiarised with the cap in the EEG tests? These and other questions about the relationships between the different cognitive abilities were answered by Dr Massand, who also highlighted the hope of finding those individual differences in adults as well, in order to better understand the factors that might indicate early clinical signs of Alzheimer's disease.


How the brain recognises faces

This post was contributed by Dr Clare Sansom, Senior Associate Lecturer, Department of Biological Sciences 

The first of two evening lectures on the Wednesday of Birkbeck Science Week 2015 was given by Martin Eimer of the college’s Department of Psychological Sciences.

He, like the other Science Week lecturers, was introduced by the Dean of the Faculty of Science, Nicholas Keep. Professor Keep explained that Eimer, a native of Germany and a recently elected Fellow of the German Academy of Sciences, had built up his research lab at Birkbeck over the last fifteen years.


His internationally recognised research concerns the relationship between brain function and behaviour in health and disease. The topic he selected for his lecture was a fascinating one: how our brains recognise human faces and what happens when this automatic process goes wrong.

Eimer began by outlining some reasons why we find faces so interesting to look at. When we look at a face we may or may not be able to recognise that individual, either immediately or with difficulty, but, if our brains are working correctly, we will be able to tell what the person is feeling and what they are looking at.

It seems that the facial expressions associated with basic emotions such as happiness, surprise, fear and disgust are common across most, if not all, cultures. We also use faces to lip-read. People with hearing impairments depend on this and learn to do it very well, but we all have some intrinsic lip-reading ability that we use automatically in noisy environments.

Next, he used perceptual demonstrations to illustrate that we process faces rather differently to other objects. If we look at a photo of a familiar or famous person that has been turned upside down we automatically think it looks odd, and we find the face hard to identify. This so-called ‘inversion effect’ is also seen with other objects but is much more pronounced with faces.

A stranger effect occurs if the photo of a face is altered so that only the eyes and mouth are upside down. This looks grotesque, but turning the altered photo upside down so that the eyes and mouth only are the right way up makes it look surprisingly normal. This was named the ‘Thatcher illusion’ by the scientists who discovered it in 1980, perhaps as an imaginative way of taking revenge for an early round of education cuts.

It is likely that we instinctively respond so differently to faces out of the normal upright orientation because our brains have an inbuilt ‘face template’. Even young infants respond to ‘face-like’ stimuli with two eyes, a nose and a mouth in approximately the right proportions and positions.

Face recognition, too, depends on small differences in these parameters between individuals (e.g. the height of the eyes above the nose and the distance between them). Contrast polarity is also important, and we find it much harder to identify face images if their contrast is inverted (as in a photographic negative). Interestingly, however, the task becomes easier if the eye region alone is reverted to normal contrast. This suggests that we attach a particular importance to that region. It is also difficult to determine gaze direction if the contrast polarity around the eyes is inverted.
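The contrast manipulation is easy to reproduce at home. Here is a minimal sketch using the Pillow imaging library; the filename and the eye-region coordinates are placeholders you would adjust for your own photo:

```python
# Invert the contrast of a face photo, then restore normal polarity
# in the eye region only, as in the demonstration described above.
from PIL import Image, ImageOps

face = Image.open("face.jpg").convert("RGB")    # hypothetical input image
negative = ImageOps.invert(face)                # full contrast reversal

eye_box = (60, 80, 200, 130)                    # assumed (left, top, right, bottom)
negative.paste(face.crop(eye_box), eye_box)     # eyes back to normal polarity
negative.save("face_negative_normal_eyes.jpg")
```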

Eimer introduced another optical illusion in which half of each of the faces of George Clooney and Harrison Ford had been combined into a composite. The audience found it almost impossible to distinguish the two actors until the half-faces were separated. We had all instinctively formed a new face from the components and failed, for obvious reasons, to match it to an individual. This effect, which reflects what is known as holistic face processing, is also specific to faces.
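The composite demonstration can be sketched the same way, splicing the left half of one photo onto the right half of another (the filenames are placeholders, and the two photos are assumed to be the same size and roughly aligned):

```python
# Build a composite face from the halves of two aligned photos.
from PIL import Image

a = Image.open("clooney.jpg").convert("RGB")
b = Image.open("ford.jpg").convert("RGB")
w, h = a.size

composite = Image.new("RGB", (w, h))
composite.paste(a.crop((0, 0, w // 2, h)), (0, 0))        # left half of face A
composite.paste(b.crop((w // 2, 0, w, h)), (w // 2, 0))   # right half of face B
composite.save("composite.jpg")
```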

The second half of the lecture dealt with the neuroscience of face recognition, and what happens when it goes wrong. When we look at a face (or any object) information from the image focused on the retina is initially transferred to a part of the back of the brain known as the primary visual cortex. It is then transferred to other parts of the brain, including the inferior temporal cortex, where objects are recognised.

Several types of experiments have been developed for measuring exactly what goes on in the brain. These include functional magnetic resonance imaging (fMRI), which generates brightly coloured images associated with changes in blood flow to parts of the brain, and electroencephalography (EEG) which records electrical activity on the scalp.

These techniques are complementary; EEG is faster but can only record signals from the surface of the brain. Between them, they have allowed scientists to identify several areas in the brain that are activated when faces, but not other objects, are perceived and a rapid, strong electrical impulse that seems to be a unique response to faces.

It is much easier to recognise the face of a familiar individual – family member, friend or celebrity – than to distinguish between the faces of unknown people. This task, however, is required in many professions: most often and most obviously passport officers and detectives, but also, for example, teachers at the beginning of each new school year. Some people are much better at doing this than others, but even the most skilled make mistakes, and the UK immigration service (and, no doubt, the equivalent bodies in other countries) is looking into ways of doing it automatically.

People at the other end of the spectrum – who find it particularly difficult to recognise faces – are said to have a condition called prosopagnosia, or ‘face blindness’. These people have a severe but very specific defect in recognising faces: their intellect and their vision are normal, and they can recognise individuals easily enough from their voice, gait or other cues.

This condition is divided into two types: acquired prosopagnosia, which arises after brain damage, and developmental prosopagnosia, which can be apparent from early childhood, without any obvious brain damage. The acquired type is typically more severe; the eponymous Man who Mistook his Wife for a Hat described in Oliver Sacks’ fascinating book suffered from this condition. The rapid brain response to faces is missing from an EEG of a person with acquired prosopagnosia, and other tests will show that the brain regions that are specifically associated with face processing have been damaged.

About 2% of the population can be said to have some degree of developmental prosopagnosia. There is no association with intelligence and it affects many successful professionals. Eimer showed part of a TV programme featuring an interview with a woman who is particularly badly affected. She explained the problems she has encountered throughout her life, ranging from following characters in films to telling her own daughter from other little girls with bunches in the school playground. Her father had also suffered from the condition, and she had been very relieved to receive a formal diagnosis.

The EEG patterns of individuals with developmental prosopagnosia are less different from normal than those of people with brain damage, but they are recognisable. Interestingly, differences in brain responses to upright as compared to inverted faces are not seen in people with developmental prosopagnosia.

Face recognition abilities form a continuum, and many people who think of themselves as being 'terrible' at recognising faces will find that they are in the normal range. Eimer's group has a website that includes an online test, the Cambridge Face Memory Test. Participants are asked to memorise a face and then pick it out from a group of three; the tests start easy but become more challenging. People with very high and very low scores will be invited to be involved in further research in the Brain and Behaviour Lab at Birkbeck.


Exploring the hidden complexities of routine behaviour at Birkbeck’s Science Week

This post was contributed by Guy Collender, Communications Manager, Birkbeck’s Department of External Relations.

Dr Richard Cooper at Birkbeck's Science Week

How often do you forget to attach the relevant document when you are sending emails? When was the last time you accidentally put the coffee in the fridge instead of the milk? Or, more alarmingly, when did you last leave the nozzle of the petrol pump in your car when you drove off from the petrol station? (Yes, believe it or not, there is ample photographic evidence to prove the last point).

Such errors, made during routine tasks, were the centre of attention at a fascinating lecture, entitled The hidden complexities of routine behaviour, during Birkbeck’s Science Week. Dr Richard Cooper explained why it is important to understand routine behaviour, why mistakes are made during everyday tasks, and the implications for the rehabilitation of brain-damaged patients.

Benefits of routine behaviour
The presentation on 3 July began with a description of routine behaviour and its advantages. Dr Cooper, of Birkbeck’s Department of Psychological Sciences, defined routine tasks, such as dressing, grooming, preparing meals, and cleaning, as frequently performed tasks carried out in a stable and predictable environment. By automatically performing various stages in a routine task, people do not have to plan every action on a moment-by-moment basis. This, as Dr Cooper showed, saves the mental exertion associated with constant planning, and enables the brain to think about other things when performing routine tasks.

Difficulties associated with routine tasks
However, routine tasks are prone to error, especially following an interruption, and these mistakes may have “catastrophic consequences”, including vehicle collisions and industrial accidents. Dr Cooper said: “Routine behaviour is not something we can take for granted.”

The lecture continued with a list of different types of errors made while performing routine tasks. These include omission errors (leaving out a vital task), perseverative errors (repeating an action even though the goal has been achieved), and substitution errors (mixing up objects).
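To make these categories concrete, here is a toy sketch (an illustration only, not Dr Cooper's actual computational model) of how random noise in selecting the next step of a routine can produce all three error types:

```python
# Toy simulation of a coffee-making routine with noisy step selection.
import random

ROUTINE = ["fill kettle", "boil kettle", "add coffee", "pour water", "add milk"]

def run_routine(noise=0.1, seed=None):
    """Step through ROUTINE, occasionally committing a random error."""
    rng = random.Random(seed)
    performed, i = [], 0
    while i < len(ROUTINE):
        r = rng.random()
        if r < noise:                          # omission: the step is skipped
            i += 1
        elif r < 2 * noise and performed:      # perseveration: last step repeats
            performed.append(performed[-1])
        elif r < 3 * noise:                    # substitution: a wrong step intrudes
            performed.append(rng.choice(ROUTINE))
            i += 1
        else:                                  # the correct step is performed
            performed.append(ROUTINE[i])
            i += 1
    return performed

print(run_routine(noise=0.2, seed=2))
```

Raising the noise parameter makes errors more frequent, loosely mirroring the way interruptions and neurological damage disrupt routine action.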

Dr Cooper showed how people with brain injuries are much more prone to making these mistakes. He said: “Neurological patients can have a much more difficult time.” They can suffer from a range of problems, including anarchic hand syndrome (where one hand performs involuntary movements), frontal apraxia (which leads to patients making sequential errors and substitution errors on a minute-by-minute basis), or ideational apraxia (which leads to the right action, but wrong place – such as trying to light the wrong end of a candle).

Devising solutions
Dr Cooper also referred to studies of brain-damaged patients in rehabilitation clinics and their performance of routine tasks in a controlled environment. He said: “Re-learning must focus on rote learning of the precise procedure, with no variation. Home environments should be designed to minimise distractions.”

Dr Cooper also hinted at future developments in this field as smart devices might be able to monitor the performance of routine tasks for certain errors. Hopefully the latest technology will be able to help reduce everyday problems in the years ahead.
