Cool views of molecular machines

Carolyn Moores, Professor of Structural Biology, writes about how academics in the Institute of Structural and Molecular Biology (ISMB) are using their new cryo-electron microscope to address big questions about antibiotic resistance, disease and life itself.

The state-of-the-art cryo-electron microscope at Birkbeck

Tucked discreetly in a corner of the Malet Street Extension building, a new state-of-the-art microscope is generating terabytes of high-resolution imaging data. This £4M cryo-electron microscope (cryo-EM) uses Nobel Prize-winning technology to address a wide range of biologically and biomedically important questions. Researchers in the Institute of Structural and Molecular Biology (ISMB) are using it to study the molecular machines that are found within all living cells and are essential for life. The ISMB – a joint research institute between Birkbeck and UCL – is internationally recognised as a centre of excellence for cryo-EM and for its research on diverse molecular machines.

In 2016, we were thrilled to be awarded a grant by Wellcome which, together with contributions from Birkbeck and UCL, funded the purchase of our Titan Krios microscope. Delivery day, earlier this year, was a big moment for the whole team. Prior to its arrival, we had been working hard with colleagues from Estates and external contractors to prepare a high-spec room for the instrument. Our microscope travelled in its own juggernaut from the Thermo Fisher factory in The Netherlands, with its ~3,300 parts carefully catalogued and packed in 21 crates. With nerves jangling, we all held our breath as the largest and heaviest part of the microscope was squeezed into the lift, travelled one floor to the basement without mishap, and edged its way out again. Constructed by dedicated engineers over the course of 8 weeks like a very big ship in a bottle, this 4.5 tonne, 4m tall monolith is now steadily producing up to 4TB of data a day.

Christmas came early – delivery of the cryo-electron microscope

Since its invention in the 1930s, electron microscopy has steadily improved in power and sophistication, and recent advances have moved the method to the forefront of structural biology. In particular, rapid freezing of samples to liquid nitrogen temperatures (~-195°C) keeps samples preserved inside the microscope and provides some cryo-protection from the otherwise damaging effects of the electrons used for imaging. The cryo-EM field has undergone a technological revolution in the last ~5 years, and the application of this methodology has expanded enormously. The 2017 Nobel Prize in Chemistry was awarded to three pioneers in the field.

Our new microscope brings many benefits. First, it is very stable: since the ultimate goal of our experiments is to visualise the position and organisation of all the atoms from which molecular machinery is built, microscope stability is vital. Coupled to this, the Titan Krios is designed to allow automated data collection such that, after a few hours of set-up, data collection experiments can be left to run for several days without user intervention. Previous-generation equipment required trained microscopists to sit at the microscope for as many hours as they could manage – gruelling, since our structural experiments require many thousands of sample images to be computationally combined to reveal a 3D structure. Sophisticated data collection software is much more efficient and strongly preferred by the microscopists! In addition, although still requiring expertise and training, the Titan Krios is user-friendly without compromising on its high-end capabilities. Around a dozen ISMB researchers have already received training and are busy collecting data for studies on topics as diverse as cancer, dementia, antibiotic resistance and malaria. Understanding the structures of biological molecules and assemblies reveals how they work and makes it possible to design drugs or treatments for diseases.

Installation in Birkbeck’s Malet Street building

With new opportunities come fresh challenges. Because the new microscope produces high-quality data – and a lot of it – so efficiently, our focus has now shifted to the problem of producing high-quality samples that can be fed into the Titan Krios imaging pipeline. This now represents a major bottleneck for many projects, both because sample optimisation is intrinsically time-consuming and because researchers need to learn the skills to undertake it. We are now focusing on ways to allow more projects and more researchers at UCL and Birkbeck to benefit from the cryo-EM “resolution revolution”.


Shining a light on brain development

Since recording the first brain images of babies in Africa, Professor Clare Elwell, part of the Brain Imaging for Global HealTH (BRIGHT) network, of which Birkbeck is a member, has been leading a pioneering study to increase our understanding of early brain development. Clare discusses bringing a new imaging technology to a remote Gambian village, and how it could help babies suffering from malnutrition to reach their full potential.

Image credit Bill and Melinda Gates Foundation

Before they reach five years of age, one in four children across the globe is malnourished. There’s a lot of research showing the detrimental impact this has on their development. But we know very little about what’s going on inside their brains.

Conventional brain imaging techniques, like magnetic resonance imaging (MRI), require large, bulky and expensive scanners. Unfortunately, this technology is not available in resource-poor settings, often where malnutrition is most prevalent, and is not always suitable for very young babies.

Clare Elwell with an infant taking part in the BRIGHT study: Image credit Bill and Melinda Gates Foundation

The BRIGHT families

We’re recruiting the mothers before the babies are born, because some of our measures start within the first few days of the infants’ lives. It’s a huge commitment. We’re extremely grateful to all the families in our study, both in the UK and in The Gambia.

For some mothers this is their first baby; they have no idea what’s ahead of them, and yet they’ve agreed to enrol into a study where we’re going to be part of their lives for at least two years. It’s really thrilling to see how motivated and how dedicated they are to this project.

Hopes for the future

I think the BRIGHT study is a good example of where a solution had existed for a long time, but we just didn’t know about the problem. Since realising that understanding infant brain development in Africa was such a challenge, it’s been incredibly rewarding to bring in existing technology to help.

Infant with fNIRS imaging headset in Keneba: Image credit Ian Farrell

We’d like to continue to follow our cohort and to study them in the pre-school years, so that we have a continuous measure of their brain function. I’d also really like to see the MRC field station at Keneba become a centre of excellence for infant brain development studies. Capacity building has been a large part of our project and we have been training many local field staff to conduct a whole range of infant brain development assessments. This will help us use all the skills and the talent that’s already in Africa to learn how we can optimise the methods we are using; to better understand brain development in this population; and ultimately, to ensure that all infants reach their potential in leading happy and healthy lives.

The BRIGHT study is a collaborative project, made up of a team of researchers from UCL; Birkbeck, University of London; the Medical Research Council Units in Cambridge and The Gambia at the London School of Hygiene and Tropical Medicine; Cambridge University; and Cambridge University Hospitals.

The project is part of the Global fNIRS initiative for the use of fNIRS in global health projects. This blog was first published by the Medical Research Council, and is reposted with their permission.


Using historical evidence to improve conservation science

In a new paper, Dr Simon Pooley from the Department of Geography argues that moving beyond historical ecology to draw on methods from environmental history is crucial for the field and allied sub-disciplines of ecology like restoration ecology and invasion biology.

As researchers turn increasingly to history as a resource for understanding the trajectory of a planet under unprecedented human pressure, we need to be clear about the methodological challenges of finding and using historical data.

Mainstream ecology recognises the importance of recovering long-term ecological records by applying rigorous methods to the study of charcoal, ice cores, pollen and tree rings. The uses of historical textual and graphic sources are seldom accorded analogous rigour.

Ecologists use scientific methods to learn from the past, drawing on resources developed by historical ecologists to source data and assemble it into time series, predicting fluctuations and trends in variables over time.

Recently, interest in the impacts of human actions on earth systems has led scientists to aggregate data on generalised human perturbations like greenhouse gas emissions or the industrial manufacture of fertilizers. They then compare these with variables tracking impacts on related earth systems like earth surface temperatures and ocean acidification.

For environmental historians, the resulting framings of relationships between human agency and earth systems are often too linear and generalised. Further, historians urge analysts to pay closer attention to the contextual factors shaping data production, rather than dip uncritically into ‘the past’ as a ‘goody bag’ of useful facts.

This paper is aimed at ecologists rather than historians; it aims to communicate some of the major considerations they should be aware of when collecting and interpreting historical data. These can be handily divided into four themes, summarised as questions:

  • Why, how, and by whom were the data collected?
  • How were the data measured (relating to time periods and spatial units in particular)?
  • What conceptual filters shaped data collection and interpretation?
  • What anomalies, exceptional events and human influences shaped data collection?

The paper notes that the purposes for which data were initially collected, and by whom they were collected, using which methods and tools, all importantly shape what is (and is not) collected and in what form. Long time series often combine datasets shaped in different ways by all these factors. A simple example is that, beginning in the late sixteenth century, the Gregorian calendar was adopted sporadically across Europe over several centuries, and then in parts of Asia and elsewhere in the late 1800s and early 1900s. In each instance, days were gained or lost as regional or national calendars were adjusted accordingly.

The reporting periods used by institutions vary (and change), and researchers compiling time series must take these changes – and the ‘longer’ and ‘shorter’ years used to adjust to new reporting cycles – into account when incorporating such data.
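The adjustment for ‘longer’ and ‘shorter’ reporting years can be sketched in a few lines of code. The institution, dates and catch figures below are invented purely for illustration: a 15-month transition year makes one raw annual figure look like a spike until each count is normalised by the actual length of its reporting period.

```python
from datetime import date

# Hypothetical annual catch series from an institution that switched its
# reporting year in 1907, producing a 15-month "long year".
# Each entry: (period start, period end, raw count for that period).
periods = [
    (date(1905, 1, 1), date(1906, 1, 1), 1200),   # calendar year
    (date(1906, 1, 1), date(1907, 1, 1), 1150),   # calendar year
    (date(1907, 1, 1), date(1908, 4, 1), 1500),   # 15-month transition year
    (date(1908, 4, 1), date(1909, 4, 1), 1180),   # new April-to-March year
]

def annualised(start, end, count):
    """Scale a raw count to a comparable 365-day footing."""
    days = (end - start).days
    return count * 365 / days

# Taken at face value, 1907's raw figure of 1500 looks like a sudden rise;
# once normalised by period length it sits in line with its neighbours.
for start, end, count in periods:
    print(f"{start} to {end}: raw={count}, annualised={annualised(start, end, count):.0f}")
```

The same normalisation step applies to any series stitched together from institutions with differing (or changing) reporting cycles.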

And of course datasets should never be taken at face value: whale catch data published by Russia and Japan in the postwar period, for instance, should be treated with due scepticism.

In the 1970s, mathematical models were devised to estimate historical (baseline) saltwater crocodile populations in northern Australia, as a means for judging when conservation measures introduced early in that decade had resulted in an adequate recovery to pre-exploitation levels. However, when a series of attacks followed an unexpectedly quick population recovery, public opinion demanded a rethink of models which calculated that these populations were still at only 2% of their historical size.

Northern Territory researchers consulted historical records and interviewed former hunters, discovering that pre-exploitation crocodile population densities had varied significantly from river system to river system. According to their revised estimates, populations were at 30-50% of pre-exploitation levels, sufficient to allow controlled sustainable use of crocodiles. This resulted in a policy shift which has proven to be a successful long-term management approach in the region.

The ways in which underlying theories and conceptual frameworks shape data collection are challenging to work out. Examples include powerful narratives about desertification and deforestation which shaped how colonial foresters and agricultural experts interpreted landscape change and management in the colonies in which they worked. Ideas about fire, forests and rainfall long shaped expert thinking about landscape change in India, Madagascar and many parts of Africa, and arguably still influence management interventions aimed at changing or preventing traditional practices by local peoples. In India, ideas about degradation through burning still guide management of savannas, which are arguably misclassified as degraded woodlands.

The four themes identified in my paper are discussed in the text in relation to a schematic figure. Of course, any such visualisation conceals considerable complexity, but it provides a useful framework for exploring some of the challenges. The paper gestures to some of these further complexities through examples from my monograph Burning Table Mountain: an environmental history of fire on the Cape Peninsula (Palgrave 2014; UCT Press 2015) and a recent paper on fire and expertise in South Africa (Environmental History 23:1, 2018). These examples show some ways in which published theories don’t always match the authors’ recommendations for management in the field, or the results of experimental science are ignored in the face of powerful narratives about ecological phenomena like fire, soil erosion and drought. Finally, they demonstrate how complex factors – social, economic, cultural – combine in diverse ways to shape whether and how scientifically informed policies are implemented.

The paper concludes by arguing that the use of historical approaches as summarised in its accompanying figure can help improve the accuracy of the data we draw on to do important conservation work like estimate the conservation status of ecosystems or species, or prioritise conservation action. At a deeper level, historical methods allow us to tease apart the many interacting factors which shape policy and management and influence their interrelations with other narratives, knowledge systems and priorities in particular contexts, over time.

The paper is available (open access) with supporting figure and references in Conservation Letters.


Eyes in the back of their heads?

Is educational neuroscience all it’s cracked up to be? In a keynote speech at the London Festival of Learning, Professor Michael Thomas argues it can be used to provide simple techniques to benefit teaching and learning, and that in future machine learning could even give teachers eyes in the backs of their heads.

I could perhaps have been forgiven for viewing with some trepidation the invitation to address a gathering of artificial intelligence researchers at this week’s London Festival of Learning. At their last conference, they told me, they’d discussed my field – educational neuroscience – and come away sceptical.

They’d decided neuroscience was mainly good for dispelling myths – you know the kind of thing. Fish oil is the answer to all our problems. We all have different learning styles and should be taught accordingly. I’m not going to go into it again here, but if you want to know more you can visit my website.

The AI community sometimes sees educational neuroscience mainly as a nice source of new algorithms – facial recognition, data mining and so on.

But I came to see this invitation as an opportunity. There’s a lot that’s positive to say about neuroscience and what it can do for teachers – both now and in the future.

Let’s start with now. There are already lots of basic neuroscience findings that can be translated into classroom practice, and research is helping us see how particular characteristics of learning stem from how the brain works.

Here’s a simple one: we know the brain has to sleep to consolidate knowledge. We know sleep is connected to the biology of the brain, and to the hippocampus, where episodic memories are stored. The hippocampus has a limited capacity – around 50,000 memories – and would fill up in about three months. During sleep, we gradually transfer these memories out of the hippocampus and store them more permanently. We extract key themes from memories to add to our existing knowledge. We firm up new skills we’ve learnt in the day. That’s why it’s important to sleep well, particularly when we’re learning a skill. This is a simple fact that can inform any teacher’s practice.

And here’s another: We forget some things and not others – why? Why might I forget the capital of Hungary, when I can’t forget that I’m frightened of spiders? We now know the answer – phobias involve the amygdala, a part of the brain which operates as a threat detector. It responds to emotional rather than factual experiences, and it doesn’t forget – that’s why we deal with phobias by gradual desensitisation, over-writing the ‘knowledge’ held in the amygdala. Factual learning is stored elsewhere, in the cortex, and operates more on a ‘use it or lose it’ basis. Again, teachers who know this can see and respond to the different ways that children learn and understand different things.

And nowadays, using neuroimaging techniques, we can actually see what’s going on in the brain when we’re doing certain tasks. So brain scans of people looking at pictures of faces, of animals, of tools and of buildings show different parts of the brain lighting up.

Figure: human-rated similarity between pictures

The visual system is a hierarchy, with a sequence of higher levels of processing – so-called ‘deep’ neural networks. At the bottom level (an area called V1), brain activity responds to how similar the images are. It might respond similarly to a picture of a red car and a red flower, and differently to a picture of a blue car. Higher up the hierarchy, the brain activity responds to the categories the images belong to (for example, in the inferior temporal region). Here, the patterns for a red car and a blue car will look more similar because both are cars, and different to the pattern for a red flower. This kind of hierarchical structure is now used in machine learning. It’s what Google’s software does – is it a kitten or a puppy? Is John in this picture?
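The red car / blue car / red flower example can be sketched as a toy calculation. The feature vectors and the two hand-built ‘layers’ below are invented for illustration – not real V1 or inferior temporal data – but they show how the same images can look similar at one level of a hierarchy and different at another.

```python
import numpy as np

# Toy "images" as four invented features: [red, blue, car-shape, flower-shape].
red_car    = np.array([1.0, 0.0, 1.0, 0.0])
blue_car   = np.array([0.0, 1.0, 1.0, 0.0])
red_flower = np.array([1.0, 0.0, 0.0, 1.0])

def similarity(a, b):
    """Cosine similarity between two representation vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def low_level(img):
    """'V1-like' layer: responds mainly to raw appearance, here colour."""
    return img[:2]

def high_level(img):
    """'IT-like' layer: pools over colour and keeps category (shape) features,
    a stand-in for a deep network's top layer."""
    return img[2:]

# Low level: the red car resembles the red flower, not the blue car.
print(similarity(low_level(red_car), low_level(red_flower)))   # 1.0
print(similarity(low_level(red_car), low_level(blue_car)))     # 0.0

# High level: the two cars now look alike; the flower does not.
print(similarity(high_level(red_car), high_level(blue_car)))   # 1.0
print(similarity(high_level(red_car), high_level(red_flower))) # 0.0
```

In a real deep network the high-level features are learnt from data rather than hand-picked, but the qualitative behaviour – similarity by appearance at the bottom, similarity by category at the top – is the same.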

Again, teachers who know that different parts of the brain do different things can work with that knowledge. Brain science is already giving schools simple techniques – Paul Howard-Jones at the University of Bristol has devised a simple three-step cycle to understand how the brain learns, based on what we now know: Engage, Build, Consolidate.

But there’s more – by bringing together neuroscience and artificial intelligence, we can actually build machines which can do things we can’t do. At a certain level, the machines become more accurate at doing things than humans are – and they can assimilate far greater amounts of information in a much shorter time.

I can foresee a day – and in research terms it isn’t far off – when teachers will be helped by virtual classroom assistants. They’ll use big data techniques – for example, collecting data anonymously from huge samples of pupils so that any teacher can see how his or her pupils’ progress matches up.

In future, teachers might wear smart glasses so they can receive real-time information – which child is having a problem learning a particular technique, and why? Is it because he’s struggling to overcome an apparent contradiction with something he already knows – or has his attention simply wandered? Just such a system has been showcased at the London Festival of Learning this week, in fact.

Of course, we have to think carefully about all this – particularly when it comes to data collection and privacy. But it’s possible that in future a machine will be able to read a child’s facial expression more accurately than a teacher can – is he anxious, puzzled, or just bored? Who’s being disruptive, who’s not applying certain rules in a group activity? Who’s a good leader?

This is not in any sense to replace teachers – it’s about giving them smarter information. Do you remember how you felt when your teacher turned to the blackboard with the words: ‘I’ve got eyes in the back of my head, you know?’ You didn’t believe it, did you? But in future, with a combination of neuroscience and computer science, we can make that fiction a reality.

This blog was first posted on the IOE London blog.
