Crystallography: past, present and future (Science Week 2014)

This post was contributed by Dr Clare Sansom, Senior Associate Lecturer in Birkbeck’s Department of Biological Sciences

Prof Paul Barnes sets the scene for one of the experiments he carried out in the Crystallography lecture

The second of the Science Week lectures from the Department of Biological Sciences, which was presented on 2 July 2014, was a double act from two distinguished emeritus professors and Fellows of the College, Paul Barnes and David Moss. Remarkably, they both started their working lives at Birkbeck on the same day – 1 October 1968 – and so had clocked up over 90 years of service to the college between them by Science Week 2014.

The topic they took was a timely one: the history of the science of crystallography over the past 100 years. UNESCO has declared 2014 to be the International Year of Crystallography in recognition of the seminal discoveries that started the discipline, which were made almost exactly 100 years ago; a number of the most important discoveries of that century were made by scientists with links to Birkbeck.

The presenters divided the “century of crystallography” into two, with Barnes speaking first and covering the first 50 years. In giving his talk the title “A History of Modern Crystallography”, however, he recognised that crystals have been observed, admired and studied for many centuries. What changed at the beginning of the last century was the discovery of X-ray diffraction. Wilhelm Röntgen was awarded the first-ever Nobel Prize for Physics for his 1895 discovery of X-rays, but it was almost two decades before anyone thought of directing them at crystals. The breakthroughs came when Max von Laue showed that a beam of X-rays can be diffracted by a crystal to yield a pattern of spots, and the father-and-son team of William Henry Bragg and William Lawrence Bragg showed that it was possible to derive information about the atomic structure of crystals from their diffraction patterns. These discoveries also settled – to some extent – the debate about whether X-rays were particles or waves, as only waves diffract; we now know that all electromagnetic radiation, including X-rays, can be thought of as both particles and waves.
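
The Braggs’ insight can be summed up in a single relationship, now known as Bragg’s law, which links the wavelength of the X-rays, the spacing between parallel planes of atoms in the crystal and the angle at which a diffraction spot appears; it is reproduced here in its standard textbook form simply as a reminder of how little is needed to connect a pattern of spots to atomic spacings.

```latex
% Bragg's law: a diffraction spot appears when X-rays reflected from
% successive planes of atoms interfere constructively, i.e. when their
% path difference is a whole number of wavelengths.
\[
  n\lambda = 2d\sin\theta
\]
% n      : order of the reflection (an integer)
% lambda : wavelength of the X-rays
% d      : spacing between parallel planes of atoms in the crystal
% theta  : angle between the incoming beam and those planes
```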

Von Laue and the Braggs were awarded Nobel Prizes for Physics in 1914 and 1915 respectively, and between 1916 and 1964 no fewer than 13 more Nobel Prizes were awarded to 18 more scientists for discoveries related to crystallography. Peter Debye, who won the Chemistry prize in 1936, showed how to quantify the thermal motion of atoms as vibrations within a crystal. He also invented one of the first powder diffraction cameras, used to obtain diffraction patterns from powders of tiny crystallites. Another Nobel Laureate, Percy Bridgman, studied the structures of materials under pressure: it has been said that he would “squeeze anything he could lay his hands on”, often to extremely high pressures.

Scientists and scientific commentators often argue about which of their colleagues would have most deserved to win the ultimate accolade. Barnes named three who, he said, could easily have been Nobel Laureates in the field of crystallography. One, Paul Ewald, was a theoretical physicist who had studied for his PhD under von Laue in Munich, and the other two had strong links with Birkbeck. JD “Sage” Bernal was Professor of Physics and then of Crystallography here; he was famous for obtaining, with Dorothy Crowfoot (later Hodgkin), the first diffraction pattern from a protein crystal, but his insights into the atomic basis of the very different properties of carbon as diamond and as graphite were perhaps even more remarkable. He took on Rosalind Franklin, whose diffraction patterns of DNA had led Watson and Crick to deduce its double-helical structure, after she left King’s College, and she did pioneering work on virus structure here until her premature death in 1958.

Barnes ended his talk and led into Moss’s second half-century with a discussion of similarities between the earliest crystallography and today. Then, as now, you only need three things to obtain a diffraction pattern: a source of X-rays, a crystalline sample, and a recording device; the differences all lie in the power and precision of the equipment used. He demonstrated this with a “symbolic demo” that ended when he pulled a model structure of a zeolite out of a large cardboard box.

David Moss then took over to describe some of the most important crystallographic discoveries from the last half-century. His talk concentrated on the structures of large biological molecules, particularly proteins, and he began by explaining the importance of protein structure. All the chemistry that is necessary for life is controlled by proteins, and knowing the structure of proteins enables us to understand, and potentially also to modify, how they work.

Even the smallest proteins contain thousands of atoms; in order to determine the positions of all the atoms in a protein using crystallography you need to make an enormous number of measurements of the positions and intensities of X-ray spots. The process of solving a protein structure is, in principle, no different from solving a small-molecule crystal structure, but in practice it is far more complex and time-consuming. Very briefly, it involves crystallising the protein, shining an intense beam of X-rays on the resulting crystals to produce diffraction patterns, and then doing some extremely complex calculations. The first protein structures, obtained without the benefit of automation and modern computers, took many years and sometimes even decades.
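
Those “extremely complex calculations” are, at heart, a giant Fourier synthesis: each measured spot contributes one term to a sum that gives the electron density at every point in the crystal, and the atoms are then fitted into that density. In standard crystallographic notation (a general formula, not specific to any structure mentioned in the lecture) the calculation looks like this:

```latex
% Electron density at fractional coordinates (x, y, z) in the unit cell,
% reconstructed from the diffraction data by Fourier synthesis.
\[
  \rho(x, y, z) = \frac{1}{V} \sum_{h,k,l} |F_{hkl}| \,
      e^{\, i\alpha_{hkl}} \, e^{-2\pi i (hx + ky + lz)}
\]
% V         : volume of the unit cell
% |F_hkl|   : amplitude of reflection (h, k, l), obtained from the measured
%             intensity of the corresponding spot
% alpha_hkl : phase of that reflection, which cannot be measured directly;
%             recovering it is the famous "phase problem"
```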

Thanks to Bernal’s genius, energy and pioneering spirit, Birkbeck was one of the first institutes in the UK to have all the equipment that was needed for crystallography. This included some of the country’s first “large” computers. One of the first electronic stored-program computers was developed in Donald Booth’s laboratory here in the 1950s. In the mid-1960s the college had an ATLAS computer with a total memory of 96 kB. It occupied the basements of two houses in Gordon Square, and crystallographers used it to calculate electron density maps of small molecules. Protein crystallography only “took off” in the 1970s with further improvements in computing and automation of much of the experimental technique.

Today, protein crystallography can almost be said to be routine. The first step, crystallising the protein, can still be an important bottleneck, but data collection at powerful synchrotron X-ray sources is extremely rapid and structures can be solved quite easily with user-friendly software that runs on ordinary laptops. There are now over 100,000 protein structures freely available in the Protein Data Bank, and about 90% of these were obtained using X-ray crystallography. The techniques used to obtain the other 10,000 or so, nuclear magnetic resonance and electron microscopy, are more specialised.

Moss ended his talk by describing one of the proteins solved in his group during his long career at Birkbeck: a bacterial toxin that is responsible for the disease gas gangrene. The toxin destroys muscle cells by punching holes in their membranes, and its victims usually have to have limbs amputated to save their lives. Knowing its structure has allowed scientists to understand how the toxin works, which is the first step towards developing drugs to stop it. But you can learn even more about how proteins work if you also understand how they move. Observing and modelling protein motion in “real time” still poses many challenges for scientists as the second century of crystallography begins.


Redesigning Biology (Birkbeck Science Week 2014)

This post was contributed by Dr Clare Sansom, Senior Associate Lecturer in Birkbeck’s Department of Biological Sciences

Dr Vitor Pinheiro (right) and Professor Nicholas Keep, Dean of the School of Science

The first of two Science Week talks on Wednesday 2 July was given by one of the newest lecturers in the Department of Biological Sciences, Dr Vitor Pinheiro. Dr Pinheiro holds a joint appointment between Birkbeck and University College London, researching and teaching in the new discipline of synthetic biology. In his talk, he explained how it is becoming possible to re-design the chemical basis of molecular biology and discussed a potential application of this technology in preventing contamination of the natural environment by genetically modified organisms.

Synthetic biology is a novel approach that turns conventional ways of doing biology upside down. Biologists are used to a “reductionist” approach to their subject, breaking complex systems down into, for example, their constituent genes and proteins in order to understand them. In contrast, synthetic biology is more like engineering, a “bottom-up” approach that tries to assemble biological systems from their parts. Pinheiro introduced this concept using a quotation from the famous US physicist Richard Feynman: “What I cannot create, I do not understand”. Synthetic biologists often use vocabulary that is more characteristic of engineers or computer scientists: words like “modules”, “devices” and “chassis”.

All life on Earth is dependent on nucleic acids and proteins; the former store and carry genetic information, and the latter are the “workhorses” of cells. They are linked through the Central Dogma of Molecular Biology, which states, put somewhat simplistically, that “DNA makes RNA makes protein”. The information that goes to make up the complexity of cells and organisms is held in DNA and “translated” into the functional molecules, the proteins, via its intermediate, RNA. The mechanism through which today’s biology arose – evolution – is well enough understood, but it is not yet clear whether evolution had to create the biology we see today or whether it is a kind of “frozen accident”. There is, after all, only one “biology” for us to observe. But synthetic biologists are trying to build something different.
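
The first step of the Central Dogma can be pictured as a simple complementarity rule: each base of the DNA template specifies one base of the RNA copy. The minimal Python sketch below illustrates just that lookup and nothing more.

```python
# "DNA makes RNA": transcription builds an RNA copy whose bases are
# complementary to the DNA template strand (A pairs with U, T with A,
# C with G and G with C).
DNA_TO_RNA = {"A": "U", "T": "A", "C": "G", "G": "C"}

def transcribe(dna_template):
    """Return the messenger RNA complementary to a DNA template strand."""
    return "".join(DNA_TO_RNA[base] for base in dna_template)

print(transcribe("TACAAAGGC"))  # -> AUGUUUCCG, which is then translated into protein
```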

DNA is made up of three chemical components and structured like a ladder: the bases that carry the information form the rungs, while alternating sugar rings and phosphate groups form the two sides, or backbone. All three components can be chemically modified, affecting the physical properties and the information-storage potential of the resulting nucleic acid. Any modification that does not disrupt the natural base-pairing seen in DNA and RNA can be exploited to make a nucleic acid that can exchange information with nature. And if the enzymes that in nature replicate DNA or synthesise RNA can also be exploited to synthesise and replicate these modified nucleic acids, that process will be substantially more efficient than chemical replication. Modifying different components presents different re-engineering challenges and different potential advantages. Sugar modifications are not common in biology and are expected to be harder to engineer than changes to the bases; on the other hand, they are expected to make the modified nucleic acid more resistant to biological degradation. These synthetic nucleic acids have been generically termed “XNA”.

Pinheiro, as part of a European consortium, led the development of synthetic nucleic acids in which the natural five-membered sugar rings have been replaced by six-membered ones. These XNAs are more resistant than DNA to chemical and biological breakdown, and have low toxicity, but they are poor substrates for the polymerases that catalyse DNA replication and RNA synthesis. He has therefore harnessed the power of evolution to create “XNA polymerases” through a process called directed evolution. In this, hundreds of millions of variant polymerases are created and those that happen to be better at synthesising the selected XNA are isolated. The process is repeated until polymerases with the required activity have been isolated.
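
In outline, directed evolution is a generate, screen and select loop. The Python sketch below is only an illustration of that loop, not the consortium’s actual protocol: the mutate and xna_activity functions stand in for what, in the laboratory, are error-prone gene libraries and experimental selections for XNA synthesis.

```python
import random

def mutate(gene, rate=0.01):
    """Return a copy of a polymerase gene with random point mutations
    (a stand-in for building an error-prone gene library in the lab)."""
    bases = "ACGT"
    return "".join(random.choice(bases) if random.random() < rate else b
                   for b in gene)

def xna_activity(gene):
    """Score how well the encoded polymerase handles the chosen XNA.
    In reality this is an experimental selection, not a function call;
    here it is just a toy scoring rule used to rank variants."""
    return gene.count("GGC")

def directed_evolution(parent_gene, rounds=10, library_size=1_000,
                       survivors=10, target_activity=5):
    """Repeat mutate -> screen -> select until a good enough variant appears.
    Real campaigns screen libraries of hundreds of millions of variants."""
    pool = [parent_gene]
    for _ in range(rounds):
        # 1. Diversify: build a library of variants of the current survivors.
        library = [mutate(random.choice(pool)) for _ in range(library_size)]
        # 2. Screen: rank every variant by its (proxy) activity on the XNA.
        library.sort(key=xna_activity, reverse=True)
        # 3. Select: keep only the best performers as parents for the next round.
        pool = library[:survivors]
        if xna_activity(pool[0]) >= target_activity:
            break  # required activity reached, so stop early
    return pool[0]

best_variant = directed_evolution("ATGC" * 300)
```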

These synthetic nucleic acids, however, still cannot take part in cell metabolism, and this is a current research bottleneck that prevents the development of XNA systems in bacterial cells. An alternative route towards redesigning biology would be to modify how the information stored in DNA and RNA is converted into proteins: redesigning and replacing the genetic code. The exquisite fidelity of the genetic code depends on another set of enzymes, the tRNA synthetases, which attach each amino acid to a small “transfer” RNA molecule that recognises its corresponding three-base sequence, or codon. This allows the amino acid to be incorporated into the right place in a growing protein chain. In nature, almost all organisms use the same genetic code. Synthetic biologists, however, are now able to build in subtle changes so that, for example, a codon that in nature signals a stop to protein synthesis is linked to an amino acid, or one that is rarely used by a particular species is linked to an amino acid that is not part of the normal genetic code.
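
That kind of change can be pictured as editing a lookup table. The fragment below is a toy Python illustration rather than real synthetic-biology tooling: it takes a small slice of the standard genetic code and reassigns the UAG “amber” stop codon to a hypothetical non-standard amino acid (labelled nsAA), which is the sort of reassignment described above.

```python
# A small slice of the standard genetic code (RNA codons -> amino acids).
STANDARD_CODE = {
    "AUG": "Met",   # start codon
    "UUU": "Phe",
    "GGC": "Gly",
    "UGG": "Trp",
    "UAA": "STOP",
    "UAG": "STOP",  # the "amber" stop codon, reassigned below
    "UGA": "STOP",
}

def recode(code, codon, new_meaning):
    """Return a copy of a codon table with one codon given a new meaning."""
    recoded = dict(code)
    recoded[codon] = new_meaning
    return recoded

def translate(rna, code):
    """Translate an RNA sequence codon by codon, stopping at a stop codon."""
    protein = []
    for i in range(0, len(rna) - 2, 3):
        meaning = code.get(rna[i:i + 3], "???")
        if meaning == "STOP":
            break
        protein.append(meaning)
    return "-".join(protein)

# Reassign UAG from "stop" to a hypothetical non-standard amino acid (nsAA).
EXPANDED_CODE = recode(STANDARD_CODE, "UAG", "nsAA")

rna = "AUGUUUUAGGGC"
print(translate(rna, STANDARD_CODE))  # Met-Phe          (UAG read as a stop signal)
print(translate(rna, EXPANDED_CODE))  # Met-Phe-nsAA-Gly (UAG now encodes the new amino acid)
```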

Any organism that has had its molecular biology “re-written” using XNA and non-standard genetic codes should be completely unable to exchange genetic information with naturally occurring organisms, and therefore would not be able to flourish or divide outside a contained environment: it could be described as being contained within a “firewall”. Such an organism would avoid the risks associated with more conventionally genetically modified organisms: that it might compete with naturally occurring organisms for an ecological niche, or that its modified genetic material might spread to them. If, or more likely when, these “genetically re-coded organisms” are released into the environment (perhaps to remove or neutralise pollutants) they will not be able to establish themselves in a natural ecological niche and will therefore pose negligible long-term risk. The more such organisms deviate from “normal” biology, the safer they will become.


The Legacy of William Morris (East London in Flux V)

This post was contributed by Elisa Engel, Architect and Director of ehk! (engelhadleykirk limited). ehk! publishes a regular blog on its website.

William Morris Gallery. Credit: Nick Bishop, Overview

East London in Flux, an event series organised by Fundamental Architectural Inclusion and Birkbeck, met at the William Morris Gallery on Wednesday 18 June for its third event. A fascinating guided tour of the collection was followed by tea, cake and debate in the museum’s café.

The William Morris Gallery, at Morris’s former home in Walthamstow, houses an exhibition on the designer, poet and socialist’s life and achievements, alongside changing exhibitions. The museum was remodelled in 2012, coinciding with the Olympics, and has since gone from strength to strength, winning the prestigious Museum of the Year award in 2013.

William Morris (1834-1896) is most famous for his involvement with the arts and crafts movement. By all accounts, throughout his life he battled with two sometimes conflicting ideals.

The Ideal Book room at the William Morris Gallery. © William Morris Gallery

The first ideal, that of beauty, diverted him from the career in the clergy for which he had been destined. It led him to study art and develop an almost obsessive interest in the details of craft. William Morris was not content to design objects and work with craftsmen to deliver his vision. He insisted on becoming a master in every discipline he touched – on knowing all there was to know about dyeing fabrics and printing patterns, about weaving tapestries and printing books. It seems almost unimaginable that one person could fit such a level of accomplishment, and such a vast output across so many disciplines, into a single lifetime.

The second ideal, that of social justice, led him to stand at the street corners of London’s East End, overcoming his fear of public speaking, to rail against inequality and poor working conditions. In his workshops, he offered decent pay and development opportunities for his employees.

William Morris aimed to make his products available to the wider population – he famously said: ‘I do not want art for a few, any more than education for a few, or freedom for a few.’

However, this is where his two ideals seemed to collide. Given the meticulous craft that went into producing his company’s artefacts, they would always remain beyond the financial reach of the “common person”. He tried to counteract this by offering a range of objects large and small, to ensure that the moderately wealthy would at least be able to afford the smaller items that embodied his aesthetic. Commissions for his company, however, came largely from wealthy clients for their refined country homes.

Following the tour of the gallery, the group sat down to discuss how William Morris would have viewed today’s world, and more specifically the changes that East London is experiencing right now. Many of his concerns appear to be surprisingly contemporary – most notably, growing income inequality and the struggle to combine quality design with ethical considerations about methods of production at prices that make objects affordable to every sector of society. A question that sparked much debate was: what would William Morris have made of Ikea and its planned housing development in the Lea Valley?

Black Horse Workshop. © Black Horse Workshop

One development he would surely have approved of is the recent emergence of shared craft spaces in London. Black Horse Workshop in Walthamstow, an easy walk from the William Morris Gallery, is one such space, offering open access to fully equipped wood and metal workshops for people who want to reconnect with the making of things.

One can also easily hazard a guess at what he would have made of the sales pitch that the company that still bears his name employs on its website: “The original William Morris and Co: The luxury of taste”…

The East London in Flux evening at the William Morris Gallery very much chimed with another event, held at the London School of Economics and organised by the Royal College of Art, the following night. This was a panel discussion under the title Kapital Architecture: Commodity, featuring Alex de Rijke (of dRMM architects), Oliver Wainwright (architecture critic at the Guardian) and Katie Lloyd-Thomas of Newcastle University. The panel discussed how the role of the designer has changed. Increasingly, architects specify proprietary systems and merely design the interfaces between them. This is just one example of how architects are complicit in reducing and narrowing their role in the construction process (while simultaneously aiming to widen it into other areas, such as social policy). In this way, they are moving further and further away from Morris’s ideal of someone who is intimately involved in the making of things. Not everyone is following this trend, but it is understandable: there is a certain economy to working with proprietary systems rather than bespoke solutions.

But maybe this is not as much of a contradiction as it may at first appear – proprietary systems are not a natural resource; they are designed, just as a wallpaper by William Morris is. Perhaps what needs to happen, if William Morris’s two ideals are to be reconciled, is for those who design our homes and cities at the larger scale to work much more closely with those who design the components that make up their physical fabric – and in this way once again to create objects and buildings that are conceived in a much more holistic way.

This year’s Venice Architecture Biennale, the 14th International Architecture Exhibition, looks at the evolution of building components from bespoke architectural solutions to manufactured products. As its curator, Rem Koolhaas, says: “There are whole sections of my buildings that I have no control over. I simply don’t know what goes into the soffits of my buildings!”

It appears that it is not just us here in East London who are pondering these questions – East London in Flux is dealing with very topical issues that are being discussed at a global level, forming part of a much wider debate.

East London in Flux continues on 16 July.


The evolutionary secrets of garden flowers described at Birkbeck’s Science Week

This post was contributed by Tony Boniface, a member of the University of the Third Age.

Science Week logo

On 3 July, Dr Martin Ingrouille, of Birkbeck’s Department of Biological Sciences, began his talk by pointing out that Darwin had studied plants for 40 years and had published books on pollination. However, Darwin knew nothing of genes and chromosomes and could not explain the rapid origin of flowering plants in the Cretaceous period.

Dr Ingrouille continued by emphasising that garden plants are sterile, exotic plants growing without their natural pollinators. They have been selected for showiness, and many are artificial hybrids. He referred to Goethe, who stressed the essential unity of floral parts, all of which have evolved from leaves.

Dr Ingrouille explained how the genetic control of flower development, in its simplest form, involves three classes of genes: A, B and C. Class A genes control sepals and petals, class B genes control petals and stamens, and class C genes control stamens and carpels. Mutations in these genes result in one floral organ being converted into another.
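
The logic of this ABC model is combinational: which organ develops in each whorl of the flower depends on which gene classes are active there. The short Python sketch below simply encodes that rule as described above; it is an illustration of the principle, not a model of any real plant (real A- and C-class mutants behave slightly differently because the two classes repress each other).

```python
# Classic ABC model: the combination of gene classes active in a whorl
# determines which floral organ develops there.
ORGAN_FROM_GENES = {
    frozenset("A"):  "sepal",
    frozenset("AB"): "petal",
    frozenset("BC"): "stamen",
    frozenset("C"):  "carpel",
}

# Gene classes normally active in whorls 1-4 (outermost to innermost).
WILD_TYPE_WHORLS = ["A", "AB", "BC", "C"]

def flower(whorls, knocked_out=""):
    """Return the organ formed in each whorl when some gene classes are lost.
    (Simplified: the mutual repression between classes A and C is ignored.)"""
    organs = []
    for active in whorls:
        genes = frozenset(active) - frozenset(knocked_out)
        organs.append(ORGAN_FROM_GENES.get(genes, "leaf-like organ"))
    return organs

print(flower(WILD_TYPE_WHORLS))                   # ['sepal', 'petal', 'stamen', 'carpel']
print(flower(WILD_TYPE_WHORLS, knocked_out="B"))  # B mutant: ['sepal', 'sepal', 'carpel', 'carpel']
```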

Floral evolution could have been driven by the duplication of these basic genes, allowing one copy to carry on performing its normal function while the other was free to give rise to a novel structure or function. New plant species have often arisen by chromosome doubling in a sterile hybrid, as seen in the formation of Primula kewensis.

Dr Ingrouille then explained how much of our insight into plant evolution arose from the work of John Gerard (gardener to William Cecil), John Ray (author of the first modern textbook of botany) and the Jussieu family (three generations of gardeners to the king of France). These botanists put plants into groups that formed the first natural classification of the angiosperms.

DNA sequencing has now given us a detailed understanding of the phylogeny, or evolutionary history, of these plants. Many of the traditional families, such as the umbellifers and legumes, have survived, but some, such as the figwort family, have been split. The result is an arrangement of the flowering plants into two main groups: the Eudicots, with three grooves on their pollen grains, and the Basal Angiosperms, with only one groove. Within the Eudicots are the Core Eudicots, including the Rosids and the Asterids, whilst the Monocots sit among the Basal Angiosperms. The earliest-diverging lineage is represented by Amborella trichopoda, a weedy shrub from New Caledonia in the Pacific – a place Dr Ingrouille hopes to visit on his retirement.

Dr Ingrouille finished by urging his audience – all members of the University of the Third Age (a movement through which retired and semi-retired people come together to learn) – to examine their garden plants in detail and look for the variations that hint at their origins.
