Constitutional crisis?

Robert Singh, Professor of Politics at Birkbeck, defends the US constitution at a time when many say it offers more problems than solutions. His ideas are explored further in his new book In Defense of the United States Constitution, available from Routledge. 

According to that eminent politics scholar, Morrissey (Spent the Day in Bed), we should stop watching the news, “because the news contrives to frighten you.” As far as politics in the United States goes, he surely has a point. Breathlessly excitable news coverage and learned academic pronouncements of the “death” of democracy together induce a sense of bewilderment, producing more heat than light about what ails America. And invariably this is traced to the ultimate political “original sin,” the US Constitution, faulty more by defective design than cack-handed execution.

Nowhere is this more true than in the Trump presidency, whose macabre logic – feed a craven media that thrives on outrage with its daily dose of “controversy” – rarely fails to produce the desired ratings hit. Opinion surveys confirm the resulting disenchantment: in 2017, only 46 percent of Americans were satisfied with how democracy was working. In 2018, a mere 50 percent said their system was basically sound. 81 percent thought the Founders would be upset with the functioning of federal institutions while only 11 percent imagined they would be happy. Four in five Americans were either dissatisfied (60 percent) or angry (20 percent) at Washington.

But is the Constitution, as many scholar-activists assert, the source of, rather than the remedy to, US problems, from gun violence to agitated air passengers invoking the “right” to travel with “emotional support animals”? Does Trump’s presidency again reveal its inherent fragility, proclivity to periodic crisis and the hollowness of eighteenth-century parchment promises? These altogether more problematic claims are triply doubtful.

First, we should separate politics from constitutionalism and beware the all-too-promiscuous use of the term “crisis.” A genuine constitutional crisis requires a disagreement about constitutional obligations that is impossible to resolve via constitutional means. The Civil War was the one indisputable such crisis in US history: an existential conflict posing a choice between alternatives that allowed no compromise. Others, from Watergate and Iran-Contra to Monicagate and today’s lurid charges, are more resolvable political crises with constitutional dimensions.

Second, while Trump has undermined multiple norms and conventions previously taken for granted, the extent to which these indict the constitutional design is questionable. Even if a Democratic House of Representatives begins impeachment proceedings against him in January 2019, the question of whether Trump’s alleged violations of statute law rise to the standard of “high crimes and misdemeanours” is ultimately a political, not a legal, one. Trump’s tempestuous encounter with the Constitution has proven only his most recent and important instance of serial infidelity. But it has damaged, not endangered, the republic.

Third, there exists a powerful case – which impeachment would vindicate, not repudiate – that “the system worked.” The political circus since January 2017 may have been compelling viewing in its car crash qualities, but it also demonstrates certain enduring structural strengths of the US design. Trump has not had his way on public policy, despite his own party controlling both houses on Capitol Hill. The courts have been able and willing to strike down laws and executive actions they deemed unconstitutional. Civil society remains a cacophonous and vibrant force.

None of this is to suggest that all is right with things constitutional. The Constitution is far from flawless and some modernizing fine-tuning would not go amiss, regarding the composition of the US Senate, the method of allocating votes in the Electoral College, and the amendment process. But it merits neither rubbishing nor romanticizing. A rare focus for unity in an otherwise fractious polity, the Constitution is not the source of today’s problems (the Second Amendment, for example, does not prohibit strong firearms regulation). Nor are constitutional “fixes” the solution. Radical change, where feasible, is mostly undesirable, and where desirable, mostly unfeasible. It is politics – above all, the deeply entrenched partisan polarization that preceded and will outlast the 45th president – that is responsible for contemporary maladies.

On most comparative metrics, the Constitution performs well, comparing favourably with its counterparts. An effective constitution should provide a stable framework for government by channelling societal conflict into everyday politics, allow the expression in law and policy of majority preferences while safeguarding protections for individual rights and liberties, ensure the peaceful transfer of power, and permit the means of its own revision through amendments and interpretation. The Constitution meets these core requirements, and its own Preamble’s six objectives, now more fully than at any time in US history. It is not merely adaptive but “antifragile”: gaining strength from the tests to which it is periodically subject.

All of which suggests: Keep calm and carry on constitutionalizing. The republic has not been read the last rites. The Constitution has not been trampled under goose-steps. American democracy is not in its death throes. The news might frighten you, but the US Constitution should be a cause for enduring comfort rather than disquiet.


Burnout in the NHS: what happens when doctors become patients?

Dr Kevin Teoh, Lecturer in Organizational Psychology, discusses burnout and mental health trends in NHS consultants – which is the subject of a new paper, co-authored with Dr Atir Khan, Dr Juliet Hassard, and Dr Saiful Islam.

Not many days go by without some discussion of the current state of our National Health Service (NHS) – whether it is increasing patient demands and numbers, concerns around funding, patient safety, or Brexit. And because the NHS is one of the largest employers in the world, changes across it can have significant ramifications for its workforce. In the press, there is increasing concern about how healthcare staff are coping in light of these changes.

One of the main topics of discussion is burnout, which consists of three components – emotional exhaustion, depersonalisation, and reduced personal accomplishment.

Emotional exhaustion refers to being emotionally drained and exhausted in the workplace, while depersonalisation is a psychological withdrawal from relationships and the development of negative and cynical feelings towards people. Reduced personal accomplishment represents a lack of work effectiveness due to emotional exhaustion and depersonalisation.

But why is burnout in the NHS so important? And what are the factors in the work environment that precede it? We sought to examine these questions in a sample of NHS consultants drawn from England, Scotland and Wales. Consultants are the most senior and highly trained doctors in the healthcare workforce. Through their role as supervisors and educators, they are also pivotal in the development of the current and future healthcare workforce. Yet far less is known about their working conditions compared to those of their junior doctor and nursing colleagues.

We focused on two aspects of work – work-related pressure and autonomy. The first reflects the workload and pressure that consultants are under, with considerable evidence from other sectors linking it to workers’ health outcomes. The second reflects a potentially positive aspect of work, as high levels of job autonomy may benefit consultants (and workers more generally). This is because job autonomy gives consultants the flexibility to manage their workload, work on tasks they find more interesting, and problem-solve.

In addition to the working conditions and burnout relationship, we also wanted to see how these linked in with staff outcome measures, including the levels of depressive and anxiety symptoms that consultants were experiencing as well as their intention to retire early from the NHS.

The results paint a worrying picture. Out of the 593 consultants who took part, about a third had poor levels of psychological health, including emotional exhaustion (38.7%), depersonalisation (20.7%), anxiety symptoms (43.1%), and depressive symptoms (36.1%). These figures not only highlight that our consultants are struggling; they also suggest that poor mental health among consultants has increased since it was last measured.

As expected, both aspects of work that we measured – (high) work-related pressure and (low) job autonomy – predicted adverse psychological health. But what are the implications of poor working conditions and burnout among consultants in the NHS?

Our findings suggest an impact on both severe mental health issues (i.e. symptoms of anxiety and depression) and an intention to retire early. This means that when consultants in the NHS experience high work-related pressure and low job autonomy, the subsequent development of burnout could lead to more severe downstream issues.

From a psychological perspective, this makes sense. For consultants, struggling in an environment where they have little autonomy and high work-related pressure requires energy and coping resources. Continual exposure then takes a toll on consultants’ psychological resources, which leads to burnout over time, as doctors feel exhausted and depersonalise from the people around them. When doctors burn out, this exacerbates the demand on their psychological resources even further. What then happens is the manifestation of further mental and physical symptoms, such as the development of depressive and anxiety symptoms. Also, consultants may choose to leave that difficult work environment to protect their remaining psychological resources – which is why we see that consultants who report higher levels of burnout are also more likely to intend to retire early.

While it is important to note that we did not diagnose actual anxiety and depression in our sample of consultants, the symptoms we measured are strongly associated with these diagnoses. Consultants struggling with such symptoms are more likely to be impaired in their performance, which is consistent with the growing research evidence base linking poor mental health among doctors with lower standards of patient care. Should the prevalence of depression and anxiety among consultants go up, it would inadvertently increase the demands on the NHS as doctors become patients themselves.

The intention to retire early is something that should be of concern to us all, especially as the NHS is currently facing a shortage of healthcare staff, including consultants. If consultants do go on to retire early, this would not only reduce the number of doctors in the NHS but would lead to a skills gap in the healthcare service. This might then generate a continuous downward spiral in which consultants experiencing poor working conditions burn out, leading to the development of more severe mental health issues and/or early retirement from the NHS. In turn, this further increases the demands on the health service that impact the remaining consultants, thereby continuing the cycle.

Managers and policymakers need to be aware of the current state of poor psychological health among NHS consultants. They also need to recognise that decisions and changes made to the working conditions of doctors (and healthcare staff in general) not only impact their burnout levels, but can also have more severe downstream consequences. Where it is not feasible to alleviate some of the work-related pressures, ways to increase the level of consultants’ job autonomy should be considered.

These findings, along with some of our other research in this area, emphasise the need to address the systemic issues within the work environment that influence the working conditions of NHS consultants. Although more individual-focused interventions, such as resilience training, may have a role to play, on their own these are clearly insufficient. In the UK, there has been limited research looking at interventions to manage the working conditions of doctors. Nevertheless, growing research from elsewhere in the world provides some examples that may be relevant to the NHS here.

Ultimately, as potential users of the NHS ourselves, we should all be concerned about the mental health of consultants and their working conditions.

The full study, Psychosocial work characteristics, burnout, psychological morbidity symptoms and early retirement intentions: a cross-sectional study of NHS consultants in the UK, is published in BMJ Open.


Using historical evidence to improve conservation science

In a new paper, Dr Simon Pooley from the Department of Geography argues that moving beyond historical ecology to draw on methods from environmental history is crucial for the field and allied sub-disciplines of ecology like restoration ecology and invasion biology.

As researchers turn increasingly to history as a resource for understanding the trajectory of a planet under unprecedented human pressure, we need to be clear about the methodological challenges of finding and using historical data.

Mainstream ecology recognises the importance of recovering long-term ecological records by applying rigorous methods to the study of charcoal, ice cores, pollen and tree rings. The uses of historical textual and graphic sources are seldom accorded analogous rigour.

Ecologists use scientific methods to learn from the past, drawing on resources developed by historical ecologists to source data and assemble it into time series, predicting fluctuations and trends in variables over time.

Recently, interest in the impacts of human actions on earth systems has led scientists to aggregate data on generalised human perturbations like greenhouse gas emissions or the industrial manufacture of fertilizers. They then compare these with variables tracking impacts on related earth systems like earth surface temperatures and ocean acidification.

For environmental historians, the resulting framings of relationships between human agency and earth systems are often too linear and generalised. Further, historians urge analysts to pay closer attention to the contextual factors shaping data production, rather than dip uncritically into ‘the past’ as a ‘goody bag’ of useful facts.

This paper is aimed at ecologists rather than historians; it aims to communicate some of the major considerations they should be aware of when collecting and interpreting historical data. These can be handily divided into four themes, summarised as questions:

  • Why, how, and by whom were the data collected?
  • How were the data measured (relating to time periods and spatial units in particular)?
  • What conceptual filters shaped data collection and interpretation?
  • What anomalies, exceptional events and human influences shaped data collection?

The paper notes that the purposes for which data were initially collected, and by whom they were collected, using which methods and tools, all importantly shape what is (and is not) collected and in what form. Long time-series often combine datasets shaped in different ways in accordance with all these factors. A simple example is that, beginning in the late sixteenth century, the Gregorian calendar was adopted sporadically across Europe over several centuries, and then in parts of Asia and elsewhere in the late 1800s and early 1900s. In each instance, days were won or lost as regional or national calendars were adjusted accordingly.

The reporting periods used by institutions vary (and change) and researchers compiling time series must take these changes, and ‘longer’ and ‘shorter’ years used to adjust to new reporting cycles, into account when incorporating their data.

And of course datasets should never be taken at face value: whale catch data published by Russia and Japan in the postwar period, for instance, should be treated with due scepticism.

In the 1970s, mathematical models were devised to estimate historical (baseline) saltwater crocodile populations in northern Australia, as a means for judging when conservation measures introduced early in that decade had resulted in an adequate recovery to pre-exploitation levels. However, when a series of attacks followed an unexpectedly quick population recovery, public opinion demanded a rethink of models which calculated that these populations were still at only 2% of their historical size.

Northern Territory researchers consulted historical records and interviewed former hunters, discovering that pre-exploitation crocodile population densities had varied significantly from river system to river system. According to their revised estimates, populations were at 30-50% of pre-exploitation levels, sufficient to allow controlled sustainable use of crocodiles. This resulted in a policy shift which has proven to be a successful long-term management approach in the region.

The ways in which underlying theories and conceptual frameworks shape data collection are challenging to work out. Examples include powerful narratives about desertification and deforestation which shaped how colonial foresters and agricultural experts interpreted landscape change and management in the colonies in which they worked. Ideas about fire, forests and rainfall long shaped expert thinking about landscape change in India, Madagascar and many parts of Africa, and arguably still influence management interventions aimed at changing or preventing traditional practices by local peoples. In India, ideas about degradation through burning still guide management of savannas, which are arguably misclassified as degraded woodlands.

The four themes identified in my paper are discussed in the text in relation to a schematic figure. Of course, any such visualisation conceals considerable complexity, but it provides a useful framework for exploring some of the challenges. The paper gestures to some of these further complexities through examples from my monograph Burning Table Mountain: an environmental history of fire on the Cape Peninsula (Palgrave 2014; UCT Press 2015) and a recent paper on fire and expertise in South Africa (Environmental History 23:1, 2018). These examples show some ways in which published theories don’t always match the authors’ recommendations for management in the field, or the results of experimental science are ignored in the face of powerful narratives about ecological phenomena like fire, soil erosion and drought. Finally, they demonstrate how complex factors – social, economic, cultural – combine in diverse ways to shape whether and how scientifically informed policies are implemented.

The paper concludes by arguing that the use of historical approaches as summarised in its accompanying figure can help improve the accuracy of the data we draw on to do important conservation work like estimate the conservation status of ecosystems or species, or prioritise conservation action. At a deeper level, historical methods allow us to tease apart the many interacting factors which shape policy and management and influence their interrelations with other narratives, knowledge systems and priorities in particular contexts, over time.

The paper is available (open access) with supporting figure and references in Conservation Letters.


Eyes in the back of their heads?

Is educational neuroscience all it’s cracked up to be? In a keynote speech at the London Festival of Learning, Professor Michael Thomas argues it can be used to provide simple techniques to benefit teaching and learning, and that in future machine learning could even give teachers eyes in the backs of their heads.

I could perhaps have been forgiven for viewing with some trepidation the invitation to address a gathering of artificial intelligence researchers at this week’s London Festival of Learning. At their last conference, they told me, they’d discussed my field – educational neuroscience – and come away sceptical.

They’d decided neuroscience was mainly good for dispelling myths – you know the kind of thing. Fish oil is the answer to all our problems. We all have different learning styles and should be taught accordingly. I’m not going to go into it again here, but if you want to know more you can visit my website.

The AI community sometimes sees educational neuroscience mainly as a nice source of new algorithms – facial recognition, data mining and so on.

But I came to see this invitation as an opportunity. There is a lot that is positive to say about neuroscience and what it can do for teachers – both now, and in the future.

Let’s start with now. There are already lots of basic neuroscience findings that can be translated into classroom practice. Already, research is helping us see the way particular characteristics of learning stem from how the brain works.

Here’s a simple one: we know the brain has to sleep to consolidate knowledge. We know sleep is connected to the biology of the brain, and to the hippocampus, where episodic memories are stored. The hippocampus has a limited capacity – around 50,000 memories – and would fill up in about three months. During sleep, we gradually transfer these memories out of the hippocampus and store them more permanently. We extract key themes from memories to add to our existing knowledge. We firm up new skills we’ve learnt in the day. That’s why it’s important to sleep well, particularly when we’re learning a skill. This is a simple fact that can inform any teacher’s practice.

And here’s another: We forget some things and not others – why? Why might I forget the capital of Hungary, when I can’t forget that I’m frightened of spiders? We now know the answer – phobias involve the amygdala, a part of the brain which operates as a threat detector. It responds to emotional rather than factual experiences, and it doesn’t forget – that’s why we deal with phobias by gradual desensitisation, over-writing the ‘knowledge’ held in the amygdala. Factual learning is stored elsewhere, in the cortex, and operates more on a ‘use it or lose it’ basis. Again, teachers who know this can see and respond to the different ways that children learn and understand different things.

And nowadays, using neuroimaging techniques, we can actually see what’s going on in the brain when we’re doing certain tasks. So brain scans of people looking at pictures of faces, of animals, of tools and of buildings show different parts of the brain lighting up.

[Figure: human-rated similarity between pictures]

The visual system is a hierarchy, with a sequence of higher levels of processing – so-called ‘deep’ neural networks. At the bottom level (an area called V1), brain activity responds to how similar the images are. It might respond similarly to a picture of a red car and a red flower, and differently to a picture of a blue car. Higher up the hierarchy, the brain activity responds to the different categories images are in (for example, in the inferior temporal region). Here, the patterns for a red car and a blue car will look more similar because both are cars, and different to the pattern for a red flower. This kind of hierarchical structure is now used in machine learning. It’s what Google’s software does – Is it a kitten or a puppy? Is John in this picture?
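A toy numerical sketch can make this concrete. Everything here is invented for illustration – each “image” is reduced to a hand-made four-number feature vector, and the “IT-like” layer is just a fixed weight matrix that down-weights colour – so this is a cartoon of the idea, not a real model of the visual system:

```python
import numpy as np

# Toy "images" as feature vectors: [red, blue, wheels, petals].
# Colour features are given larger raw magnitude, so low-level
# similarity is dominated by appearance, as in the V1 example.
red_car    = np.array([2.0, 0.0, 1.0, 0.0])
blue_car   = np.array([0.0, 2.0, 1.0, 0.0])
red_flower = np.array([2.0, 0.0, 0.0, 1.0])

def cosine(a, b):
    """Cosine similarity between two activity patterns."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def v1_like(x):
    # Early layer: responds to the raw image, so colour dominates.
    return x

# Higher "IT-like" layer: a fixed (made-up) weight matrix that
# down-weights colour and emphasises shape/category features.
W_it = np.diag([0.1, 0.1, 1.0, 1.0])

def it_like(x):
    return W_it @ x

# Low in the hierarchy, the red car looks more like the red flower...
print(cosine(v1_like(red_car), v1_like(red_flower)))  # high: shared colour
print(cosine(v1_like(red_car), v1_like(blue_car)))    # lower: different colour

# ...but higher up, the two cars cluster together regardless of colour.
print(cosine(it_like(red_car), it_like(blue_car)))    # high: both cars
print(cosine(it_like(red_car), it_like(red_flower)))  # low: different category
```

The same pattern similarity flips between layers: the red car is closer to the red flower at the “V1” level but closer to the blue car at the “IT” level, which is the category structure the higher layer extracts.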

Again, teachers who know that different parts of the brain do different things can work with that knowledge. Brain science is already giving schools simple techniques – Paul Howard Jones at the University of Bristol has devised a simple three-step cycle to understand how the brain learns, based on what we now know: Engage, Build, Consolidate.

But there’s more – by bringing together neuroscience and artificial intelligence, we can actually build machines which can do things we can’t do. At a certain level, the machines become more accurate at doing things than humans are – and they can assimilate far greater amounts of information in a much shorter time.

I can foresee a day – and in research terms it isn’t far off – when teachers will be helped by virtual classroom assistants. They’ll use big data techniques – for example, collecting data anonymously from huge samples of pupils so that any teacher can see how his or her pupils’ progress matches up.

In future, teachers might wear smart glasses so they can receive real-time information – which child is having a problem learning a particular technique, and why? Is it because he’s struggling to overcome an apparent contradiction with something he already knows – or has his attention simply wandered? Just such a system has been showcased at the London Festival of Learning this week, in fact.

Of course, we have to think carefully about all this – particularly when it comes to data collection and privacy. But it’s possible that in future a machine will be able to read a child’s facial expression more accurately than a teacher can – is he anxious, puzzled, or just bored? Who’s being disruptive, who’s not applying certain rules in a group activity? Who’s a good leader?

This is not in any sense to replace teachers – it’s about giving them smarter information. Do you remember how you felt when your teacher turned to the blackboard with the words: ‘I’ve got eyes in the back of my head, you know?’ You didn’t believe it, did you? But in future, with a combination of neuroscience and computer science, we can make that fiction a reality.

This blog was first posted on the IOE London blog
