A systematic review of interventions to support adults with ADHD at work – Implications from the paucity of context-specific research for theory and practice.

By Kirsty Lauder, Almuth McDowall & Harriet R. Tenenbaum (2022)

Why is this topic important?

People with Attention Deficit Hyperactivity Disorder (ADHD), or ADHDers, can face challenges at work that call for support. Identifying the best support for ADHDers is important because the workplace is somewhere many adults spend much of their lives!

What is the purpose of this article?
We wanted to know what evidence exists for effective support, and in particular whether any research addresses ADHD in the workplace. One way to identify the best forms of support is to evaluate all the published academic research on a topic using a research method called a systematic review.

We found 143 published studies that evaluated support or ‘interventions’ for adult ADHDers. We looked at what was similar and different across all the studies and wanted to know:

  • where the research was conducted;
  • who the research participants were;
  • what kinds of support were evaluated;
  • what kind of support was most effective;
  • what support is relevant to the workplace.

What personal or professional perspectives do the authors bring to this topic?
The authors identify as neurodivergent and/or have experience of working with people who identify as neurodivergent.

What did the authors find?
A third of the studies were conducted in North America; the rest were from Europe or Asia.

  • Most of the research participants were outpatients of ADHD clinics, which means they received support after getting an ADHD diagnosis from a psychiatrist.
  • 61% of the 143 studies evaluated medication and whether it reduces the core ADHD symptoms: inattention, hyperactivity, and impulsivity.
  • The remaining 39% of studies evaluated psychosocial support (e.g. training or cognitive behavioural therapy, CBT), or a combination of medication and psychosocial support.
  • Medication is effective at reducing the core symptoms in the short term.
  • Psychosocial support is effective in improving emotional and social challenges.
  • A closer look at each study revealed the important components of effective support to be:
    • a shared awareness of what ADHD is between the ADHDer and their support network.
    • a good relationship with the medical professional working with the ADHDer.
    • inclusion in group sessions with other ADHDers.
  • No studies were conducted in the workplace or related to the workplace.
  • Some of the skills training and coaching support focused on work-related challenges like time management and performance.

What do the authors recommend?
The authors recommend more research on what effective workplace support is for ADHDers. The more research there is, the easier it will be for practitioners to rely on an evidence-base for decision-making.

The existing research, mapped in this study, shows us which strategies are most effective for ADHDers:

  • A combination of medication and CBT (cognitive behavioural therapy) or skills training/coaching.
  • Involving the ADHDer’s support network.
  • Learning about ADHD and its impact on individuals.
  • A good quality relationship with support professionals.

How will these recommendations help ADHDers now or in the future?
In the future, we can apply these ideas to the workplace to make sure that managers and co-workers are included in the awareness of and support for ADHD, and to ensure that the ADHDer has psychosocial and medical support available.


Intersectional stigma at work

This is a lay summary of Doyle, N., McDowall, A., & Waseem, U. (2022). Intersectional stigma for Autistic people at work: A compound adverse impact effect on labor force participation and experiences of belonging. Autism in Adulthood.

Why is this an important issue?

Employment data show that autistic people find it harder to get and keep work. This study focuses on understanding whether multiple identities and people’s backgrounds make a difference.

What is the purpose of this study?

We asked a group of Autistic people about gender and race, as well as being gay, lesbian, bisexual, transgender or queer (LGBTQ). We asked where people live, their education, their parents’ education, and whether they had any diagnoses in addition to autism. We predicted that these factors would have a negative effect on autistic employment rates. We thought they would also affect how autistic people felt at work.

What we did

An online survey was completed by 576 autistic people. We analyzed whether their identities and backgrounds made it more or less likely that they were in work. We then asked the 387 employed people within this group about their experiences at work. We compared their experiences by identity and background to see if these made a positive or negative difference.

What we found

We found that white Autistic people living in western countries, such as the USA and in Europe, were more likely to have jobs. They were also more likely to have jobs specifically designed for Autistic people. We found that women, non-binary and transgender autistic people felt less included at work. We also found that feeling that someone cares is more important in supporting people than adjustments to work scheduling, such as flexible working.

What do these findings add to what was already known?

It is already known that autistic people are less likely to be in work than non-autistic people. This study shows that these overall numbers are masking important differences arising from gender, race and ethnicity.

What are the potential weaknesses in the study?

The survey was taken at one point in time, which doesn’t explain how these differences arose. Most people who completed the study were highly educated. We didn’t have enough people from non-western countries or communities of color. Therefore, the sample is not large or diverse enough to draw firm conclusions.

How will the study help Autistic people now or in the future?

We hope that the study inspires people to think about different identities and additional stigma in autism-at-work programs. We have provided a sample of baseline data from all over the world which shows a difference by location. Even though this is just a trend, it might spark more research looking at the crossover between autism, identities and backgrounds. It provides a starting point to help researchers who want to do longer studies that test interventions to improve autistic participation and experiences in work.


Determining the Real-World Value of Interventions in Field Research

Written by Dr Nancy Doyle.

Co-director of the Centre for Neurodiversity at Work, Dr Nancy Doyle is a Research Fellow at Birkbeck, Chartered Psychologist in organisational and occupational psychology, and the founder and owner of Genius Within CIC, a social enterprise dedicated to facilitating neurodiversity inclusion.

Real-world data is essential

Applied field research is really difficult – data can be messy and full of contradictions. I realised in my doctoral research that data from a large field study didn’t make sense. I wanted to flip open the ‘black box’ of coaching (Nielsen & Randall, 2013) to understand how being coached could improve the work performance of dyslexic adults in the workplace. My pilot studies had shown a large increase in self-rated and manager-rated performance (Doyle & McDowall, 2015). Support for dyslexic adults is much needed, as they are at significantly increased risk of career limitations, unemployment and incarceration compared with the general population (Jensen et al., 2000; Snowling et al., 2000). I wanted to find out how coaching changes their self-beliefs, their stress levels and their behaviour.

Real-world data is hard to collect

So, I had before-, immediately after- and three-months-after coaching data from 67 dyslexic adults, split into three conditions: wait-list control, one-to-one coaching and group coaching. I had a working memory score, a generalised self-efficacy score, a stress indicator and a workplace behaviour score for each. Bonferroni corrections for multiple comparisons (Perrett & Mundfrom, 2010) were somewhat disabling. All my intervention group means headed in the same direction – up! But the 3 (time) × 3 (condition) × 4 (dependent variable) design with 67 people (down from 85 at the start) was just not powerful enough to yield a conclusive result. My control group had practice effects (grrr) which waned by the third interval but ruined the time 2 analysis. My one-to-one coaching participants had a sustained uplift from time two to time three, and my group coaching condition went up at time two and then up again at time three. I was none the wiser as to how coaching might improve the difficulties associated with dyslexia at work.

Real-world data is messy

We considered if the measures were faulty. The strongest result had come from using backward digit span in the Wechsler Adult Intelligence Scale (Wechsler, 2008). The group coaching condition had increased from an average of seven to eleven (the standardised score ranges from 1 to 19; the average range is 8-12; practice effects are reported by the publisher to be 0.6). Yet this score was still not significant following Bonferroni corrections. The self-efficacy scores initially went backwards for the coaching conditions. We wondered if this was some sort of methodological artefact, or perhaps it reflected an increased self-awareness of struggles. However, they recovered by time three. Perhaps a more workplace-focused self-efficacy scale would be more effective? The behavioural measures were designed by me and, though their reliability analyses were decent, we wondered if we should use an established scale of strategies. So, I decided to re-run the study. All studies were triple-blinded: testers didn’t know to which condition testees were assigned, coaches didn’t know test scores, and I didn’t know which condition was which until after I had done the analysis.

You can imagine my delight, six months later, when I had almost identical results from my second cohort of 52 dyslexic adults (this time split into group coaching and control only). Control group practice effects at time two, persistent increases from the intervention group but not powerful enough to placate Bonferroni. So I undertook some ‘abductive reasoning’ (Van Maanen et al., 2007) to try and understand the results. This is when I noted a conundrum – a pattern in the data that shouldn’t be there if it was a straightforward null result.

Real-world people don’t respond in a homogeneous way

Looking solely at the time three minus the time one scores (total distance travelled, or the “magnitude of the effect”), the means for each measure went in the same direction. Up for the intervention groups, slightly up for controls. But they were not correlated. How could this be? Why would there be consistency at the group-level analysis (as measured by the group means) but no consistency at the individual level (correlation works by assessing the consistency of paired trajectories for each participant)? There is only one answer – the group means were masking significant disparities for individuals within each group. Now, this is where it gets technical. I tried a person-centred cluster analysis (Morin et al., 2018). In the working memory variable, I found distinct cohorts, a bi-modal distribution for the intervention group:

Some of them scored similarly to the control – a zero to small uplift, probably a practice effect. Others increased dramatically. In the other measures, I found a platykurtic distribution of improvement, some similar to the control, a bit ‘meh’, a bit more, increasing to quite reasonable and then quite large levels of improvement:

Group effect measures versus individual effects variance

But these were not the same people, which is why the correlations were not significant. In other words, some coachees had improved on working memory, some on levels of stress, some on self-efficacy and some on implementation of behavioural strategies. The coachees had taken what they wanted from the coaching, and not invested their personal development resources in the other mechanisms of change. The group level of analysis had wiped out variability in response-to-treatment and masked the impact of the coaching. This has implications for research, which is broadly dependent on the framework of null hypothesis significance tests. T-tests, ANOVAs, MANOVAs – all these depend on some sort of consistency within the group. Psychological research depends on isolating a potential variable, measuring it for each individual in a group, and crossing our fingers that the group will all behave in a similar enough way to achieve the hallowed ground of a significant p-value. But humans don’t behave in similar ways, even if they are broadly similar in age, diagnosis, employer, job role. I started wondering how many psychological approaches were ignoring the individual variability in treatment responses in favour of what works best for the dominant average, and ignoring the needs of those who don’t respond or respond negatively: mindfulness, I am looking at YOU (Farias & Wikholm, 2016).
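This conundrum is easy to reproduce with toy numbers. The sketch below uses invented change scores (not the study’s data), in which each participant improves on one of two measures but not the other: both group means rise, yet the paired change scores are negatively correlated.

```python
import math

# Invented time-3 minus time-1 change scores for one intervention group:
# each participant improves on one measure but not the other.
working_memory = [4, 0, 1, 5, 0, 4, 1, 0]
self_efficacy  = [0, 3, 0, 1, 4, 0, 3, 1]

def mean(xs):
    return sum(xs) / len(xs)

def pearson(xs, ys):
    """Pearson correlation of paired change scores."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Both means are positive: a group-level uplift on every measure ...
print(mean(working_memory), mean(self_efficacy))          # 1.875 1.5
# ... yet individual trajectories do not line up at all.
print(round(pearson(working_memory, self_efficacy), 2))   # -0.62
```

A t-test on each mean would see modest improvement everywhere; the correlation reveals that no single participant is driving improvement on both measures at once.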

Personalised pathways, group effect: meta-impact

We decided that there should be a way to understand whether or not an intervention has a good chance of working in some way for most, rather than one mechanism that will often work in the same way for many. To do this, I constructed a method for demarcating a significant improvement at the individual level, which could then be re-aggregated at the group level across all the dependent variables. I deemed participants to have improved if their improvement was equal to or greater than one standard deviation above the average level of improvement for the cohort. This set a conservative bar for counting someone as improved, marked a line in the sand for my platykurtic distributions and isolated the improvers in the bimodal distribution. Once I had a binary yes/no score for improvers, I could add up how many improvers there were in the intervention groups and how many there were in the control groups. And bingo! The intervention groups produced significantly more improvers than the controls. This could be analysed using odds ratios, ANOVA, t-tests or non-parametric equivalents (Doyle et al., 2022).
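As a minimal sketch of this improver logic (with invented numbers, and assuming the cut-off of mean plus one standard deviation of improvement is pooled across the whole cohort – the exact procedure is specified in Doyle et al., 2022):

```python
import statistics

# Invented change scores (time 3 minus time 1) on two measures;
# names and numbers are illustrative, not the study's data.
intervention = {
    "working_memory": [4, 0, 1, 5, 0, 4, 1, 0],
    "self_efficacy":  [0, 3, 0, 1, 4, 0, 3, 1],
}
control = {
    "working_memory": [1, 0, 1, 0, 1, 1, 0, 0],
    "self_efficacy":  [0, 1, 0, 1, 0, 0, 1, 0],
}

def thresholds(groups, measures):
    """Per measure: mean + 1 SD of improvement, pooled across the cohort."""
    out = {}
    for m in measures:
        pooled = [c for g in groups for c in g[m]]
        out[m] = statistics.mean(pooled) + statistics.stdev(pooled)
    return out

def count_improvers(group, cuts):
    """A participant is an 'improver' if they clear the cut-off on ANY measure."""
    n = len(next(iter(group.values())))
    return sum(any(group[m][i] >= cuts[m] for m in cuts) for i in range(n))

cuts = thresholds([intervention, control], ["working_memory", "self_efficacy"])
print(count_improvers(intervention, cuts), count_improvers(control, cuts))  # 6 0
```

The resulting binary counts per group can then feed a 2×2 odds-ratio, chi-square or non-parametric test, even though different individuals improved on different measures.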

Going into my PhD viva with a novel statistical method of analysis was a risk. However, after a decent grilling, my examiners concurred that the method was empirically sound. Almuth and Dr Ray Randall, my external examiner, helped corral the study into a single paper. Getting it past journal reviewers was another matter! Those with statistical pedigree seemed affronted at the “arbitrary dichotomization” but offered several avenues for statistical exploration, which I undertook, leading me to a place where I am way more familiar with mathematical reasoning than is comfortable for most social scientists! I enhanced the maths and roped in a mathematician, Dr Kate Knight, to lay out the process in algebraic formulae. Job done? Nope. Those with field study experience loved the idea, but struggled with the maths. Grr. Eventually, a multi-disciplinary journal, PLOS ONE, found an editor and some anonymous reviewers who could see the pragmatic, realist need for expanding the methods available to field researchers, and after a year of wrangling it was published on 17 March 2022.

Real-world data needs real-world analytic methods

What does this mean? My editor, Dr Ashley Weinberg, suggested that the meta-impact analysis of interventions has the potential to increase our understanding of psychological interventions in situ, giving a boost to field researchers. There are still limitations. For example, we need to understand more about the cut-off point: the method needs to be replicated in tandem with qualitative study to explore whether it chimes with self-reports of experience and real-world value. I know many research students and field researchers will empathise with my plight. There is a general sentiment in organisational psychology that we are hampered in research by participant attrition and low power, which leads us to design studies that have the most chance of a successful result, even though this limits us to basic designs or using large cohorts in ways that don’t match reality. My hope is that we can use meta-impact analysis to bring more ecological validity to our work as psychologists and embed nuance for individuals into study designs.

References

Dixon, R. A., & Hultsch, D. F. (1984). The Metamemory in Adulthood (MIA) instrument. Psychological Documents, 14(3).

Doyle, N., & McDowall, A. (2021). Diamond in the rough? An ‘empty review’ of research into ‘neurodiversity’ and a road map for developing the inclusion agenda. Equality, Diversity and Inclusion: An International Journal. https://doi.org/10.1108/EDI-06-2020-0172

Doyle, N.E., & McDowall, A. (2019). Context matters: A review to formulate a conceptual framework for coaching as a disability accommodation. PLoS ONE, 14(8). https://doi.org/10.1371/journal.pone.0199408

Doyle, N. E., McDowall, A., Randall, R., & Knight, K. (2022). Does it work? Using a Meta-Impact score to examine global effects in quasi-experimental intervention studies. PLoS ONE, 17(3), 1–21. https://doi.org/10.1371/journal.pone.0265312

Doyle, N., & McDowall, A. (2015). Is coaching an effective adjustment for dyslexic adults? Coaching: An International Journal of Theory, Research and Practice, 8(2), 154–168. https://doi.org/10.1080/17521882.2015.1065894

Farias, M., & Wikholm, C. (2016). Has the science of mindfulness lost its mind? BJPsych Bulletin, 40, 329–332. https://doi.org/10.1192/pb.bp.116.053686

Jensen, J., Lindgren, M., Andersson, K., Ingvar, D. H., & Levander, S. (2000). Cognitive intervention in unemployed individuals with reading and writing disabilities. Applied Neuropsychology, 7(4), 223–236. https://doi.org/10.1207/S15324826AN0704_4

King, E. B., Hebl, M. R., Morgan, W. B., & Ahmad, A. S. (2012). Field Experiments on Sensitive Organizational Topics. Organizational Research Methods, 16(4), 501–521. https://doi.org/10.1177/1094428112462608

McLoughlin, D., & Leather, C. (2013). The Dyslexic Adult. Chichester: John Wiley and Sons.

Morin, A. J. S., Bujacz, A., & Gagné, M. (2018). Person-centered methodologies in the organizational sciences: Introduction to the feature topic. Organizational Research Methods, 21(4), 803–813. https://doi.org/10.1177/1094428118773856

Nielsen, K., & Randall, R. (2013). Opening the black box: Presenting a model for evaluating organizational-level interventions. European Journal of Work and Organizational Psychology, 22(5), 601–617. https://doi.org/10.1080/1359432X.2012.690556

Perrett, J. J., & Mundfrom, D. J. (2010). Bonferroni Procedure. In N. J. Salkind (Ed.), Encyclopedia of Research Design (pp. 98–101). Sage Publications Ltd.

Santuzzi, A. M., Waltz, P. R., Finkelstein, L. M., & Rupp, D. E. (2014). Invisible disabilities: Unique challenges for employees and organizations. Industrial and Organizational Psychology, 7(2), 204–219. https://doi.org/10.1111/iops.12134

Snowling, M. J., Adams, J. W., Bowyer-Crane, C., & Tobin, V. A. (2000). Levels of literacy among juvenile offenders: the incidence of specific reading difficulties. Criminal Behaviour and Mental Health, 10(4), 229–241. https://doi.org/10.1002/cbm.362

Van Maanen, J., Sørensen, J. B., & Mitchell, T. R. (2007). The interplay between theory and method. Academy of Management Review, 32(4), 1145–1154. https://doi.org/10.5465/AMR.2007.26586080

Wechsler, D. (2008). Wechsler Adult Intelligence Scale (4th ed.). Pearson.


How analysing co-creation during the Covid-19 pandemic offers insights on the simultaneous generation of academic, social and business value

Dr Muthu de Silva from the department of Management gives an overview of the findings of two recent Organisation for Economic Co-operation and Development reports, published with her co-authors, about the role co-creation played during the Covid-19 pandemic, and how it can shape innovation going forward.  

Co-creation is a mechanism of simultaneously generating academic, business and social value. During co-creation actors of the innovation ecosystem – such as businesses, universities, governments, intermediaries and society – act as collaborators to integrate their knowledge, resources, and networks to generate mutual benefits. The idea behind co-creation is that the joint efforts towards change or impact can lead to lasting and effective innovation.  

As an institution, Birkbeck is committed to delivering theoretically rigorous research with real-world, practical impact, and co-creation is a powerful way to facilitate this. Co-creating with non-academics enables academics to integrate the needs and resources of both academic and non-academic communities, enhancing the reach and usefulness of their research.

Over the years, I’ve published about 20 journal articles on the topic of co-creation and received eight best paper awards for these publications. In 2019, I was invited by the Working Party on Innovation and Technology Policy of the Organisation for Economic Co-operation and Development (OECD) to develop a conceptual framework on co-creation between science and industry. This meant publishing a high-quality journal article and leading their 2021–2024 co-creation project, which directly influences the strategies of innovation agencies and ministries in the 37 OECD member countries, as well as the wider audience that benefits from OECD publications.

This work resulted in two reports and a journal article designed to influence innovation strategies of OECD member states. It has also resulted in leading another project regarding the importance of university and industry co-creation for a societal and economic green transition.  

Based on evidence gathered from 30 COVID-19 co-creation initiatives from 21 countries and three international cases, the two reports showed that co-creation was widely used to respond to the challenges raised by the COVID-19 pandemic. What was evident through the reports was that existing co-creation networks enabled the rapid emergence of new initiatives to address urgent needs, while digital technologies enabled establishing new – and, where necessary, socially distanced – collaborations.  

For instance, co-creation of medical innovation relied on substantially larger existing networks due to the complexity of the discovery and manufacturing processes involved in developing these innovations. The COVID-19 Türkiye Platform, the transnational Exscalate4CoV, and the UK’s Oxford-AstraZeneca initiatives are examples of this. Digital tools were also used in numerous ways. As an example, the COVID Moonshot project, which aimed to develop antiviral drugs against COVID-19 by identifying new molecules that could block SARS-CoV-2, involved three scientists who organised a hackathon, inviting researchers and virologists to submit molecules, donations and assays (testing) via Twitter, resulting in over 4,000 submissions.

Aside from funding initiatives, governments engaged actively in co-creation by granting access to their networks, advising on initiative goals and offering support to speed up delivery. The role of civil society was important as well, and the socially impactful nature of research and innovation was a motivating factor for engagement. For example, in the Austrian COVID-19 Pop-up Hub initiative, the Federal Ministry for Climate Action, Environment, Energy, Mobility, Innovation and Technology co-developed the themes (Digital Health, Distancing, Economic Buffers and State Intervention) for public virtual discussion and participatory policy idea development taking place via the Hub.

What emerged from the reports were the following lessons for the design and implementation of future policy programmes for co-creation:

  • Purpose is the strongest driver of co-creation; incentives to support co-creation should go beyond facilitating access to funding.  
  • Crisis-specific programmes may not be needed outside of a crisis, but networks and infrastructures should be strengthened during “normal” times.
  • There is room for building new collaborations between researchers and producers to accelerate innovation during “normal” times.  
  • Policy should support wider development and use of digital tools for co-creation.  
  • New approaches should be leveraged more to tap into the large pool of diverse and readily available capacities in the economy.  
  • Governments’ involvement in co-creation activities as network builders can help speed up solutions; enhanced agility in their operations should be encouraged.  
  • Public engagement in co-creation can help market uptake of new solutions. 
