Responsible research evaluation

Louise Ross is the Research & Impact Development Officer for the Department of Law at Birkbeck. In this post she explores the opportunities and challenges in measuring the impact of research, and critiques a couple of examples of research metrics in relation to the work of one academic, Professor Mike Hough.

Resisting seduction…

Simplicity is very seductive when things are complicated, nuanced, overwhelming or important.  In the context of reporting research findings in a non-academic output, you can forgive a bit of over-simplification, perhaps a small loss of sensitivity, a disregard of the least important of the arguments, in the interests of a stronger narrative. Can’t you? 

The answer (as with most things in life, I realised in middle age) is: it depends. If your “stronger narrative” means your research findings are read and disseminated by more of the stakeholders who could use them to effect change, and taken up more enthusiastically, then such “simplification losses” can be an acceptable trade-off. But there is a point beyond which these losses aren’t tolerable, when you feel you’ve compromised too far. Where that boundary lies is a judgement call that I’m sure all researchers have had to make when translating their findings for a non-academic audience.

Image of figures in a crowd

A similar judgement call, made by Birkbeck’s Impact Officers and others, relates to the use of various publication metrics. The notion that we can easily measure productivity, impact and research quality is seductive, but erroneous (despite the claims made for some of these metrics!). Metrics need to be approached with caution, or “responsibly” as we express it in Birkbeck’s policy. Please be reassured that Birkbeck’s Impact Officers understand the limitations, conditions and prohibitions of these measures!

I’m going to discuss two examples: the h-index (a citation measure), and something created to complement this kind of traditional citation metric, Altmetrics (which identifies and measures the online attention garnered by a research output). I’ve approached both very cautiously and would characterise them as a source of illumination and ideas, especially Altmetrics.

I honestly haven’t considered citation measures for any of my Law School colleagues in my role, with the exception of Professor Mike Hough, now Emeritus Professor. And in his case, only because the level of his citations was specifically mentioned in his 2018 retirement lecture. For those of you who don’t know him, Professor Hough is a criminologist, with a long and esteemed career producing policy-oriented research.  

I say esteemed, not because of some metric, but because Professor Hough was the recipient of the 2021 Outstanding Achievement Award of the British Society of Criminology (for his long contribution bringing together academic and policy research) and the 2020 European Criminology Award of the European Society of Criminology (for his lifetime contribution to criminology). That said, at Hough’s retirement lecture no less a figure than the Dean of our School of Law, Professor Stewart Motha, did refer to Mike’s citation data: “Just one measure of Mike’s scholarly influence is that from 2006 to 2010, he was the second most cited author in the British Journal of Criminology, and the eleventh most cited across five international journals.”

Mike put this in context when responding, by estimating that of the 300-odd publications he had produced over his career, fifty had been cited by other academics around fifty times; and of those, 25 had been cited a lot more than that. I did enjoy his next comment, made with wry self-deprecation: “But what’s quite bad is that 100 of them have been barely cited at all; no references – absolutely disappeared off the face of the earth. And I wonder why I bothered with those 100. I mean I could have bunked off work every third day and nobody would have noticed. And so, I’ve sacrificed quite a lot of my career on the altar of unread research.”

So, there’s one drawback of citation metrics.  Perfectly sound research might not get cited. Even Mike Hough couldn’t explain why.   

Even with one-third of his research “a flower born to blush unseen, and waste its sweetness on the desert air”*, Mike’s h-index is currently an impressive 70 (so in line with his 2018 assessment). The h-index measures volume and citations together: a score of 70 means an author has at least 70 papers that have each been cited at least 70 times. If Mike had produced fewer papers, or they had been cited less, his score would thus be lower. You can find Mike’s score on his Google Scholar profile page.
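To make that definition concrete, here is a minimal sketch in Python of how an h-index can be derived from a list of per-paper citation counts. It is purely illustrative (the citation numbers are invented), and not the method Google Scholar itself uses.

```python
def h_index(citations: list[int]) -> int:
    """Return the largest h such that at least h papers have h or more citations."""
    ranked = sorted(citations, reverse=True)  # most-cited paper first
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank  # the paper at this rank still has enough citations
        else:
            break  # every later paper has fewer citations than its rank
    return h

# Invented example: five papers with these citation counts
print(h_index([100, 50, 4, 3, 1]))  # -> 3 (three papers each cited at least 3 times)
```

The sketch also makes one of the metric’s quirks visible: the two heavily cited papers in the example contribute nothing beyond the count of the third-ranked paper.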

Image of the earth with lit up network

Does it do any good to compare scores? I’ve said Mike’s score is impressive, but that’s only because I already had the context that he is both a productive author (having a relatively large number of outputs, more than one hundred in BIROn for example) and a highly cited one [thanks to the citation analysis in Cohn, E.G., Farrington, D.P. & Iratzoqui, A., Most-Cited Scholars in Criminology and Criminal Justice 1986–2010, Springer (2014)]. Without this context, I wouldn’t have known how his score compares to those of other criminologists.

I definitely would not attempt to compare scores with researchers from other disciplines, because practices that affect citations differ so much. Just consider the attribution of authorship. In the humanities and social sciences, authorship is attributed to those who wrote the item; in many other disciplines, authorship can be attributed to all those (possibly dozens or even hundreds) who made a contribution to the work (including data collection, data analysis, or methodology). I believe a 2015 physics paper (from the teams operating the Large Hadron Collider) had a record-breaking 5,154 authors.

If I look at the h-index of the eleventh most cited physicist in Google Scholar (because Mike was the eleventh most cited criminologist; that’s the extent of my shonky logic), the individual in question is Hongjie Dai, a nanotechnologist and applied physicist at Stanford University, who has an h-index of 203. What does that mean? Probably nothing, except to confirm that scientists will almost certainly have higher h-index scores because they are listed as authors on more papers.

So, I characterise the h-index as interesting contextual information only, subject to lots of limitations and health warnings, and on my radar only because Mike’s citation data was already the subject of discussion as an indicator of esteem. It’s quite possible that more colleagues will be spotlighted by citation analyses such as that carried out by Cohn et al, which might prompt me to see what their h-index is, but I can’t envisage a scenario where I would look at it otherwise.   

Altmetrics is a different proposition. I have actually used this metric to inform my work. Without giving away too many trade secrets, I used it to identify which policy-makers were discussing a particular piece of research, i.e. who it had reached. Obviously the first step is to ask the researcher themselves who their policy contacts are, but sometimes a paper has an independent life and momentum of its own and reaches parts other research cannot reach (to co-opt the famous Heineken slogan). It may be in some global body’s library or resource bank, or on their agenda, unbeknownst to the original authors. Altmetrics links search results to the DOI of the output (a unique identifier, yay), so it caters for others’ poor practices such as omitting authors’ names, shortening the title, or misattributing institutional affiliations (e.g., University of London, rather than Birkbeck).
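As an aside for the technically curious, Altmetric exposes a public details API keyed by DOI, and the sketch below shows roughly how a lookup works. The endpoint and field names reflect my reading of the public documentation and should be treated as assumptions (the free tier is also rate-limited), and the DOI shown is a hypothetical placeholder rather than a real output.

```python
import requests  # third-party HTTP library

def altmetric_lookup(doi: str) -> dict | None:
    """Fetch Altmetric attention data for a DOI, or None if none is recorded."""
    resp = requests.get(f"https://api.altmetric.com/v1/doi/{doi}")
    if resp.status_code == 404:
        return None  # Altmetric has no attention data for this DOI
    resp.raise_for_status()
    return resp.json()

# Hypothetical placeholder DOI, not a real output
data = altmetric_lookup("10.1234/example.doi")
if data:
    # "score" is the headline attention figure shown in the doughnut;
    # the policy-sources field name is an assumption from the docs
    print(data.get("score"), data.get("cited_by_policies_count"))
```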

Many outputs in BIROn now have their Altmetric score on their landing page; the score is widely used by publishers. Look for the multi-coloured doughnut. Professor Hough’s article “Why do people comply with the law? Legitimacy and the influence of legal institutions” has an Altmetric score of 159, including three policy sources. Relatively few outputs have any kind of significant Altmetric score, so don’t worry if your outputs aren’t scoring like this.

Bibliometric analysis seems to be a growing field of study, and the impact agenda is here for the foreseeable future. It seems likely, therefore, that citation data and other metrics that promise to identify and track impact are going to continue to be a feature of the impact officers’ landscape. But be reassured that we are not so easily seduced by metrics promising to “measure impact” for us – ha! We expect only some limited useful insights and, even then, we will be meanly calculating whether it’s worth the cost of our time to wander through these foothills.

*Elegy Written in a Country Churchyard, Thomas Gray (1751).

If you’re interested in responsible research metrics and evaluation, do join us for London Open Research Week, when on Wednesday 27th October, 4–5pm, Andrew Gray will present on Responsible metrics: developing an equitable policy and Stephen Curry will look at The intersections between DORA, open scholarship, and equity.

View the full programme of events and book a place here. All events will be held online using MS Teams. 

Connected people image by Gordon Johnson. Faceless crowd image by Clker-Free-Vector-Images. Earth image by Gerd Altmann.

Open Science and ECRs

In 1823, the College’s founder Dr George Birkbeck set out his vision: “now is the time for universal benefits of the blessings of knowledge”.  That statement continues to underpin the mission and culture of the institution. Today, Open Science is a movement to make all scholarly research open and accessible to academia and society more widely.

Oh, the Humanities?

However, Open Science is a bit of a tricky title; it sounds very focused on the STEM subjects. Maybe we should be using Open Research or Open Scholarship, as the movement is intended to encompass the humanities as well as the sciences.

We often encounter similar problems with Open Data; sometimes it’s hard to see what the data is in the Arts. Original materials? Images? A sculpture?

Open Access cuts across both the humanities and the sciences, and enjoys growing acceptance in all disciplines. Open Data and Open Access (while the most well publicised) are just part of Open Research, which also encompasses IP, governance, and ethics. For now, most have settled on referring to it as Open Science, so that’s the term we’ll use when discussing this movement.

How did it all start?

It can be argued that the history of Open Science is as long as that of publishing itself. Open Science can be seen as the natural evolution of scientific publishing in journals, which began disseminating the outcomes of research in the 1600s.

Image of early journal

In these early journals, lone scientists published their work, and this practice remains the foundation of much science and discourse today. Modern journals still publish academic results, and often also expect the data and methods to be shared too. Looking back from the very beginning to where we are today, the progress toward Open Science is clear.

However, there is still much to do. Transformative agreements, for example, now place a very costly paywall between research and both academics and end users, with some institutions simply unable to afford them. Gold route Open Access publishing itself is often prohibitively expensive.

The future

Early Career Researchers (ECRs) are often seen as champions of Open Research (this is not to say that established researchers are against the movement). ECRs are usually in the first 10 years of research after completing their PhD; they might not have permanent roles, and may have little to no experience of funding applications. Yet they have driven much of the Open Science movement, championing it at a grass-roots level and in self-organised communities.

Can we hope that as these ECRs progress through their career they carry the Open Science agenda with them?

George Birkbeck himself set out on an academic career in 1799 at the age of only 23, providing free classes for working-class men in Glasgow.

Image of George Birkbeck

So it makes sense that, just as Birkbeck’s founder supported the “universal benefits of the blessings of knowledge”, the university now supports Open Science and ECRs through events like Open Access Week and Fellowships for Early Career Researchers.

As part of this year’s Open Access Week, we have teamed up with other London institutions to host a series of online events. On Tuesday 26th October 2021, AJ Boston and Madeleine Pownall will be discussing Open Research and ECRs.

Open image: CC BY-NC-SA 2.0 – Tom Magliery. Journal image: public domain (Wikipedia).

George Birkbeck image from the Birkbeck image collections.

It matters how we open knowledge

BIROn (Birkbeck Institutional Research Online) is Birkbeck’s open access repository. Its goal is to increase the visibility and reach of the College’s research by making it available across the web, in as close to its final form as possible.

But open access does not always mean accessible to all. According to gov.uk, “at least 1 in 5 people in the UK have a long term illness, impairment or disability. Many more have a temporary disability.”

This statistic indicates that ~20% of the potential audience for Birkbeck’s research may not be able to read it as easily as many of us take for granted. Open access and accessibility don’t always intersect.

The theme of this year’s Open Access Week, the week beginning 25 October, is: “It matters how we open knowledge”. This has never been truer than in the context of accessibility. If it’s significantly more difficult for some to read open access research than others, can it really be called open?

In September 2020, Birkbeck and our partners at CoSector (who host and manage the repository) collaborated on an accessibility statement for BIROn. The statement outlines the progress made to ensure BIROn now meets many of the requirements of the Web Content Accessibility Guidelines (WCAG 2.1), an internationally recognised set of recommendations for improving web accessibility.

However, there is still work to do.

The EPrints software we use for BIROn is a “core” build from CoSector which is then adapted for different institutions depending on their needs. It is this core build (not the modified version specific to Birkbeck) which was tested using the WAVE accessibility tool. Planned upgrades to the core build aim to address the areas where the site is still unable to meet requirements. You can see what needs work in the roadmap.

The second element

Although the BIROn web site is now much closer to offering an optimised experience for users with accessibility needs, the full-text files it hosts are a more complex matter.

BIROn was established in 2007, and contains some materials which are even older, including PDFs which do not meet accessibility and archiving standards. Files originate from a huge variety of sources and come in many formats; they may contain abbreviations, formulae, tables, and images which screen readers and other assistive technologies struggle to interpret. Bringing every file up to standard will be a massive, resource-intensive task.

What are others doing?

The repository team at the University of Kent have blogged about their experiences with checking accessibility on their repository, KAR. The blogs include helpful insights into tools such as Lighthouse (which is built into Chrome). When we ran a check on BIROn’s accessibility in Lighthouse, it scored 100% for both desktop and mobile iterations (see figure below).

Image showing the 100% score achieved by the Lighthouse check of BIROn
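For anyone wanting to repeat this kind of check from a script rather than inside Chrome, here is a rough sketch of an accessibility-only Lighthouse audit. It assumes the Lighthouse CLI is installed (e.g. npm install -g lighthouse) with Chrome available, and that BIROn lives at the URL shown; treat both as assumptions and adjust to taste.

```python
import json
import subprocess

# Run an accessibility-only audit and write the report as JSON
subprocess.run(
    [
        "lighthouse", "https://eprints.bbk.ac.uk",  # assumed BIROn URL
        "--only-categories=accessibility",  # skip performance, SEO, etc.
        "--output=json",
        "--output-path=biron-a11y.json",
        "--chrome-flags=--headless",
    ],
    check=True,
)

with open("biron-a11y.json") as f:
    report = json.load(f)

# Lighthouse scores each category on a 0-1 scale
print(f"Accessibility: {report['categories']['accessibility']['score'] * 100:.0f}%")
```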

As Kent’s blog outlines, this was just the beginning of the process. The KAR repository contains almost three times as many records as BIROn, so the challenges for full-text file accessibility are even more acute.

The Kent team therefore created a button for users to request an accessible version of a repository file. Depending on the document, this might mean significant work is required, with a thesis taking up to three weeks and requiring the input and expertise of the original author. However, some files can be converted in less than ten minutes. The KAR team also discovered that the vast majority of the initial requests for accessible versions were because the requester was unclear about what was being offered; ultimately, just two of the first 64 requests needed to be actioned.

You can read about the challenges the KAR team faced on these insightful blogs. Here at Birkbeck, we are also beginning to face up to these challenges, mindful of our legal duty to anticipate the changes that need to be made and not just react to them.

Padlock image by Kerstin Riemer from Pixabay