A UK university is facing criticism over its use of research metrics, after it used figures on research income and scientists’ publications to identify dozens of jobs as “at risk”.
Critics say the use of metrics in such decisions is inappropriate, because they tend to capture only a small part of an academic’s work. They add that the institution at the center of the row – the University of Liverpool – used a citation-based metric that is designed to assess large groups of researchers, not individuals.
The university has defended its use of the metrics and says they were not the only factors it took into account when making the decision.
The debate highlights a broader unease about the use of metrics in science, as ever more data are collected to assess the quality of researchers’ work. Some say these quantitative performance measures focus too heavily on publication outputs while failing to recognize other kinds of work, including teaching, committee service and peer review.
A range of factors
The University of Liverpool plans to cut dozens of jobs in its health and life sciences faculty as part of a reorganization. In January and February, the university informed 47 researchers that their jobs were in jeopardy.
In a statement to Nature, the university’s press team said that a five-year average of research income was used to identify researchers whose jobs might be at risk, and that “a number of factors likely to remove colleagues from the pool of those potentially at risk were then considered, including positive citation metrics, where applicable”. The university declined to specify what these metrics were.
The statement said that other indicators were considered alongside these metrics, including “authorship of impact case studies, management contribution and membership of external bodies”, and that potentially mitigating circumstances – including the impacts of the COVID-19 pandemic, parental leave and reduced hours owing to caring responsibilities – were taken into account.
However, an e-mail seen by Nature, sent to University of Liverpool staff by the local branch of the University and College Union, which represents academics across the UK, says that managers identified employees at risk of redundancy using two key metrics and did not take other aspects of their day-to-day work into account.
The e-mail, which quotes a university document, says that managers established a “quality baseline” by examining staff performance “against key metrics, particularly focusing on research income and the quality of individual outputs”. The two metrics used were a five-year average of research income, relative to that of researchers at similar universities, and a score called field-weighted citation impact, which measures how often a researcher’s papers are cited compared with other papers in the same field.
Elizabeth Gadd, head of research policy at Loughborough University in the UK, says the field-weighted citation impact measure is not suitable for evaluating the work of individual researchers. “It is only stable for large sets of publications, for example 10,000 documents or more,” she explains.
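Gadd’s point about small samples can be made concrete with a rough sketch: field-weighted citation impact is commonly computed as a paper’s citation count divided by the average citation count of comparable papers in the same field, then averaged over a body of work. The exact normalization varies by data provider, and all figures below are hypothetical – this is an illustration of the metric’s instability, not of any calculation Liverpool actually performed.

```python
# Illustrative sketch of field-weighted citation impact (FWCI).
# Assumption: FWCI = actual citations / expected (field-average) citations,
# averaged across a researcher's papers. All numbers are made up.

def fwci(citations: int, field_mean_citations: float) -> float:
    """Ratio of actual to expected citations; 1.0 means 'at field average'."""
    return citations / field_mean_citations

# One researcher's handful of papers: (citations, field mean), hypothetical.
papers = [(12, 8.0), (3, 8.0), (40, 8.0), (0, 8.0)]
scores = [fwci(c, m) for c, m in papers]
mean_fwci = sum(scores) / len(scores)

print(round(mean_fwci, 3))  # → 1.719
```

With only four papers, the single highly cited one (FWCI 5.0) drags the researcher’s average well above 1.0; drop it and the same researcher falls far below field average. Over thousands of papers such outliers wash out, which is why the measure is considered stable only for large publication sets.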
Patricia Murray, a molecular physiologist at the University of Liverpool whose own job is not at risk, was so dismayed by the apparent misuse of the metrics that she e-mailed colleagues to rally support for those affected. The result was an open letter to the university’s leadership, signed by more than 400 researchers at the university and elsewhere. The letter said the metrics used were “particularly problematic” and that their use endangered the jobs of “the most collegial members of staff, who manage technology facilities and serve on the committees that keep our departments and institutes running smoothly and effectively”.
“Evaluating staff solely on the basis of quantitative metrics is never acceptable, no matter what type of metric is used,” the letter adds.
The outcry also prompted organizations that advocate the responsible use of metrics to contact the institution.
These include the San Francisco Declaration on Research Assessment (DORA), of which the University of Liverpool is a signatory. The declaration says institutions should not use journal-based metrics “to assess an individual scientist’s contributions, or in hiring, promotion, or funding decisions”. On its website, the university states that, by signing DORA, it is committed to avoiding journal-based metrics when making decisions, to recognizing the value of all relevant research outputs and to being explicit about the criteria used to assess research productivity.
A spokesperson for DORA said the organization has had discussions with the university which remain confidential to allow “a free and honest exchange of information and views.” They declined to say whether they were satisfied with the institution’s response.
The authors of the Leiden Manifesto, another statement on the responsible use of metrics, have written to the university’s vice-chancellor, Janet Beer, to raise concerns. “We see the application of quantitative metrics in mass redundancy as a major threat to recent initiatives on responsible research metrics,” wrote bibliometricians Ismael Rafols, Ludo Waltman and Sarah de Rijcke of Leiden University in the Netherlands.
The letter, dated 21 February, describes how metrics can be biased in favor of certain research topics or age groups, and says that this “may violate the basic principle of equal treatment in employment”. To the authors’ knowledge, this is the first time that metrics have been used to select researchers for job cuts at a European university. Rafols says the group has yet to receive a response.
The public dispute shows how debates about metrics are gaining traction, says James Wilsdon, a research-policy researcher at the University of Sheffield, UK, and a member of the UK’s Forum for Responsible Research Metrics (FFRM), whose members have also discussed the situation with Liverpool. A letter from the FFRM to the institution calls for a “fair and evidence-based resolution” to the problem.
“While this is clearly a horrible situation for the researchers concerned, it also reflects in some ways the growing maturity and integration of debates about measurement and evaluation,” says Wilsdon.