
Research assessment: criteria and indicators


DORA, Leiden Manifesto: principles implemented by Hcéres in its research evaluation missions.

The San Francisco Declaration on Research Assessment (DORA), published in 2012, and the Leiden Manifesto of 2015 both set out to improve evaluation practices, particularly by drawing attention to the misuse of certain bibliometric indicators in the processes of recruitment, promotion and individual evaluation of researchers.

Both texts reveal that a number of stakeholders in research systems continue to make use of two indicators which have been roundly criticised by the scientometric community. DORA draws particular attention to the journal impact factor, or JIF. The method used to calculate this indicator makes it innately biased towards certain publications, and it is not immune to manipulation. Furthermore, it fails to take into account the differences in research practices which exist between different disciplines and sub-disciplines, at the risk of introducing a further source of bias to comparisons between researchers and research units.
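
To illustrate why a mean-based metric behaves this way, the minimal sketch below computes a two-year impact factor for a hypothetical journal whose per-article citation distribution is heavily skewed; all figures are invented for the example and are not drawn from DORA. It shows how a handful of highly cited items can carry an average that says little about the typical article.

```python
from statistics import median

def journal_impact_factor(citations_in_year, citable_items):
    """Two-year JIF for year Y: citations received in year Y by items
    published in years Y-1 and Y-2, divided by the number of citable
    items published in those two years."""
    return citations_in_year / citable_items

# Hypothetical journal: 100 citable items over the two preceding years,
# with a heavily skewed per-item citation count (figures invented).
per_item_citations = [120, 80, 40] + [1] * 97   # three "hits", 97 barely cited items

jif = journal_impact_factor(sum(per_item_citations), len(per_item_citations))
print(f"JIF (mean-based): {jif:.2f}")                      # ~3.37, driven by three articles
print(f"Median citations per item: {median(per_item_citations)}")  # 1, the typical article
```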

The Leiden Manifesto is more concerned with the H-index, introduced in 2005 by physicist Jorge Hirsch and rapidly adopted thereafter. This composite indicator was intended to capture both the number of works published by researchers and their scientific impact. In reality, its seductive simplicity owes much to the fact that it makes the number of publications the dominant variable, and it fails to surmount the difficulties inherent in measuring two variables with a single indicator.
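
To make the point concrete, here is a minimal sketch (not taken from either text) of the standard H-index computation: h is the largest number such that the researcher has h publications cited at least h times each. Because h can never exceed the number of publications, one landmark paper weighs less than many modestly cited ones, which is the asymmetry criticised above; the citation counts below are invented for illustration.

```python
def h_index(citation_counts):
    """Largest h such that at least h papers have at least h citations each."""
    h = 0
    for rank, cites in enumerate(sorted(citation_counts, reverse=True), start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Two hypothetical researchers with similar total citation counts (figures invented):
print(h_index([90, 3, 2]))   # -> 2: one landmark paper, but h is capped by 3 publications
print(h_index([10] * 9))     # -> 9: nine modestly cited papers
```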

DORA and the Leiden Manifesto do more than simply criticise the existing indicators. Both documents contain recommendations regarding the use of scientometric indicators, particularly for assessment purposes. Although we at Hcéres specialise in the evaluation of research institutions, not individuals, we see a clear affinity between our work and the principles contained in these texts. They are the same principles which guide our research evaluation activities.

  1. Peer review is a fundamental principle of the evaluation practices and processes which underpin academic publishing. In its use of peer review, Hcéres abides by international standards and meets the requirements of transparency, collegiality and equality of treatment. Our research assessment process includes a response phase allowing the entities under evaluation to comment on their assessment.
  2. Hcéres has also chosen a policy of multi-criteria evaluation. This goes far beyond the use of a few indicators or metrics based on just one facet of scientific output (journal articles). Instead, we take a much more comprehensive approach to evaluating the results obtained by research entities in the accomplishment of their various missions.
  3. The Hcéres methodology places particular importance on the qualitative dimension of the evaluation of research entities. For research units, our method prioritises self-evaluation in the early phases of the evaluation process. In their self-evaluation files, researchers are asked to look at their whole bibliography of articles, books, chapters and conference addresses, to select the 20% which they feel to be most significant and to explain this choice. They are also asked to pinpoint a number of important milestones in their work and explain the (epistemological, theoretical, methodological, economic, societal, organisational etc.) reasons which make these the most significant highlights of their work during the period in question. The file also includes a SWOT analysis, focusing on their five-year plans.
