Evaluating university research: Same performance indicator, different rankings
Giovanni Abramo and Ciriaco Andrea D'Angelo
Journal of Informetrics, 2015, vol. 9, issue 3, 514-525
Abstract:
Assessing the research performance of multi-disciplinary institutions, whose scientists belong to many different fields, requires evaluators to decide how to aggregate the performance measures of the various fields. Two methods of aggregation are possible, based on: (a) the performance of the individual scientists, or (b) the performance of the scientific fields present in the institution. The appropriate choice depends on the evaluation context and the objectives of the particular measure. The two methods produce differences in both performance scores and rankings. We quantify these differences by observing the 2008–2012 scientific production of the entire research staff employed in the hard sciences in Italian universities (over 35,000 professors). Evaluators preparing an assessment exercise must understand these differences in order to select the methodology that will achieve the evaluation's objectives.
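As a rough illustration of the two aggregation routes described in the abstract, the following sketch is not taken from the paper: the field names, scores, and equal field weighting are hypothetical. It contrasts aggregating an institution's productivity scores directly across individual scientists with aggregating first within fields and then across fields.

```python
from collections import defaultdict

# Hypothetical records: (scientist_id, field, field-normalized productivity score).
# All numbers are invented for illustration only.
records = [
    ("a1", "Physics", 1.4),
    ("a2", "Physics", 0.6),
    ("a3", "Chemistry", 0.9),
    ("a4", "Chemistry", 1.1),
    ("a5", "Mathematics", 0.5),
]

def aggregate_by_scientist(records):
    """Method (a): average the scores of all individual scientists,
    regardless of the field each one belongs to."""
    return sum(score for _, _, score in records) / len(records)

def aggregate_by_field(records):
    """Method (b): compute each field's average score first, then
    average the field-level scores (fields weighted equally here)."""
    by_field = defaultdict(list)
    for _, field, score in records:
        by_field[field].append(score)
    field_means = [sum(scores) / len(scores) for scores in by_field.values()]
    return sum(field_means) / len(field_means)

print(aggregate_by_scientist(records))  # 0.9
print(aggregate_by_field(records))      # (1.0 + 1.0 + 0.5) / 3 = 0.8333...
```

Because fields differ in staff size, the two routes generally yield different institution-level scores and therefore different rankings, which is the divergence the paper quantifies.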
Keywords: Research evaluation; Productivity; Bibliometrics; Italy
Date: 2015
Citations: 8
Full text (ScienceDirect, subscribers only): http://www.sciencedirect.com/science/article/pii/S1751157715000462
Persistent link: https://EconPapers.repec.org/RePEc:eee:infome:v:9:y:2015:i:3:p:514-525
DOI: 10.1016/j.joi.2015.04.002