Revisiting the scaling of citations for research assessment
Giovanni Abramo,
Tindaro Cicero and
Ciriaco Andrea D’Angelo
Journal of Informetrics, 2012, vol. 6, issue 4, 470-479
Abstract:
Over the past decade, national research evaluation exercises, traditionally conducted by peer review, have begun opening to bibliometric indicators. The citations received by a publication are assumed to be a proxy for its quality, but because citation behavior varies across research fields, citation data must be standardized before use in comparative evaluation of organizations or individual scientists. The objective of this paper is to compare the effectiveness of different methods of normalizing citations, in order to provide useful indications to research assessment practitioners. Simulating a typical national research assessment exercise, the analysis covers all subject categories in the hard sciences and is based on the Thomson Reuters Science Citation Index-Expanded®. Comparisons show that the citation average is the most effective scaling parameter, provided the average is computed only over the publications actually cited.
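The scaling discussed in the abstract can be made concrete with a short sketch. The Python snippet below is an illustration of field-normalized citation scores, not the paper's actual data or code: the field labels and citation counts are made up. It divides each publication's citations by its field's average, computed either over all publications or, as the paper recommends, over the cited publications only.

from statistics import mean

# Toy (field, citations) pairs; all values are illustrative.
publications = [
    ("physics", 12), ("physics", 0), ("physics", 5), ("physics", 0),
    ("biology", 40), ("biology", 8), ("biology", 0), ("biology", 22),
]

def field_means(pubs, cited_only=False):
    # Mean citations per field; optionally restrict to cited (c > 0) papers.
    by_field = {}
    for field, c in pubs:
        if cited_only and c == 0:
            continue
        by_field.setdefault(field, []).append(c)
    return {field: mean(cs) for field, cs in by_field.items()}

def normalized_scores(pubs, cited_only=False):
    # Each paper's citations divided by its field's scaling parameter.
    means = field_means(pubs, cited_only=cited_only)
    return [(field, c, c / means[field]) for field, c in pubs]

for label, flag in [("mean over all papers", False),
                    ("mean over cited papers only", True)]:
    print(label)
    for field, c, score in normalized_scores(publications, cited_only=flag):
        print(f"  {field:8s} citations={c:3d} normalized={score:.2f}")

Restricting the denominator to cited publications raises the field average and thus lowers each normalized score, but it prevents large stocks of never-cited papers from deflating the scaling parameter; the paper's comparisons find this variant the most effective.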
Keywords: Research evaluation; Bibliometrics; Citations; Scaling
Date: 2012
Citations: 39 (view in EconPapers)
Downloads: (external link)
http://www.sciencedirect.com/science/article/pii/S1751157712000272
Full text for ScienceDirect subscribers only
Related works:
This item may be available elsewhere in EconPapers: Search for items with the same title.
Persistent link: https://EconPapers.repec.org/RePEc:eee:infome:v:6:y:2012:i:4:p:470-479
DOI: 10.1016/j.joi.2012.03.005
Journal of Informetrics is currently edited by Leo Egghe
Bibliographic data for series maintained by Catherine Liu.