Large-scale assessment of research outputs through a weighted combination of bibliometric indicators
Alberto Anfossi,
Alberto Ciolfi,
Filippo Costa,
Giorgio Parisi and
Sergio Benedetto
Additional contact information
Alberto Anfossi: National Agency for the Evaluation of Universities and Research Institutes (ANVUR)
Alberto Ciolfi: National Agency for the Evaluation of Universities and Research Institutes (ANVUR)
Filippo Costa: National Agency for the Evaluation of Universities and Research Institutes (ANVUR)
Giorgio Parisi: Università “La Sapienza” di Roma
Sergio Benedetto: National Agency for the Evaluation of Universities and Research Institutes (ANVUR)
Scientometrics, 2016, vol. 107, issue 2, No 19, 683 pages
Abstract:
The paper describes a method that combines a publication's citation count with the relevance of its publishing journal (as measured by the Impact Factor or similar impact indicators) to rank the publication against the world scientific production in its specific subfield. The linear or non-linear combination of the two indicators is represented on a scatter plot of the papers in the subfield, so that the effect of a change in weights can be visualized immediately. The final ranking of the papers is then obtained by partitioning the two-dimensional space with linear or higher-order curves. The procedure is intuitive and versatile, since adjusting a few parameters yields an automatic, calibrated assessment at the level of the subfield. The resulting evaluation is homogeneous across scientific domains and can be used to assess research quality at the departmental (or higher) level of aggregation. We apply this method, which is designed to be feasible at the scale typical of a national evaluation exercise and to be effective in terms of cost and time, to some instances of the Thomson Reuters Web of Science database and discuss the results in view of what was done recently in Italy for the Evaluation of Research Quality exercise 2004–2010. We show how the main limitations of the bibliometric methodology used in that context can easily be overcome.
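The abstract outlines the core of the method: each paper in a subfield is located by two normalized indicators (its citation count and a journal impact measure), and the resulting plane is partitioned by linear or higher-order curves. As a rough illustration only, the Python sketch below scores papers by a linear combination of the two percentile ranks and maps the score to merit classes; the weights, thresholds, class labels, and toy data are assumptions for illustration, not the calibrated values or the exact procedure used by the authors.

```python
# Minimal sketch (not the authors' implementation) of ranking papers by a
# weighted combination of two bibliometric indicators: the citation percentile
# and the journal-metric (e.g. Impact Factor) percentile within a subfield.
# Weights, thresholds, and class labels below are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Paper:
    title: str
    citations: int
    journal_metric: float  # e.g. the Impact Factor of the publishing journal

def percentile_rank(value, population):
    """Fraction of the reference population with a value not exceeding `value`."""
    below_or_equal = sum(1 for v in population if v <= value)
    return below_or_equal / len(population)

def combined_score(paper, subfield, w_citations=0.5, w_journal=0.5):
    """Linear combination of the two percentile ranks (weights are assumptions)."""
    c = percentile_rank(paper.citations, [p.citations for p in subfield])
    j = percentile_rank(paper.journal_metric, [p.journal_metric for p in subfield])
    return w_citations * c + w_journal * j

def merit_class(score, thresholds=(0.8, 0.6, 0.4)):
    """Partition the combined score into classes; the cut-offs are illustrative."""
    labels = ("excellent", "good", "acceptable", "limited")
    for label, t in zip(labels, thresholds):
        if score >= t:
            return label
    return labels[-1]

# Example usage on a toy subfield
subfield = [
    Paper("A", citations=120, journal_metric=8.2),
    Paper("B", citations=15, journal_metric=2.1),
    Paper("C", citations=40, journal_metric=4.5),
]
for p in subfield:
    s = combined_score(p, subfield)
    print(p.title, round(s, 2), merit_class(s))
```

In the method described by the paper, the percentiles refer to the world production of the specific subfield and the separating lines or curves are calibrated per subfield; in this sketch the small toy population simply stands in for that reference set, and a linear separator is used in place of possibly higher-order curves.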
Keywords: Bibliometric evaluation; Institutional rankings; Evaluation processes; University policy
Date: 2016
Citations: 11 (tracked in EconPapers)
Downloads: http://link.springer.com/10.1007/s11192-016-1882-9 (abstract, text/html)
Access to the full text of the articles in this series is restricted.
Persistent link: https://EconPapers.repec.org/RePEc:spr:scient:v:107:y:2016:i:2:d:10.1007_s11192-016-1882-9
Ordering information: This journal article can be ordered from
http://www.springer.com/economics/journal/11192
DOI: 10.1007/s11192-016-1882-9
Scientometrics is currently edited by Wolfgang Glänzel
More articles in Scientometrics from Springer, Akadémiai Kiadó