Use of metrics to evaluate researchers
A long history…
Péter Jacsó, one of the leading experts on STM abstract databases, gives his opinion… In his latest publication, he compared three tools: Web of Science (WoS), Scopus and Google Scholar (GS).
A few findings and opinions:
- it is quite likely that more and more administrators will request librarians and other information professionals to churn out metrics-based research evaluation ranking lists about individuals, departments, and colleges
- I am in favor of using metrics-based evaluation. (…) However, because of the shortcomings of these special databases for evaluating individual researchers (as opposed to citation-based subject searching), I am also very much against replacing peer-based evaluation of individual researchers, groups of researchers, institutions and countries by bibliometric, scientometric and/or informetric indicators alone, whether the traditional indicators (total number of citations, average number of citations per publication) or the newer ones that combine quantitative and qualitative measures in a single number, such as the original h-index and its many, increasingly refined variants
- I also have concerns about the level of search skill, and the time needed from librarians and other information professionals to engage (…) in the very time-consuming and sophisticated procedures. (…) Still, even such a highly qualified group can leave some methodological issues unexplained, make mistakes in the search process and/or in the compilation of data and/or in the data entry process
- Google Scholar-based metrics: The reason for this indifference is that the hit counts and the citation counts delivered by Google Scholar are not worth the paper they are printed on. Its metadata remain a metadata mega mess (Jacsó, 2010), and its citation matching algorithm is worse than those of the cheapest dating services
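To make the indicators named above concrete, here is a minimal sketch of how the traditional measures (total citations, average citations per publication) and the original h-index are computed from a list of per-publication citation counts. The citation counts are invented for illustration; real values would of course come from WoS, Scopus or GS, with all the data-quality caveats Jacsó raises.

```python
def h_index(citations):
    """Original h-index: the largest h such that h publications
    each have at least h citations."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, count in enumerate(counts, start=1):
        if count >= rank:
            h = rank  # the rank-th paper still has >= rank citations
        else:
            break
    return h

# Hypothetical researcher with seven publications (illustrative data)
citations = [25, 8, 5, 3, 3, 1, 0]

total = sum(citations)            # total number of citations
average = total / len(citations)  # average citations per publication

print(total, round(average, 2), h_index(citations))  # → 45 6.43 3
```

The single-number character of the h-index is exactly what the excerpt warns about: two researchers with very different citation profiles can share the same h, which is one reason such indicators should complement, not replace, peer evaluation.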
Jacsó, Péter. Savvy Searching. Online Information Review, 34(6), pp. 972-982.