Science Intelligence and InfoPros

Little things about Scientific Watch and Information Professionals

Posts Tagged ‘evaluation’

Thomson Reuters Research In View: a new tool for evaluating academic performance


Thomson Reuters has launched a research management system called Research In View, designed to provide universities with a comprehensive view of institutional performance.

The solution aggregates and standardises data from disparate sources to provide a unified database and analytic interface for managing, searching and reporting on university activities and performance.

Developed by the company’s research analytics business in consultation with university administrators worldwide, it aims to let universities anywhere in the world track people and associate them with projects, strategic goals, and traditional measures such as classes taught or journal articles published.

http://researchanalytics.thomsonreuters.com/researchinview/

My opinion: a tool to compare with Elsevier’s SciVal?

See also: Information World Review, 28/02/2011
http://www.iwr.co.uk/academic-and-humanites/3010772/Thomson-Reuters-launches-%E2%80%98Research-in-View-for-universities


Written by hbasset

March 3, 2011 at 9:29 pm

Use of metrics to evaluate researchers


A long history…

Peter Jacso, one of the foremost experts on STM abstract databases, gives his opinion… In his latest publication, he compared three tools: Web of Science (WoS), Scopus and Google Scholar (GS).

A few findings and opinions:

  • it is quite likely that more and more administrators will request librarians and other information professionals to churn out metrics-based research evaluation ranking lists about individuals, departments, and colleges
  • I am in favor of using metrics-based evaluation. (…) However, because of the shortcomings of these special databases for evaluating individual researchers (as opposed to citation-based subject searching), I am also very much against replacing peer-based evaluation when ranking individual researchers, groups of researchers, institutions and countries by the traditional bibliometric indicators alone (total number of citations, average number of citations per publication), or by the new ones that combine quantitative and qualitative measures in a single number, such as the original h-index and its many, increasingly refined variants (a sketch of these computations follows this list)
  • I also have concerns about the level of search skill and the time needed for librarians and other information professionals to engage (…) in these very time-consuming and sophisticated procedures. (…) Still, even such a highly qualified group can leave some methodological issues unexplained, make mistakes in the search process and/or in the compilation of data and/or in the data-entry process
  • Google Scholar-based metrics: the reason for this indifference is that the hit counts and the citation counts delivered by Google Scholar are not worth the paper they are printed on. Its metadata remains a metadata mega mess (Jacso, 2010), and its citation-matching algorithm is worse than those of the cheapest dating services

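For readers less familiar with the indicators Jacso mentions, here is a minimal Python sketch (using hypothetical citation counts) of how the traditional measures and the original h-index are defined. It illustrates the formulas only; it says nothing about how WoS, Scopus or Google Scholar compile the underlying counts.

    def h_index(citations):
        # h-index: the largest h such that at least h publications
        # have at least h citations each (Hirsch's original definition).
        ranked = sorted(citations, reverse=True)
        h = 0
        for rank, count in enumerate(ranked, start=1):
            if count >= rank:
                h = rank
            else:
                break
        return h

    # Hypothetical per-publication citation counts for one researcher.
    citations = [24, 18, 12, 7, 5, 5, 3, 1, 0]

    print("total citations:", sum(citations))                            # 75
    print("average per publication:", sum(citations) / len(citations))   # 8.33...
    print("h-index:", h_index(citations))                                # 5

The sketch also makes Jacso’s data-quality concern concrete: each of these indicators is only as good as the citation counts fed into it, so metadata errors propagate directly into the resulting rankings.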
 

Jacso, Peter (2010). Savvy Searching. Online Information Review, 34(6), pp. 972-982.

Written by hbasset

January 23, 2011 at 4:25 pm