Science Intelligence and InfoPros

Little things about Scientific Watch and Information Professionals

Posts Tagged ‘Bibliometrics’

Abuses of bibliometrics and the slavish adoption of Impact Factor


A nice piece by the editors of Reproductive BioMedicine Online:

“(…) with this easy access to databases and papers come problems: notably the increased risk of deliberate or accidental plagiarism (…) and the fact of information overload. This latter problem has resulted in what can be seen as an abreaction: the entrenchment of the ‘prestige journal’ into which a young scientist must get their paper come what may – much of the data incomprehensibly compacted, and figures often too small or cropped to be of evidential value. These ‘prestige’ journals build and thrive financially through the increased significance of a development complementary to bibliographic databases: the bibliometric analysis.

 Bibliometric analyses attempt to measure the impact of a journal’s published material that can then reinforce its prestige – or its incentive to play the ‘bibliometric boosting game’. Several such metrics are around, each with their own characteristic strengths and weaknesses. And, like the journals that these metrics claim to rank, the various metrics have acquired their own ‘prestige’ value: although on what basis, other than historical longevity, is unclear. Thus, the most sought (and feared) metric is the Impact Factor or IF (…)

A fundamental issue is that citation indices assume that if a paper is cited it is because it is useful. In reality, papers are cited for many reasons. For example, negative citations dispute results or theories by citing a paper critically. Other papers appear in citation lists simply because they have been cited previously rather than actually read – a practice facilitated by the very electronic publication that boosts the rise of metrics. This practice will tend to be self-reinforcing – squeezing out more pertinent or ‘better’ papers, and even propagating ‘myths’. Self-citation can boost one’s own IF as well as one’s own ego. We suspect that it is a rare author who could honestly claim to have generated anew each reference list – and indeed have read every paper afresh each time it is cited. (…)

All of this might not matter were it not for the recent bureaucratic obsession of institutions, funding bodies and government bodies with ‘objective metrics’ for ranking ‘performance’. This obsession has led to the increasing abuse of metrics as a surrogate for the scholarly value of work. Individual students, researchers and journal editors then are pressured to collude with this value system to make metrics in general, and the IF in particular, tyrannical despots that do few of us much good and distort publishing and citation practices. (…)

The IF, despite its flaws, seems here to stay for the foreseeable future, but the range of alternative metrics described above is available to us as editors. For this reason, we have decided with our publisher that henceforth from the July issue the journal will publish our data for the following metrics: the Impact Factor, the Scimago Journal Rank, the Source Normalized Impact per Paper, the Eigenfactor and the H-index (…). We are implementing this policy to encourage critical thought and discussion about metrics and to discourage the slavish adoption of IF as the only valid way in which to assess ‘quality’.”

Martin H. Johnson, Jacques Cohen, Gedis Grudzinskas. The uses and abuses of bibliometrics. Reproductive BioMedicine Online, 2012, Vol. 24, pp. 485-486. http://dx.doi.org/10.1016/j.rbmo.2012.03.007
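
A reminder of what is actually behind the sacred number may help: the two-year IF is simple arithmetic, namely the citations received in year Y by items a journal published in years Y-1 and Y-2, divided by the number of citable items it published in those two years. A minimal sketch, with invented counts:

```python
# Two-year Impact Factor: citations in year Y to items from Y-1 and Y-2,
# divided by citable items published in Y-1 and Y-2.
# The numbers below are invented, not real journal data.
def impact_factor(citations_to_prev_two_years: int,
                  citable_items_prev_two_years: int) -> float:
    return citations_to_prev_two_years / citable_items_prev_two_years

# e.g. 420 citations in 2011 to papers from 2009-2010, 200 citable items:
print(f"IF = {impact_factor(420, 200):.3f}")  # IF = 2.100
```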

Written by hbasset

May 11, 2012 at 4:42 pm

Posted in Journals


To read: Is Google Scholar useful for bibliometrics?


Aguillo, I.F.
Is Google Scholar useful for bibliometrics? A webometric analysis
(2012) Scientometrics, 91 (2), pp. 343-351.

Abstract

Google Scholar, the academic bibliographic database provided free of charge by the search engine giant Google, has been suggested as an alternative or complementary resource to commercial citation databases like Web of Knowledge (ISI/Thomson) or Scopus (Elsevier). In order to check the usefulness of this database for bibliometric analysis, and especially research evaluation, a novel approach is introduced. Instead of names of authors or institutions, a webometric analysis of academic web domains is performed. The bibliographic records for 225 top-level web domains (TLD), 19,240 university and 6,380 research centre institutional web domains have been collected from the Google Scholar database. About 63.8% of the records are hosted in generic domains like .com or .org, confirming that most of the Scholar data come from large commercial or non-profit sources. Considering only institutions with at least one record, one third of the other items (10.6% of the global total) are hosted by the 10,442 universities, while 3,901 research centres account for an additional 7.9% of the total. The individual analysis shows that universities from China, Brazil, Spain, Taiwan or Indonesia are far better ranked than expected. In some cases, large international or national databases or repositories are responsible for the high numbers found. However, in many others, the local contents, including papers in low-impact journals, popular scientific literature, and unpublished reports or teaching support materials, are clearly overrepresented. Google Scholar lacks the quality control needed for its use as a bibliometric tool; the larger coverage it provides consists in some cases of items not comparable with those provided by other similar databases.

http://www.springerlink.com/content/lrug235244u112rg/?MUD=MP
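
The webometric approach of the paper boils down to grouping Scholar records by the web domain that hosts them and computing each domain's share. A minimal sketch of such a tally (the record URLs below are invented; the study worked from the real Scholar database):

```python
# Tally bibliographic records by the top-level domain hosting them.
from collections import Counter
from urllib.parse import urlparse

records = [  # hypothetical record URLs
    "http://www.example.com/paper1.pdf",
    "http://repository.example.org/item/42",
    "http://www.uni-example.edu/~lab/preprint.pdf",
]

tld_counts = Counter(urlparse(url).hostname.rsplit(".", 1)[-1] for url in records)
total = sum(tld_counts.values())
for tld, n in tld_counts.most_common():
    print(f".{tld}: {n / total:.1%} of records")
```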

 

Written by hbasset

April 18, 2012 at 5:03 pm

Posted in literature


Bibliometrics can be fun!


With his world map of scientific collaboration, Olivier Beauchesne, from the US-Canada based Science-Metrix, has built a nice visualization of how researchers across the globe collaborate.

It is based on Scopus data. For those interested in looking at how scientists are connected geographically, a number of companies already promise to help map the geographic reach of an individual or a discipline. These include Springer’s AuthorMapper, Transinsight’s GoPubMed and BioMedExperts.
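
Under the hood, such a map is mostly counting: each paper contributes one collaboration link for every pair of author locations, and the counts drive the brightness of the lines. A toy version of the counting step, with invented affiliations:

```python
# Count co-authorship links between locations (invented data).
from collections import Counter
from itertools import combinations

papers = [  # author locations per paper, hypothetical
    ["Montreal", "Boston"],
    ["Montreal", "Paris", "Boston"],
    ["Paris", "Boston"],
]

links = Counter()
for locations in papers:
    for a, b in combinations(sorted(set(locations)), 2):
        links[(a, b)] += 1  # one link per location pair per paper

for (a, b), n in links.most_common():
    print(f"{a} - {b}: {n} joint papers")
```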

Van Noorden, Richard. Picture post: world map of scientific collaboration. The Great Beyond (Nature), posted on January 27, 2011.
http://blogs.nature.com/news/thegreatbeyond/2011/01/picture_post_world_map_of_scie_1.html

Written by hbasset

February 8, 2011 at 9:02 pm

New design for Research Trends


Research Trends, the bibliometric newsletter published by Elsevier and based on Scopus data, has moved to a nice new WordPress platform.

It offers a fresh look and some social features: article ratings, sharing to an impressive range of social tools (except 2collab, which is pretty funny for an Elsevier product!), etc.

http://www.researchtrends.com/

Written by hbasset

January 28, 2011 at 4:44 pm

Use of metrics to evaluate researchers


A long history…

Peter Jacso, one of the best experts in STM abstract databases, gives his opinion… In his latest publication, he compared three tools: Web of Science (WoS), Scopus and Google Scholar (GS).

A few findings and opinions:

  • it is quite likely that more and more administrators will request librarians and other information professionals to churn out metrics-based research evaluation ranking lists about individuals, departments, and colleges
  • I am in favor of using metrics-based evaluation. (…) However, because of the shortcomings of these special databases for evaluating individual researchers (as opposed to citation-based subject searching), I am also very much against replacing peer-based evaluation by bibliometric, scientometric and/or informetric indicators when ranking individual researchers, groups of researchers, institutions and countries, whether by the traditional bibliometric indicators (total number of citations, average number of citations per publication) or by the newer ones that combine quantitative and qualitative measures in a single number, such as the original h-index and its many, increasingly refined variants [a toy h-index computation is sketched after this list]
  • I also have concerns about the level of search skill and the time needed from librarians and other information professionals to engage (…) in the very time-consuming and sophisticated procedures. (…) Still, even such a highly qualified group can leave some methodological issues unexplained, make mistakes in the search process and/or in the compilation of data and/or in the data entry process
  • Google Scholar-based metrics: the reason for this indifference is that the hit counts and the citation counts delivered by Google Scholar are not worth the paper they are printed on. Its metadata remain a metadata mega mess (Jacso, 2010), and its citation matching algorithm is worse than those of the cheapest dating services
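
For those who have never computed it, the original h-index Jacso mentions is easy to state: a researcher has index h if h of their papers have received at least h citations each. A toy computation, with invented citation counts:

```python
# h-index: the largest h such that h papers have at least h citations each.
def h_index(citations: list[int]) -> int:
    h = 0
    for rank, cites in enumerate(sorted(citations, reverse=True), start=1):
        if cites >= rank:
            h = rank  # the paper at this rank still has enough citations
        else:
            break
    return h

print(h_index([25, 8, 5, 3, 3, 1, 0]))
# prints 3: the 4th-ranked paper has only 3 citations, so h cannot reach 4
```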

 

Jacso, Peter. Savvy Searching. Online Information Review, 2010, 34 (6), pp. 972-982.

Written by hbasset

January 23, 2011 at 4:25 pm

Why Scopus has introduced SCImago JR and SNIP


This paper introduces two journal metrics recently endorsed by Elsevier’s Scopus: SCImago Journal Rank (SJR) and Source Normalized Impact per Paper (SNIP). SJR weights citations according to the status of the citing journal and aims to measure journal prestige rather than popularity.

It presents the main features of the two indicators, comparing them with one another and with a journal impact measure similar to Thomson Reuters’ journal impact factor (JIF).

The journal impact factor, developed by Eugene Garfield as a tool to monitor the adequacy of coverage of the Science Citation Index, is probably the most widely used bibliometric indicator in the scientific, scholarly and publishing community. However, its extensive use for purposes for which it was not designed has raised a series of criticisms, all aiming to adapt the measure to new user needs.

In January 2010, Scopus endorsed two such measures that had been developed by their partners and bibliometric experts: the SCImago Research Group, based in Spain (…), and the Centre for Science and Technology Studies (CWTS), based in Leiden, Netherlands (…). The two metrics that were endorsed are SCImago Journal Rank (SJR) and Source Normalized Impact per Paper (SNIP).

Compared to other main fields, life sciences and health sciences tend to show the highest SJR and RIP (raw impact per paper) values. Compared to the basic, JIF-like RIP, SJR tends to make the differences between journals larger, and enhances the position of the most prestigious journals, especially – though not exclusively – in life and health sciences.

The fact that Scopus introduced these two complementary measures reflects the notion that journal performance is a multi-dimensional concept, and that there is no single ‘perfect’ indicator of journal performance.
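
The prestige-weighting idea behind SJR deserves a small illustration: a citation from a highly cited journal counts for more than one from an obscure journal, which turns the computation into a PageRank-style iteration. The sketch below shows only that idea on an invented three-journal citation matrix; it is not the actual SJR algorithm, which adds size normalization, a damping factor and several other refinements:

```python
# PageRank-style prestige iteration over a tiny, invented citation matrix.
journals = ["A", "B", "C"]
# cites[i][j] = citations from journal i to journal j (made-up numbers)
cites = [[0, 10, 2],
         [8, 0, 4],
         [1, 6, 0]]

prestige = [1 / 3] * 3  # start with equal prestige
for _ in range(50):  # iterate until the scores settle
    new = [0.0] * 3
    for i, row in enumerate(cites):
        out = sum(row)  # total citations given by journal i
        for j, c in enumerate(row):
            new[j] += prestige[i] * c / out  # prestige flows along citations
    prestige = [p / sum(new) for p in new]

for name, p in zip(journals, prestige):
    print(f"journal {name}: prestige {p:.3f}")
```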
 
Additional resources:
www.journalmetrics.com

www.scimagojr.com


Lisa Colledge, Félix de Moya-Anegón, Vicente Guerrero-Bote, et al. SJR and SNIP: two new journal metrics in Elsevier’s Scopus. Serials: The Journal for the Serials Community, Volume 23, Number 3, November 2010, pp. 215-221.

http://dx.doi.org/10.1629/23215

Written by hbasset

December 19, 2010 at 10:15 am

Citation data confronted with expert judgments


When compared to human judgments by experts, automated citation data from Scopus, Web of Science and Google Scholar are not so bad, after all.

This paper studies the correlations between peer review and citation indicators when evaluating research quality in library and information science (LIS). 42 LIS experts (Bar-Ilan, Jacso, Tenopir…) provided judgments of the quality of research published by 101 scholars.

Citation data from Scopus was more strongly correlated with the expert judgments than was data from GS, which in turn was more strongly correlated than data from WoS.

Li, J., et al. Ranking of library and information science researchers: comparison of data sources for correlating citation data and expert judgments. Journal of Informetrics (2010), doi:10.1016/j.joi.2010.06.005
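
Methodologically, this kind of comparison boils down to rank-correlating the expert scores with the citation counts from each database. A minimal sketch using Spearman rank correlation, a standard choice for such studies (all numbers below are invented, not the paper’s data):

```python
# Rank-correlate expert judgments with citation counts per database.
from scipy.stats import spearmanr

expert_scores = [9, 7, 8, 4, 6, 3, 5]              # hypothetical expert ratings
cites = {
    "Scopus": [310, 150, 220, 40, 95, 20, 60],     # hypothetical counts
    "GS":     [500, 260, 310, 90, 150, 55, 120],
    "WoS":    [120, 90, 60, 30, 70, 10, 25],
}

for source, counts in cites.items():
    rho, p = spearmanr(expert_scores, counts)
    print(f"{source}: Spearman rho = {rho:.2f} (p = {p:.3f})")
```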

Written by hbasset

August 4, 2010 at 7:32 pm