Science Intelligence and InfoPros

Little things about Scientific Watch and Information Professionals

Posts Tagged ‘Impact Factor’

Beyond the Impact Factor: Altmetric and open access articles

leave a comment »

The impact of academic research has long been measured using citations, with the Journal Impact Factor commonly standing in as a measure of the individual publications within a journal. However, the Impact Factor is a journal-level, not an article-level, metric, and as academic publishing and the surrounding discussion move increasingly onto the web, new tools to track and assess the impact of individual scientific publications have emerged.

These web-based approaches are starting to offer an article-level perspective of the way research is disseminated, discussed and integrated across the web. The hope is that a broader set of metrics to complement citations will eventually give a more comprehensive view of article impact, and help to make the most relevant and important publications discoverable to individuals, based on their interests.

Altmetric.com is one of a growing number of web-based tools taking a novel approach to the assessment of scholarly impact: it aggregates mentions on Twitter and other social media sites, together with coverage in online reference managers, mainstream news sources and blogs, to present an overview of the interest a published article is receiving online.

To take impact measurement to the article level, open access publisher BioMed Central has reportedly added the Altmetric.com ‘donut’ to the ‘about’ page of published articles. The donut is displayed, along with an article score, for any article whose coverage has been tracked by Altmetric.com.
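For the curious, the score behind the donut can also be retrieved programmatically: Altmetric exposes a public REST API keyed on DOI. A minimal sketch (endpoint and field names follow Altmetric's documented v1 API as I understand it; treat them as assumptions to verify before relying on them):

```python
import json
import urllib.error
import urllib.request

def altmetric_score(doi):
    """Fetch attention data for one DOI from Altmetric's public v1 API.

    Returns the aggregate attention score, or None when Altmetric
    has tracked no coverage for the article (the API answers 404).
    """
    url = "https://api.altmetric.com/v1/doi/" + doi
    try:
        with urllib.request.urlopen(url) as response:
            data = json.load(response)
    except urllib.error.HTTPError:
        return None
    return data.get("score")

# Example DOI borrowed from a paper discussed elsewhere on this blog.
print(altmetric_score("10.1371/journal.pone.0010271"))
```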

BioMed Central plans to keep adding to this range of metrics and indicators as it works to expose a fuller picture of research impact.

http://blogs.openaccesscentral.com/blogs/bmcblog/entry/assessing_research_impact_at_the

Written by hbasset

May 29, 2012 at 7:23 pm

Posted in Journals


Abuses of bibliometrics and the slavish adoption of Impact Factor

leave a comment »

A nice piece by the editors of Reproductive BioMedicine Online:

“(…) with this easy access to databases and papers come problems: notably the increased risk of deliberate or accidental plagiarism (…) and the fact of information overload. This latter problem has resulted in what can be seen as an abreaction: the entrenchment of the ‘prestige journal’ into which a young scientist must get their paper come what may – much of the data incomprehensibly compacted, and figures often too small or cropped to be of evidential value. These ‘prestige’ journals build and thrive financially through the increased significance of a development complementary to bibliographic databases: the bibliometric analysis.

 Bibliometric analyses attempt to measure the impact of a journal’s published material that can then reinforce its prestige – or its incentive to play the ‘bibliometric boosting game’. Several such metrics are around, each with their own characteristic strengths and weaknesses. And, like the journals that these metrics claim to rank, the various metrics have acquired their own ‘prestige’ value: although on what basis, other than historical longevity, is unclear. Thus, the most sought (and feared) metric is the Impact Factor or IF (…)

A fundamental issue is that citation indices assume that if a paper is cited it is because it is useful. In reality, papers are cited for many reasons. For example, negative citations dispute results or theories by citing a paper critically. Other papers appear in citation lists simply because they have been cited previously rather than actually read – a practice facilitated by the very electronic publication that boosts the rise of metrics. This practice will tend to be self-reinforcing – squeezing out more pertinent or ‘better’ papers, and even propagating ‘myths’. Self-citation can boost one’s own IF as well as one’s own ego. We suspect that it is a rare author who could honestly claim to have generated anew each reference list – and indeed have read every paper afresh each time it is cited. (…)

All of this might not matter were it not for the recent bureaucratic obsession of institutions, funding bodies and government bodies with ‘objective metrics’ for ranking ‘performance’. This obsession has led to the increasing abuse of metrics as a surrogate for the scholarly value of work. Individual students, researchers and journal editors then are pressured to collude with this value system to make metrics in general, and the IF in particular, tyrannical despots that do few of us much good and distort publishing and citation practices. (…)

The IF, despite its flaws, seems here to stay for the foreseeable future, but the range of alternative metrics described above is available to us as editors. For this reason, we have decided with our publisher that henceforth from the July issue the journal will publish our data for the following metrics: the Impact Factor, the Scimago Journal Rank, the Source Normalized Impact per Paper, the Eigenfactor and the H-index (…). We are implementing this policy to encourage critical thought and discussion about metrics and to discourage the slavish adoption of IF as the only valid way in which to assess ‘quality’.”
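As a refresher on what two of those numbers actually compute: the two-year Impact Factor divides the citations a journal receives in year Y to its items from years Y−1 and Y−2 by the citable items it published in those two years, and the h-index is the largest h such that h papers have at least h citations each. A toy sketch, with figures invented purely for illustration:

```python
def impact_factor(citations, citable_items):
    """Two-year JIF: citations received in year Y to items published
    in years Y-1 and Y-2, divided by citable items from those years."""
    return citations / citable_items

def h_index(citation_counts):
    """Largest h such that h papers have at least h citations each."""
    ranked = sorted(citation_counts, reverse=True)
    return sum(1 for rank, c in enumerate(ranked, start=1) if c >= rank)

# Invented numbers, for illustration only.
print(impact_factor(450, 150))        # -> 3.0
print(h_index([25, 8, 5, 3, 3, 1]))   # -> 3 (three papers with >= 3 citations)
```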

Martin H. Johnson, Jacques Cohen, Gedis Grudzinskas. The uses and abuses of bibliometrics. Reproductive BioMedicine Online, 2012, Vol. 24, pp. 485-486. http://dx.doi.org/10.1016/j.rbmo.2012.03.007

Written by hbasset

May 11, 2012 at 4:42 pm

Posted in Journals


More fraudulent papers in high Impact-Factor publications

leave a comment »

This study reports evidence consistent with the ‘deliberate fraud’ hypothesis. The results suggest that papers retracted because of data fabrication or falsification represent a calculated effort to deceive. It is inferred that such behaviour is neither naïve, feckless nor inadvertent.

Authors of fraudulent retracted papers appear to target journals with a high Impact Factor.

The results of this study show unequivocally that scientists in the USA are responsible for more retracted papers than any other country. These results suggest that American scientists are significantly more prone to engage in data fabrication or falsification than scientists from other countries.

The idea that certain authors may be deliberately trying to deceive should make journal editors and general readers profoundly cautious.



R. Grant Steen. Retractions in the scientific literature: do authors deliberately commit research fraud? J Med Ethics 2011;37:113-117. http://jme.bmj.com/content/37/2/113.abstract

Written by hbasset

February 15, 2011 at 8:00 pm

Why Scopus has introduced SCImago JR and SNIP

with 2 comments

This paper introduces two journal metrics recently endorsed by Elsevier’s Scopus: SCImago Journal Rank (SJR) and Source Normalized Impact per Paper (SNIP). SJR weights citations according to the status of the citing journal and aims to measure journal prestige rather than popularity; SNIP corrects for differences in citation practices between fields by normalizing a journal’s citations per paper against the citation potential of its subject area.

It presents the main features of the two indicators, comparing them with one another and with a journal impact measure similar to Thomson Reuters’ journal impact factor (JIF).

The journal impact factor, developed by Eugene Garfield as a tool to monitor the adequacy of coverage of the Science Citation Index, is probably the most widely used bibliometric indicator in the scientific, scholarly and publishing community. However, its extensive use for purposes for which it was not designed has raised a series of criticisms, all aiming to adapt the measure to new user needs.

In January 2010, Scopus endorsed two such measures that had been developed by their partners and bibliometric experts: the SCImago Research Group, based in Spain (…), and the Centre for Science and Technology Studies (CWTS), based in Leiden, the Netherlands (…). The two metrics that were endorsed are SCImago Journal Rank (SJR) and Source Normalized Impact per Paper (SNIP).

Compared to other main fields, the life sciences and health sciences tend to show the highest SJR and RIP (raw impact per paper) values. Compared to the basic, JIF-like RIP, SJR tends to widen the differences between journals and enhances the position of the most prestigious titles, especially – though not exclusively – in the life and health sciences.

The fact that Scopus introduced these two complementary measures reflects the notion that journal performance is a multi-dimensional concept, and that there is no single ‘perfect’ indicator of journal performance.
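To make the ‘prestige rather than popularity’ idea concrete, here is a minimal, PageRank-style sketch of how a prestige-weighted journal metric of this kind can be computed. The citation matrix, damping factor and normalisation below are illustrative assumptions of mine, not SCImago’s published algorithm:

```python
import numpy as np

# Toy data: C[i, j] = citations from journal i to journal j.
C = np.array([[0., 10., 2.],
              [8.,  0., 4.],
              [1.,  3., 0.]])
articles = np.array([100., 80., 40.])   # citable items per journal

def prestige(C, damping=0.85, iterations=100):
    """PageRank-style power iteration: a citation transfers more
    prestige when the citing journal is itself prestigious."""
    n = C.shape[0]
    row_totals = C.sum(axis=1, keepdims=True)
    # Each journal distributes its prestige across the journals it cites.
    T = np.divide(C, row_totals, out=np.zeros_like(C), where=row_totals > 0)
    p = np.full(n, 1.0 / n)
    for _ in range(iterations):
        p = (1 - damping) / n + damping * (T.T @ p)
    return p / p.sum()

rip_like = C.sum(axis=0) / articles   # popularity: raw citations per paper
sjr_like = prestige(C) / articles     # prestige, size-normalised per paper
print(rip_like, sjr_like)
```

Comparing the two printed vectors shows the effect the paper describes: a journal cited by prestigious journals can rank higher on the SJR-like score than its raw citations per paper would suggest.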
 
Additional resources:
www.journalmetrics.com

www.scimagojr.com


Lisa Colledge, Félix de Moya-Anegón, Vicente Guerrero-Bote, et al. SJR and SNIP: two new journal metrics in Elsevier’s Scopus. Serials: The Journal for the Serials Community, Volume 23, Number 3, November 2010, pp. 215-221.

http://dx.doi.org/10.1629/23215

Written by hbasset

December 19, 2010 at 10:15 am

US scientists more likely to publish “fake” papers in high Impact Factor journals

leave a comment »

Based on papers retracted from PubMed over the last decade, the study “noted that the highest number of retracted papers was written by US first authors, accounting for one third of the total. One in three of these was attributed to fraud”.

According to the study, the fakes were more likely to appear in leading publications with a high ‘impact factor’.

Press release, here

Written by hbasset

November 17, 2010 at 6:23 pm

Posted in Journals


American studies are more “positive” under pressure

leave a comment »

Researchers worldwide produce more than 1.4 million scientific articles each year.

A new European study shows that the ever-growing pressure to produce publishable results can adversely affect the quality of scientific research.

It was found that researchers report more ‘positive’ results for their experiments if they are based in US states where academics publish more frequently.

“A cause of particular concern is the growing competition for research funding and academic positions, which, combined with an increasing use of bibliometric parameters to evaluate careers (e.g. number of publications and the impact factor of the journals they appeared in), pressures scientists into continuously producing ‘publishable’ results”.

“Like all human beings, scientists are confirmation-biased (i.e. tend to select information that supports their hypotheses about the world), and they are far from indifferent to the outcome of their own research: positive results make them happy and negative ones disappointed”.

The author, Daniele Fanelli, found that authors working in more ‘productive’ states were more inclined to support the tested hypothesis, regardless of their research domain and whether or not funding was allocated to them. The findings also hint that academics who carry out research in more competitive and productive environments are more likely to make their results look more ‘positive’.

The conclusions could be applied to all scientifically advanced countries, says the study, adding that policies that rely excessively on productivity measures might be lowering the quality of research.
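Schematically, the kind of analysis behind such a finding is a regression of paper outcome (support vs. no support for the tested hypothesis) on a state-level productivity measure, with controls such as funding. A minimal sketch on invented data – not Fanelli’s actual dataset or model specification:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 500  # invented sample of papers

productivity = rng.normal(size=n)     # state publication rate (standardised)
funded = rng.integers(0, 2, size=n)   # control variable: funded yes/no
# Simulate a mild association purely so the example has signal.
logit = -0.2 + 0.4 * productivity + 0.1 * funded
positive = rng.random(n) < 1 / (1 + np.exp(-logit))

X = np.column_stack([productivity, funded])
model = LogisticRegression().fit(X, positive)
# A positive first coefficient = more 'positive' papers in more
# productive states, holding funding constant.
print(model.coef_)
```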

Fanelli D (2010) Do Pressures to Publish Increase Scientists’ Bias? An Empirical Support from US States Data. PLoS ONE 5(4): e10271.

http://www.plosone.org/article/info:doi/10.1371/journal.pone.0010271

Written by hbasset

April 26, 2010 at 5:20 pm

Impact Factor still dominates

leave a comment »

Whatever the initiatives launched to compete with the famous Impact Factor owned by Thomson Reuters, the IF still dominates the small world of research evaluation, says an article in IWR.

Journal impact factors (IFs) have become a status symbol in the world of research. With libraries facing severe budget cuts in the recession, IF scores can help decide which journals remain essential to a collection. (…)

More important than the IF number itself is the ranking position it gives a journal, enabling the identification of high-impact or must-have journals. In a survey, 49% of respondents believed the IF was an “important” factor when judging a journal and 26.7% said it was a “very important” factor.

Venkatraman, Archana. Impact factors dominate citation metrics. Information World Review, online (10 September 2009): http://www.iwr.co.uk/information-world-review/analysis/2249258/journals-cherish-status-symbol

Written by hbasset

October 12, 2009 at 7:23 pm

Posted in Journals

