Archive for the ‘Tools’ Category
Open access (OA) publisher BioMed Central has launched a new semantically enriched search tool, Cases Database, which aims to enhance the discovery, filtering and aggregation of medical case reports from many journals. OA to journal articles published under Creative Commons licences that permit text mining enables the literature to be reused as a resource for scientific discovery.
More than 11,000 cases from 100 different journals are reportedly available to be freely searched with Cases Database.
Cases Database uses text mining and medical term recognition to filter peer-reviewed medical case reports and provide a semantically enriched search experience. The database offers structured search and filtering by condition, symptom, intervention, pathogen, patient demographic and many other data fields, allowing fast identification of relevant case reports to support clinical practice and research. Registered users can save cases, set up e-mail alerts to new cases matching their search terms, and export their results. Cases Database will be free to access and is expected to be of particular interest to practicing clinicians, researchers, lecturers, drug regulators, patients, students and authors.
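The structured, faceted filtering described above can be pictured as simple field-based matching over case-report records. The sketch below is purely illustrative: the field names and records are invented, and this is not Cases Database's actual schema or API.

```python
# Hypothetical case-report records with structured fields
# (condition, patient demographics, etc.).
cases = [
    {"condition": "malaria", "patient_sex": "female", "age": 34},
    {"condition": "malaria", "patient_sex": "male", "age": 51},
    {"condition": "influenza", "patient_sex": "female", "age": 8},
]

def filter_cases(records, **criteria):
    """Keep only records whose fields match every given criterion."""
    return [r for r in records
            if all(r.get(field) == value for field, value in criteria.items())]

matches = filter_cases(cases, condition="malaria", patient_sex="female")
print(len(matches))  # → 1
```

A real implementation would of course index the mined terms rather than scan records linearly, but the faceted-search idea is the same.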
Here is a nice post by the KraftyLibrarian, worth reading:
If you haven’t heard about the Mayan civilization’s calendar predicting the end of the world on December 21, 2012, then you have been living under a rock. Personally I believe the Mayans were on to something. Instead, I believe the end of the world will happen on January 1, 2013. Why?
As of January 1st NCBI will no longer support Internet Explorer 7 and all the hospitals that haven’t upgraded will begin to have problems searching PubMed. (…)
Read the full article at:
KraftyLibrarian. Internet Explorer, PubMed and the End of the Year. Posted on 12 December 2012, Available from: http://kraftylibrarian.com/?p=2153
Brilliantly introduced by Robin Neidorf, there is a white paper this month at Freepint that is worth reading. It explains how it could be risky for a company to rely only on free news services.
“Free sources of news are increasingly used in the enterprise as “good enough” for most purposes. However, there are times when “good enough” isn’t enough, and it’s essential for a researcher to know when those are… and to have the right tools to hand. (…)
Information professionals report to us that they know premium news providers offer better search, more targeted results, more flexible output options and a host of other features that save them and their clients time. (…)
There are plenty of times when “good enough” is distinctly not enough.
Read the full story at:
Neidorf, Robin. News Diligence: When “Good Enough” Just Isn’t, Freepint, 28th of November 2012. Available from: http://web.freepint.com/go/features/69549
Europe PubMed Central = 25 million abstracts from Medline + 2 million OA full-text articles + Agricola + biological patents + theses + clinical guidelines…
Unlike PubMed Central, Europe PMC provides a single point of access not only to full-text articles but also to the abstracts available through PubMed. The Europe PMC interface also offers novel features and functionality, including links to other relevant content, integrated text and data mining tools, and grant reporting services through Europe PMC plus.
According to this study of Social Sciences publications, Google Scholar provides “vastly larger citation counts than either Scopus or Web of Science when all results are taken into account, but only slightly larger counts when only scholarly journals are considered”…
The study also addresses the issue of citation counting, noting that “it is relatively easy to falsify citing references to research and create ‘search engine spam’ which artificially inflates citation counts within Google Scholar. While it is unclear as to whether this is occurring deliberately and if so, to what extent, it remains an issue which should engender cautious use of search engine citation data”.
The study concludes that “Google Scholar may not be as reliable as either Scopus or Web of Science as a stand-alone source for citation data”.
Elaine M. Lasda Bergman. Finding Citations to Social Work Literature: The Relative Benefits of Using Web of Science, Scopus, or Google Scholar. The Journal of Academic Librarianship, Available online 23 October 2012
Elsevier today announced the integration of Roche proprietary reaction information within Reaxys, which will run on Roche’s infrastructure and inside the Roche firewall to provide high performance and security. Roche chemistry information will be securely searchable and discoverable by Roche scientists through the Reaxys user interface. The incorporation and discoverability of Roche proprietary information in Reaxys is anticipated to significantly improve Roche scientists’ productivity.
With this development Roche researchers will be able to launch a single search in Reaxys across integrated internal data and experimental data published in journals and patents, with results unified and organised in a context directly relevant to the researcher workflow. The announcement comes after many months of collaboration between teams from Roche and Reaxys.
Source: STM publishing news, 2nd of October 2012; Available from:
An interesting study to read:
“RoI can be defined as a performance measure used to quantify and evaluate the efficiency of an investment in library resources or to compare efficiency among different investments. While it may seem simply to be a question of money in versus money out, the real difficulty of expressing the overall value of this resource for an institution comes from many contributing factors:
- Time saved by library staff and researchers
- Convenience of constant access and online search capabilities
- Effect on research output and teaching
- Physical space saved in the library by using electronic resources
RoI can be articulated by libraries to provide justification for ongoing development of collections within an institution and to ensure that current resources may be prioritized in terms of the value they provide to the institution as a whole. (…)
For librarians and administrators working to meet competing demands with limited resources, understanding the value of eBooks will continue to be of great importance for the foreseeable future. The ability to evaluate the most cost-effective and beneficial scholarly content allows for librarians to prioritize resources for their patrons and demonstrate the ongoing value of the library for their institution. (…)
eBooks are anticipated to become as hugely influential over print publications as eJournals have been in the last decade. (…)
Our interviews showed that evaluating usage data is the most common and obvious method for evaluating RoI. Other factors affecting value such as time spent processing records and marketing eBooks to users are often more difficult to quantify. User surveys are a common tool for providing much more context for how library patrons are interacting with eBooks and their perceived value. (…)
eBooks are used much more for individual chapters rather than an entire book... (…)
With the move to electronic content of all kinds, a shift has occurred in the role of librarians themselves. For instance, much more time is being spent on technical issues than 10 years prior. Librarians are now required to have computer expertise, and are charged with training their patrons on how to best make use of these electronic resources to maintain the value of the content…
eBooks are relatively new, compared with 15 years of eJournals, but are likely to continue a rapid rate of adoption in the coming years. The industry is in its early development though, and it will likely be a few years before book collections have migrated from print to electronic to the same extent journals have. Once this happens, faculty and student usage is likely to increase dramatically…
Read the full paper on:
Scholarly eBooks: understanding the Return on Investment for libraries. White paper, Springer, 2012. 9 p.
A few words on the soft revolution that might happen on the search giant…
The Google blog announces, at last, the release of developments (known as the Knowledge Graph) that Google’s R&D teams in Mountain View have been working on for years.
“The Knowledge Graph enables you to search for things, people or places that Google knows about—landmarks, celebrities, cities, sports teams, buildings, geographical features, movies, celestial objects, works of art and more—and instantly get information that’s relevant to your query. This is a critical first step towards building the next generation of search, which taps into the collective intelligence of the web and understands the world a bit more like people do. (…)
1. Find the right thing
Language can be ambiguous—do you mean Taj Mahal the monument, or Taj Mahal the musician? Now Google understands the difference, and can narrow your search results just to the one you mean—just click on one of the links to see that particular slice of results:
2. Get the best summary
With the Knowledge Graph, Google can better understand your query, so we can summarize relevant content around that topic, including key facts you’re likely to need for that particular thing. For example, if you’re looking for Marie Curie, you’ll see when she was born and died, but you’ll also get details on her education and scientific discoveries:
3. Go deeper and broader
Finally, the part that’s the most fun of all—the Knowledge Graph can help you make some unexpected discoveries. You might learn a new fact or new connection that prompts a whole new line of inquiry.
We’ve always believed that the perfect search engine should understand exactly what you mean and give you back exactly what you want. And we can now sometimes help answer your next question before you’ve asked it, because the facts we show are informed by what other people have searched for.
We’ve begun to gradually roll out this view of the Knowledge Graph to U.S. English users. It’s also going to be available on smartphones and tablets…
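The disambiguation and summary behaviour quoted above can be pictured as a lookup into an entity store keyed by name, where an ambiguous string maps to several distinct entities, each carrying typed facts. This toy sketch only illustrates the idea; all the entity data is made up for the example, and it in no way reflects Google's actual implementation.

```python
# Toy "knowledge graph": an ambiguous name maps to distinct entities,
# each with typed facts that could be summarised next to search results.
knowledge_graph = {
    "taj mahal": [
        {"type": "monument", "location": "Agra, India", "built": "1632-1653"},
        {"type": "musician", "genre": "blues", "born": 1942},
    ],
}

def disambiguate(query):
    """Return all candidate entities for an ambiguous query string."""
    return knowledge_graph.get(query.lower(), [])

# "Do you mean Taj Mahal the monument, or Taj Mahal the musician?"
for entity in disambiguate("Taj Mahal"):
    print(entity["type"])  # prints "monument" then "musician"
```

Letting the user pick one candidate then corresponds to showing "that particular slice of results" mentioned in the quote.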
Singhal, Amit. Introducing the Knowledge Graph: things, not strings. Google Blog, 16th of May 2012. Available from: http://googleblog.blogspot.fr/2012/05/introducing-knowledge-graph-things-not.html [Accessed 30th of May 2012]
“Since its launch in 2001, Wikipedia has seen incredible growth worldwide, counting more than 21 million articles published in around 280 languages (including nearly 4 million articles in English) in 2012 (1).
Wikipedia has grown in size (the number of Wikipedia entries/articles has been increasing over time) and is showing high reliability: a recent study (2) of historical entries found 80% accuracy for Wikipedia, compared to 95-96% for other sources. This means that for the entries checked in the study, Wikipedia’s accuracy was on average only about 15 percentage points lower than that of other sources, including traditionally perceived authoritative sources such as Encyclopaedia Britannica. The research found this difference to be negligible. Adding to this Wikipedia’s ease of access and wide coverage of topics explains why for many people it has become the first port of call for instant general knowledge on a variety of subjects. (…)
What is perhaps surprising is that Wikipedia appears to be increasingly used by scholars for their research. (…)
More interestingly, there has also been a dramatic increase in the number of publications referring to Wikipedia as a source. The aforementioned recently published study limited the search results to mentions of Wikipedia as a reference title, but extending the search to all reference fields reveals much wider use, even when restricted to scholarly content published in journals. The compound annual growth rate (CAGR) was an unbelievable 88% per annum from the first paper in 2002 to the 4,006 papers published in 2011. Focusing on the past five years (2007-2011), CAGR was still impressive at more than 31% per annum.
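The growth figures quoted above follow from a standard compound-annual-growth-rate calculation. A minimal sketch, assuming the 2011 count of 4,006 papers from the source; the implied 2002 starting count is back-calculated for illustration and is not stated in the article:

```python
def cagr(start_value, end_value, years):
    """Compound annual growth rate over the given number of years."""
    return (end_value / start_value) ** (1 / years) - 1

# An 88% CAGR over the nine years 2002-2011 ending at 4006 papers
# implies roughly 4006 / 1.88**9 ≈ 14 papers in 2002 (illustrative).
papers_2002 = 4006 / 1.88 ** 9
print(f"implied 2002 count: {papers_2002:.0f}")       # → 14
print(f"CAGR 2002-2011: {cagr(papers_2002, 4006, 9):.0%}")  # → 88%
```

The same function with the 2007 count would reproduce the quoted 31% figure for 2007-2011.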
Huggett, Sarah. The influence of free encyclopedias on science. Research Trends, March 2012. Available from: http://www.researchtrends.com/issue-27-march-2012/the-influence-of-free-encyclopedias-on-science/ [Accessed 23rd April 2012]