Published online before print April 20, 2009, doi:10.1073/pnas.0903307106
Progress in science is driven by the publication of novel ideas and experiments, most usually in peer-reviewed journals, but nowadays increasingly just on the internet. We all have our own ideas of which are the most influential journals, but is there a simple statistical metric of the influence of a journal? Most scientists would immediately say the Impact Factor (IF), which is published online in Journal Citation Reports as part of the ISI Web of Knowledge.
The Impact Factor is the average number of citations received in a year by the papers a journal published in the previous 2 years. But which, for example, is the most influential of the following three journals: A, which publishes just 1 paper a year and has a stellar IF of 100; B, which publishes 1,000,000 papers per year and has a dismal IF of 0.1 but 100,000 citations; or C, which publishes 5,000 papers a year with an IF of 10? Unless there is a very odd distribution of citations in B, or A has a paradigm-shifting paper like the Watson and Crick DNA structure, C is likely to be the most influential journal. Clearly, neither IF nor total number of citations is, per se, the metric of the overall influence of a journal.
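Written out as a rough formalization (the symbols here are mine, not ISI's), the IF of a journal in year y is

```latex
\mathrm{IF}_{y} = \frac{C_{y}(P_{y-1}) + C_{y}(P_{y-2})}{N_{y-1} + N_{y-2}}
```

where C_y(P_{y-k}) is the number of citations received in year y by the papers the journal published in year y-k, and N_{y-k} is the number of such papers. The thought experiment above simply compares the numerator (total citations) with the ratio (IF): neither alone captures overall influence.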
Bibliometricians have introduced various scales for ranking journals: some based on citations, some also on usage, including internet usage analyzed with social-network methods. Bollen et al. recently concluded that no single indicator adequately measures impact and that IF lies at the periphery of the 39 measures they analyzed. But there is a new parameter, the Eigenfactor, which attempts to rate the influence of journals. The Eigenfactor ranks journals in a manner similar to that used by Google for ranking the importance of Web sites in a search. To quote from www.eigenfactor.org/methods.htm:
The Eigenfactor algorithm corresponds to a simple model of research in which readers follow chains of citations as they move from journal to journal. Imagine that a researcher goes to the library and selects a journal article at random. After reading the article, the researcher selects at random one of the citations from the article. She then proceeds to the journal that was cited, reads a random article there, and selects a citation to direct her to her next journal volume. The researcher does this ad infinitum.
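That description is essentially a random walk on the journal citation graph, closely related to Google's PageRank. The Python sketch below is my own illustration of the idea, not the code used by eigenfactor.org (which, among other refinements, excludes self-citations and weights journals by article counts); the journal names and citation counts are invented. It computes how often the imaginary reader visits each journal by power iteration on a column-normalized citation matrix:

```python
import numpy as np

# Toy citation matrix: entry [i, j] = citations from journal j to journal i.
# Journals and counts are invented purely for illustration.
journals = ["A", "B", "C"]
citations = np.array([
    [0.0, 30.0,  5.0],   # citations received by A
    [10.0, 0.0, 20.0],   # citations received by B
    [40.0, 60.0, 0.0],   # citations received by C
])

# Column-normalize so each column gives the probability that the "reader"
# follows a citation from journal j to journal i.
transition = citations / citations.sum(axis=0)

# Power iteration: repeatedly apply the transition matrix to a uniform
# start vector until the journal-visit frequencies stop changing.
rank = np.full(len(journals), 1.0 / len(journals))
for _ in range(1000):
    new_rank = transition @ rank
    if np.allclose(new_rank, rank, atol=1e-12):
        break
    rank = new_rank

for name, score in sorted(zip(journals, rank), key=lambda x: -x[1]):
    print(f"{name}: {score:.3f}")
```

The stationary visit frequencies are the journal scores: a journal ranks highly if it is cited often by journals that are themselves cited often, not merely if it receives many citations.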
The Eigenfactor is now listed by Journal Citation Reports. In practice, there is a strong correlation between Eigenfactors and the total number of citations received by a journal. A plot of the 2007 Eigenfactors for the 200 most-cited journals against their total number of citations shows some startling results:
Three journals have far and away the most overall influence on science: Nature, PNAS, and Science, closely followed by the Journal of Biological Chemistry.
The terrible legacy of IF is that it is being used to evaluate scientists, rather than journals, which has become of increasing concern to many of us. Judgment of individuals is, of course, best done by in-depth analysis by expert scholars in the subject area. But some bureaucrats want a simple metric. My experience of serving on international review committees is that more notice is taken of IF when the members do not have the knowledge to evaluate the science independently.
An extreme example of such behavior is an institute in the heart of the European Union that evaluates papers from its staff by applying a weighting factor of 0 to all papers published in journals with IF < 5 and only a small one for 5 < IF < 10. So, publishing in the Journal of Molecular Biology counts for naught, despite its being at the top of areas such as protein folding.
All journals have a spread of citations, and even the best have some papers that are never cited plus some fraudulent papers and some excruciatingly bad ones. So, it is ludicrous to judge an individual paper solely on the IF of the journal in which it is published.
Fortunately, PNAS has both a good IF and a high reliability because of its access to so many expert National Academy of Sciences member-editors. If a paper has to be judged by a metric, then it should be by the citations to it, and not to the journal. The least evil of the metrics for individual scientists is the h-index, which ranks the influence of a scientist by the number of citations to a significant number of his or her papers; an h of 100 would mean that 100 of their publications have been cited at least 100 times each. In terms of a "usage" metric, Hirsch's h-index paper is exceptional in its number of downloads (111,126 downloads versus 262 citations since it was published in November 2005).
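For illustration, the h-index can be computed directly from a list of per-paper citation counts. The short Python sketch below (the citation counts are invented) sorts the counts and finds the largest h for which at least h papers have at least h citations each:

```python
def h_index(citation_counts):
    """Largest h such that at least h papers have >= h citations each."""
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Invented example: a scientist with six papers.
print(h_index([50, 18, 9, 4, 3, 1]))  # prints 4
```

Note how the index rewards a sustained body of well-cited work rather than a single highly cited paper: the 50-citation paper on its own contributes no more to h than the 9-citation one.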
While new and emerging measures of scientific impact are being developed, it is important not to rely solely on any one standard. After all, science is about progress, which is ultimately assessed by human judgment.