In June, Clarivate (formerly Thomson Reuters) will release the Journal Citation Reports (JCR), an annual summary of the citation performance of more than ten thousand academic journals. While the JCR includes a variety of benchmark performance indicators, most users focus on just one metric: the Journal Impact Factor.

Designed as a tool for measuring and ranking the performance of journals within a field, the Impact Factor is now over 40 years old. In recent years, other citation-based metrics have been developed to complement, or compete with, the Impact Factor.

The purpose of this post is to provide a brief summary of the main citation indicators used today. It is not intended to be comprehensive, nor is it intended to opine on which indicator is best. It is geared toward casual users of performance metrics rather than bibliometricians. No indicator is perfect; the goal of the summaries below is simply to highlight their salient strengths and weaknesses.

These citation indicators are grouped by the design of their algorithms: the first group (Ratio-based indicators) is built on the same model as the Impact Factor, dividing citation counts by document counts; the second group (Portfolio-based indicators) calculates a score from a ranked set of documents; and the last group (Network-based indicators) seeks to measure influence within a larger citation network.

A good indicator simplifies the underlying data, reports it reliably and transparently, and is difficult to game. Most importantly, a good indicator has a tight theoretical connection to the construct it attempts to measure. Any one of these properties is deserving of its own blog post, if not an entire book chapter.

Last, any discussion of performance indicators invites strong opinions that are important but largely tangential to the metrics themselves, such as their misuse and abuse or their social, cultural, and political implications. While these discussions are necessary, I’d like to keep comments focused on the indicators themselves: Did I miss (or misrepresent) any important indicators? Does one indicator capture the underlying value construct better than another? What would make an indicator more reliable, more transparent, or more difficult to game?


Ratio-based indicators

Impact Factor: Total citations in a given year to all papers published in the past 2 years divided by the total number of articles and reviews published in the past 2 years. PROS: Simple formula with historical data. CONS: 2-year publication window is too short for most journals; numerator includes citations to papers not counted in denominator. PRODUCER: Clarivate (formerly Thomson Reuters), published annually in June.
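
As an illustration (using a hypothetical reporting year), the 2024 Impact Factor of a journal would be:

$$\mathrm{IF}_{2024} = \frac{\text{citations received in 2024 by items the journal published in 2022 and 2023}}{\text{articles and reviews the journal published in 2022 and 2023}}$$

The mismatch noted above arises because the numerator counts citations to all items, including front matter, while the denominator counts only articles and reviews.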

Impact Factor (5-yr): Same formula as the Impact Factor, but with a 5-year publication window instead of 2. PROS: Preferred metric in fields in which the citation lifecycle is long, e.g., the social sciences. PRODUCER: Clarivate, published annually in June.

CiteScore: Total citations in a given year to all documents published in the past 3 years divided by the total number of documents published in the past 3 years. PROS: Does not attempt to classify and limit by article type; based on the broader Scopus dataset; free resource. CONS: Biased against journals that publish front matter (editorials, news, letters, etc.). PRODUCER: Elsevier, based on Scopus data, updated monthly.

Impact per Publication (IPP): Similar to the Impact Factor, with notable differences: a 3-year publication window instead of 2; includes only citations to papers classified as articles, conference papers, or reviews; based on the broader Scopus dataset. PROS: Longer observation window; citations are limited to those documents counted in the denominator. CONS: Like the Impact Factor, defining the correct article type can be problematic. PRODUCER: CWTS, Leiden University, based on Scopus data; published each June.

Source-Normalized Impact per Paper (SNIP): Similar to IPP, but citation scores are normalized to account for differences between scientific fields, where a journal's field is determined by the set of papers citing that journal. PROS: Can compare journal performance across fields. CONS: Normalization makes the indicator less transparent. PRODUCER: CWTS, Leiden University for Elsevier, published each June.
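
To make the normalization idea concrete, here is a minimal Python sketch. It is not the published CWTS methodology, which includes additional corrections; it only illustrates the intuition that a journal cited by papers with long reference lists sits in a field with high citation potential, so its raw citations-per-paper score is scaled down (and vice versa). The function and parameter names are illustrative assumptions.

```python
def snip_sketch(citations_per_paper, citing_reference_counts, database_median_refs):
    """Illustrative sketch of source normalization (not the full CWTS method).

    citations_per_paper:     the journal's raw impact-per-paper score
    citing_reference_counts: reference-list lengths of the papers citing the journal
    database_median_refs:    typical reference-list length across the whole database
    """
    # Citation potential of the journal's field, estimated from its citing papers.
    field_potential = sum(citing_reference_counts) / len(citing_reference_counts)
    # Scale the raw score by how citation-dense the field is relative to the database.
    return citations_per_paper / (field_potential / database_median_refs)
```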


Portfolio-based indicators

h-index: A measure of both the quantity and the citation performance of an individual author’s publications. An author with an index of h has published h papers, each of which has been cited at least h times. PROS: Measures career performance; not influenced by outliers (highly cited papers). CONS: Field-dependent; ignores author order; increases with author age and productivity; sensitive to self-citation and gaming, especially in Google Scholar. PRODUCER: First described by Hirsch; many sources calculate h-index values for individual authors.
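
The definition translates directly into code. A minimal sketch, given a list of per-paper citation counts for one author:

```python
def h_index(citation_counts):
    """Largest h such that at least h papers have h or more citations each."""
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, citations in enumerate(counts, start=1):
        if citations >= rank:
            h = rank   # the top `rank` papers each have at least `rank` citations
        else:
            break
    return h

# Five papers cited 10, 8, 5, 4, and 3 times give an h-index of 4.
print(h_index([10, 8, 5, 4, 3]))
```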

h-5: A variation of the h-index that is limited to articles published in the last 5 years. Used by Google Scholar to compare journal performance. PROS: Enables younger authors to be compared with older authors. CONS: Field-dependent; ignores author order; sensitive to self-citation and gaming, especially in Google Scholar. For journals, h-5 is biased toward larger titles. Google Scholar also reports h5-median, which is intended to address size bias. PRODUCER: Google Scholar. Published annually in June.


Network-based indicators

Eigenfactor: Measures the influence of a journal on an entire citation network. Scores are based on eigenvector centrality, computed through iterative weighting, so that a citation from a highly cited journal counts for more than one from a little-cited journal. PROS: Offers a metric that more closely reflects scientific influence as a construct. CONS: Computationally complex, not easily replicable, and for most journals it yields much the same result as simpler methods (e.g., the Impact Factor). PRODUCER: Clarivate (formerly Thomson Reuters), published annually in June.
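
A bare-bones sketch of the underlying idea, using power iteration on a small journal-to-journal citation matrix. The actual Eigenfactor algorithm adds refinements not shown here (a five-year citation window, exclusion of journal self-citations, and weighting by article counts), so treat this only as an illustration of how influence propagates through the network.

```python
import numpy as np

def influence_sketch(citation_matrix, iterations=100):
    """Toy eigenvector-centrality calculation via power iteration.

    citation_matrix[i][j] = citations from journal j to journal i.
    Assumes every journal cites at least one other journal.
    """
    M = np.asarray(citation_matrix, dtype=float)
    M = M / M.sum(axis=0)                  # normalize each journal's outgoing citations
    scores = np.full(M.shape[0], 1.0 / M.shape[0])
    for _ in range(iterations):
        scores = M @ scores                # citations from influential journals count for more
        scores /= scores.sum()
    return scores

# Three journals: journal 0 is heavily cited by the other two.
print(influence_sketch([[0, 5, 4],
                        [1, 0, 1],
                        [2, 1, 0]]))
```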

SCImago Journal Rank (SJR): Like the Eigenfactor, but computed on the Scopus database. PRODUCER: SCImago research group, based on Scopus data; published annually in June. A detailed explanation and comparison of Eigenfactor and SJR is found here.

Relative Citation Ratio (RCR): A field-normalized citation metric for articles based on NIH’s PubMed database. A field is defined by the references in the articles co-cited with the paper of interest. For example, if Article A is co-cited by Articles B, C, and D, then Article A’s field is defined by the references contained within Articles B, C, and D. PROS: Allows each article to be defined by its own citation network rather than relying on external field classification. CONS: Sensitive to interdisciplinary citations and multidisciplinary journals. The RCR is dependent upon the Impact Factor for weighting journals listed in references. PRODUCER: NIH.
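
As a rough sketch of the co-citation idea only (not NIH's iCite implementation, which adds a regression step and benchmarks scores against a cohort of NIH-funded papers), the calculation can be pictured like this; the data structures and names are illustrative assumptions:

```python
def rcr_sketch(article_citations_per_year, co_citing_reference_journals, journal_citation_rate):
    """Rough sketch of the Relative Citation Ratio idea (not the iCite implementation).

    co_citing_reference_journals: for each article that co-cites the paper of
        interest, the journals of the references it contains; together these
        references define the paper's field.
    journal_citation_rate: an Impact-Factor-like citations-per-paper rate for
        each journal, used to weight the field.
    """
    # Pool the journals referenced by all co-citing articles to define the field.
    field = [journal for refs in co_citing_reference_journals for journal in refs]
    # Expected citation rate: the average journal-level rate across that field.
    expected_rate = sum(journal_citation_rate[j] for j in field) / len(field)
    return article_citations_per_year / expected_rate
```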
