
Scholarly Communications

A guide created to discuss numerous aspects of scholarly communication, including open access publishing, copyright and Creative Commons licensing, scholarly profiles, and the h-index

Impact Factor: what it is, and what it isn't

Defined:

From Wikipedia:
Calculation

In any given year, the impact factor of a journal is the number of citations, received in that year, of articles published in that journal during the two preceding years, divided by the total number of articles published in that journal during the two preceding years:[1]

 

IF_y = (Citations_(y-1) + Citations_(y-2)) / (Publications_(y-1) + Publications_(y-2))

 

For example, Nature had an impact factor of 41.456 in 2014:[2]

 

IF_2014 = (Citations_2013 + Citations_2012) / (Publications_2013 + Publications_2012)
        = (29753 + 41924) / (860 + 869)
        = 41.456

 

This means that, on average, its papers published in 2012 and 2013 received roughly 41 citations each in 2014. Note that 2014 impact factors are reported in 2015; they cannot be calculated until all of the 2014 publications have been processed by the indexing agency.
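The arithmetic above can be sketched in a few lines of Python; the figures are the 2014 numbers for Nature from the worked example (the function name is illustrative, not part of JCR):

```python
# Two-year journal impact factor, per the definition above:
# citations received in year y to items from years y-1 and y-2,
# divided by the number of items published in those two years.

def impact_factor(citations_prev1, citations_prev2, pubs_prev1, pubs_prev2):
    """Return the two-year impact factor, rounded to three decimals (JCR style)."""
    return round((citations_prev1 + citations_prev2) / (pubs_prev1 + pubs_prev2), 3)

# Nature's 2014 figures from the example above
if_2014 = impact_factor(citations_prev1=29753,  # 2014 citations to 2013 items
                        citations_prev2=41924,  # 2014 citations to 2012 items
                        pubs_prev1=860,         # citable items published in 2013
                        pubs_prev2=869)         # citable items published in 2012
print(if_2014)  # 41.456
```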

New journals, which are indexed from their first published issue, will receive an impact factor after two years of indexing; in this case, the citations to the year prior to Volume 1, and the number of articles published in the year prior to Volume 1, are known zero values. Journals that are indexed starting with a volume other than the first volume will not get an impact factor until they have been indexed for three years. Occasionally, Journal Citation Reports assigns an impact factor to new journals with fewer than two years of indexing, based on partial citation data.[3][4] The calculation always uses two complete and known years of item counts, but for new titles one of the known counts is zero. Annuals and other irregular publications sometimes publish no items in a particular year, affecting the count.

The impact factor relates to a specific time period, and it can be calculated for any desired period. For example, the Journal Citation Reports (JCR) also includes a five-year impact factor, which is calculated by dividing the number of citations to the journal in a given year by the number of articles published in that journal in the previous five years.[5][6]
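The same definition generalizes to any citation window, including the five-year impact factor mentioned above. A minimal sketch, assuming counts are kept in per-year dictionaries (the function name and data layout are illustrative, not from JCR):

```python
def windowed_impact_factor(citations_by_year, pubs_by_year, year, window=2):
    """Impact factor for `year` over the preceding `window` years.

    citations_by_year[y] = citations received in `year` to items published in y;
    pubs_by_year[y]      = citable items published in y.
    Years with no data (e.g. before a journal's first volume) count as zero,
    as described for new journals above.
    """
    years = range(year - window, year)
    cites = sum(citations_by_year.get(y, 0) for y in years)
    pubs = sum(pubs_by_year.get(y, 0) for y in years)
    return round(cites / pubs, 3)

# The default two-year window reproduces the Nature 2014 example
cites = {2012: 41924, 2013: 29753}
pubs = {2012: 869, 2013: 860}
print(windowed_impact_factor(cites, pubs, 2014))  # 41.456
# A five-year impact factor would pass window=5 with five years of counts.
```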

Use

The impact factor is used to compare different journals within a certain field. The Web of Science indexes more than 11,500 science and social science journals.[7]

Journal impact factors are often used to evaluate the merit of individual articles and individual researchers. This particular use of impact factors was summarised by Hoeffel:[8]

Impact Factor is not a perfect tool to measure the quality of articles but there is nothing better and it has the advantage of already being in existence and is, therefore, a good technique for scientific evaluation. Experience has shown that in each specialty the best journals are those in which it is most difficult to have an article accepted, and these are the journals that have a high impact factor. Most of these journals existed long before the impact factor was devised. The use of impact factor as a measure of quality is widespread because it fits well with the opinion we have in each field of the best journals in our specialty.

...

In conclusion, prestigious journals publish papers of high level. Therefore, their impact factor is high, and not the contrary.

As the impact factor is a journal-level metric, rather than an article-level or individual-level metric, this use is controversial. Garfield agrees with Hoeffel,[9] but warns about the "misuse in evaluating individuals" because there is "a wide variation [of citations] from article to article within a single journal".[10]

Some companies are producing false impact factors. According to an article published in the United States National Library of Medicine, these include Global Impact Factor (GIF), Citefactor, and Universal Impact Factor (UIF).[11]

 

  1. "Journal Citation Reports: Impact Factor". Retrieved 2016-09-12.
  2. "Nature". 2014 Journal Citation Reports. Web of Science (Science ed.). Thomson Reuters. 2015.
  3. "RSC Advances receives its first partial impact factor". RSC Advances Blog. 24 June 2013. Retrieved 16 July 2018.
  4. "Our first (partial) impact factor and our continuing (full) story". news.cell.com. 30 July 2014. Archived from the original on 7 March 2016. Retrieved 21 May 2015.
  5. "JCR with Eigenfactor". Archived from the original on 2010-01-02. Retrieved 2009-08-26.
  6. "ISI 5-Year Impact Factor". APA. Retrieved 2017-11-12.
  7. "Every journal has a story to tell". Journal Citation Reports. Clarivate Analytics. Retrieved 2019-03-15.
  8. Hoeffel, C. (1998). "Journal impact factors". Allergy. 53 (12): 1225. doi:10.1111/j.1398-9995.1998.tb03848.x. ISSN 0105-4538. PMID 9930604.
  9. Garfield, Eugene (2006-01-04). "The History and Meaning of the Journal Impact Factor". JAMA. 295 (1): 90–3. doi:10.1001/jama.295.1.90. ISSN 0098-7484. PMID 16391221.
  10. Garfield, Eugene (June 1998). "The Impact Factor and Using It Correctly". Der Unfallchirurg. 101 (6): 413–414. PMID 9677838.
  11. Jalalian, M (2015). "The story of fake impact factor companies and how we detected them". Electronic Physician. 7 (2): 1069–72. doi:10.14661/2015.1069-1072. PMC 4477767. PMID 26120416.

Text is available under the Creative Commons Attribution-ShareAlike License

From Wikipedia: The San Francisco Declaration on Research Assessment (DORA) intends to halt the practice of correlating the journal impact factor with the merits of a specific scientist's contributions. According to the declaration, this practice creates biases and inaccuracies when appraising scientific research. It also states that the impact factor is not to be used as a substitute "measure of the quality of individual research articles, or in hiring, promotion, or funding decisions".[1]

The declaration originated from the December 2012 meeting of the American Society for Cell Biology, and was published on May 13, 2013, signed by more than 150 scientists and 75 scientific organizations.[1][2] The American Society for Cell Biology states that, as of 30 May 2013, there were more than 6,000 individual signatories to the declaration and that the number of scientific organizations "signing on has gone from 78 to 231" within two weeks.[3] As of 14 December 2017, the number of individual signatories had risen to over 12,800 and the number of scientific organizations to 872.[4] Organizational signatories in 2017 included the British Library, Nature Research, BioMed Central, Springer Open and Cancer Research UK.[5]

 

1. Van Noorden, Richard (May 16, 2013). "Scientists join journal editors to fight impact-factor abuse". Nature News Blog.

2. Alberts, Bruce (May 17, 2013). "Impact Factor Distortions". Science. 340 (6134): 787. doi:10.1126/science.1240319. PMID 23687012.

3. Fleischman, John (May 30, 2013). "Impact Factor Insurrection Catches Fire with Over 6,000 Signatures and Counting". The American Society for Cell Biology.

4. "Dora - ASCB". ASCB. Retrieved 14 December 2017.

5. "Dora - ASCB". ASCB. Retrieved 2017-05-01.

Text is available under the Creative Commons Attribution-ShareAlike License