Recently our philosophy faculty at Jagiellonian University in Kraków, like many institutions around the world, introduced a ranking of journals based on Elsevier's Scopus database to evaluate the research output of its employees for awards and promotions. This database is also used by our institution in the hiring process.
The database provides three main measures: CiteScore, SJR, and SNIP. CiteScore counts the citations received in four-year periods (e.g. 2020-2023) by texts published in this span and divides this figure by the number of papers published in the same interval. SJR and SNIP -- which our institution uses to rank journals -- are more complicated, with their full algorithms not publicly available.
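The CiteScore arithmetic described above can be sketched in a few lines. This is only an illustration of the published formula; the function name and the paper count below are hypothetical, not Scopus's actual implementation or data.

```python
def cite_score(citations_in_window: int, papers_in_window: int) -> float:
    """CiteScore: citations received in a four-year window by documents
    published in that same window, divided by the number of documents
    published in that window."""
    return citations_in_window / papers_in_window

# Hypothetical figures: 541 citations to 180 papers published 2020-2023
score = cite_score(541, 180)
print(round(score, 2))  # 3.01
```

Because the denominator counts only the journal's own output, a small journal can reach a high CiteScore with a modest number of citations, wherever those citations come from.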
We checked the Scopus philosophy list and discovered that three journals published by Addleton Academic Publishers -- which we had never heard of -- are in the top 10 of the 2023 CiteScore ranking: Linguistic and Philosophical Investigations (3rd on the list of 806 philosophy journals indexed by Scopus in 2023), Review of Contemporary Philosophy (5/806), and Analysis and Metaphysics (6/806). All three are also in the top 100 of the 2023 SJR ranking.
Addleton publishes two other journals indexed by Scopus: Knowledge Cultures (67/806 in the CiteScore Philosophy ranking and 8/1,106 in Literature and Literary Theory) and Contemporary Readings in Law and Social Justice, which is indexed as a social science journal (33/1,025 in Law). The publisher initially had nine journals in Scopus, but four of them have now been removed because of "Publication Concerns," according to the most recent Scopus update on May 5. Early in May the company announced on its website that Linguistic and Philosophical Investigations, Review of Contemporary Philosophy, Analysis and Metaphysics, and Contemporary Readings in Law and Social Justice will have a new publisher: Auricle Global Society of Education and Research, based in India.
How was it possible to get into the Scopus top 10 in philosophy? The trick is simple: The Addleton journals extensively cross-cite each other. For example, of 541 citations to Linguistic and Philosophical Investigations used to calculate the 2023 CiteScore, 208 come from journals published by Addleton. Additional citations come mostly from Frontiers and MDPI journals.
These journals are filled with automatically generated papers, all using the same template and extensively using buzzwords such as "blockchain," "metaverse," "deep learning," "immersive visualization," "neuro-engineering technologies," and "internet of things." Most papers claim to examine the recently published literature on these topics by "a quantitative literature review of the main databases." They also claim to have initially analyzed (always!) between 170 and 180 articles that satisfied undisclosed "eligibility criteria."
The papers claim that after quality checks with tools called AMSTAR, AXIS, MMAT, ROBIS, etc., the authors decided to focus (always!) on between 30 and 35 articles. Then there are the phrases. "ROBIS assessed the risk of bias in systematic reviews," "AXIS evaluated the quality of cross-sectional studies," and "The quality of academic articles was determined and risk of bias was measured by MMAT" appear in Google Scholar 270 times as of early May, solely in journals published by Addleton.
Although our quick search showed that some authors have real affiliations, mostly in Romania, Slovakia, and the Czech Republic, a substantial share of authors and their affiliations seem to be fake -- for example, authors at "The Cognitive Labor Institute in New York," "The Sustainable Industrial Networks Research Unit at CLI in Springfield, IL, USA," and "The Center for Sensing and Computing Technologies, Bradford at ISBDA" in England, all with "aa-er.org" email addresses.
That domain -- also used by some of the editors in chief -- belongs to the American Association for Economic Research, which on its website shares an address -- 30-18 50th Street, Woodside, NY 11377 -- with Addleton and seems to be a random house in the New York City borough of Queens.
Authors also use fake grant numbers allegedly from fake institutions -- "Grant GE-1420897 from the Internet of Things Sensing Infrastructures Research Unit, Newport, Wales;" "Grant GE-1764317 from the Cyber-Physical Process Monitoring Systems Laboratory, Norwich, England;" "Grant GE-1823847 from the Internet of Things Sensing Networks Research Unit, Plymouth, England." The same editorial board serves all three journals, and 10 of its members are dead.
When we informally alerted some colleagues involved in introducing these rankings at our institution, we met with indifference. The presence of these fake journals on the relevant lists is apparently perceived to have negligible consequences. However, the lists as used in the employee evaluation process -- for example, to nominate researchers for yearly awards -- have firm percentile cutoffs. And the fact that three fake journals are among the leaders in the Scopus rankings has the practical consequence that three honest journals which should have received the top score from the perspective of our local evaluation have been pushed to the lower tier.
We have contacted three "authors" with "aa-er.org" email addresses, asking them to send their papers, but they have not replied. We also contacted various members of the journals' editorial boards. One -- Liz Jackson of Hong Kong -- said she was invited to be on the board but has since not been asked to do anything. She said she would ask to have her name removed. One of us (LW) also contacted some apparently real authors from the University of Craiova to discuss the content of their papers. None has replied.
Rankings based on Scopus frequently serve universities and funding bodies as indicators of the quality of research, including in philosophy. They play a crucial role in decisions regarding academic awards, hiring, and promotion, and thus may influence the publication strategies of researchers. A recent philosophy article that provides a meta-ranking of philosophy journals claimed "CiteScore is the one [measure] that correlates most strongly with all four meta-rankings. Tentatively this might be a reason to prefer CiteScore as a source of information on a journal if it has no other rankings." Our findings show that research institutions should refrain from the automatic use of such rankings.
Tomasz Żuradzki and Leszek Wroński are professors at the Institute of Philosophy & Interdisciplinary Centre for Ethics at Jagiellonian University in Kraków.