Fake peer reviews, fake identities, fake accounts, fake data: beware!
How did key aspects of science come to be manipulated, or faked?
Hopefully, for most scientists, science represents a path of exploration where the unknown and unexplored constitute the core challenge ahead. Sadly, in many respects, science has evolved from an exploratory model into a business model: in many instances it has become the verification tool for technologies, products and innovations that then generate corporate profits. Firstly, science has evolved from a relatively financially conflict-free zone of independent intellectual achievement, in which intellectual centers provided the grants needed to complete basic research, into an important evaluation tool for many of the products of our modern, globalized societies (1). Secondly, scientists have implicitly evolved, or have had to evolve, from mere information seekers with a nerdy image in society into important marketing agents of what they produce and discover. Pure science is becoming extinct because science is treated as a marketing tool and driven by society’s demands. Information and intellect are copyrighted or patented; big data, peer review and open access are increasingly commercialized; and knowledge is no longer free to create or divulge. When science is driven not by genuine academic and intellectual incentives but by false ones, an ambience of false discovery, or cheating, is promoted. This also imports business- and marketing-like values, such as political correctness, and with them a considerable state of apathy, into academia. In the modern publishing era, metrics, which have zero academic value but tremendous marketing value for publishers, have come to dominate academia, and the altmetrics trend is not abating (2,3).
There is a sector of scientists and academics who appreciate that these false incentives are unhealthy, but who are forced by the institutions in which they conduct their research to be productive. In many countries around the world, such as China, academic output is measured not only by the number of papers that are published, but more importantly by the impact factor (IF) score of the journal in which they appear (4). In these cases, scientists are financially rewarded for their productivity, with elaborate equations used to calculate the monetary payment received annually when a particular IF score is achieved. Salaries, grants and promotions are thus at stake, and competition inevitably increases. In some cases, this negative climate of competition downplays innovation and instead promotes cheating. Cheating, i.e., fraud, can take many forms, including data falsification, the creation of false identities for authors or peer reviewers, fake submission accounts (5,6), manipulated authorship, including ghost and guest authorship (7), and, more recently, pseudonymous identities (8).
I was invited by the AME Medical Journal (AMJ) to offer a commentary on the paper by Qi et al. (6) and to share my experience in a bid to offer useful practical advice. Briefly, that paper reveals that fake peer reviews have been detected in journals with and without an IF, and in journals across most of the main for-profit publishers. More importantly for AMJ’s editors and authors, and setting political correctness aside, the vast majority of fake peer reviews were from researchers in China (74.8%), followed by researchers from South Korea, Iran and Pakistan. That study also revealed that, within China, the greatest number of fake peer reviews came from Taiwan, but the values were strongly skewed because the Taiwanese case involved a citation and authorship ring, and thus affected a string of authors across multiple journals and publishers. What these studies (5,6) show is that, in countries heavily involved in the monetization of the IF, a climate of extreme stress and pressure has been inculcated, causing a segment of the academic community to cheat.
Advice to AMJ editors and authors on how to detect and curtail academic fraud
It is all too easy to label a specific country or region as fraudulent if one simply looks at the data, for example in (6). But this would most likely not reflect the desire of the vast majority to complete honest research and to publish papers that reflect honest data. It is evident that few researchers would want to endorse cheating and fraud, apart from those involved in citation rings, but it is unclear whether fraud is institutionalized, or perhaps even nationalized, in some cases in a bid to climb global rankings. Increasing evidence shows that a handful of countries may be engaged in country-wide fraud, suggesting that academic fraud may in fact be cultural. This has serious implications for global academia: if a fraudulent scientist from country A, with a preponderance of cheating, seeks a position as a researcher, for example as a post-doc, in country B, where cheating is not tolerated, then the social, intellectual and research values of country B may be placed at serious risk of corruption. Consequently, fraud in any one country will evidently affect researchers across the globe, and so solutions to curb, and ultimately eliminate, fraud need to be sought. In a world where research output is easily transmitted between journals through the cited literature, fraud may be easily propagated through inexperience or ignorance.
Literature that is so fraught with errors that it is no longer reliable or trustworthy is retracted (9), and yet there is substantial evidence that retracted literature continues to be cited (10,11). So, my first piece of advice to authors is to check the original source of the literature to verify whether a paper has been retracted. If it has, that study should not be cited. Similarly, editors and journal-appointed peer reviewers should verify that the information contained in the reference list is valid and not retracted. A certain amount of citation-related abuse can thus be considerably curbed by careful examination of the original source of the literature and by citation verification; a simple way to automate part of this check is sketched below.
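As a purely illustrative aid to that advice, the following minimal sketch checks a DOI for registered retraction notices. It assumes the public Crossref REST API (api.crossref.org) and its `updates` filter, which lists editorial updates (corrections, retractions) issued against a given DOI; the function names and the placeholder DOI are my own, and a negative result does not prove that a paper has not been retracted, since not all retractions are registered this way.

```python
# Illustrative sketch: flag possibly retracted DOIs before citing them.
# Assumes the Crossref REST API's "updates" filter returns editorial updates
# (corrections, retractions) registered against a given DOI.
import requests

def find_updates(doi: str) -> list:
    """Return Crossref records registered as updates to `doi`."""
    resp = requests.get(
        "https://api.crossref.org/works",
        params={"filter": f"updates:{doi}"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["message"]["items"]

def is_possibly_retracted(doi: str) -> bool:
    """True if any update record declares itself a retraction of `doi`."""
    for item in find_updates(doi):
        for update in item.get("update-to", []):
            if (update.get("DOI", "").lower() == doi.lower()
                    and "retract" in update.get("type", "").lower()):
                return True
    return False

if __name__ == "__main__":
    # Placeholder DOI for illustration; in practice, loop over every DOI
    # in a manuscript's reference list before submission or acceptance.
    print(is_possibly_retracted("10.1000/example-doi"))
```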
Increasing fraud and abuse of the publishing platform by authors has also resulted in a sudden increase in checks and verification steps, i.e., the system is becoming increasingly militarized to ensure veracity (12). In essence, this is not a bad thing, but the system should have been tightened decades ago, before the first metric, the IF, was introduced into academic publishing, and before the game of cheating began to take hold over the past few years. To some extent, publishers are responsible for creating a system that was easily abused, while fraudulent authors have created a claustrophobic environment in which academia now has to perform and excel with reduced freedoms, rights and expression (13). Suppression of authors’ rights and the aggressive imposition of publisher-created values may in fact exacerbate fraud, as fraudulent academics seek new ways to cheat the system. So, instead of assuming a vertical position of power in the publishing industry to dominate intellect and certify its marketing value, “ethical” entities such as COPE (the Committee on Publication Ethics) need to involve authors as equals rather than arrogantly excluding them (14); peer review has to be less exploitative and more balanced, fair and rewarding (15); and, because fraud in science is a risk to all parties, there must be a wider campaign of education about these issues, and frank and open discussion of them, rather than treating them as taboos.
So how does one check the validity of authorship? The precise intellectual or academic contribution in multi-author papers is extremely difficult, possibly impossible, to quantify or verify. As a default requirement, definitions of authors’ contributions and declarations of conflicts of interest (COIs) can only go so far toward verifying authentic authorship. This makes the authorship criteria widely used by the ICMJE (International Committee of Medical Journal Editors) useful but toothless, and thus open to interpretation and abuse (16). So, authorship verification may be the greatest challenge that the AMJ faces, especially for Chinese researchers who benefit financially from authorship position, for example as corresponding author, or from academic degrees, where first authorship plays a key role in receiving an MSc or PhD degree, or where authorship itself may be enough to receive grants, better positions, or respect from peers and the local community. In contrast, fake peer reviewers could be largely curtailed if sufficient checks and balances are in place even before peer review begins: (I) check that the name and corresponding email of a peer reviewer are valid; if a famous scientist who has a universityname.edu email account is suddenly presented with a Gmail or Yahoo account, be suspicious and immediately reject that choice (a simple screening sketch for this check is given after this list); (II) make sure that peers are truly peers and that they know their rights and their responsibilities (17); (III) ensure that the peer reviewers do not have any COIs with the authors or with the editors; (IV) conduct double-blind peer review, but consider making the peer review reports open access (i.e., open peer review) to maximize the accountability of authors and peer reviewers at the post-publication peer review (PPPR) stage (18); (V) do not allow stings or hoaxes to be published, or authors with fake, pseudonymous or anonymous identities (19); (VI) be prepared to accommodate PPPR in the publishing model, but be wary of possible COIs and hidden agendas of the science watchdogs, including an anti-science stance (20).
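As a purely illustrative sketch of check (I), the short screening helper below flags suggested reviewers whose contact email uses a free webmail service or does not match their claimed institutional domain. The domain list, function name and example addresses are my own assumptions for illustration; a flag only means “verify manually before inviting”, not proof of fraud.

```python
# Illustrative screening heuristic for check (I): flag author-suggested reviewers
# whose contact email does not match their claimed institutional affiliation.
# The webmail domain list below is an assumption, not an exhaustive standard.

FREE_MAIL_DOMAINS = {"gmail.com", "yahoo.com", "hotmail.com", "163.com", "qq.com"}

def reviewer_email_is_suspicious(email: str, claimed_affiliation_domain: str) -> bool:
    """Return True if the address deserves manual verification before inviting."""
    domain = email.rsplit("@", 1)[-1].lower()
    if domain in FREE_MAIL_DOMAINS:
        return True  # free webmail offered for a named academic: verify identity
    # Mismatch between the email domain and the claimed institutional domain
    return not domain.endswith(claimed_affiliation_domain.lower())

# Example: a professor claimed to be at universityname.edu but suggested with Gmail.
print(reviewer_email_is_suspicious("prof.smith@gmail.com", "universityname.edu"))            # True
print(reviewer_email_is_suspicious("prof.smith@universityname.edu", "universityname.edu"))   # False
```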
Acknowledgements
Funding: None.
Footnote
Provenance and Peer Review: This article was commissioned by the editorial office, AME Medical Journal. The article did not undergo external peer review.
Conflicts of Interest: The author has completed the ICMJE uniform disclosure form (available at http://dx.doi.org/10.21037/amj.2017.02.10). The author has no conflicts of interest to declare.
Ethical Statement: The author is accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved.
Open Access Statement: This is an Open Access article distributed in accordance with the Creative Commons Attribution-NonCommercial-NoDerivs 4.0 International License (CC BY-NC-ND 4.0), which permits the non-commercial replication and distribution of the article with the strict proviso that no changes or edits are made and the original work is properly cited (including links to both the formal publication through the relevant DOI and the license). See: https://creativecommons.org/licenses/by-nc-nd/4.0/.
References
- Teixeira da Silva JA. Who owns science, owns society. Maejo Int J Sci Technol 2011;5:S1-S10.
- Teixeira da Silva JA, Bernès S. Clarivate Analytics: continued omnia vanitas impact factor culture. Sci Eng Ethics 2017; In Press. [Crossref] [PubMed]
- Teixeira da Silva JA, Memon AR. CiteScore: a cite for sore eyes, or a valuable, transparent metric? Scientometrics 2017; In Press. [Crossref]
- Teixeira da Silva JA, Ruan C, Yu X, et al. International collaboration, scientific ethics and science writing: Focus on China. Asian Australasian J Plant Sci Biotechnol 2013;7:38-45.
- Teixeira da Silva JA. On the abuse of online submission systems, fake peer reviews and editor-created accounts. Persona Bioética 2016;20:151-8. [Crossref]
- Qi X, Deng H, Guo X. Characteristics of retractions related to faked peer reviews: an overview. Postgrad Med J 2016; [Epub ahead of print]. [PubMed]
- Teixeira da Silva JA, Dobránszki J. Multiple authorship in scientific manuscripts: ethical challenges, ghost and guest/gift authorship, and the cultural/disciplinary perspective. Sci Eng Ethics 2016;22:1457-72. [Crossref] [PubMed]
- Teixeira da Silva JA. Are pseudonyms ethical in (science) publishing? Neuroskeptic as a case study. Sci Eng Ethics 2017; In Press. [Crossref] [PubMed]
- Teixeira da Silva JA. Retractions represent failure. J Educ Social Res 2016;6:11-2.
- Teixeira da Silva JA, Bornemann-Cimenti H. Why do some retracted papers continue to be cited? Scientometrics 2017;110:365-70. [Crossref]
- Teixeira da Silva JA, Dobránszki J. Highly cited retracted papers. Scientometrics 2017;110:1653-61. [Crossref]
- Teixeira da Silva JA. The militarization of science, and subsequent criminalization of scientists. J Interdisciplinary Med 2016;1:214-5. [Crossref]
- Al-Khatib A, Teixeira da Silva JA. What rights do authors have? Sci Eng Ethics 2017; In Press. [Crossref] [PubMed]
- Teixeira da Silva JA. COPE requires greater consistency and accountability. Med J Soc Sci 2017;8:11-3.
- Teixeira da Silva JA, Katavić V. Free editors and peers: squeezing the lemon dry. Ethics Bioethics 2016;6:203-9. [Crossref]
- Teixeira da Silva JA, Dobránszki J. How authorship is defined by multiple publishing organizations and STM publishers. Accountability in Research 2016;23:97-122. [Crossref] [PubMed]
- Teixeira da Silva JA. Responsibilities and rights of authors, peer reviewers, editors and publishers: a status quo inquiry and assessment. Asian Australasian J Plant Sci Biotechnol 2013;7:6-15.
- Teixeira da Silva JA. Debunking post-publication peer review. Int J Educ Inf Technol 2015;1:34-7.
- Al-Khatib A, Teixeira da Silva JA. Stings, hoaxes and irony breach the trust inherent in scientific publishing. Publishing Res Quart 2016;32:208-19. [Crossref]
- Teixeira da Silva JA. Science watchdogs. Academic J Interdisciplinary Studies 2016;5:13-5.
Cite this article as: Teixeira da Silva JA. Fake peer reviews, fake identities, fake accounts, fake data: beware! AME Med J 2017;2:28.