Introduction: Why Peer Review Needs a Rethink in the Age of AI
In scholarly publishing, peer review is a gatekeeper for quality and credibility. The current landscape of rising manuscript submissions, limited reviewer pools, and pressure for faster decisions poses a challenge to the existing publishing process.
Artificial intelligence (AI) can enhance the current peer review process, enabling quicker initial screening, manuscript triage, reviewer selection, content analysis, and quality control. Reviewers are increasingly turning to AI tools to sort submissions, identify potential reviewers, and check formatting. A 2024 Wiley survey reported that 19 percent of researchers had used AI for peer review.
Understanding how AI can complement human judgment helps ensure that peer review maintains its integrity, rigor, reliability, and relevance while keeping pace with a rapidly accelerating publishing landscape.
The Rise of AI in Peer Review
AI integration brings increased efficiency, enhanced consistency, reduced bias, and the capacity to handle high volumes. AI tools are increasingly used in peer review for the following purposes:
- Editorial workflows
- Reviewer matching
- Formatting checks
- Grammar refinement
AI tools are equipped to manage heavy volumes while maintaining quality, consistency, and data fidelity. Reviewers can focus on nuanced judgment of critical areas like the novelty, significance, and ethics of submissions. But overreliance on AI risks reducing human insight.
AI can aid human reviewers with plagiarism checks, language refinement, and reviewer recommendations but not replace them. Publishers can use AI to maintain protocols and a balanced workflow, but human oversight is required for content analysis.
When integrating AI into the peer review process, it is crucial to preserve the integrity of academic publishing through human supervision. At Amnet, we evaluate different AI models and have integrated AI into our end-to-end publishing platform to assist in certain areas. As AI continues to evolve, the development and testing of AI integration remain ongoing processes in our publishing life cycle.
Ethical Challenges: Transparency, Bias, and Accountability
AI isn’t neutral; algorithms trained on historical data risk perpetuating or amplifying biases, disadvantaging certain authors, institutions, or research topics. AI-assisted peer review can also lack transparency and accountability, raising red flags.
Data security is another key area requiring human oversight. Sensitive research data, intellectual property, and privacy rights pose a security risk when left in the hands of AI, and AI systems can be compromised, leading to confidentiality breaches. Publishers need to enforce strict data protection protocols, interact with AI over encrypted channels, keep security practices up to date, and ensure compliance with data privacy regulations.
The use of large language models (LLMs) has sparked debate over privacy, security, fairness, and accountability. The “black box” nature of these tools makes it challenging to understand their decision-making processes.
To resolve the ethical dilemmas surrounding AI integration, the peer review community should engage in open dialogue about AI use. Key action steps that publishers can adopt for the ethical safety of AI integration are as follows:
- Governance frameworks
- Diversity in training data sets
- Algorithm audits
- Detection practices for bias and fairness
- Retention of human oversight in critical decisions
Disclosure Norms for AI Use
Transparent communication about AI usage in peer review is crucial for maintaining community trust. Nature Portfolio and ICMJE set the ball rolling with official policies on AI use in 2024. Some policies require authors to declare the AI tools they used at submission and have prompted updates to reviewer and author guidelines. Training editors to evaluate AI-assisted decisions and maintaining audit logs for accountability and traceability are also recommended. One example is the growing adoption of AI for routine tasks, such as matching manuscripts to a journal’s scope, which desk editors currently handle.
Current best practices include the following:
- Communicating how AI is trained and applied
- Documenting measures to prevent algorithmic bias
- Establishing robust data protection protocols
Building AI Literacy for Editors and Reviewers
AI in peer review isn’t just about automation; it demands informed human oversight. A 2025 Nature survey found that 40 percent of researchers considered AI-generated peer reviews as helpful as or more helpful than human ones, while 42 percent still found them less helpful than many human reviews. This split reveals a need to train editors and reviewers not only to use AI but also to interpret its limitations.
AI literacy must involve training reviewers to identify fabricated references and using checklists to ensure that AI outputs don’t override expert judgment and research quality. Additionally, early-career researchers can be paired with experienced counterparts for mentorship. Peer review is evolving, but human oversight remains the foundation for the following tasks:
- Critically evaluate AI outputs
- Detect potential biases
- Verify recommendations
Preserving Trust and Fairness in Scholarly Publishing
One of AI’s most promising roles is enhancing fairness in peer review, but only if it’s designed with care. A major challenge for publishers is diversifying the reviewer pool and matching papers effectively with the right reviewers. AI tools are also becoming more efficient at post-publication peer review.
A study found that peer reviewers were more than twice as likely to reject a manuscript when the author was relatively unknown (65 percent) versus when the author was a Nobel laureate (23 percent). This highlights an ingrained bias that AI could reduce with equitable training.
For example, AI can do the following:
- Identify junior researchers, women, and scholars from underrepresented backgrounds for peer review.
- Reduce identity-based bias through blind review systems.
- Highlight reviewer contributions across demographics through recognition systems.
Here at Amnet, our publishing platform features an AI-powered reviewer database with a diverse reviewer pool spanning geographies and identities. The final decision rests with a human reviewer, while the system maintains inclusivity. This integration of AI with human governance ensures accountability, transparency, and equity.
Shaping the Future of Peer Review in the AI Era
The future of peer review rests in the balance between AI integration and human expertise. AI tools offer a path forward by assisting with triage, compliance checks, and preliminary assessments as submission volumes and reviewer fatigue rise. However, human judgment remains essential. AI requires continuous training, improvement, and algorithm refinement based on community-led feedback from authors, reviewers, and editors to address biases and improve accuracy. Education, transparency, and collaboration are key factors that will drive AI integration with peer review.
This is more than just a trend. This is a signal for a rethink.
Source
- https://www.sciencedirect.com/science/article/pii/S3050577125000167
- https://www.nature.com/articles/d41586-025-00894-7
- https://asm.org/articles/2024/november/ai-peer-review-recipe-disaster-success
- https://www.ajmc.com/view/how-medical-journals-are-grappling-with-ai-in-peer-review
- https://www.icmje.org/recommendations/browse/roles-and-responsibilities/defining-the-role-of-authors-and-contributors.html#four
- https://jkms.org/DOIx.php?id=10.3346/jkms.2025.40.e92
- https://academic.oup.com/healthaffairsscholar/article/2/5/qxae058/7663651