Report: Researchers Hide AI Prompts in Papers to Manipulate Reviews

Above: One of the buildings at KAIST (Korea Advanced Institute of Science and Technology) in Seoul, South Korea on June 5, 2018. Image copyright: Jonas Gratzer/LightRocket/Getty Images

The Spin

Narrative A

This practice exposes a fundamental flaw in academic publishing where reviewers are already breaking the rules by using AI despite explicit bans. The hidden prompts simply serve as a defensive measure against reviewers who shirk their professional duties by automating what should be careful human evaluation. Authors are justified in pushing back against a deeply problematic system.

Narrative B

There's no excuse for hiding AI prompts in research papers. This is a blatant example of prompt injection and an attempt to game the system, undermining the credibility of peer review. While AI misuse in reviews is a problem, these authors' actions cross a clear ethical line. They're rigging reviews and further compromising academic integrity.

Narrative C

This is a symptom of a much larger issue. With research output skyrocketing, the peer review system is stretched thin. AI can help manage this flood of information, but it shouldn't replace human oversight. From AI-generated images causing retractions to the rise of "paper mills," quality control in publishing is under threat. New guidelines are urgently needed to ensure AI enhances, not replaces, proper review and academic rigor.
