A new AI coach for scientists has been shown to significantly improve the quality of peer reviews, making them clearer and more helpful for authors. Peer review is essential to ensuring the integrity of scientific publications, but many researchers are dissatisfied with the feedback they receive, complaining that reviews are often vague, short, and unhelpful. In one survey of 11,800 researchers, only 55.4% of respondents reported being satisfied with the quality of the feedback. The problem is exacerbated by the sheer volume of papers, which has left reviewers feeling overwhelmed.
But help for stressed-out reviewers may be at hand. A team of researchers has developed the Review Feedback Agent, a system that uses five large language models to scan reviews and provide private feedback to reviewers before the authors see them. Rather than training a new model, the team built the agent by carefully prompting existing large language models, as they explain in a paper published in Nature Machine Intelligence.
The researchers tested their system during the review cycle for ICLR 2025, a leading conference in deep learning and machine learning. They randomly assigned around 20,000 reviews to receive AI feedback shortly after the reviews were submitted. These automated “reviews of the reviews” were then sent back to the human reviewers as private feedback. Another 20,000 reviews were placed in a control group that received no feedback at all.