Peer Review: Tools, Training, and Integration

Research integrity, artificial intelligence (AI), and peer review have become inextricably linked. The same technologies that can be used to generate content are now being put to use to safeguard it and to help identify large-scale ‘bad actors’ in the scholarly research system, according to Nancy Roberts, Maverick’s Head of Technology & Content.

Fortunately, publishers have scalable, affordable ways to check the integrity of papers. Some of the solutions involve technology, whereas others involve developing red flags or alerts that are specific to a publisher’s content and editorial review criteria. All require a degree of education and commitment to integrate these approaches into the peer review process, which can be challenging amid pressure to increase productivity and keep pace with the growing volume of scientific output.

Tools

Paper mills have become adept at meeting the demand for volume, but there are ways to detect them that can be built into workflows. Editorial desk checks that use advanced LLM-based systems embedded in the editorial workflow can greatly speed up time to first decision. These tools are not perfect, however, and vary in their accuracy and appropriateness depending on the databases on which they are built. Tools can also be used to perform quality checks and monitor journal and editorial board performance.

They can also help identify plagiarism, data and image manipulation, tortured or incongruent language, and authorship and institutional inconsistencies. Ultimately, however, systems to support peer review and research integrity rely on the people operating them.
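To make this concrete, below is a minimal sketch of what an automated desk-check screen might look like, assuming a simple keyword-based approach. The phrase list and the desk_check function are hypothetical illustrations, not any particular vendor’s tool; production systems rely on trained models and curated databases, which is why their accuracy varies.

```python
# Minimal sketch of an automated desk-check screen (illustrative only).
# Assumes a simple keyword-based approach; production tools rely on trained
# models and curated reference databases.

# Hypothetical list of "tortured phrases" -- awkward paraphrases that often
# signal machine-rewritten text (e.g. "counterfeit consciousness" for
# "artificial intelligence").
TORTURED_PHRASES = {
    "counterfeit consciousness": "artificial intelligence",
    "bosom peril": "breast cancer",
    "irregular woodland": "random forest",
}

def desk_check(manuscript_text: str, author_affiliations: list[str]) -> list[str]:
    """Return a list of red flags for an editor to review before peer review."""
    flags = []
    lowered = manuscript_text.lower()
    for phrase, expected in TORTURED_PHRASES.items():
        if phrase in lowered:
            flags.append(f"Tortured phrase '{phrase}' (expected '{expected}')")
    if not author_affiliations:
        flags.append("No institutional affiliations supplied")
    return flags

if __name__ == "__main__":
    sample = "We applied counterfeit consciousness to the survey data."
    for flag in desk_check(sample, author_affiliations=[]):
        print("FLAG:", flag)
```

Even a simple screen like this only raises flags for a human editor to weigh; it does not make decisions on its own.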

Training

Training and education of researchers, editors, and authors are key to helping identify research integrity problems. Creating a dashboard that is specific to a specialty or journal can save time and help focus on those issues that are substantive and require attention. Maverick Affiliate Associate Maria Machado, PhD, who has served as a researcher, peer reviewer, and auditor, points out in a recent post that the most common red flags linked to paper mill activity are a scope mismatch between the study and the target journal and inappropriate citations. Irrelevant, hidden, or “sneaked” citations are a growing concern, as they manipulate journal-level metrics and the indicators most commonly used to measure publisher impact. Applying transparent citation and metadata deposition policies could mitigate this issue.
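As an illustration, the sketch below shows how those two red flags might be surfaced in a journal-specific summary. The scope keywords, citation sources, and threshold are invented for the example and would need to be tuned to each journal’s scope and editorial criteria.

```python
# Minimal sketch of a journal-specific red-flag summary (illustrative only).
# The rules and weights here are hypothetical; a real dashboard would be
# tuned to a journal's scope and editorial criteria.
from dataclasses import dataclass, field

@dataclass
class Submission:
    title: str
    keywords: set[str]
    cited_journals: list[str]
    flags: list[str] = field(default_factory=list)

JOURNAL_SCOPE = {"cardiology", "hypertension", "heart failure"}   # assumed scope
EXPECTED_CITATION_SOURCES = {"Circulation", "Eur Heart J"}        # assumed sources

def summarize(sub: Submission) -> Submission:
    """Attach the two most common paper-mill red flags: scope mismatch
    and citation patterns that do not fit the field."""
    if not sub.keywords & JOURNAL_SCOPE:
        sub.flags.append("Scope mismatch: keywords do not overlap journal scope")
    relevant = [j for j in sub.cited_journals if j in EXPECTED_CITATION_SOURCES]
    if sub.cited_journals and len(relevant) / len(sub.cited_journals) < 0.2:
        sub.flags.append("Citations: fewer than 20% from expected field sources")
    return sub

if __name__ == "__main__":
    sub = summarize(Submission(
        title="Gene expression in tomato plants",
        keywords={"botany", "gene expression"},
        cited_journals=["Unknown J A", "Unknown J B"],
    ))
    print(sub.title, "->", sub.flags or ["no flags"])
```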

In a report recently released by Elsevier, Insights 2024: Attitudes toward AI, a survey of 3,000 researchers and clinicians indicated that both groups see AI as having the greatest potential to accelerate knowledge discovery, increase work quality, and save costs. However, to maximize the value of AI, both groups were clear that specific concerns about the quality of content, trust, and transparency need to be addressed before they integrate AI tools into their daily work.

Research integrity should be an integral part of the research life cycle, and all those involved should undergo professional development to identify issues at each stage of the process. Education can reinforce researchers’ capabilities and enhance the value of their work by ensuring they are aware of best practices. This, in turn, will result in better submissions and better-informed researchers and authors. Likewise, editor training can reinforce the need to be alert to aspects of the review process that help identify issues early and avoid problems that, in the long run, reflect badly on their journals and on scholarly publishing.

Integration

Many peer review processes do not have an integrated workflow to support research integrity. Automating these checks within a fully integrated system can be a valuable component of peer review. Most systems do not offer this type of environment, but it can be achieved to the benefit of all involved. Automated tools integrated within the manuscript submission system could, for example, check that submissions include appropriate funding, conflict of interest, and data availability statements, written in language compliant with reporting standards.
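A hedged sketch of what such a check might look like follows. The section names, wording patterns, and the check_statements helper are assumptions made for illustration rather than the interface of any real submission system.

```python
# Minimal sketch of an automated statement check at submission time
# (illustrative only; section names and wording rules are assumptions).
import re

REQUIRED_STATEMENTS = {
    "funding": r"\bfunding\b|\bsupported by\b",
    "conflict of interest": r"\bconflict(s)? of interest\b|\bcompeting interests?\b",
    "data availability": r"\bdata availability\b|\bdata are available\b",
}

def check_statements(manuscript_text: str) -> dict[str, bool]:
    """Report whether each required statement appears in the manuscript."""
    lowered = manuscript_text.lower()
    return {name: bool(re.search(pattern, lowered))
            for name, pattern in REQUIRED_STATEMENTS.items()}

if __name__ == "__main__":
    text = ("Funding: supported by Grant X. "
            "The authors declare no conflicts of interest.")
    for name, present in check_statements(text).items():
        status = "present" if present else "MISSING -- prompt the author"
        print(f"{name}: {status}")
```

Built into the submission form, a check like this could prompt authors to supply missing statements before the manuscript ever reaches an editor.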

The scholarly publishing environment is increasingly complex but there are tools and techniques, supported by automation, that can help ensure its continued health and integrity.

Learn More

Maverick can help automate peer review systems to make them more robust and tailored to a journal or specialty. We also develop bespoke dashboards that help editors and peer reviewers focus on substantive issues that could alert them to a problem. We provide training to editors on research integrity issues, and we offer a series of webinars on integrating researcher education into the research life cycle. For more information, contact your Maverick representative or info@maverick-os.com.

By Rebecca Rinehart, Maverick CEO and Head of US Operations

Rebecca Rinehart is a publishing professional with over 40 years’ experience in all aspects of scientific, technical, and medical publishing, across formats including books, journals, periodicals, and digital. She has served in senior publishing management roles at major medical societies and associations as well as corporate publishers.

Further Reading

Maverick Peer Review Support service sheet

Peer review from a researcher’s perspective

The role of AI in research integrity

A researcher’s perspective on research integrity

When good intentions drive bad behavior

Image manipulation – we can tell when you’re faking it

Download Maverick’s Research Integrity service sheet
