AI Disclosure in Peer Review: What Reviewers and Editors Should Report

A practical guide for reviewers and editors who use AI tools during peer review and need clear, honest disclosure.

Peer review needs AI transparency too

Many researchers now use AI tools to draft text, summarize papers, check language, or suggest questions.

That trend does not stop at manuscript writing. It reaches peer review.

Reviewers and editors face a harder question than authors do. They do not just need to ask, "Did I use AI?" They also need to ask, "Did that use expose confidential material, shape my judgment, or break a journal rule?"

That is why AI disclosure in peer review deserves its own guidance.

If you want a quick overview of AI Usage Cards first, read What Are AI Usage Cards?. If you already know the basics, this guide shows how to apply them to peer review work.

Peer review creates a special risk

A submitted manuscript is not public.

Peer review often includes unpublished data, methods, figures, and ideas. Review reports may also contain sensitive judgments about novelty, rigor, and publication merit.

When a reviewer pastes that material into a third-party AI tool, two problems can follow.

First, the reviewer may disclose confidential content to an outside system.

Second, the reviewer may let the tool shape the review in ways that the journal, editor, or authors cannot see.

This is why many publishers and journals restrict or ban certain AI uses in peer review. Rules differ, so reviewers should always check the venue's policy. Our article on AI Disclosure Policies by Major Journals offers a helpful starting point.

The core rule is simple

Do not treat AI in peer review like AI in private note-taking.

Peer review carries duties of confidentiality, fairness, and accountability. If you use AI, you still own the review. You still need to protect the manuscript. You still need to explain your process if the journal asks.

That means disclosure should cover three facts:

  1. What tool you used
  2. What task the tool performed
  3. Whether any confidential manuscript content entered the tool

Those facts help editors judge whether the use stayed within policy.
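If you draft your disclosure in LaTeX, a minimal sketch of those three facts could look like the sample card later in this guide. The section title and bracketed fields below are placeholders to adapt, not required wording:

% Sketch only: adapt the field names to your journal's form.
\section*{AI Use in Peer Review}
\begin{description}
  \item[Tool:] [tool name, version if known]
  \item[Task:] [what the tool did]
  \item[Confidential content shared:] [yes or no; if yes, what entered the tool]
\end{description}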

When AI use in peer review may be acceptable

Some uses create less risk than others.

For example, a reviewer may use a local writing assistant that runs on their own device to improve grammar in a draft report. If the journal allows it, that use may pose limited confidentiality risk.

A reviewer may also use AI to generate a checklist of general review criteria without sharing manuscript text. Again, this may be allowed under some policies.

The pattern matters. Lower-risk use avoids uploading confidential content and does not delegate scientific judgment.

You should still disclose it.

A short statement can do the job:

I used an AI writing assistant to improve the grammar and clarity of my peer review report.
I did not upload manuscript text, figures, tables, or supplementary files to the tool.
All evaluative judgments and recommendations are my own.

When AI use in peer review crosses a line

Some uses should raise concern at once.

If a reviewer uploads the full manuscript to a public chatbot and asks for a review, that reviewer may breach confidentiality. If a reviewer asks an AI system to judge novelty, detect fatal flaws, or recommend acceptance, that reviewer may shift core expert judgment to the tool.

That is not a minor drafting aid. That changes the review process itself.

Editors should treat those cases with care. They need to know what happened, what content the reviewer shared, and whether the journal's policy allows any part of that use.

A reviewer should never assume that "everyone does it" will excuse undisclosed AI assistance.

Reviewers should document process, not just outcome

A weak disclosure says, "I used AI."

That tells the editor almost nothing.

A useful disclosure says what the reviewer did, what they shared, and what they refused to delegate. It records the boundary between assistance and judgment.

That is where an AI Usage Card helps. It gives reviewers and editors a clean format for documenting tool name, model, purpose, inputs, outputs, human oversight, and limits.

If you are unsure whether your use needs disclosure, read Do I Need to Disclose AI Usage in My Paper?. The same logic applies here, but peer review adds confidentiality duties.

A practical disclosure template for reviewers

A reviewer disclosure should stay short, factual, and specific.

You can adapt this language:

AI use during peer review:
I used [tool name, version if known] for [task].
I did not provide confidential manuscript text, figures, tables, reviewer identities, or supplementary material to the tool.
The tool assisted only with [language editing, organization, general checklist generation].
I independently assessed the manuscript and wrote the substantive review.

If confidential content did enter the tool, the reviewer should say so plainly:

AI use during peer review:
I uploaded limited excerpts from the manuscript to [tool name] to [task].
This use involved confidential submission material.
The tool output informed my drafting process, but I made all evaluative judgments myself.
I am disclosing this so the editor can assess compliance with journal policy.

That second example may reveal a policy breach. Still, honest disclosure gives editors a chance to act.

Editors need a policy they can enforce

Many journals mention AI use by authors but say little about reviewers and editors.

That gap creates confusion.

Editors should define three things in plain language:

What reviewers may do

State whether reviewers may use AI for language editing, note organization, or general checklists.

If the journal allows these uses, say so directly.

What reviewers may not do

State whether reviewers may upload manuscripts, tables, figures, supplements, or decision letters into external tools.

If the answer is no, write no.

What reviewers must disclose

Require a short statement in the confidential comments to the editor or in a structured reviewer form.

A policy that no one can apply will fail. A policy with clear yes and no rules gives reviewers a fair standard.

Editors may use AI too

Editors also use AI tools.

They may draft decision letters, summarize reviewer comments, classify submissions, or spot reporting gaps. Those uses also deserve scrutiny.

An editor holds power over review flow and publication decisions. That means editor AI use can affect fairness, consistency, and trust.

Editors should document the same facts that reviewers should document:

  • tool used
  • purpose
  • whether manuscript content entered the system
  • degree of human oversight
  • role of AI output in the final decision
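As a sketch, an editor could record those facts in the same LaTeX format as the reviewer sample later in this guide. The section title and bracketed fields below are illustrative assumptions, not mandated wording:

% Illustrative sketch; field names and values are placeholders, not journal policy.
\section*{AI Use in Editorial Handling}
\begin{description}
  \item[Tool:] [tool name, version if known]
  \item[Purpose:] [e.g., polishing the wording of a drafted decision letter]
  \item[Inputs:] [whether any manuscript or reviewer content entered the system]
  \item[Human oversight:] [how the editor checked and revised the output]
  \item[Role in decision:] [what part, if any, the output played in the final decision]
\end{description}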

If the editor uses AI only to polish wording in a decision letter, that differs from using AI to rank reviewer credibility or predict acceptance.

The bigger the role in judgment, the stronger the need for disclosure and policy review.

Confidentiality comes before convenience

This is the point many people miss.

The easiest AI workflow is often the riskiest one. Copying a manuscript into a chatbot saves time, but it may conflict with reviewer duties and journal policy.

Convenience does not cancel confidentiality.

If you cannot confirm how a tool handles your data, how long it retains it, whether it trains on it, or who can access it, do not upload manuscript content. Keep your review process inside the boundaries that the journal and your role require.

This principle also helps graduate students and early-career researchers who review with a supervisor. If a trainee helps with a review, the same confidentiality rules apply to them.

An AI Usage Card makes disclosure faster

Researchers often avoid disclosure because they think it will take too long.

An AI Usage Card solves that problem. It gives you one place to record the tool, task, inputs, outputs, and safeguards. You can then adapt that record for a journal form, editorial office request, or internal documentation.

This approach helps when policies change across venues. It also helps if an editor later asks, "What exactly did you do with this tool?"

For examples of how structured disclosure works, see AI Usage Cards Examples and Templates.

A sample peer review AI Usage Card section

Here is a short section that a reviewer or editor could adapt:

\section*{AI Use in Peer Review}
\begin{description}
  \item[Tool:] Claude 3.5 Sonnet
  \item[Purpose:] Language editing and organization of my draft review report
  \item[Inputs:] My own draft sentences only. I did not upload manuscript text, figures, tables, supplementary files, or reviewer correspondence.
  \item[Outputs:] Rephrased sentences and a suggested report structure
  \item[Human oversight:] I reviewed all suggestions, rejected some, and made all substantive evaluations myself.
  \item[Limits:] The tool did not assess novelty, validity, significance, or the publication recommendation.
\end{description}

This format works well because it separates tool support from expert judgment.

Peer review trust depends on visible boundaries

Peer review already asks a lot from researchers. It asks for expertise, fairness, discretion, and time.

AI does not remove those duties.

If you use AI in peer review, set a clear boundary. Do not expose confidential content unless policy allows it. Do not hand scientific judgment to the tool. Document what you did in plain language.

That protects authors. It protects editors. It protects you.

If you want a simple way to document AI assistance, create your card now with the AI Usage Card generator.
