Do I Need to Disclose AI Usage in My Paper?
In most cases, yes. This guide explains when AI disclosure is expected, where to put it, and how to write it without slowing down your submission.
Yes. In most cases, disclose it.
If you used AI in a way that shaped your manuscript, analysis, code, figures, or literature handling, you should disclose it.
That answer covers most researchers, and it matches where publisher policy has gone. Major publishers now expect some form of transparency about generative AI use. ACM requires disclosure when authors use generative AI to generate content such as text, images, tables, or code. Elsevier requires authors to disclose generative AI use in manuscript writing, and it asks authors to describe AI used as part of research methods in the Methods section. Nature Portfolio allows some writing assistance but asks authors to declare that use in the manuscript. Wiley, Taylor & Francis, and PLOS also require disclosure of AI use in manuscript preparation or article content. (acm.org)
So if you are asking, "Do I really need to mention ChatGPT, Copilot, Claude, Gemini, Grammarly AI, or an AI literature tool?", the safe answer is yes.
If you need venue-specific rules, check our AI disclosure policies by journal. If you want the short version of the whole idea, start with What Are AI Usage Cards?.
Why journals now expect disclosure
Readers need to know how you produced the paper in front of them.
That sounds basic, but it matters more with AI than with older software. A reference manager stores citations. A spreadsheet runs formulas you set up. A generative model can draft prose, rewrite claims, summarize papers badly, suggest code that looks right and fails quietly, or produce an image with no clear provenance.
Editors worry about two things.
First, they worry about accuracy. Nature Portfolio tells authors not to take AI-generated text at face value and says human authors remain responsible for the final manuscript. Elsevier says the same in different words. Authors, not tools, carry responsibility for correctness. (nature.com)
Second, they worry about accountability. AI cannot be an author because it cannot take responsibility for the work. That rule now appears across major publishers and societies, including ACM, Nature Portfolio, Wiley, Taylor & Francis, and Elsevier. (acm.org)
Disclosure solves both problems. It tells the reader what happened, where AI entered the workflow, and where the human authors took over.
When you should disclose
Use this test.
If AI did more than act like a basic spellchecker, disclose it.
That includes cases where you used AI to:
- draft or rewrite text that appears in the paper
- summarize sources or help build a literature review
- generate code, scripts, or analysis steps
- suggest statistical approaches or data handling steps
- create, modify, or refine figures, tables, or images
- translate or paraphrase passages for the manuscript
- brainstorm research questions, hypotheses, or framing that materially shaped the paper
This is not a claim that every publisher treats each use case the same way. They do not. Elsevier draws a formal distinction between AI used in manuscript writing and AI used as part of the research method. Taylor & Francis says some journals may allow only language improvement, while others may allow broader use with disclosure. PLOS asks authors to state the tool name, how they used it, how they checked the outputs, and what parts of the work were affected. (elsevier.com)
Still, the practical rule holds. If AI touched the intellectual production of the paper, say so.
When you probably do not need a disclosure
Some tools sit outside what most policies target.
You usually do not need a special AI disclosure for:
- a standard reference manager
- a non-generative spellchecker
- ordinary formatting tools
- citation style software
- basic search engines, if you only used them as search engines
Even here, pause for a second. Many familiar products now include generative features. Grammarly, Microsoft Copilot, Google search summaries, and writing assistants inside word processors can move from simple correction into generative rewriting. Once that happens, you are back in disclosure territory.
If you used a tool and you cannot tell whether it generated content or just corrected mechanics, check the product documentation or disclose it anyway.
The gray areas that trip people up
Grammarly and writing assistants
This is the most common edge case.
If you used Grammarly or a similar tool only for routine spelling and grammar correction, many editors will not care. If you used generative rewriting, paraphrasing, tone shifts, or full-sentence suggestions that changed the manuscript text, disclose it.
You do not need a dramatic confession. One clean sentence will do.
Search tools, literature tools, and AI summaries
If you read a search engine summary and then went to the actual papers yourself, that alone usually does not need a formal statement.
If you used Elicit, Semantic Scholar features, Perplexity, ChatGPT, Gemini, Claude, or another system to find, cluster, summarize, or compare papers in a way that shaped your literature review, methods framing, or background section, disclose that use.
This matters even more in review articles and systematic reviews. If you work in that space, see AI Disclosure in Systematic Reviews and Meta-Analyses.
Code completion and analysis help
If Copilot, Cursor, ChatGPT, Claude, or another model helped write code that produced results in the paper, disclose it.
Elsevier says that AI used as part of research design or methods should be described in the methodology of the work. Taylor & Francis lists coding assistance among supported use cases but still requires acknowledgment of the tool, version, purpose, and reason for use. (elsevier.com)
That means a methods disclosure often makes more sense than a vague acknowledgment if the code affected results.
Images and figures
Be careful here.
Nature Portfolio says its journals cannot permit AI-generated images and videos for publication in many cases because of unresolved legal and integrity issues. Taylor & Francis says it does not permit generative AI in the creation and manipulation of images and figures for use in its publications. Elsevier allows some image-related AI use when it is part of the research methods, but it asks for a clear description of the content created or altered and the specific tool details. (nature.com)
So do not assume that disclosure alone fixes image issues. Sometimes the policy is not "disclose and proceed." Sometimes it is "do not use this at all."
Where to put the disclosure
This depends on the venue, but the common locations are easy to remember:
- Methods section
- Acknowledgments section
- a dedicated disclosure section near the end of the manuscript
- cover letter, if the venue asks for it during submission
Elsevier asks authors to place a specific declaration about generative AI in the writing process immediately above the references, while AI used in research methods belongs in the Methods section. Taylor & Francis says the statement should appear in the Methods or Acknowledgments section. ACM says disclosure should appear in the acknowledgments or elsewhere prominently in the work. Nature Portfolio says authors should declare writing assistance in the Methods section. (elsevier.com)
That is why one generic sentence copied from another paper often fails. The right location depends on the journal.
What a good disclosure should include
A good statement answers four questions:
- Which tool did you use?
- What did it do?
- Where in the workflow did you use it?
- How did you check the output?
PLOS spells this out well. It asks for the tool name, how the authors used it, how they evaluated the validity of its outputs, and what parts of the study or article were affected. (journals.plos.org)
That gives you a simple template.
\section*{AI use disclosure}
The authors used ChatGPT (OpenAI, GPT-4.1) to improve sentence-level clarity in the Introduction and Discussion.
The authors reviewed and edited all suggested text and verified all citations, claims, and interpretations.
No AI tool was used to generate results, analyze data, or make final scientific judgments.

If the tool affected methods or code, say that directly.
\section*{AI use disclosure}
The authors used GitHub Copilot and Claude 3.7 Sonnet to assist with drafting Python code for data cleaning and visualization.
The authors inspected, tested, and revised all generated code before use.
The study results depend on the final human-reviewed code described in the Methods and shared in the repository.

If you want a fuller format than a single paragraph, generate an AI Usage Card. That works well as a supplement, appendix item, or source text for a shorter journal statement. For examples, see AI Usage Cards Examples and Templates and our LaTeX Tutorial for AI Usage Cards.
What happens if you do not disclose
Do not treat this as a harmless omission.
PLOS says that if concerns arise about noncompliance with its AI policy, the journal may reject the submission before publication, retract it after publication, or publish an editorial notice. It may also notify the authors' institutions. Taylor & Francis places disclosure inside its broader publishing ethics framework. Nature journals have also raised public concerns about misuse of AI in scientific publishing and peer review. (journals.plos.org)
You do not need to overstate the risk to make the point. Editors do not like surprises. If they discover unreported AI use after submission, you have created a trust problem for no good reason.
And no, "I only used it to polish the wording" is not a great defense if the venue required disclosure.
The fastest way to handle this on real projects
Most disclosure failures happen because authors wait until submission week and then try to reconstruct months of tool use from memory.
A better system is simple.
Keep a running note with:
- tool name
- model or version if known
- date or project stage
- what you asked it to do
- what you kept, changed, or rejected
That note turns a stressful compliance task into a five-minute edit.
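One low-effort way to keep that note is as comments at the top of your manuscript source, so it travels with the paper. A minimal sketch in LaTeX comment form (the tools, dates, and file names below are hypothetical placeholders, not recommendations):

```latex
% --- AI use log (running note, kept with the manuscript source) ---
% Date       | Tool (model/version)  | What it did                         | Outcome
% 2025-03-10 | ChatGPT (GPT-4.1)     | rewrote Introduction, paragraph 2   | kept with edits
% 2025-03-14 | GitHub Copilot        | drafted plotting code (figures.py)  | tested, revised
% 2025-04-02 | Elicit                | clustered papers for Related Work   | citations verified by hand
```

Because LaTeX ignores comment lines, this adds nothing to the compiled PDF, but at submission time it converts directly into the tool, version, purpose, and verification details that publishers such as Taylor & Francis and PLOS ask for.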
If you want structure, use ai-cards.org. The generator walks you through common use cases and gives you text that you can paste into an acknowledgment, methods section, appendix, thesis, or supplement. It also helps when coauthors need to agree on what happened.
That matters more than people think. One author may use ChatGPT for editing, another may use Copilot for code, and a third may use an AI search tool for literature screening. If nobody writes that down, the final paper ends up with a vague sentence that hides more than it reveals.
The real rule
If AI shaped the paper, disclose it.
That is the rule beneath all the policy wording.
You do not need to sound defensive. You do not need a page of legal prose. You need a clear record of what you used, where you used it, and how you checked it.
If you want to get this done now, generate a free AI Usage Card. You can attach the card to your submission, turn it into a short disclosure statement, or copy parts of it into your acknowledgments or methods section. If your case centers on ChatGPT, read How to Disclose ChatGPT Usage in Academic Papers next.
Generate Your AI Usage Report
Create a standardized AI Usage Card for your research paper in minutes. Free and open source.
Create Your AI Usage Card