How to disclose AI use for NeurIPS, ICML, and ACL submissions
A practical guide to disclosing AI use in papers and reviews for NeurIPS, ICML, and ACL, with current policy differences, examples, and LaTeX templates.
The hard part is not the AI. It is the sentence you write about it.
Most researchers do not freeze when they use ChatGPT, Copilot, Claude, or Gemini. They freeze when the paper is due and they need to explain what happened.
You used a tool to tighten prose. Or to suggest code for preprocessing. Or to draft candidate annotation instructions that you later rewrote. None of that feels dramatic while you work. Then the deadline hits and the question changes from "Did this help?" to "Do we need to disclose this?"
For NeurIPS, ICML, and ACL, the answer depends on the venue and the year. The policies overlap, but they do not match. NeurIPS says that pure editing help does not need manuscript disclosure, while methodology-related use does. ACL asks authors to fully disclose generative AI use to create content in the acknowledgements. ICML 2026 allows authors to use generative AI tools for writing or research, encourages explanation of notable methodological use, and now uses a separate two-policy system for reviewer LLM use. (neurips.cc)
That means one thing for you: do not recycle one generic disclosure sentence across venues.
If you want a rule that travels well, use this one. Disclose any AI use that affected your method, data, analysis, code, figures, or substantive wording. Then save the details somewhere reusable. The easiest way to do that is to generate an AI Usage Card.
If you need journal guidance instead of conference guidance, read AI Disclosure Policies by Major Journals and AI Transparency Requirements for Journal Submissions.
What the current policies say
NeurIPS 2025 allows authors to use LLMs while preparing papers. Its policy says that authors must describe LLM use if the tool is part of the methodology. If the tool was used only for writing, editing, or formatting and did not affect the core methodology, scientific rigorousness, or originality of the research, NeurIPS says that declaration in the manuscript is not required. NeurIPS also says that authors remain fully responsible for all content in the paper, including text, figures, and methodology. (neurips.cc)
ACL takes a stricter line on author disclosure. The ACL publication ethics policy says that generative AI tools cannot be authors. It also says that any use of generative AI to create content should be fully disclosed in the acknowledgements section. The policy allows language help such as spell checking, grammar checking, paraphrasing, or polishing of author-written text, but it still expects transparency about generative assistance in authorship. Authors remain responsible for the final submission. (aclweb.org)
ICML 2026 is clearer on the author side than many people assume. The Call for Papers says that authors may use generative AI tools such as LLMs to assist in writing or research. It also says that authors take full responsibility for all content, including any AI-generated content that could amount to plagiarism or scientific misconduct. ICML encourages authors to explain notable uses of these tools in their research methodology, and it forbids prompt injection attempts in submissions. (icml.cc)
That last point matters. Earlier ICML discussions focused more on peer review than on author disclosure. The 2026 Call for Papers now gives you a direct author-side statement, so you no longer need to guess from scattered pages. (icml.cc)
The practical takeaway is simple. Check the venue. Check the year. Then write for that policy, not for your memory of last year's rules.
The biggest policy gap sits in peer review
Many authors think only about the manuscript. That is where they miss the stricter rules.
NeurIPS 2025 tells reviewers to keep all aspects of review confidential and not share submissions or code with anyone or any LLMs. At the same time, NeurIPS allows reviewers to use outside resources, including LLMs, to understand concepts or improve the phrasing of their review, as long as they do not share confidential submission content and still take responsibility for the final review. (neurips.cc)
ICML 2025 took a harder line. Its Peer Review FAQ says that privileged material such as submissions, reviews, and discussions may not be submitted to external generative AI tools. It also says that the credibility of ICML is damaged if reviews are automatically produced using generative AI tools, whether those tools run as external services or on a local machine. (icml.cc)
ICML 2026 changed that review system. It introduced two reviewer policies. Under Policy A, use of LLMs for reviewing is strictly prohibited. Under Policy B, reviewers may input submission text into privacy-compliant LLMs, but they still may not delegate the assessment of the paper or the writing of the review to LLMs. Authors can require Policy A for their papers or allow Policy B, and reviewer assignments follow that choice. The ICML 2026 Peer Review FAQ adds that violating the assigned LLM policy can trigger sanctions, including desk rejection for papers coauthored by a reviewer in breach of the policy. (icml.cc)
ACL is detailed here too. Its policy says that reviewers must read the paper fully and write the content and argument of the review themselves. They may not use generative assistance to create the first draft of a review. ACL allows limited help with paraphrasing or proofreading, but reviewers and editors should not upload a submitted manuscript, part of a manuscript, or a peer review report into a non-privacy-preserving generative tool. (aclweb.org)
If you both submit and review, keep those workflows separate. A tool that is fine for your own draft may be banned when someone else's confidential paper is on your screen.
If you want more on reviewer-side reporting, read AI Disclosure in Peer Review: What Reviewers and Editors Should Report.
What you should disclose in a conference paper
You do not need to log every spell check.
You do need to disclose AI use that changed the scientific record, shaped the work product, or left substantive generated content in the submission.
I find it useful to split conference use into three buckets.
Bucket 1: usually disclose
Disclose AI use when it shaped the research itself.
That includes cases where you used an AI system to generate or transform code used in experiments, draft annotation instructions, propose prompts used in evaluation, rewrite survey instruments, extract labels, summarize literature that informed your method, generate figures used in the submission, or produce text that remained in the final paper in more than a copyediting role.
This fits NeurIPS's rule to describe LLM use when it is part of the methodology. It also fits ACL's broader requirement to disclose generative AI use to create content in the acknowledgements. ICML 2026 does not force one exact template, but its Call for Papers encourages explanation of notable methodological use. (neurips.cc)
Bucket 2: disclose when in doubt
Some uses sit in the gray zone.
Maybe you used Copilot to speed up data cleaning. Maybe you used Claude to rewrite a dense paragraph that carries a technical claim. Maybe you used ChatGPT to draft rebuttal language that survived into the final camera-ready version. Maybe a coauthor used Gemini to produce a first-pass related-work summary that later got rewritten.
Those cases may not define the core method, but they still affect the submission. I would disclose them.
You do not lose much by being specific. You can lose a lot if a venue, editor, or coauthor later thinks you tried to hide the use.
This matters even more for resubmissions. A paper rejected from one venue and later sent to another may need a different disclosure sentence because the second venue asks for a different location or uses a broader definition of content creation.
Bucket 3: often not necessary in the manuscript
Basic grammar correction, spelling fixes, and formatting help often do not require manuscript disclosure under NeurIPS if they did not affect the method, rigor, or originality. NeurIPS says that directly. ACL is broader and still expects disclosure of generative AI use to create content in the acknowledgements. ICML 2026 allows author use of generative AI tools and encourages explanation of notable methodological use, so light editing help may fall below the line, but you still need to own the content. (neurips.cc)
My advice is dull and useful. Even if the venue does not require manuscript disclosure for light editing help, log it in your records.
Six weeks later, when a coauthor asks, "Did we use GPT on the abstract or not?", you will want an answer.
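If you keep project records in the paper's LaTeX source, a comment block near the preamble works as a lightweight log that never reaches the PDF. The layout below is a sketch, not a required format, and the dates, tools, and initials are placeholders; adapt the fields to whatever your lab already tracks:

```latex
% --- Internal AI-use log (comments only; not rendered in the PDF) ---
% Fields: date | tool | task | human check
% 2025-05-02 | ChatGPT        | grammar pass on abstract        | reviewed: A.B.
% 2025-05-10 | GitHub Copilot | preprocessing script stubs      | tested:   C.D.
% 2025-05-18 | Claude         | rephrased a discussion paragraph | edited:  A.B.
```

A log like this answers the "Did we use GPT on the abstract or not?" question in seconds, and it gives you the raw material for whichever disclosure sentence the venue requires.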
Where to place the disclosure
The location matters almost as much as the wording.
For ACL, put author-side AI disclosure in the acknowledgements section because the policy says that generative AI use to create content should be fully disclosed there. (aclweb.org)
For NeurIPS, if AI use formed part of the methodology, describe it in the methods section or another scientifically relevant section. If the use was only for grammar, writing, or formatting, NeurIPS does not require manuscript disclosure. (neurips.cc)
For ICML 2026, the Call for Papers says that authors may use generative AI tools for writing or research and encourages authors to explain notable methodological use. If AI affected the method, data, analysis, code, or generated content in the paper, disclose that in the manuscript. That approach matches the current call and will survive minor wording changes better than a narrow reading. (icml.cc)
If the venue does not specify placement and your use affected the method, put the disclosure near the method. If the use was limited to writing assistance, an acknowledgements note often works. When in doubt, give readers the sentence where they will look for it.
What a good disclosure sentence includes
A good disclosure sentence does four jobs.
It names the tool. It names the task. It shows human review. It makes clear where the AI output entered the workflow.
That means "We used ChatGPT for editing" is weak. It leaves too much open. Did the tool rewrite claims? Draft methods text? Touch the rebuttal only? See unpublished data?
A stronger sentence sounds like this:
"We used ChatGPT to revise author-written prose for clarity in the introduction and discussion. We used GitHub Copilot to suggest code for preprocessing scripts. Two authors reviewed and edited all generated text, and we tested all generated code before use."
That sentence is not stylish. Good. Policy text does not reward style.
If you want more examples for specific tools, see How to Disclose ChatGPT Usage in Academic Papers, How to disclose Microsoft Copilot use in academic writing, and AI Usage Cards Examples and Templates.
LaTeX templates you can adapt
Short beats vague. Name the tool, the task, the stage, and the human check.
If you need a compact disclosure section for a venue that allows or expects manuscript disclosure, start with this:
\section*{AI use disclosure}
During preparation of this submission, we used OpenAI ChatGPT for limited
writing assistance and GitHub Copilot for limited coding assistance.
ChatGPT helped revise author-written text for clarity and helped brainstorm
alternative phrasing for the rebuttal. GitHub Copilot suggested code used in
preprocessing scripts. We reviewed, edited, tested, and verified all generated
text and code. No AI system is listed as an author. We take full responsibility
for the paper's content, citations, figures, and claims.

If AI use was part of the research method, say that in the methods section, not only in a note at the end:
\subsection{Use of AI tools in the study workflow}
We used GPT-4 to draft candidate annotation instructions and propose
paraphrases for pilot prompts. Two authors reviewed all outputs, removed
unsuitable items, and finalized the materials before data collection. We did
not use model outputs as labels without human review. We report this use here
because the tool affected study materials used in the experiment.If you submit to ACL, move the disclosure into acknowledgements so the placement matches the policy:
\section*{Acknowledgements}
We used ChatGPT for limited writing assistance during manuscript preparation
and GitHub Copilot for limited coding assistance in preprocessing scripts.
We reviewed and verified all generated content and take full responsibility
for the final text, code, citations, and claims.

If you want a version that mirrors an AI Usage Card more closely, use this longer format:
\section*{AI use disclosure}
Tools used: OpenAI ChatGPT (writing assistance), GitHub Copilot (code suggestions).
Tasks: clarity edits to author-written prose, brainstorming alternative wording,
and suggestions for preprocessing code.
Human oversight: all outputs were reviewed by the authors. Generated code was
tested before use. Generated text was edited before inclusion.
Scope: AI tools were not used to generate results, assign final labels, or make
unreviewed scientific claims.
Responsibility: the authors take full responsibility for the final manuscript.

A rebuttal may need its own note in your internal records, even if the conference does not ask you to attach one. This template works well for lab documentation:
\paragraph{Internal note on rebuttal drafting}
We used Claude to suggest alternative phrasing for author-written rebuttal
responses. We did not paste confidential peer review text into any tool unless
the venue policy allowed that use. One author checked each response against the
original reviewer comment before submission.

And if your paper includes AI-assisted code that entered the experiment pipeline, say that plainly:
\paragraph{Code assistance disclosure}
We used GitHub Copilot to suggest parts of the data preprocessing pipeline.
We inspected all suggestions line by line, ran tests on the final code, and
verified that the generated code did not change the experimental design beyond
the implementation choices described in this paper.

For more LaTeX patterns, see LaTeX Tutorial for AI Usage Cards and How to Use AI Usage Cards in Overleaf.
What an AI Usage Card adds that one sentence does not
A one-line acknowledgement solves the submission problem. It does not solve the recordkeeping problem.
Later, someone will ask a follow-up question. A coauthor will want to know what model you used. A lab head will ask whether the tool saw confidential data. A reviewer may wonder whether generated code entered the experiment. A journal editor may ask for a fuller statement when the conference paper becomes a journal submission.
You do not want to rebuild that history from memory.
An AI Usage Card gives you a structured record of the tool, version if known, task, input type, output type, human oversight, verification steps, and where the output entered the workflow. Then you can pull the short disclosure sentence from the card instead of inventing one at the deadline.
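If you want to keep the card itself in LaTeX alongside the paper, the fields listed above map onto a plain description list. This is one possible layout, assuming the sample tools and tasks shown here, not an official AI Usage Card schema:

```latex
\section*{AI Usage Card}
\begin{description}
  \item[Tool and version] OpenAI ChatGPT (model version, if known).
  \item[Task] Clarity edits to author-written prose in the introduction.
  \item[Input type] Author-written draft text; no confidential review material.
  \item[Output type] Suggested rewrites, accepted or rejected line by line.
  \item[Human oversight] Two authors reviewed and edited all outputs.
  \item[Verification] Claims and citations checked against original sources.
  \item[Workflow stage] Manuscript revision, before submission.
\end{description}
```

From a record like this, the one-line acknowledgement, the methods note, or a later journal statement can all be derived without rebuilding the history from memory.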
That helps with conference papers, but it also helps when the same work turns into a journal article, thesis chapter, grant report, or camera-ready revision. If that sounds familiar, read How to Disclose AI Usage in Your Thesis, AI Usage Reporting in Grant Proposals, and AI Disclosure for NLP Research Papers.
A submission checklist that catches most mistakes
Before you submit, ask five plain questions.
Did an AI tool affect the method, data, code, figures, or analysis?
Did generated text remain in the final paper beyond basic grammar edits?
Does the venue require a specific location for disclosure, such as acknowledgements?
Did any coauthor use a different tool that nobody documented yet?
Did anyone on the team paste confidential review material into an AI system during peer review?
If you answer yes to any of those, write the disclosure now and save the details in an AI Usage Card. NeurIPS, ICML, and ACL all place real weight on reviewer confidentiality, even though they draw the line in different places. (neurips.cc)
The safest approach is plain, specific, and a little boring
Conference AI policy does not reward clever wording. It rewards clean records.
Name the tool. Name the task. Name the human check. Put the disclosure where the venue expects it. Then keep the fuller record somewhere your coauthors can find it.
That is why I would not stop at a one-line acknowledgement. Generate an AI Usage Card before you hit submit. Keep the full card with your project files. Copy the short version into your paper, acknowledgements, cover letter, or lab records.
Ten minutes now beats a week of guesswork after the deadline.
Generate Your AI Usage Report
Create a standardized AI Usage Card for your research paper in minutes. Free and open source.
Create Your AI Usage Card