How to disclose AI use for NeurIPS, ICML, and ACL submissions

A practical guide for researchers who use AI tools while writing, coding, reviewing, or editing papers for major ML and NLP conferences.

Conference AI policies are similar until they are not

If you submit to NeurIPS, ICML, or ACL, you already know the feeling. You use an AI tool for one task that seems harmless. Then you hit the submission page and wonder what, exactly, you need to disclose.

That confusion makes sense. These venues all allow some AI use, but they draw the line in different places. One venue may treat writing help as optional to disclose. Another may ask for disclosure in acknowledgments. Another may focus less on author disclosure and more on reviewer confidentiality.

If you want a simple rule, use this one: disclose any AI use that affected your method, data, analysis, code, figures, or substantive wording. Then record it in a form you can reuse. The easiest way to do that is to generate an AI Usage Card before you submit.

This article focuses on conference submissions, not journals. If you need journal guidance, read AI Disclosure Policies by Major Journals and AI Transparency Requirements for Journal Submissions.

What the current policies say

NeurIPS 2025 allows authors to use LLMs while preparing papers. Its policy says authors should describe LLM use if it is part of the methodology, while grammar and formatting help does not need to be declared in the manuscript. NeurIPS also says authors remain responsible for the full content, including references, figures, and methods. It warns that false citations or unsound content generated through LLM use can trigger investigation, including after acceptance. (neurips.cc)

ICML 2026 also allows authors to use generative AI tools for writing or research. Its call for papers says authors take full responsibility for all content and encourages them to explain notable AI use in the research methodology. ICML states that LLMs cannot be authors. It also bans prompt injection in submissions. (icml.cc)

ACL takes a stricter disclosure line for authors. The ACL Publication Ethics Policy says generative AI tools cannot be listed as authors and that use of generative AI to create content should be fully disclosed in the acknowledgements section. The policy even gives a sample form such as noting that a section was written with input from ChatGPT. (aclweb.org)

So the first practical takeaway is blunt. If you submit the same paper to different venues over time, your disclosure text may need to change.

The biggest difference sits in the review process

Most authors think about AI disclosure only in the manuscript. That misses half the problem.

Conference policies now treat reviewer use of AI as a separate ethics issue. That matters if you review for the same venues where you submit.

NeurIPS 2025 tells reviewers not to share submissions or code with any LLMs. At the same time, it allows limited use of outside resources, including LLMs, to understand concepts or polish the wording of a review, as long as the reviewer does not share confidential submission content and remains responsible for the final review. (neurips.cc)

ICML takes a harder line in its peer review FAQ. It says privileged review material such as submissions, reviews, and discussions may not be submitted to external generative AI tools. It also says the credibility of review suffers if reviews are automatically produced with generative AI tools, whether those tools run as external services or on a local machine. (icml.cc)

ACL is detailed here too. Its ethics policy says reviewers must read the paper fully and write the first draft of the review themselves. It allows limited generative assistance for paraphrasing or proofreading, but it forbids uploading a manuscript or peer review report into a non-privacy-preserving generative tool. (aclweb.org)

That means one bad habit can create two problems at once. You might break confidentiality as a reviewer and then forget to disclose your own AI use as an author.

What you should disclose in a conference paper

You do not need to confess every spell check. You do need to disclose AI use that changed the scientific record.

For conference submissions, I would split AI use into three buckets.

Bucket 1: usually disclose

Disclose AI use when it shaped the actual research.

That includes cases where you used an AI system to generate or transform code used in experiments, draft annotation instructions, synthesize prompts for evaluation, rewrite survey instruments, extract labels, summarize literature that informed your method, generate images or figures used in the paper, or produce text that survived into the final manuscript in a substantive way.

NeurIPS and ICML both point authors toward methodological disclosure when AI use affects the research itself. ACL goes further and expects disclosure of AI generated content in acknowledgments. (neurips.cc)

Bucket 2: disclose when in doubt

Some uses sit in the gray zone.

Say you used Copilot to speed up data cleaning scripts, Claude to rewrite messy prose, or ChatGPT to help draft rebuttal language. These uses may not define your method, but they still affect the paper. If the tool did more than fix grammar, I would disclose it.

This is the safest move for conference resubmissions too. A clear disclosure reduces the chance that a later venue reads your workflow as hidden assistance.

Bucket 3: often not necessary in the manuscript

Basic grammar correction, spelling fixes, and formatting help usually do not need manuscript disclosure: NeurIPS states that editing and formatting uses do not require declaration in the manuscript, provided they did not affect the method or originality. (neurips.cc)

Still, I would log them in your own records. That way, if a coauthor asks later, you have a clean answer.

A practical disclosure template you can adapt

For conference papers, short beats clever. State the tool, the task, the stage, and the human check.

You can adapt this text:

\section*{AI use disclosure}
The authors used OpenAI ChatGPT and GitHub Copilot during manuscript preparation and code development. ChatGPT helped rewrite small portions of text for clarity and helped brainstorm phrasing for the rebuttal. GitHub Copilot suggested code used in preprocessing scripts. The authors reviewed, edited, and verified all generated text and code. No AI system was listed as an author, and the authors take full responsibility for the paper's content.

If you submit to ACL, move similar text into the acknowledgements section to match ACL's policy direction. (aclweb.org)

\section*{Acknowledgements}
We used ChatGPT for limited writing assistance and GitHub Copilot for limited coding assistance during manuscript preparation. We reviewed and verified all generated content and take full responsibility for the final text, code, citations, and claims.

If AI use formed part of the method, say so in the methods section, not only in acknowledgments.

\subsection{Use of AI tools in the study workflow}
We used GPT-4 to draft candidate annotation instructions and to propose paraphrases for pilot prompts. Two authors reviewed all outputs, removed unsuitable items, and finalized the material before data collection. We did not use model outputs as labels without human review.

If you want more examples, see AI Usage Cards Examples and Templates and How to Disclose AI Usage in Academic Papers.

What an AI Usage Card adds that one sentence does not

A one line acknowledgment helps. It does not capture enough detail when a reviewer, editor, coauthor, or lab head asks follow-up questions three months later.

An AI Usage Card gives you a structured record: tool name, version if known, task, input type, output type, human oversight, verification steps, and where the output entered the workflow. That record helps when you revise a paper across venues with different policies.
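If you keep that record with your submission files, a small machine-readable sketch is enough. The layout below is hypothetical, not the official ai-cards.org schema; the field names simply mirror the list above.

```python
import json

# Hypothetical AI Usage Card record; field names mirror the list above,
# not any official ai-cards.org schema.
usage_card = {
    "tool": "GitHub Copilot",
    "version": "unknown",  # record the version if you know it
    "task": "suggested preprocessing code",
    "input_type": "source code context",
    "output_type": "code suggestions",
    "human_oversight": "author reviewed and edited every suggestion",
    "verification": "unit tests on the preprocessing scripts",
    "workflow_stage": "data preprocessing, before experiments",
}

# One file per tool keeps the record easy to update between venues.
print(json.dumps(usage_card, indent=2))
```

One record per tool, saved next to the paper source, means a venue change only requires rewording the disclosure, not reconstructing it.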

You can create one at ai-cards.org and use it in two ways. You can attach the card as supplementary documentation for your own records, or you can copy a shorter disclosure statement from the card into your manuscript, cover letter, acknowledgments, thesis appendix, or lab documentation page.

That saves time, and it cuts down on vague disclosures like "we used AI tools for editing." Those statements tell readers almost nothing.

A simple workflow before you hit submit

Before submission, ask four questions.

Did an AI tool affect the method, data, code, figures, or analysis?

Did AI generated text remain in the final paper beyond grammar edits?

Does the venue require a specific location for disclosure, such as acknowledgments?

Did any coauthor use a different tool that has not been documented yet?

If any answer is yes, write a disclosure now, not after acceptance. Then save the details in an AI Usage Card so you do not rebuild the record from memory later.
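For labs that want the check to be explicit, the four questions reduce to a tiny script. This is an illustrative sketch, not a policy tool; the question text simply restates the list above.

```python
# Illustrative pre-submission check; the questions restate the four above.
QUESTIONS = [
    "Did an AI tool affect the method, data, code, figures, or analysis?",
    "Did AI-generated text remain in the final paper beyond grammar edits?",
    "Does the venue require a specific location for disclosure?",
    "Did any coauthor use a different tool that is not documented yet?",
]

def needs_disclosure(answers):
    """Return True if any answer is yes: write a disclosure before submitting."""
    return any(answers)

# A single 'yes' is enough to trigger a written disclosure.
print(needs_disclosure([False, True, False, False]))  # True
```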

The safest approach is boring on purpose

Conference AI policy does not reward clever wording. It rewards clean documentation.

Name the tool. Name the task. Name the human check.

Then generate an AI Usage Card and keep it with your submission files. If you later submit the paper to NeurIPS, ICML, ACL, or a journal with different rules, you will have the full record ready and a short disclosure you can paste where the venue wants it.
