Do Conference Papers Need to Disclose AI Agent Use?
A practical FAQ for researchers using AI agents while preparing, coding, reviewing, or submitting conference papers.
Do you need to disclose AI agent use in a conference paper?
Usually, yes.
If an AI agent shaped your research work, your writing, your code, your analysis, your figures, your literature search, your references, or your review activity, you should record that use and check the venue rules before you submit.
An AI agent can do more than answer a prompt. It may plan tasks, call tools, search papers, edit files, run code, summarize reviewer comments, or draft sections across several steps. That makes disclosure harder than "we used ChatGPT for language editing."
Conference chairs care about responsibility. They want to know who made the scientific choices, who checked the output, and whether the tool changed the work.
An [AI Usage Card](/chatgpt-disclosure-academic-papers/) gives you one place to record that information. You can use it as an appendix, as a private author record, or as source text for a conference disclosure field.
First answer: disclose the work, not the hype
Do not write that you "used an AI agent" and stop there. That sentence tells an editor almost nothing.
Write what the agent did.
Did it draft code? Suggest experimental settings? Search literature? Screen abstracts? Rewrite the introduction? Generate captions? Propose statistical tests? Create BibTeX entries?
Those details matter because conference policies often draw lines between routine editing and research contributions.
NeurIPS 2025 tells authors to describe LLM use in the experimental setup, or an equivalent section, when the tool forms an "important, original, or non-standard component" of the approach. NeurIPS also says authors remain responsible for all text, figures, references, and methodology, and it warns against unverified LLM-generated references. (neurips.cc)
ICLR 2026 goes further. Its LLM FAQ says authors must state how they used LLMs in the paper text and in the submission form. It also says authors and reviewers remain responsible for the output, and it treats hidden prompt instructions meant to influence reviewers as a code of ethics problem. (iclr.cc)
AAAI-26 uses a stricter rule for generated text. It prohibits papers that include text generated by an LLM unless that text appears as part of the paper's experimental analysis, while allowing authors to use LLMs to edit or polish author-written text. AAAI also says AI systems do not qualify as authors or citable sources. (aaai.org)
That mix of policies points to a simple habit: keep a record while you work. Waiting until the submission deadline invites mistakes.
What counts as an AI agent for disclosure?
For conference disclosure, treat "AI agent" as a description of behavior, not a product category.
A tool acts agentically when it takes a goal and performs several steps with some independence. It may call a search tool, inspect files, write code, run tests, compare outputs, or revise a draft. A plain chatbot session can also become agent-like if you ask it to plan a workflow and carry out parts of it.
You do not need a perfect definition. You need a usable record.
Write down:
- the tool or model name, if you know it
- the date or version, if the system provides one
- the task you gave it
- the inputs you shared
- the output you accepted
- the output you rejected
- the checks that humans performed
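One lightweight way to keep that record is a dated log entry per agent session. A minimal LaTeX sketch for a project appendix or private notes; the tool name, file name, and initials here are hypothetical placeholders:

\subsection*{Agent log: 2025-03-14}
\begin{itemize}
  \item \textbf{Tool:} CodeAssist agent, version unknown  % hypothetical name
  \item \textbf{Task:} draft unit tests for \texttt{preprocess.py}
  \item \textbf{Inputs shared:} the script and a sample of synthetic data
  \item \textbf{Accepted:} three test cases, after the authors edited the assertions
  \item \textbf{Rejected:} a proposed refactor of the file-loading code
  \item \textbf{Human check:} J.D.\ ran the tests locally and reviewed each change
\end{itemize}

At submission time, entries like this condense directly into a disclosure statement or an AI Usage Card.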
The last item does the most work. Reviewers do not need to know that a button looked impressive. They need to know that a named author inspected the code, verified the references, checked the statistics, and approved the final text.
For a longer introduction to this documentation pattern, see What Are AI Usage Cards?.
When conference policies mention LLMs, do they also cover agents?
Often, yes.
Many venue rules still say "LLM" or "generative AI" because those terms appeared before agent systems became common in research workflows. If an agent uses an LLM to generate text, code, analysis, or review content, you should treat the policy as relevant unless the venue says otherwise.
The safer question is not "does the rule use the word agent?"
Ask this instead: did a computational system contribute to something that a reader may attribute to the authors?
If yes, disclose it.
This matters for AI and machine learning venues, but it also matters for domain conferences. A remote sensing paper, a psychology paper, or a public health paper can use an AI agent for data cleaning, coding, translation, screening, or writing. The disclosure problem follows the work, not the conference name.
For submission planning across venues, pair this article with AI Transparency Requirements for Journal Submissions and How to disclose AI use for NeurIPS, ICML, and ACL submissions.
Do you need to disclose grammar editing?
Sometimes no. Sometimes yes.
Some venues exempt grammar checking, spell checking, and light editing. NeurIPS 2025 says authors do not need to document spell checkers, grammar suggestions, or programming aids used for editing purposes. It also says authors should document LLM use in the method when the tool forms part of the approach. (neurips.cc)
ICLR 2026 asks authors to disclose any LLM use in the paper and the submission form. Its examples include grammar and wording support, but it still places responsibility on the authors. (iclr.cc)
So you need to read the call for papers, author instructions, submission form, and ethics policy for the current year.
If the venue gives no clear rule, use this practical test: if the AI agent changed only surface wording and no scientific content, a short acknowledgment may suffice. If it changed methods, code, data handling, analysis, references, figures, or claims, place the disclosure where readers can see it in context.
An AI Usage Card helps because it separates light writing support from research work. You can generate the card, then copy the relevant text into the venue field.
Where should the disclosure go?
Use the place that matches the role of the agent.
If the agent affected the method, describe it in the methods section, experimental setup, or reproducibility section.
If the agent helped with writing, place a short note in the acknowledgments, author contributions, or required AI disclosure section.
If the agent helped with literature screening, code generation, data extraction, or figure production, mention it near that task. Do not bury a major methodological role in a one-line acknowledgment.
If the conference uses a submission form, paste the same disclosure there and keep the wording consistent with the paper.
If the venue allows appendices, include an AI Usage Card. The card can list the task, tool, purpose, human checks, and limits without interrupting the main paper.
For examples you can adapt, see AI Usage Cards Examples and Templates.
A short disclosure template for conference papers
Use plain language. Name the task. Name the human check.
\section*{AI assistance statement}
The authors used an AI agent to support manuscript preparation and code review.
The agent suggested revisions to author-written text and proposed unit tests for
the preprocessing scripts. The authors reviewed all suggested text, ran the
tests locally, inspected the generated code, and take responsibility for the
final manuscript, code, figures, and references.

This version works when the agent helped but did not define the scientific method.
If the agent affected the method, use stronger wording and place it in the methods section.
\subsection*{Use of AI agent in the experimental workflow}
We used an AI agent to generate candidate Python scripts for data preprocessing
and to propose parameter settings for the ablation study. The agent did not run
experiments without author approval. Two authors inspected each script before
execution, compared outputs against manually written checks, and verified all
reported results from the saved experiment logs.

For help placing this in a LaTeX paper or appendix, see the LaTeX Tutorial for AI Usage Cards and How to Use AI Usage Cards in Overleaf.
A disclosure template for agent-assisted literature work
AI agents tempt researchers to speed through literature search and screening. That can help, but it can also hide bad search terms, missed papers, false summaries, or invented references.
If an agent helped with a review paper, say exactly where it entered the workflow.
\section*{AI assistance statement}
The authors used an AI agent to suggest search strings, identify duplicate
records, and summarize abstracts during the initial literature mapping stage.
The agent did not make inclusion or exclusion decisions. Two authors checked the
search strings, screened records against the eligibility criteria, verified all
included citations in bibliographic databases, and wrote the final synthesis.For review articles, this disclosure should sit beside your search strategy, screening method, or reporting checklist. It should not replace standard review reporting.
If you work on evidence synthesis, read AI Disclosure in Systematic Reviews and Meta-Analyses.
A disclosure template for coding agents
Coding agents create a different risk. A script can run. It can even pass tests. It can still implement the wrong analysis.
Use a disclosure that names the boundary between suggestion and authorship.
\section*{AI assistance statement}
The authors used an AI coding agent to suggest refactoring changes and draft
test cases for the analysis repository. The agent did not select statistical
models, define outcome variables, or approve results. The authors reviewed each
commit, ran the full test suite, inspected the generated code, and verified the
reported numerical results against independent analysis logs.

If the generated code forms part of your published artifact, point readers to the repository, commit history, or supplementary material when the venue permits it.
If your paper concerns NLP systems or language model evaluation, see AI Disclosure for NLP Research Papers. NLP readers often expect more detail about prompts, models, benchmarks, and evaluation choices.
What if your paper is about AI agents?
Then you have two jobs.
First, report the agent system as research. That belongs in the methods section. Describe the model, tools, prompts, memory, environment, permissions, stopping rules, evaluation tasks, human interventions, and failure handling. Readers need enough detail to understand what you tested.
Second, disclose any AI assistance that helped you write or prepare the paper. That belongs in an AI assistance statement or AI Usage Card.
Do not mix these two records into one vague note.
A paper about agents may need Model Cards, System Cards, dataset documentation, and an AI Usage Card. Each one answers a different question. Model Cards describe model behavior and evaluation. System Cards describe deployed systems and safety analysis. Datasheets describe datasets. AI Usage Cards describe how researchers used AI while doing or reporting the work.
For the distinction, read AI Documentation Frameworks Compared, AI Usage Cards vs Model Cards, and AI Usage Cards vs System Cards.
Do reviewers need to disclose AI agent use?
Yes, if the venue allows the use at all.
Reviewers face a harder constraint than authors: confidentiality. If you paste a confidential submission into an external agent, you may expose unpublished work, data, code, or author identities.
NeurIPS 2025 tells reviewers not to share submissions or submitted code with anyone or any LLM. It allows reviewers to consult outside resources only if they do not share confidential submission material, and it still holds reviewers responsible for review quality and accuracy. (neurips.cc)
ICLR 2026 says reviewers must disclose LLM use in reviews and warns that confidentiality violations can trigger ethics consequences. (iclr.cc)
If you review conference papers, do not assume that your private AI tool is allowed. Read the reviewer guidelines. If the rule is unclear, ask the area chair before using the tool.
For more detail, see AI Disclosure in Peer Review: What Reviewers and Editors Should Report.
Can you list an AI agent as a co-author?
No.
Conference policies that address this issue assign authorship to humans. NeurIPS 2025 says only humans qualify as authors and that human authors remain responsible for the work. AAAI-26 says AI systems do not meet authorship criteria and cannot serve as citable sources. (neurips.cc)
The practical rule is simple: acknowledge tools, credit people, and keep responsibility with the authors.
If a colleague designed prompts, checked outputs, wrote code, or made research decisions, record that person’s contribution according to your author contribution policy. If a tool generated suggestions, disclose the tool use.
For the authorship question, see Can AI Be a Co-Author on a Research Paper?.
What should an AI Usage Card include for a conference submission?
A useful card answers the questions a reviewer or editor would ask.
It should state what tool you used, why you used it, what stage of the work it affected, what human checks you performed, and what limits remain.
For an AI agent, add the action boundary. Say whether the agent could search the web, call APIs, edit files, run code, access private data, or write to a repository. If you disabled some functions, say so.
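A one-line boundary statement, in the same field format as the card shown in this article, might read as follows; the specific permissions are illustrative, not a recommended configuration:

\textbf{Action boundary.} The agent could read and edit files in the analysis
repository and run the test suite locally. Web search, external API calls, and
direct pushes to the shared repository were disabled.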
A strong card for a conference paper might say:
\section*{AI Usage Card}
\textbf{Tool use.} The authors used an AI agent during manuscript preparation
and code maintenance.
\textbf{Tasks.} The agent suggested wording edits, generated candidate unit
tests, and proposed refactoring changes for preprocessing scripts.
\textbf{Human oversight.} The authors reviewed all text before submission,
verified all references manually, inspected generated code, ran tests locally,
and checked numerical results against experiment logs.
\textbf{Limits.} The agent did not select research questions, define evaluation
metrics, decide inclusion criteria, interpret results, or approve the final
paper.

You can create this faster at ai-cards.org. Generate an AI Usage Card, copy the text into your paper or submission form, and keep the full card in your project records.
What should you do before submitting?
Read the current policy page for your venue. Look for author instructions, ethics rules, reviewer rules, and the submission form. Some conferences update their AI rules each year.
Then compare your actual workflow with the rule.
If the agent touched only grammar, follow the venue’s editing rule. If the agent touched science, code, data, evidence, references, or claims, disclose it in the paper. If you used an agent during peer review, check confidentiality rules before you use the tool at all.
Do not wait until camera-ready stage. By then, you may have lost the record of what the agent did.
Generate an AI Usage Card when you start using the agent. Use it as a running log during the project. At submission time, copy the relevant text into your methods section, acknowledgment, appendix, or conference disclosure field. That small habit makes the final disclosure easier to write and harder to forget.
Generate Your AI Usage Report
Create a standardized AI Usage Card for your research paper in minutes. Free and open source.