What Are AI Usage Cards?

AI Usage Cards give researchers a structured way to disclose how AI tools shaped writing, methods, analysis, and presentation in scholarly work.

Most AI disclosures fail because they hide the real work

"We used AI for writing assistance" is not a useful disclosure.

It tells the reader almost nothing. It does not say what tool you used, what task it touched, how much of the work it shaped, how you checked the output, or whether any sensitive data left your system.

That missing context creates problems fast. Editors cannot tell whether you followed policy. Reviewers cannot tell where human judgment entered the workflow. Readers cannot tell whether AI polished wording or helped generate claims, labels, code, figures, or analytic categories.

An AI Usage Card fixes that. It gives you one structured record of what you used, where you used it, how you reviewed the output, and what limits still remain. You can create one at ai-cards.org and attach it to a paper, thesis, preprint, review, or grant document.

The idea comes from the 2023 paper AI Usage Cards: Responsibly Reporting AI-Generated Content by Jan Philip Wahle, Terry Ruas, Saif M. Mohammad, Norman Meuschke, and Bela Gipp. The paper appeared at the ACM/IEEE Joint Conference on Digital Libraries and proposed a reporting model built around transparency, integrity, and accountability.

What an AI Usage Card is

An AI Usage Card is a disclosure format for scholarly work.

It does not judge whether your AI use was good or bad. It records what happened.

That distinction matters. Most disputes about AI in research do not start with the tool alone. They start with missing detail. Did the model rewrite prose or draft whole sections? Did it help screen studies, label data, suggest code, translate text, or generate figures? Did you verify outputs against sources, ground truth, or expert review? Did anyone upload confidential material to a third party service?

A good card answers those questions in plain language.

The ai-cards.org generator walks you through that process. The site is a free non-commercial service by the University of Göttingen. It lets you generate an AI usage report and download material for LaTeX, Word, and Overleaf workflows.

If you are still deciding whether this level of detail makes sense for your work, read why AI transparency matters in research and do I need to disclose AI usage in my paper?

Why researchers need a standard format

Without a standard, every team improvises.

One author drops a sentence into the acknowledgments. Another buries disclosure in the methods section. A third adds a vague supplement note after submission. Even honest disclosure becomes hard to compare because each paper puts the information somewhere else and describes it in a different way.

That wastes time for everyone who reads the work.

A standard format solves that problem. When many papers follow the same structure, readers know where to look for tool names, versions, usage context, review steps, and remaining risks. Editors can scan for policy compliance. Reviewers can judge the workflow without guessing what happened off the page.

Research already uses standard formats for conflicts of interest, ethics statements, data availability, and funding disclosures. AI use belongs in that same family.

The idea behind the framework

The original paper does more than suggest a form. It asks a sharper question: what should responsible AI reporting in research actually include?

The authors answer that with three ideas.

Transparency means that you show where AI entered the project.

Integrity means that you explain how you checked, corrected, limited, or rejected model output.

Accountability means that human authors still own the final work and its consequences.

That is why a real disclosure needs more than a tool name. "ChatGPT" alone is not a disclosure. "Copilot" alone is not a disclosure. Readers need the task, the scope, the review process, and the known limits.

If you want the policy angle, pair this article with AI transparency requirements for journal submissions and AI disclosure policies by major journals.

The four sections of an AI Usage Card

The current ai-cards.org generator organizes disclosures into four sections: Project Details, Methodology & Experiments, Writing & Presentation, and Ethics & Limitations.
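
If you write the card by hand instead of using the generator, the four sections map directly onto a supplement layout. Here is a minimal LaTeX sketch; the headings follow the generator's section names, and the comments are placeholders for your own content:

\section*{AI Usage Card}

\subsection*{Project details}
% Authors, tools, models, versions, and access methods.

\subsection*{Methodology \& experiments}
% AI use inside the research process and the human checks applied.

\subsection*{Writing \& presentation}
% Drafting, translation, grammar, figures, and the review process.

\subsection*{Ethics \& limitations}
% Known risks, safeguards, and remaining limits.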

Project details

This section records the basic facts.

You list authors, affiliations, tools, models, versions, and access methods. You can also note when you used the system and whether you used a web app, API, local model, or institutional deployment.

That may look administrative. It is not. Version details matter. "GPT-4" tells a reader far less than a dated model name, interface, or deployment description. The same goes for Claude, Gemini, Copilot, local open models, and image generators.

This section also helps when teams used several tools for different tasks. One researcher may use an LLM for literature triage. Another may use Copilot for code suggestions. Another may use a grammar tool during revision. A card makes that visible in one place.
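
In the card itself, this section can be a short factual list. A sketch with placeholder details (the tool names, dates, and access methods below are illustrative, not real project data):

\begin{itemize}
  \item GPT-4.5, web interface, accessed March 2025, used for literature triage.
  \item GitHub Copilot, IDE extension, used for code suggestions.
  \item A grammar checking tool, browser extension, used during revision.
\end{itemize}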

Methodology and experiments

This section covers AI use inside the research process.

Did you use AI to propose hypotheses, generate survey items, refine coding schemes, classify records, transcribe interviews, write analysis code, create synthetic examples, or support study screening? Put that here.

For many projects, this is the section that matters most. If AI shaped the method, your disclosure should show how much human control remained. Did you accept outputs as written? Did you test them against labeled data? Did domain experts review them? Did you reject generated material after inspection?

A short card can still be honest. If your project used AI only for language polishing, say that. If AI touched your method, say exactly how.

For field specific cases, see AI disclosure for qualitative research, AI disclosure for social science research, and AI disclosure for NLP research papers.

Writing and presentation

This is where many researchers start.

This section covers drafting, paraphrasing, summarizing, translation, grammar correction, title suggestions, figure editing, slide generation, and table cleanup. It also gives you space to describe your review process.

Specific language matters. "We used ChatGPT for writing assistance" sounds evasive because it hides the scope. A better disclosure names the task and the check. For example, you might say that the authors used a language model to suggest alternative wording for the abstract, then rewrote the final text and verified factual claims against cited sources.

That kind of disclosure helps the reader. It also protects the author. It shows that you did not hand off judgment.

If you need sample wording, go to AI Usage Cards examples and templates and how to disclose ChatGPT usage in academic papers.

Ethics and limitations

This section asks the question that weak disclosures avoid: what could have gone wrong?

You can record hallucination risk, bias, privacy concerns, copyright concerns, security limits, and factual errors. You can also explain what you did to reduce those risks, such as manual review, source checks, prompt restrictions, local processing, or a decision not to upload confidential data.

This section matters most when AI touched evidence, interpretation, or sensitive material.

If a team pasted unpublished manuscripts, participant responses, peer review text, clinical notes, or institutional documents into a third party tool, readers and editors will care about that. They should. A card gives you a direct place to say what data moved where, under what conditions, and with what safeguards.
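
Example wording for this section, a sketch to adapt to your own risks and safeguards rather than copy verbatim:

\subsection*{Ethics \& limitations}

The authors reviewed all generated text for factual errors and verified
claims against cited sources. No participant responses, unpublished
manuscripts, or other confidential material was uploaded to a
third-party service. Generated suggestions may still have shaped wording
in ways the authors did not notice.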

What an AI Usage Card is not

Researchers often confuse AI Usage Cards with other documentation frameworks because the names sound similar.

An AI Usage Card documents how you used AI in a scholarly project.

A Model Card documents a model.

A Datasheet documents a dataset.

A System Card documents a deployed AI system.

Those are different objects, so they answer different questions.

If your paper introduces a model or dataset, you may need more than one framework. You might publish a Model Card for the system you built and an AI Usage Card for the way your team used AI while designing experiments, writing the manuscript, or generating figures. That is not duplication. It keeps separate things separate.

For the side by side breakdown, read AI Usage Cards vs Model Cards, AI Usage Cards vs Datasheets for Datasets, AI Usage Cards vs System Cards, and AI Documentation Frameworks Compared.

When a card helps most

You can use an AI Usage Card for almost any scholarly output, but it helps most in a few common cases.

It helps when AI touched the research method. That includes screening, coding, annotation, extraction, translation, instrument design, and programming support.

It helps when a journal, conference, department, or supervisor asks for disclosure but gives weak instructions. A card gives you structure when the policy leaves room for guesswork.

It helps when you work in a team. Shared documentation stops the common mess where one author assumes someone else disclosed the tool use.

It helps when you expect scrutiny. Thesis committees, reviewers, editors, grant panels, and readers of high profile preprints all want a clear record.

It also helps when your project moves across formats. A thesis chapter becomes a journal article. A preprint becomes a conference paper. A grant proposal turns into a registered study. If your AI use is already documented, you do not need to rebuild the disclosure from memory.

If you submit to specific venues, you may also want how to disclose AI use for NeurIPS, ICML, and ACL submissions, AI usage reporting in grant proposals, and how to disclose AI usage in your thesis.

How to create one

The fastest route is the AI Usage Card generator.

The site walks you through the four sections and produces text that you can copy into a manuscript, supplement, thesis appendix, or grant attachment. It also offers LaTeX, Word, and Overleaf material so the disclosure can live in the same writing workflow as the rest of your project.

That matters more than it sounds. If the disclosure lives where the manuscript lives, teams update it. If it sits in a separate note, people forget.

If you work in LaTeX, start with the LaTeX tutorial for AI Usage Cards or how to use AI Usage Cards in Overleaf. If you want examples first, go to AI Usage Cards examples and templates.

A short LaTeX example

Many journals want a short disclosure in the manuscript and a fuller one in the supplement. This pattern works well:

\section*{AI usage disclosure}
 
During this project, the authors used GPT-4.5 through a web interface to
suggest alternative wording for parts of the introduction and abstract and
to correct grammar during revision. The authors reviewed all generated text,
rewrote sections as needed, and verified factual claims against cited
sources. No AI tool was used to generate results, analyze data, or make
final scientific judgments. A full AI Usage Card is included in the
supplementary materials.

If your AI use affected the method, state that in the methods section too. Do not hide it in the acknowledgments.

Here is a methods-style example:

\subsection*{AI-assisted screening}
 
The authors used an LLM through an API to suggest preliminary labels for
title and abstract screening. Two human reviewers independently checked all
suggestions before inclusion decisions were made. The model did not make
final eligibility decisions. The full workflow and limitations are reported
in the AI Usage Card in Appendix B.

For a thesis appendix or supplement, you can add a short heading like this:

\appendix
\section{AI Usage Card}
 
The full AI Usage Card for this project appears below. It documents the AI
tools used, the tasks they supported, the verification steps performed by
the authors, and the remaining limitations of those tools.

What good disclosure looks like

Good disclosure is concrete.

It names the tool. It names the task. It states the scope. It explains the human review. It admits the limits.

Bad disclosure hides behind soft phrases like "AI assistance" or "editorial support" without telling the reader what happened. Readers notice that. Editors notice it too.

A simple test helps. Ask whether another researcher could read your disclosure and understand where AI entered the project and how you guarded against its errors. If the answer is no, your disclosure needs more detail.

You do not need perfect prose. You need a usable record.

Clear reporting beats vague reassurance

AI tools will keep changing. Tool names will change. Journal policies will change. What should not change is the standard for honesty.

Readers do not need a polished slogan. They need a record they can inspect.

That is what an AI Usage Card gives them. It turns a vague disclaimer into something another person can read, question, compare, and reuse. If you used AI in a paper, thesis, review, or grant document, generate an AI Usage Card now. Then include the card itself or copy its text into your manuscript package. It takes a few minutes, and it saves a lot of confusion later.
