AI Usage Cards vs Model Cards

Model Cards document AI models, while AI Usage Cards document how researchers used AI tools in a specific study.

Same word, different job

Researchers mix up Model Cards and AI Usage Cards because both use the language of AI transparency.

The names sound related. The formats look short. Both ask people to write down what an AI system can do, what it cannot do, and where risk enters the picture.

But they answer different questions.

A Model Card answers: "What is this model?"

An AI Usage Card answers: "How did you use AI in this research?"

That difference matters when you prepare a manuscript, thesis, peer review, grant proposal, or repository. If you choose the wrong document, you leave out the thing readers need.

If you built or released a model, readers need to know the model's purpose, training context, evaluation, and limits. Use a Model Card.

If you used ChatGPT, Claude, Gemini, Microsoft Copilot, GitHub Copilot, Midjourney, or another AI tool while writing, coding, analyzing, or reviewing, readers need to know what you did with the tool and how you checked the result. Use an AI Usage Card.

Short version:

  • Use a Model Card for the AI artifact.
  • Use an AI Usage Card for the research workflow.

Many projects need only one. Some need both.

If you already know you used AI during a paper, you can generate an AI Usage Card at ai-cards.org and use it in your appendix, methods section, or supplementary files.

What Model Cards document

Margaret Mitchell, Simone Wu, Andrew Zaldivar, Parker Barnes, Lucy Vasserman, Ben Hutchinson, Elena Spitzer, Inioluwa Deborah Raji, and Timnit Gebru introduced Model Cards in the 2019 paper "Model Cards for Model Reporting." They defined Model Cards as short documents that accompany trained machine learning models and report evaluation results across relevant conditions, intended uses, and other context that helps users judge fit for use. (research.latinxinai.org)

A Model Card helps someone decide whether they should reuse, deploy, audit, or compare a model.

It usually covers questions like these:

  • Who developed the model?
  • What task does the model support?
  • What data shaped the model?
  • What evaluation did the developers run?
  • What metrics did they report?
  • Where does performance vary?
  • What uses should people avoid?
  • What legal, ethical, or social limits should users know?
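The questions above map onto the sections of a typical Model Card. As a rough sketch, a paper that releases a model could carry a Model Card appendix like the following. The field names follow the spirit of Mitchell and colleagues' proposal but are illustrative, not a format mandated by any venue, and every bracketed detail is a placeholder you would replace:

```latex
\appendix
\section{Model Card}

\begin{description}
  \item[Developers] The authors' research group. % who built the model
  \item[Task] Binary text classification for [your domain]. % what it supports
  \item[Training data] Describe the corpus, its sources, and its time span.
  \item[Evaluation] Name the held-out test sets and the conditions evaluated.
  \item[Metrics] Report accuracy, F1, or other task-appropriate scores.
  \item[Performance variation] Note subgroups or conditions where scores drop.
  \item[Out-of-scope uses] List uses the model was not designed or tested for.
  \item[Limitations] State known legal, ethical, or social limits.
\end{description}
```

The same content can live in a repository README instead of an appendix; the point is that each question in the list gets an explicit answer somewhere in the release.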

Model Cards travel with models. On Hugging Face, users can find a model card as the README.md file in a model repository, and Hugging Face describes model cards as documentation about training, use cases, bias, limitations, metrics, and related details. (huggingface.co)

That tells you who should write one: the team that knows the model.

If your lab trains a new classifier, fine tunes a language model, releases weights, or shares a model through a repository, you should prepare model documentation. If your paper only used a commercial AI writing assistant, you cannot write a full Model Card for that assistant because you do not know the training data, internal evaluation, or deployment setup.

You can still document your use. That is the AI Usage Card's job.

What AI Usage Cards document

Jan Philip Wahle, Terry Ruas, Saif M. Mohammad, Norman Meuschke, and Bela Gipp introduced AI Usage Cards in their 2023 JCDL paper "AI Usage Cards: Responsibly Reporting AI-Generated Content." The paper proposes AI Usage Cards as a way to report AI use in scientific research, with attention to transparency, integrity, and accountability. (arxiv.org)

AI Usage Cards do not try to describe a whole model from the outside.

They describe what you did.

That makes them a better fit for academic disclosure. A journal editor usually does not need you to restate a vendor's entire system documentation. The editor needs to know whether AI helped draft prose, generate code, summarize sources, suggest labels, make figures, translate text, or write reviewer comments.

A reader needs the same thing.

An AI Usage Card records:

  • the tool or model you used
  • the research stage where you used it
  • the task you gave it
  • the type of output it produced
  • the amount of output that entered the work
  • the checks you performed
  • the human decisions you made

If you need the basic framework first, start with What Are AI Usage Cards?. If you mainly need policy guidance, read Do I Need to Disclose AI Usage in My Paper?, How to Disclose AI Use for NeurIPS, ICML, and ACL Submissions, and AI Disclosure Policies by Major Journals.

The core difference

Model Cards document models.

AI Usage Cards document use.

That sounds almost too simple, but it clears up most cases.

A Model Card belongs with a model release. It gives downstream users information about a model's behavior, evaluation, purpose, and limits.

An AI Usage Card belongs with a research output. It gives editors, reviewers, supervisors, funders, and readers information about AI assistance in a specific work.

The unit changes.

A Model Card covers one model or model release.

An AI Usage Card covers one paper, thesis, review, grant proposal, dataset article, or research project.

The author changes too.

Model developers write Model Cards. Researchers, reviewers, editors, or students write AI Usage Cards.

The risk changes as well.

Model Cards reduce confusion about what a model can and cannot do. AI Usage Cards reduce confusion about how AI entered the scholarly record.

Comparison table

Dimension | Model Cards | AI Usage Cards
Main question | "What is this model?" | "How did you use AI here?"
Object documented | A trained model or model release | A research workflow in one scholarly output
Usual author | Model developer, maintainer, or release team | Researcher, author, reviewer, editor, or student
Best fit | Releasing or sharing a model | Disclosing AI assistance in academic work
Typical location | Model repository, technical appendix, release page | Manuscript, thesis appendix, supplementary material, disclosure section
Typical fields | Intended use, evaluation data, metrics, limitations, model details, risks | Tool used, task, stage, output type, extent of use, verification, human oversight
Main audience | Model users, deployers, auditors, repository users | Editors, reviewers, readers, supervisors, funders
It cannot replace | A disclosure of how authors used AI | Technical documentation for a model artifact

OpenAI's gpt-oss documentation shows the same split from the model side. OpenAI called its gpt-oss document a Model Card rather than a System Card because the open weight models can appear inside many systems built by different stakeholders. (openai.com)

That is the model layer. A researcher who uses gpt-oss inside a study still needs to document how they used it in that study.

When you need a Model Card

You need a Model Card when you ask others to trust, reuse, test, deploy, or compare a model you release.

You trained a new model

If your group trained a classifier, retriever, segmentation model, speech model, medical imaging model, or domain language model, readers need model documentation.

The paper can explain the method. The Model Card helps users judge the released artifact.

You fine tuned an existing model

Fine tuning changes behavior.

If you fine tune BERT for legal text, Llama for a biomedical task, or a vision transformer for satellite images, the base model's documentation does not describe your version. Your release needs its own record of intended use, evaluation, and limits.

You publish a model with code and weights

If others can download the model, run it, or cite it as an artifact, attach a Model Card to the repository.

For NLP work, this often sits beside a data statement, dataset card, or other documentation. Our guide to AI Usage Cards vs Datasheets for Datasets explains that difference from the data side.

When you need an AI Usage Card

You need an AI Usage Card when AI helped create, revise, analyze, review, or package scholarly work.

This includes ordinary cases that researchers sometimes forget.

You used ChatGPT to rewrite dense paragraphs.

You used Claude to summarize papers before screening them.

You used Microsoft Copilot to draft spreadsheet formulas or code snippets.

You used GitHub Copilot to write tests.

You used Gemini to suggest interview code names.

You used an image model to draft a schematic.

You used AI while reviewing a manuscript.

Those uses may not change the study design. They still belong in the disclosure record if they shaped the work. The card tells readers where AI entered and where human judgment took over.

For writing cases, see How to Disclose ChatGPT Usage in Academic Papers. For coding help, see How to Disclose Microsoft Copilot Use in Academic Writing. For review work, see AI Disclosure in Peer Review: What Reviewers and Editors Should Report.

When you need both

Some projects need both documents.

Imagine a computational linguist who fine tunes a BERT model for sarcasm detection. She uses GitHub Copilot to draft preprocessing scripts. She asks ChatGPT to suggest error categories. She checks the code, tests the outputs, rejects several categories, and releases the fine tuned model with weights.

She needs a Model Card for the sarcasm detection model.

That Model Card should describe the task, data, evaluation metrics, intended use, failure cases, and limits.

She also needs an AI Usage Card for the paper or thesis.

That AI Usage Card should state that Copilot helped draft preprocessing code, ChatGPT suggested candidate error categories, and the researcher reviewed and validated the outputs before using them.

One document tells readers what the released model is.

The other tells readers how AI entered the research process.

This split also helps journal editors. They can assess the model artifact and the manuscript workflow without forcing one document to do both jobs.
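In LaTeX, keeping the roles separate can be as simple as two appendices in the same manuscript. This sketch reuses the sarcasm detection example above; the field names and entries are illustrative, not a required format:

```latex
\appendix

\section{Model Card: Sarcasm Detection Model}
% Documents the released artifact.
\begin{description}
  \item[Task] Sarcasm detection in English social media text.
  \item[Base model] Fine tuned from BERT.
  \item[Evaluation] Test sets, metrics, and failure cases go here.
  \item[Limitations] Known limits and out-of-scope uses go here.
\end{description}

\section{AI Usage Card}
% Documents the research workflow.
\begin{description}
  \item[Tools] GitHub Copilot (draft preprocessing code) and
  ChatGPT (candidate error categories).
  \item[Verification] The researcher reviewed the code, tested the
  outputs, and rejected several suggested categories.
\end{description}
```

An editor can then evaluate the artifact appendix and the workflow appendix independently, which is exactly the split this section describes.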

If your project sits between several documentation types, read AI Documentation Frameworks Compared and AI Usage Cards vs System Cards.

The citation mistake

Many authors cite the vendor and stop there.

That does not solve the disclosure problem.

A citation to a model paper, product page, or repository tells readers what tool you used. It does not tell them what you asked the tool to do. It does not say whether AI drafted text, generated code, suggested themes, translated passages, or only fixed grammar. It does not say whether you checked the output.

The reverse mistake also causes trouble.

Some authors try to describe a released model only through an AI use disclosure. That leaves model users without the technical context they need.

Use the citation for the source.

Use the Model Card for the model.

Use the AI Usage Card for your use.

How to include an AI Usage Card in a manuscript

You can include a short disclosure in the manuscript and place the full card in an appendix or supplement.

A simple LaTeX disclosure can look like this:

\section*{AI use disclosure}
 
The authors used OpenAI ChatGPT for language editing in the introduction
and GitHub Copilot for draft code suggestions in the preprocessing script.
 
The authors reviewed all AI-generated text and code. The authors tested the
preprocessing script against manually checked examples and revised the manuscript
without accepting AI output automatically.
 
A full AI Usage Card appears in the supplementary materials.

If your journal allows appendices, you can add a fuller entry:

\appendix
\section{AI Usage Card}
 
\begin{description}
  \item[Tool] OpenAI ChatGPT, accessed through the web interface.
  \item[Purpose] Language editing for selected manuscript paragraphs.
  \item[Research stage] Manuscript preparation.
  \item[Extent of use] The tool suggested wording changes. The authors selected,
  edited, or rejected each suggestion.
  \item[Verification] The authors checked all revised passages against the
  original analysis and source materials.
  \item[Responsibility] The authors take responsibility for the final text.
\end{description}

For longer examples, use LaTeX Tutorial for AI Usage Cards and AI Usage Cards Examples and Templates.

A quick decision guide

Ask three questions.

Did you release a model?

If yes, write a Model Card.

Did you use AI while producing the scholarly work?

If yes, write an AI Usage Card.

Did you do both?

If yes, include both and keep their roles separate.

That separation gives readers a cleaner record. They can inspect the model as an artifact and inspect the research workflow as a human process.

Use the right card for the right layer

A Model Card travels with the model.

An AI Usage Card travels with the paper.

Once you see that split, the choice gets easier. Document the artifact when you release one. Document the workflow when AI helped you produce the work.

If your manuscript, thesis, review, or grant proposal involved AI assistance, generate an AI Usage Card at ai-cards.org. Use the card as a supplement, appendix, or source text for your AI disclosure section.
