AI Usage Cards vs System Cards
System Cards explain an AI system from the developer side. AI Usage Cards explain how a researcher used that system in one project.
Same Tool, Different Document
Researchers often see a vendor's System Card and assume that it covers their disclosure needs.
It does not.
A System Card explains the AI system that a company built and deployed. An AI Usage Card explains how you used that system in your own research. Those are different jobs, written by different people, for different readers.
That distinction matters in peer review. A journal editor does not only want to know what GPT-4 or another tool can do in general. The editor wants to know what you did with it, where you used it, and how you checked its output. If you want a quick overview of the idea behind AI Usage Cards, start with What Are AI Usage Cards?.
What System Cards Document
System Cards describe an AI system at the deployment level.
That means more than the base model. A deployed system includes the model, prompts or policies around it, interfaces, filters, safety controls, monitoring, and the context in which the system reaches users. A System Card aims to document that whole setup.
Public examples often come from major AI companies. Their System Cards usually describe capabilities, limits, known risks, evaluation results, red-team findings, and safety measures. Some also discuss release decisions and remaining uncertainty.
The audience sits on the developer side and policy side. Think regulators, auditors, downstream developers, institutional buyers, and informed users who need to judge whether a system fits a given purpose.
A System Card answers questions like these:
- What system did the company release?
- What can it do well?
- Where does it fail?
- What risks did the developer test?
- What guardrails did the developer put in place?
Those are useful questions. But they are not the questions that journal reviewers ask about your paper.
What AI Usage Cards Document
AI Usage Cards document one research use case.
They capture the tool you used, the task you used it for, the stage of the workflow where it appeared, the level of human oversight, and the checks you applied before you trusted the output. They focus on practice, not product design.
The audience sits on the research side. Think reviewers, editors, readers, supervisors, collaborators, and research integrity teams.
An AI Usage Card answers questions like these:
- Which AI tool did you use?
- For what task?
- At what stage of the project?
- Did a human review the output?
- How did you verify accuracy, quality, or fit for purpose?
That is why AI Usage Cards fit academic disclosure so well. They translate vague statements like "we used ChatGPT for assistance" into a short record that others can inspect.
If you need help with journal-facing disclosure, see How to Disclose ChatGPT Usage in Academic Papers and Do I Need to Disclose AI Usage in My Paper?.
You can also generate an AI Usage Card directly at ai-cards.org.
Head-to-Head Comparison
| Dimension | System Cards | AI Usage Cards |
|---|---|---|
| Purpose | Explain a deployed AI system's capabilities, limits, risks, and safeguards | Explain how AI was used in one research project |
| Who writes it | The organization that builds or deploys the system | The researcher or research team that used the tool |
| Primary audience | Regulators, auditors, buyers, downstream developers, informed users | Reviewers, editors, readers, supervisors, collaborators |
| Scope | One deployed system and its operating context | One paper, thesis, grant, or research workflow |
| Main focus | System behavior, safety testing, deployment choices, known failure modes | Use purpose, workflow stage, human oversight, verification steps |
| Typical length | Long, often many pages | Short, often 1 to 2 pages |
| Main question | "What is this system, and what risks come with it?" | "How did you use this tool, and how did you check the results?" |
| Best fit | Product release and governance | Scholarly disclosure and research transparency |
Why Researchers Still Need Their Own Disclosure
A System Card cannot disclose your methods for you.
Even a detailed vendor document only tells part of the story. It might say that a model performs well on coding, summarization, or classification benchmarks. It does not say whether you used it to draft survey items, clean references, code interviews, translate text, or rewrite prose. It does not show whether you accepted outputs as-is or checked each one by hand.
That gap matters because risk changes with context.
A tool that works well for brainstorming section headings may fail badly for extracting data from clinical notes. A model that seems safe for grammar correction may create serious problems if it paraphrases participant quotations or invents citations. The same system can support low-risk and high-risk tasks. Reviewers need to know where your use falls.
That is the core difference. System Cards describe the system in general. AI Usage Cards describe your use in context.
For more on why journals and readers care about this context, see Why AI Transparency Matters in Research and AI Ethics and Documentation in Academic Research.
When to Use One, the Other, or Both
Use a System Card when you build or deploy an AI system for others.
That applies to companies, labs, hospitals, platforms, and institutions that release AI-enabled services. If you own the system, control deployment, and need to explain risks and safeguards, a System Card fits that job.
Use an AI Usage Card when you use an AI tool in research.
That applies to papers, theses, grant proposals, reviews, coding workflows, literature screening, writing support, and many other academic tasks. If you are the end user and need to disclose what happened in your project, an AI Usage Card fits that job.
Use both when your research depends on a third-party system and readers need both views.
For example, you might cite a vendor's System Card to show what the provider claims about the tool's limits and evaluations. Then you add your own AI Usage Card to show how you used that tool, what checks you ran, and what boundaries you set. One document does not replace the other.
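In a LaTeX manuscript, that pairing can be as short as one methods sentence plus an appendix pointer. A hypothetical sketch (the citation key and appendix label are placeholders, not real references):

% Placeholder \cite key and \ref label; replace with your own.
We used the assistant to reformat references. The provider's System
Card~\cite{vendor2026systemcard} describes the tool's general limits and
evaluations; our own AI Usage Card (Appendix~\ref{app:ai-usage}) records
the task, the prompts we used, and the manual checks applied to every output.

The citation carries the vendor's claims; the appendix carries your practice. Neither line can do the other's job.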
A Practical Research Example
Imagine that a company deploys an AI assistant for radiology reporting.
The company publishes a System Card. It describes the assistant's intended use, evaluation results, known failure cases, safety controls, escalation rules, and deployment limits. That document helps hospitals judge the product.
Now imagine that a doctoral researcher uses a general-purpose chatbot to help draft a codebook for qualitative interviews about clinician workflow.
The researcher's disclosure problem looks different. Reviewers do not need a long description of the vendor's safety process. They need to know whether the chatbot touched raw transcripts, whether the researcher fed in sensitive data, whether a human reviewed every suggested code, and whether another coder checked agreement.
That is exactly where an AI Usage Card helps.
Here is a compact LaTeX example that shows the kind of disclosure a researcher might include in a supplement or appendix:
\section*{AI Usage Disclosure}

\textbf{Tool:} ChatGPT (version accessed January 2026)

\textbf{Purpose of use:} Initial drafting of a qualitative codebook and
generation of candidate labels for recurring themes.

\textbf{Materials provided to the tool:} De-identified excerpts only.
No direct identifiers or full interview transcripts were uploaded.

\textbf{Human oversight:} The first author reviewed all suggested codes.
Two human coders revised the codebook before formal analysis.

\textbf{Verification:} A 25\% sample of coded excerpts was checked by a
second coder. Disagreements were resolved through discussion.

\textbf{Role in final results:} The AI tool did not determine final themes.
The research team made all analytic decisions.

This does not try to document the whole AI system. It documents the research use.
If you write in LaTeX, you may also want our LaTeX Tutorial for AI Usage Cards and How to Use AI Usage Cards in Overleaf.
Common Mistake: Treating Vendor Documentation as Method Reporting
Many authors cite a provider page and stop there.
That shortcut weakens transparency. Vendor documentation can support your method section, but it cannot stand in for it. You still need to report your prompts or task framing when relevant, your review process, your verification steps, and any limits you placed on the tool.
This point becomes sharper in fields with stricter standards. In systematic reviews, for example, readers may need to know whether AI assisted with screening, extraction, or synthesis. In theses, committees may want to know what text the student wrote alone and what text AI helped shape. In journal submissions, editors may ask for disclosure language that matches house policy.
If those cases apply to you, these guides may help:
- AI Disclosure in Systematic Reviews and Meta-Analyses
- How to Disclose AI Usage in Your Thesis
- AI Transparency Requirements for Journal Submissions
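Journal-facing disclosure language often takes the form of a short declarations paragraph rather than a full appendix. A hedged LaTeX sketch (the heading names, date, and section numbers are placeholders; match your target journal's template):

\section*{Declarations}
\subsection*{Use of generative AI}
% Placeholder details; adapt to your actual tool, date, and sections.
ChatGPT (accessed March 2026) was used to improve the grammar and
clarity of Sections~2 and~4. The authors reviewed every suggested
change and take full responsibility for the final text.

A full AI Usage Card can then live in a supplement, with the declaration pointing readers to it.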
How They Fit into the Wider Documentation Ecosystem
System Cards and AI Usage Cards sit in the same family of transparency tools, but they solve different layers of the problem.
System Cards sit close to deployment and governance. They help people assess an AI product or service.
AI Usage Cards sit close to research practice. They help people assess how a scholar used AI inside a study, manuscript, or proposal.
That is why comparisons with other frameworks can help. If you want to sort out model-level, system-level, dataset-level, and use-level documentation, read AI Documentation Frameworks Compared. You can also compare AI Usage Cards with Model Cards and Datasheets for Datasets.
The Short Rule to Remember
System Cards say, "Here is the system we built."
AI Usage Cards say, "Here is how we used that system."
If you are a researcher, that second sentence is your responsibility. Do not assume that a vendor already covered it. Create a record that reviewers can read, trust, and cite.
Generate your own AI Usage Card now at ai-cards.org.