AI Usage Cards Examples and Templates
Real-world examples of AI Usage Cards for different research scenarios, from thesis writing to NLP research to data science projects.
Ready-to-Adapt Examples
The best way to learn how to fill out an AI Usage Card is to see what a completed one looks like. Below are four examples drawn from realistic research scenarios. Each one shows the complete card content in LaTeX, followed by a brief explanation of the choices made. You can copy any of these into your own paper and adapt the details.
All examples use the ai-usage-card LaTeX package. If you need help setting it up, see our LaTeX tutorial or Overleaf guide.
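To compile any of the snippets below on their own, a minimal document skeleton looks something like this. This is a sketch, assuming the package is installed and loads without required options; the document class and placement are placeholders to adapt, so check the LaTeX tutorial for the actual setup details.

```latex
% Minimal skeleton -- assumes the ai-usage-card package is installed.
% Class and card placement are illustrative, not prescribed.
\documentclass{article}
\usepackage{ai-usage-card}

\begin{document}
% ... your paper ...
% Place the aicard environment where your venue expects disclosures,
% e.g., after the acknowledgments or in an appendix.
\end{document}
```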
Example 1. Thesis with ChatGPT for Writing Assistance
This example represents a common scenario. A PhD student who is a non-native English speaker used ChatGPT to polish the language in their thesis. The AI involvement is minimal and limited entirely to surface-level writing improvements.
\begin{aicard}{Morphological Variation in Alpine Plant Communities
Under Climate Stress}
\aicardauthors{Yuki Tanaka, Department of Ecology,
University of Freiburg}
\aicardtools{
\aitool{ChatGPT}{GPT-4o, OpenAI}{Web interface}{January--March 2026}
\aitool{DeepL Write}{Pro, DeepL SE}{Web interface}{February 2026}
}
\aicardmethodology{
No AI tools were used in the research design, data
collection, field sampling, statistical analysis, or
interpretation of results. All ecological modeling was
performed by the author using R (v4.3.2) with scripts
written entirely without AI assistance.
}
\aicardwriting{
ChatGPT (GPT-4o) was used to improve the English
phrasing and readability of Chapters 1, 3, and 5, which
were originally drafted in a mix of Japanese and English.
The author submitted paragraphs individually and reviewed
each suggestion before accepting or rejecting it.
Approximately 60\% of suggested changes were accepted,
primarily corrections to article usage, preposition
choice, and sentence structure. No new content or
scientific claims were generated by the tool.
DeepL Write was used for a final grammar check of the
complete manuscript. Minor corrections to punctuation
and word order were accepted where appropriate.
}
\aicardethics{
Only draft text was submitted to AI services. No
research data, participant information, or unpublished
results were shared with any AI tool. The author
recognizes that AI-assisted language editing may
introduce subtle shifts in tone or emphasis, and
reviewed all changes with this in mind. The thesis
supervisor reviewed the final manuscript independently.
}
\end{aicard}
Why this example works. The methodology section is explicit that AI played no role in the science. The writing section quantifies the level of acceptance (60%) and identifies exactly which chapters were affected. The ethics section addresses the fact that text was sent to cloud services. This level of specificity lets an examiner quickly understand the scope of AI involvement and move on.
Example 2. Data Science Paper with GitHub Copilot
This example covers a research team that relied heavily on GitHub Copilot for writing analysis code. The AI was a genuine part of the technical workflow, so the methodology section needs more detail.
\begin{aicard}{Predicting Hospital Readmission Rates from
Electronic Health Records}
\aicardauthors{Sarah Al-Rashidi, King's College London;
James Whitfield, University of Edinburgh}
\aicardtools{
\aitool{GitHub Copilot}{v1.142, OpenAI Codex}{VS Code
extension}{August 2025--January 2026}
\aitool{ChatGPT}{GPT-4o, OpenAI}{Web interface and API}{October
2025--January 2026}
\aitool{Grammarly}{Premium, Grammarly Inc.}{Browser
extension}{January 2026}
}
\aicardmethodology{
GitHub Copilot was used extensively during development
of the data processing and modeling pipeline. Specific
tasks included generating boilerplate code for loading
and cleaning the MIMIC-IV dataset, writing feature
engineering functions (Section 3.2), and implementing
cross-validation loops (Section 4.1). The first author
wrote all Copilot prompts and reviewed every suggestion
before acceptance. All generated code was validated
through unit tests (achieving 94\% line coverage) and
manual review by both authors.
ChatGPT (GPT-4o) was used via the API to generate
docstrings and inline comments for the analysis
codebase. The first author also used ChatGPT through
the web interface to debug three specific errors
encountered during data preprocessing. In each case,
the error message and relevant code snippet (with all
patient data removed) were submitted, and the suggested
fix was tested before implementation.
The choice of model architecture (gradient-boosted
trees), the feature selection strategy, the evaluation
metrics, and the interpretation of results were
determined entirely by the authors.
}
\aicardwriting{
Grammarly was used for grammar and style checking on
the final manuscript. ChatGPT (GPT-4o) was used to
suggest a clearer structure for Section 2 (Related
Work). The authors provided an outline and asked for
organizational suggestions. The final structure
reflects the authors' decisions, though two of
ChatGPT's suggestions for grouping related papers
were adopted. All text in the paper was written by
the authors.
}
\aicardethics{
The MIMIC-IV dataset is de-identified and publicly
available under a data use agreement. No patient data
was submitted to any AI service at any point. Code
snippets shared with ChatGPT for debugging contained
only synthetic example data. GitHub Copilot's telemetry
was configured to prevent code snippet transmission.
The authors acknowledge that Copilot suggestions may
reflect patterns from its training data that could
introduce non-obvious biases in data handling approaches,
and they reviewed all generated code with this concern
in mind.
}
\end{aicard}
Why this example works. The methodology section is detailed because AI played a real role in the technical work. It distinguishes between different types of code generation (pipeline code vs. docstrings vs. debugging) and specifies the validation approach (unit tests with 94% coverage). The ethics section directly addresses the sensitive nature of health data and explains what safeguards were in place. A reviewer reading this card can quickly assess whether the AI involvement threatens the validity of the results.
Example 3. NLP Paper with GPT-4 for Data Annotation
This is the most AI-intensive example. A research team used GPT-4 as part of their experimental methodology, specifically for annotating a large dataset. When AI is part of the method, the card needs to document it with the same rigor you would apply to any other methodological choice.
\begin{aicard}{Detecting Misinformation Narratives in
Multilingual Social Media Posts}
\aicardauthors{Priya Sharma, IIT Delhi;
Lucas Weber, University of Amsterdam;
Fatima Zahra Benali, Mohammed V University, Rabat}
\aicardtools{
\aitool{GPT-4}{gpt-4-0125-preview, OpenAI}{API}{September
2025--December 2025}
\aitool{Claude}{Claude 3.5 Sonnet, Anthropic}{API}{October
2025--November 2025}
\aitool{ChatGPT}{GPT-4o, OpenAI}{Web interface}{January 2026}
}
\aicardmethodology{
GPT-4 (gpt-4-0125-preview) was the primary annotation
tool for this study. The model was used via the OpenAI
API to classify 15,000 social media posts across English,
Hindi, French, and Arabic into six misinformation
narrative categories defined by the authors (Section
3.1). The classification prompt was developed iteratively
over three weeks using a development set of 500 manually
annotated posts. The final prompt achieved 89\% agreement
with human annotations on a held-out validation set of
1,000 posts (Cohen's kappa = 0.84).
Claude 3.5 Sonnet was used as a secondary annotator on
a subset of 3,000 posts to measure inter-model agreement
(Section 4.2). Agreement between GPT-4 and Claude was
83\% (kappa = 0.78), which is reported alongside
human-model agreement figures in Table 3.
Three human annotators independently labeled 2,000 posts
to establish a gold standard. All disagreements between
human annotators and AI annotations were resolved through
discussion among the research team. The final dataset
uses human labels where available and AI labels (with
confidence filtering at threshold 0.85) for the
remaining posts.
The annotation prompt, confidence calibration procedure,
and disagreement resolution protocol are documented in
Appendix A. All analysis code was written by the authors
without AI assistance.
}
\aicardwriting{
ChatGPT (GPT-4o) was used by the third author to
translate an early draft of Section 5 (Discussion) from
French to English. The translation was reviewed and
substantially revised by all three authors. ChatGPT was
also used to suggest improvements to figure captions.
All other text was written by the authors without AI
assistance.
}
\aicardethics{
Social media posts were collected from public accounts
only and were anonymized by removing usernames and
profile information before annotation. Posts were
submitted to the OpenAI and Anthropic APIs for
classification. Both providers' data retention policies
were reviewed, and the API configurations were set to
disable training data retention where possible.
The authors recognize several limitations of AI-based
annotation. GPT-4's performance varied across languages,
with lower accuracy on Arabic posts (82\%) compared to
English posts (93\%). This disparity likely reflects
imbalances in the model's training data. The confidence
filtering threshold was chosen specifically to mitigate
this issue, and per-language accuracy figures are
reported in Table 4. The authors also note that both
GPT-4 and Claude may carry biases in how they interpret
political and cultural content across different regions,
and the human review process was designed to catch
systematic misclassifications.
}
\end{aicard}
Why this example works. When AI is part of the experimental method, the card becomes a mini-methods supplement. This example reports agreement statistics, confidence thresholds, and per-language performance variations. It treats the AI annotation step with the same methodological transparency that a reviewer would expect for any annotation procedure. The ethics section addresses both data privacy (sending posts to APIs) and the scientific limitation of cross-language performance gaps.
Example 4. Literature Review Using AI Search Tools
This example covers a researcher who used AI-powered search and synthesis tools during a systematic literature review. These tools are becoming common in review papers, and the AI involvement is qualitatively different from coding or annotation tasks.
\begin{aicard}{A Systematic Review of Explainable AI Methods
in Clinical Decision Support Systems}
\aicardauthors{Henrik Lindqvist, Karolinska Institute;
Amina Osei, University of Cape Town}
\aicardtools{
\aitool{Elicit}{Elicit.com, Ought Inc.}{Web interface}{July--September
2025}
\aitool{Semantic Scholar}{AI features, Allen Institute for
AI}{Web interface}{July--October 2025}
\aitool{ChatGPT}{GPT-4o, OpenAI}{Web interface}{August--November 2025}
\aitool{Grammarly}{Premium, Grammarly Inc.}{Browser
extension}{November 2025}
}
\aicardmethodology{
Elicit was used during the search phase of this
systematic review. The authors submitted their research
questions to Elicit, which returned ranked lists of
relevant papers with AI-generated summaries. These
results supplemented (but did not replace) traditional
database searches in PubMed, IEEE Xplore, and the ACM
Digital Library. The traditional searches followed the
PRISMA protocol described in Section 2.2.
Semantic Scholar's AI-powered related paper
recommendations were used for snowball sampling. When
reviewing included papers, the authors used Semantic
Scholar's "Related Papers" feature to identify
additional candidates that the database searches may
have missed. Seven papers were added to the review
through this method (noted in the PRISMA flow diagram,
Figure 2).
ChatGPT (GPT-4o) was used to help summarize the key
findings of 12 papers that were written in languages
other than English (German, Mandarin, and Korean). The
authors provided the abstracts and relevant sections,
and ChatGPT produced English summaries. These summaries
were verified against the original text by native
speakers recruited from the authors' departments.
The inclusion and exclusion criteria, quality
assessment, data extraction, and narrative synthesis
were performed entirely by the authors using standard
systematic review methodology.
}
\aicardwriting{
Grammarly was used for grammar and style checking on
the final manuscript. No other AI tools were used in
the writing or preparation of the paper.
}
\aicardethics{
All papers reviewed in this study are published and
publicly accessible. No unpublished data or
confidential information was submitted to any AI
service. The authors note that Elicit's paper rankings
may reflect biases in its training data, potentially
favoring highly cited English-language publications.
To mitigate this, Elicit results were treated as one
input alongside traditional database searches, not as
a replacement. The PRISMA protocol ensures that the
final set of included papers is determined by explicit
criteria rather than by AI ranking alone.
The authors also acknowledge that ChatGPT's summaries
of non-English papers may contain inaccuracies or
subtle misinterpretations. All summaries were verified
by native speakers, and the original papers (not the
AI summaries) were used as the basis for data
extraction and quality assessment.
}
\end{aicard}
Why this example works. Literature reviews present a unique disclosure challenge because AI search tools shape which papers you find, which in turn shapes your conclusions. This card is transparent about how AI recommendations fit into the broader PRISMA search protocol. It quantifies the contribution (seven papers added through Semantic Scholar) and explains the verification steps for the translated summaries. The ethics section addresses the important concern that AI ranking algorithms might introduce selection bias into a systematic review.
Patterns Across These Examples
Looking at all four examples together, a few patterns stand out.
Be specific about what AI did and did not do. Every example explicitly states which parts of the research were done without AI assistance. This is just as important as documenting what the AI did.
Quantify where you can. Agreement percentages, acceptance rates, and counts of affected sections give reviewers concrete information to work with. "Most suggestions were accepted" is less useful than "approximately 60% of suggestions were accepted."
Match the detail level to the AI involvement. Example 1 (thesis editing) has a brief methodology section because AI played no methodological role. Example 3 (dataset annotation) has an extensive methodology section because AI was a core part of the experimental pipeline. Let the significance of the AI involvement determine how much you write.
Address ethics even when the risks seem low. Every example includes a thoughtful ethics section, even when the AI usage was limited to grammar checking. This demonstrates that you considered the implications of your choices.
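The structure shared by all four examples reduces to a reusable skeleton. The command names below match those used in the examples above; the placeholder text is illustrative, so verify the argument order against the ai-usage-card package documentation before adapting it.

```latex
% Skeleton card -- replace each placeholder with your own details.
% Command usage follows the four examples above.
\begin{aicard}{Your Paper or Thesis Title}
\aicardauthors{Name, Affiliation; Name, Affiliation}
\aicardtools{
  % One \aitool per tool: name, version/provider, interface, dates
  \aitool{Tool Name}{Version, Provider}{Interface}{Month--Month Year}
}
\aicardmethodology{
  State what the AI did in the research itself -- and, just as
  importantly, which parts were done without AI assistance.
}
\aicardwriting{
  Describe writing help: editing, translation, restructuring.
  Quantify acceptance rates and name affected sections where possible.
}
\aicardethics{
  Note what data was (and was not) shared with AI services, and
  any biases or limitations you considered and how you addressed them.
}
\end{aicard}
```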
Generate Your Own
You can create an AI Usage Card for your project in about ten minutes. Use the online generator at ai-cards.org for a guided experience, or set up the LaTeX package following our LaTeX tutorial. If you work in Overleaf, the Overleaf guide will get you started even faster.
For background on the framework and why it was created, see What Are AI Usage Cards?.