AI Disclosure for Qualitative Research
A practical guide to documenting AI use in interviews, coding, thematic analysis, and writing for qualitative studies.
AI tools can quietly shape qualitative research
Qualitative research depends on judgment.
Researchers choose prompts, frame interview guides, code text, group themes, and write interpretations. If you use AI in any of those steps, the tool can influence your study in ways that readers should understand.
That does not mean you should avoid AI.
It means you should document it with care.
A clear disclosure helps readers see where AI assisted your work, where human judgment stayed central, and where risks may have entered the process. If you want a quick way to do that, you can create an AI Usage Card and attach it to your project materials or manuscript.
Qualitative research needs more than a one-line AI statement
A short sentence like “We used ChatGPT for analysis support” tells readers almost nothing.
It does not say whether the tool drafted interview questions, suggested codes, summarized transcripts, translated quotes, or rewrote findings. It also does not say whether you uploaded sensitive data, checked outputs, or rejected suggestions that did not fit the data.
Qualitative work often rests on traceable interpretation.
Readers need to know how you moved from raw material to claims. If AI entered that chain, your disclosure should show where, how, and under what limits.
This need becomes even stronger when your data include personal narratives, protected identities, or culturally specific language.
The key question is simple
Ask one question at every stage of the project.
Did AI affect the production, processing, interpretation, or presentation of the qualitative material?
If the answer is yes, disclose it.
That applies even when the tool did not make final decisions. AI can still shape your process by suggesting categories, compressing long transcripts into summaries, or changing tone in participant quotations that you later edited.
A good disclosure does not confess wrongdoing.
It records method.
Where AI often appears in qualitative studies
AI use in qualitative research tends to cluster in a few stages.
The first is study design. Researchers may ask a chatbot to draft interview questions, refine focus group prompts, or propose probes for sensitive topics.
The second is data handling. Some use AI for transcription cleanup, translation, de-identification, or summarization of interview material.
The third is coding and analysis. Researchers may ask AI to suggest initial codes, cluster excerpts, compare themes, or produce candidate memos.
The fourth is writing. AI may help rewrite method sections, shorten abstracts, improve grammar, or format references.
Each use raises a different disclosure need.
Using AI to polish prose does not carry the same risk as using it to suggest themes from interviews. Your documentation should reflect that difference.
What readers need to know
Your disclosure should answer a small set of practical questions.
Name the tool. State the version if known. Record when you used it. Describe the task. State whether you entered study data into the tool. Explain what human review you performed. Note any limits you imposed.
That level of detail helps readers judge the role of AI without guessing.
It also helps you later, when a journal asks for clarification or a coauthor wants to confirm what happened.
For broader context on why this matters, see Why AI Transparency Matters in Research and What Are AI Usage Cards?
Interviews and focus groups need special care
Interview and focus group data often contain sensitive information.
That creates two disclosure issues at once. One concerns research integrity. The other concerns privacy and data governance.
If you used AI to draft or refine interview guides, say so. Readers should know whether a model influenced the wording, order, or framing of questions.
If you used AI on transcripts or notes, be more specific. State whether you uploaded full transcripts, excerpts, or de-identified text. State whether the tool ran locally, through an institutional service, or through a public web interface.
Do not hide this behind vague language.
A simple sentence such as “We used an AI assistant during analysis” leaves out the very facts that matter most.
AI-assisted coding needs human boundaries
Coding sits near the center of many qualitative methods.
If AI suggested codes or grouped excerpts, readers need to know whether you treated those outputs as raw ideas, provisional labels, or structured input to your coding scheme.
You should also state who made final coding decisions.
In most defensible workflows, the human research team defines the codebook, reviews suggestions, rejects weak labels, and resolves disagreements. AI may speed up exploration, but it should not replace interpretive accountability.
This distinction matters because coding is not only a sorting task.
It involves theory, context, reflexivity, and often deep familiarity with the field site or participants. A language model does not occupy that position, even when its labels sound plausible.
Translation and quote editing can change meaning
Many qualitative projects involve multilingual data.
Researchers may use AI to translate interviews, back-translate quotes, or improve readability in English. These uses can help, but they can also flatten tone, remove ambiguity, or shift culturally specific meanings.
If AI touched translated material, disclose that clearly.
State whether AI produced a first translation, whether a human speaker reviewed it, and whether quoted passages in the paper came directly from participant speech, from human translation, or from AI-assisted translation that the team then checked.
The same rule applies to quote editing.
If you used AI to clean grammar or shorten participant quotations for readability, readers should know. Even small edits can affect voice and emphasis.
Reflexivity still belongs to the researcher
Qualitative research often asks researchers to reflect on their own role in producing knowledge.
AI does not remove that duty. It adds to it.
If you used AI during analysis, include a short reflexive note in your methods or appendix. Explain why you used the tool, what you did not trust it to do, and how you prevented it from overriding contextual interpretation.
That kind of statement shows discipline.
It tells readers that you did not treat AI output as neutral or objective. You treated it as one input that required scrutiny.
A useful disclosure structure
You do not need a long appendix to disclose AI well.
You need a structure that readers can scan.
One strong format includes five parts: tool, task, data exposure, human oversight, and impact on reported findings.
Here is a plain example.
“We used Claude 3.5 Sonnet in May 2026 to suggest preliminary labels for 120 de-identified interview excerpts after the research team completed first-cycle manual coding. We did not upload full transcripts or direct identifiers. Two authors reviewed all AI-suggested labels, rejected those that lacked contextual fit, and did not allow AI outputs to determine final themes. The tool did not generate any text included in participant quotations or findings.”
That is much more useful than a generic sentence.
You can also include the disclosure in LaTeX
If you write in LaTeX, add a short section or appendix note.
\section*{AI Use Disclosure}
We used Gemini Advanced in February 2026 to refine an interview guide and to
suggest possible code labels for de-identified excerpts from 18 interviews.
We did not upload direct identifiers or full raw transcripts. All AI outputs
served only as provisional suggestions. The authors performed all final coding,
theme development, and interpretation. AI-generated text does not appear in
participant quotations or analytic claims.
If you want help formatting this in a paper, see the LaTeX tutorial for AI Usage Cards and the AI Usage Cards examples.
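If you prefer a field-by-field layout, the same information fits a short description list. Here is a minimal sketch that reuses the statement above; the field names follow the five-part structure from the previous section, and the description environment is just one way to set it.
\section*{AI Use Disclosure}
\begin{description}
  \item[Tool] Gemini Advanced, used in February 2026.
  \item[Task] Refining an interview guide; suggesting provisional code labels
    for de-identified excerpts from 18 interviews.
  \item[Data exposure] No direct identifiers or full raw transcripts uploaded.
  \item[Human oversight] The authors performed all final coding, theme
    development, and interpretation; AI outputs served only as suggestions.
  \item[Impact on findings] No AI-generated text appears in participant
    quotations or analytic claims.
\end{description}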
Common mistakes to avoid
The first mistake is underreporting.
Researchers often disclose AI writing help but omit AI use in coding, summarization, or translation. Those hidden steps may matter more than copyediting.
The second mistake is overstating certainty.
Do not write that AI “validated themes” or “confirmed findings.” A language model cannot validate qualitative interpretation.
The third mistake is collapsing all tools into one label.
“AI” is too broad. Name the system you used. Different tools handle data, memory, and outputs in different ways.
The fourth mistake is ignoring data handling.
If sensitive material touched an AI service, say what you shared and what protections you applied.
When should the disclosure appear?
Put the disclosure where readers will find it.
For many papers, the methods section works best. If the journal has strict length limits, place a short summary in the methods and a fuller statement in supplementary material, a repository, or an AI Usage Card.
If a journal asks for a separate AI statement, follow that rule. You can still keep a fuller record for your own files and for future reuse.
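If you split the record this way, a one-sentence pointer in the methods section can direct readers to the fuller statement. A minimal sketch, with an illustrative appendix label:
% In the methods section: a short pointer to the full record
We used AI tools for limited analytic support, as detailed in
Appendix~\ref{app:ai-disclosure} and in the project's AI Usage Card.

\appendix
\section{AI Use Disclosure}\label{app:ai-disclosure}
% Full statement goes here: tool, task, data exposure, human oversight,
% and impact on reported findings.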
For journal-specific guidance, check AI Disclosure Policies by Major Journals and AI Transparency Requirements for Journal Submissions.
The goal is not perfection
The goal is traceability.
Readers should be able to understand where AI entered your qualitative workflow and where researchers kept interpretive control. That makes your paper easier to trust, easier to review, and easier to build on.
A vague statement hides the method.
A clear record shows the work.
If you used AI in interviews, coding, translation, memoing, or writing, document it now. Generate a free AI Usage Card at ai-cards.org and give your readers a disclosure they can actually use.