AI Ethics and Documentation in Academic Research
A practical guide to AI ethics in research, with clear documentation practices for disclosure, accountability, bias, privacy, and journal submission.
AI disclosure is research ethics in plain language
AI documentation does not ask researchers to confess wrongdoing. It asks them to describe their method.
That distinction matters.
Researchers already document instruments, software, data sources, funding, conflicts of interest, and author roles. AI use belongs in the same family. If a tool shapes a paper, a proposal, a code file, an analysis, a translation, or a review, readers need enough detail to judge the work.
A good disclosure does not say, "AI helped." That sentence tells readers almost nothing. A useful disclosure names the tool, the task, the stage of the project, and the checks that the researcher made afterward.
That is the ethical core of AI Usage Cards. They turn a vague disclosure into a short record that another researcher can read.
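To make the contrast concrete, here is a minimal before-and-after sketch. The tool name, date, and tasks are hypothetical placeholders, not a recommended template:

```latex
% Too vague to evaluate:
% "The authors used AI tools in the preparation of this manuscript."

% Specific enough to judge (placeholder details):
The authors used ChatGPT (OpenAI) in March 2025 to rephrase sentences in
the Methods section. The authors compared each rephrased sentence against
the original and accept responsibility for the final wording.
```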
The disclosure line runs through intellectual work
Every researcher uses tools. Nobody expects you to disclose that you typed a manuscript in Word or checked spelling before submission.
The ethical question starts when a tool changes intellectual content.
A grammar checker that catches a typo rarely matters. A large language model that drafts a literature review paragraph matters. A coding assistant that writes a data cleaning script matters. A chatbot that suggests rival explanations for an unexpected result matters.
Some publisher policies draw a similar line. ACM says authors do not need to disclose basic spelling or grammar tools, but they must disclose generative AI tools when those tools create content such as text, images, tables, code, data, or citations. ACM also tells authors to disclose when they feel unsure. (acm.org)
That gives researchers a simple working rule: disclose when AI shaped the scholarly claim, the evidence, the analysis, the writing, or the presentation of results.
If you want a more detailed decision path, use Do I Need to Disclose AI Usage in My Paper? before submission. For ChatGPT-specific language, see How to Disclose ChatGPT Usage in Academic Papers.
Authorship still belongs to humans
AI tools cannot take responsibility for a paper. They cannot approve the final manuscript. They cannot answer reviewers. They cannot correct the record after publication.
That is why many publication policies reject AI authorship. ICMJE says journals should require authors to disclose AI-assisted technologies at submission, and it says chatbots and similar tools should not appear as authors because they cannot take responsibility for accuracy, integrity, and originality. ICMJE also warns that nondisclosure may require correction and may count as misconduct in some cases. (icmje.org)
ACM makes the same point in its authorship policy: generative AI tools may not appear as authors of ACM publications, and authors must disclose generative AI use when the tools create content. (acm.org)
This leaves no real gap. You do not credit AI with a byline. You credit humans with authorship and describe the AI assistance.
That protects readers and authors at the same time. Readers learn how the work came together. Authors make clear that they checked the output, chose what to use, and accept responsibility for the final text.
For more on this point, read Can AI Be a Co-Author on a Research Paper?.
Bias can enter through ordinary tasks
Researchers often think about AI bias when they study AI systems. They think about it less when they use AI as a writing, coding, or analysis assistant.
That blind spot causes trouble.
A model can suggest sources that overrepresent English language journals. It can miss local literature, work from smaller institutions, or papers that use different terms for the same concept. It can propose survey questions that carry cultural assumptions. It can help code interview data in a way that pulls themes toward familiar categories.
NIST warns that AI systems can increase the speed and scale of harmful bias and can amplify harms to people and organizations. (nist.gov)
Documentation cannot remove that risk. It can show where the risk entered the project.
If you used an AI tool during a literature search, say so. If you used it to group qualitative themes, say how you checked the themes against the raw data. If you used it to generate survey items, say who reviewed the wording before fielding the survey.
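If you want manuscript-ready wording for these cases, a sketch might look like the following. The tool reference and review steps are placeholders to adapt to your own project:

```latex
\section*{AI usage statement}
We used a large language model (tool name, version) to suggest search terms
and candidate sources for the literature review; the authors screened every
source before citing it. The same tool proposed preliminary theme labels for
the interview data; two authors independently checked each theme against the
raw transcripts. AI-drafted survey items were reviewed by the full team and
piloted before fielding.
```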
Researchers working with interviews, field notes, focus groups, or ethnographic material should also read [AI Disclosure for Qualitative Research](/ai-disclosure-for-qualitative-research/). The ethical issues look different when AI touches human accounts rather than public text.
Privacy and consent need special care
Human subjects research raises a sharper problem. Many AI tools send prompts and uploaded files to external services. If a transcript contains names, locations, health details, workplace stories, or other private information, uploading it can create a privacy problem.
The U.S. Office for Human Research Protections has noted that AI and large datasets can create questions about identifiable private information, including cases where researchers generate identifiable information by combining data. (hhs.gov)
University IRB guidance now reflects this concern. Lehigh University tells researchers and IRBs to evaluate privacy, consent, and secondary use when researchers use generative AI in human subjects research. Its guidance asks researchers to explain de-identification, metadata removal, verification of AI outputs, repeatability of methods, and consent language for AI processing. (research.lehigh.edu)
If your project includes participant data, do not treat AI disclosure as a final manuscript task. Bring it into the IRB stage.
Ask three plain questions before using the tool:
- Does the tool receive identifiable or sensitive data?
- Did participants consent to that kind of processing?
- Can your institution approve the data security terms?
If you cannot answer those questions, pause before uploading anything.
Peer review has its own ethics
Peer review involves confidential work. Manuscripts, grant proposals, reviewer comments, and editorial decisions often contain unpublished ideas. Reviewers cannot hand those ideas to external tools just because the tool feels convenient.
Elsevier tells reviewers not to upload submitted manuscripts or review reports into generative AI tools because doing so may violate confidentiality, proprietary rights, and data privacy. It also says reviewers should not use generative AI to assist with scientific review because human reviewers must make the evaluation. (elsevier.com)
Funding agencies have issued similar rules. NIH prohibits scientific peer reviewers from using generative AI technologies to analyze or draft critiques for grant applications and contract proposals. It frames uploading application content to online tools as a breach of confidentiality and integrity. (grants.nih.gov)
NSF also tells reviewers that sharing proposal information with generative AI tools through the open internet violates confidentiality and integrity in its merit review process. (nsf.gov)
ERC guidance for grant evaluation draws a useful line. Reviewers may use AI to polish language only under conditions that protect data and do not delegate judgment. They may not upload proposals to online AI tools, ask AI to summarize proposals, or ask AI to evaluate scientific merit. (erc.europa.eu)
Editors and reviewers can use AI Disclosure in Peer Review to separate acceptable language support from questions of judgment, confidentiality, and delegation.
Grant proposals now need the same discipline
AI disclosure does not stop at journal articles.
The DFG asks applicants to disclose generative models used in proposal preparation when those models affect content. Its guidance says disclosure should name the models, purpose, and extent of use, such as summarizing the state of research, developing methods, analyzing data, or generating hypotheses. It also says grammar, style, spell check, and translation tools do not need disclosure when they do not affect proposal content. (dfg.de)
That pattern fits journal submissions too. You do not need a dramatic statement. You need a clear one.
If AI helped you draft the background section of a grant proposal, say that. If AI suggested a statistical approach, say how you evaluated that suggestion. If AI only checked spelling, many policies will not ask for disclosure, though you should still follow the funder’s instructions.
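Following the DFG pattern of naming the models, the purpose, and the extent of use, a proposal disclosure might read like this. The model reference and tasks are placeholders, and you should match your funder's required format:

```latex
\section*{Use of generative models}
The applicant used a generative model (model name, provider) to summarize
the state of research in Section 2 and to suggest a first outline of the
work packages. The applicant verified all literature against the original
sources and rewrote the final text. Spelling, grammar, and translation
tools were used for language polishing only and did not affect content.
```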
For proposal language, see AI Usage Reporting in Grant Proposals.
AI Usage Cards differ from model documentation
Researchers often mix up AI Usage Cards with other AI documentation formats.
A Model Card documents a model: its intended use, evaluation, limitations, and training context. A Datasheet for Datasets documents a dataset: how researchers collected, cleaned, stored, and shared it. A System Card describes an AI system and its behavior in use.
An AI Usage Card documents the researcher’s use of AI in a specific scholarly work.
That shift matters. You may use a model you did not build, a tool you cannot inspect, and a platform that changes over time. You can still document your own choices: prompts at a general level, task categories, data restrictions, output checks, and author responsibility.
For a side-by-side map of these formats, see AI Documentation Frameworks Compared.
What to record before you forget
The best time to document AI use is while you work. Memory gets fuzzy after submission, revision, and resubmission.
Record the tool name and version if you have it. Record the date or date range. Record the task. Record whether you entered unpublished data, participant data, code, figures, or draft text. Record how you checked the output.
You can keep this short. A lab notebook entry may be enough during the project. Before submission, turn that record into an AI Usage Card.
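One lightweight way to keep that record is a running log as comments in your manuscript source, converted into a card before submission. This is only a sketch, and every entry below is hypothetical:

```latex
% --- AI use log: convert into an AI Usage Card before submission ---
% 2025-03-02  ChatGPT (version unknown)
%   Task: rephrased two paragraphs in the Introduction.
%   Input: draft text only; no data or participant material.
%   Check: compared against original claims; rejected one added citation.
% 2025-03-11  Coding assistant (tool name)
%   Task: drafted skeleton of the data-cleaning script.
%   Input: column names only. Check: unit tests on a sample of the data.
```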
The AI Usage Cards generator helps you produce a structured card that you can attach as supplementary material, adapt into an acknowledgment, or keep for journal queries.
A short LaTeX disclosure can look like this:
```latex
\section*{AI usage statement}
The authors used ChatGPT-5 (OpenAI) in April 2026 to improve the clarity
of selected paragraphs in the Introduction and Discussion. The authors did
not use the tool to generate hypotheses, analyze data, select references, or
write code. All AI-assisted text was reviewed and edited by the authors, who
take responsibility for the final manuscript.
A full AI Usage Card is provided in the supplementary materials.
```

If your journal allows an appendix, you can point to the card like this:
```latex
\appendix
\section{AI Usage Card}
The AI Usage Card for this manuscript was generated at ai-cards.org and is
included to document tool use, task scope, human review, and data handling.
```

For more formatting help, use the LaTeX Tutorial for AI Usage Cards or the Overleaf guide.
Ethics becomes easier when the record already exists
AI ethics can sound abstract until a reviewer asks, "How exactly did you use the tool?"
At that point, vague memory will not help. A short record will.
Documenting AI use supports honesty, accountability, privacy, and reproducibility. It also protects you from awkward late-stage rewrites when a journal, funder, supervisor, or ethics board asks for more detail.
Researchers do not need to apologize for using AI. They need to tell readers what happened.
Generate an AI Usage Card at ai-cards.org before you submit your next paper, thesis, review, or grant proposal. Use it as a supplementary file, copy its text into your AI disclosure statement, or keep it in your project record so your ethics match your methods.