Free service by the University of Göttingen


AI Ethics and Documentation in Academic Research

How proper AI documentation practices connect to research ethics, from bias awareness to intellectual honesty and institutional review requirements.

Documentation as an Ethical Act

When researchers document their methodology, declare their funding sources, and disclose conflicts of interest, they are practicing research ethics. Documenting AI usage belongs in the same category. It is not a new bureaucratic requirement. It is an extension of the same principles of honesty and openness that have always defined good scholarship.

The use of AI in research is growing so fast that ethical norms are still catching up. But a clear consensus is emerging across major research institutions, scientific publishers, and funding agencies. Documenting how you used AI in your work is the responsible thing to do. This page explains why, and how AI Usage Cards fit into the picture.

Intellectual Honesty and the Disclosure Line

Every researcher uses tools. Word processors, statistical software, reference managers, spell checkers. Nobody expects you to disclose that you used Microsoft Word to type your paper. So where does the line fall? When does a tool become significant enough that honesty requires you to mention it?

The answer depends on how much the tool shaped the intellectual content of your work. A spell checker fixes typos. It does not change your ideas. A grammar tool like Grammarly might restructure a sentence, but the argument remains yours. These sit on one side of the line.

On the other side, consider an LLM that generates a paragraph of your discussion section, suggests a research hypothesis you had not considered, or produces code that implements your analysis pipeline. These tools are contributing to the intellectual substance of your work. When that happens, your readers have a right to know.

The tricky part is that AI tools can slide from one side of this line to the other within a single research project. You might use ChatGPT for something as simple as fixing grammar in one session, then ask it to help you interpret a statistical result in the next. Good documentation captures this range honestly. AI Usage Cards are designed with exactly this challenge in mind, providing separate fields for writing assistance, methodological contributions, and experimental involvement.
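To make the idea of separate fields concrete, here is a minimal sketch of how one project's AI usage might be recorded, following the categories named above. The field names and entries are illustrative assumptions only, not the official AI Usage Card schema.

```python
# Hypothetical sketch of recording AI usage by category.
# Field names are illustrative, not the official AI Usage Card schema.
ai_usage = {
    "writing_assistance": [
        {"tool": "ChatGPT", "task": "grammar and phrasing fixes"},
    ],
    "methodology": [
        {"tool": "ChatGPT",
         "task": "helped interpret a statistical result; "
                 "interpretation verified by the authors"},
    ],
    "experiments": [],  # no AI involvement in data collection or analysis
}

# Separate categories make it visible where a tool crossed from
# surface-level editing into intellectual contribution.
for category, entries in ai_usage.items():
    print(category, "->", [entry["task"] for entry in entries])
```

Even this rough structure captures the range within a single project: the same tool appears once as a grammar aid and once as a methodological contributor, and each use is disclosed at the appropriate level.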

Bias Propagation Is a Real Concern

AI models carry biases inherited from their training data. This is well documented and widely discussed, but many researchers underestimate how those biases can quietly influence their own work.

Consider a researcher using an LLM to help with a literature review. The model might disproportionately surface papers from well-known journals, English-language sources, and frequently cited authors. Work from smaller institutions, non-English publications, and researchers from underrepresented communities may be systematically overlooked. The researcher ends up with a literature review that looks thorough but has blind spots they never noticed, because the AI's suggestions felt authoritative.

The same problem arises with AI-assisted data analysis. If you use an AI tool to identify themes in qualitative data, the model's training biases will influence which themes it highlights and which it overlooks. If you use AI for survey design, the model's cultural assumptions may shape the questions in ways that disadvantage certain respondent groups.

Documentation does not eliminate these biases. But it makes them visible. When you record that an LLM assisted with your literature search, readers can factor that into their evaluation. They might check whether certain perspectives are missing. They might apply additional scrutiny to the sources you cited. This is exactly the kind of informed reading that good science depends on.

Recording which AI tools you used and for which tasks gives your readers the information they need to assess where bias might have entered your pipeline. The ethics section of an AI Usage Card specifically prompts you to reflect on these risks.

The Question of Contribution and Credit

One of the more philosophically interesting questions in research ethics right now involves the concept of credit. If an AI system generated a key insight that shaped your paper, who deserves the credit?

Major publishers have taken a clear position. AI cannot be an author. Nature, Science, IEEE, and ACM all state that authorship requires the ability to take responsibility for the work, and AI systems cannot do that. Human authors must be accountable for everything in the paper, including any AI-generated content.

But this creates an attribution gap. If GPT-4 suggested the framing of your argument and you thought it was brilliant and used it, your readers might assume that framing came from your own expertise. Without disclosure, you are receiving credit for intellectual work that was not entirely yours. This is not fundamentally different from presenting a colleague's idea as your own without acknowledgment.

Documentation solves this cleanly. You do not need to give the AI a byline. You just need to note, clearly and specifically, where AI contributed to the intellectual content. This lets readers properly attribute the human contributions (your judgment in selecting, validating, and building on the AI's suggestion) while being honest about where the starting point came from.

For a deeper discussion of AI and authorship, see our page on whether AI can be a coauthor.

Institutional Review Boards Are Paying Attention

IRBs and research ethics committees have traditionally focused on human subjects protections, data privacy, and informed consent. AI is now entering their scope of concern, particularly in research that involves human participants.

If you use AI to analyze interview transcripts, those transcripts may contain identifiable information. Uploading them to a cloud-based AI service raises questions about data security and participant confidentiality. Did participants consent to having their data processed by a third-party AI system? Does the AI provider's data retention policy comply with your IRB approval?

These are not hypothetical concerns. Several universities have reported cases where researchers uploaded sensitive data to commercial AI services without considering the privacy implications. Ethics committees are responding by adding AI-specific questions to their review forms.

Even in research that does not involve human subjects directly, IRBs at some institutions are beginning to ask about AI usage in the research process. If AI tools influenced your experimental design, data collection, or analysis, ethics reviewers want to know. Having a completed AI Usage Card ready to submit alongside your ethics application demonstrates that you have thought carefully about these issues.

AI Documentation and Research Integrity Frameworks

Research integrity has always rested on a few core principles. Honesty, accountability, transparency, and stewardship. AI documentation connects to all four.

Honesty means accurately representing how your work was done. If AI tools played a role, saying so is honest. Omitting it is not.

Accountability means taking responsibility for every part of your published work. When you document your AI usage, you are showing that you know what the AI contributed and that you have verified it. You are accepting responsibility for the final product.

Transparency means making your methods open to scrutiny. Other researchers should be able to evaluate your choices, including your choice to use AI and the way you used it.

Stewardship means caring for the integrity of the research record. When you add clear AI documentation to your paper, you are contributing to a scientific record that future researchers can trust and build on.

These are not new principles. They are the same values that motivate data sharing, pre-registration, and open peer review. AI documentation is simply the newest expression of these long-standing commitments.

From Individual Practice to Institutional Norm

The movement toward AI documentation is not happening in a vacuum. It reflects a broader shift in how research institutions think about responsible AI use.

The German Research Foundation (DFG) has issued guidelines on AI usage in funded research. The European Research Council expects transparency about AI tools in grant reporting. Universities across the UK, Australia, and North America have published policies ranging from simple disclosure requirements to detailed frameworks for evaluating AI contributions.

These institutional policies share a common thread. They do not aim to ban AI from research. They aim to ensure that AI usage is visible, documented, and subject to the same standards of integrity that apply to every other aspect of the research process.

Researchers who adopt documentation practices early are well positioned to comply with whatever specific policies their institutions and target journals adopt. The format matters less than the habit. But having a standardized format like AI Usage Cards makes the habit easier to build and maintain.

Making Ethics Practical

Research ethics can sometimes feel abstract: a set of principles discussed in training modules and then forgotten in the daily rush of lab work and paper deadlines. AI documentation has the advantage of being concrete. You used specific tools for specific tasks. You can write that down. It takes ten minutes, and it aligns your practice with your principles.

The AI Usage Cards generator at ai-cards.org was designed to make this process as straightforward as possible. It asks you the right questions, structures your answers, and produces a document that satisfies the requirements of journals, conferences, and institutions. It is a small investment that pays off in credibility, compliance, and the knowledge that you are contributing to a research culture built on openness.

For more on why AI transparency matters in research, including the regulatory and policy dimensions, see our dedicated page on the topic.

Generate Your AI Usage Report

Create a standardized AI Usage Card for your research paper in minutes. Free and open source.
