Can AI Be a Co-Author on a Research Paper?
Most journals say no. Here is what the major publishers, conferences, and style guides say about AI authorship, and what to do instead.
The Short Answer
No, most major journals and conferences do not allow AI to be listed as a co-author on a research paper. This is the settled position at Nature, Science, The Lancet, JAMA, IEEE, ACM, AAAI, NeurIPS, ICML, ACL, and virtually every other major venue. The reasoning is consistent across all of them. Authorship requires accountability, and an AI system cannot be held accountable.
Why the Consensus Is Against AI Authorship
The academic publishing system ties authorship to a set of responsibilities that only humans can fulfill. When you put your name on a paper, you are saying that you stand behind the work. You can respond to reviewer critiques. You can explain your reasoning at a conference. You can issue corrections if errors are found. And you can be held responsible if the work turns out to contain fabricated data or plagiarized content.
An AI system can do none of these things. ChatGPT cannot respond to a reviewer comment asking for clarification about the methodology. GPT-4 cannot issue a correction if a hallucinated reference is discovered after publication. Claude cannot attend an ethics hearing if questions arise about the research. The responsibilities that come with authorship are human responsibilities, and no AI system can meet them.
This is not a fringe position. The Committee on Publication Ethics (COPE), which provides guidance followed by thousands of journals, explicitly states that AI tools do not meet the criteria for authorship. The International Committee of Medical Journal Editors (ICMJE), whose recommendations govern authorship standards across biomedical publishing, reached the same conclusion.
The ICMJE Criteria and Why AI Fails Them
The ICMJE defines four criteria that must all be met for someone to qualify as an author.
1. Substantial contribution to the conception or design of the work, or to the acquisition, analysis, or interpretation of data.
2. Drafting the work or revising it critically for important intellectual content.
3. Final approval of the version to be published.
4. Agreement to be accountable for all aspects of the work, ensuring that questions about accuracy or integrity are appropriately investigated and resolved.
An AI tool might plausibly satisfy criteria 1 and 2. A language model can contribute to drafting text, and one could argue it makes a substantial contribution when it generates analysis code or suggests experimental approaches. But criteria 3 and 4 are impossible for an AI to meet. An AI cannot approve a publication, and it cannot agree to be accountable for the work. These are not technical limitations that might be overcome with better models. They are conceptual requirements that presuppose a moral and legal agent.
The Arguments for AI Authorship
Some researchers have argued that if an AI makes a significant intellectual contribution to a paper, failing to credit it as an author is misleading. The argument goes something like this. If a postdoc wrote half the paper, they would be an author. If an AI wrote half the paper, why should the treatment be different?
There is a version of this argument that deserves serious consideration. Transparency about AI's role in research is genuinely important, and simply hiding a major AI contribution in a footnote does not serve the reader well. If GPT-4 generated the entire first draft of your related work section, listing it nowhere feels dishonest.
Others point out that the concept of authorship has evolved before. The meaning of authorship in a physics paper with 3,000 co-authors at CERN is already quite different from authorship on a single-author humanities monograph. Perhaps authorship can evolve again to accommodate AI.
These are fair points. But the mainstream response from publishers and the research community has been that these concerns are better addressed through transparent disclosure than through the authorship system.
What Happened When People Tried
In early 2023, several papers appeared on preprint servers and in published journals listing ChatGPT as a co-author. The response was swift.
One widely discussed case involved a paper in a medical journal that listed "ChatGPT, OpenAI" as a co-author. The journal required the authors to submit a correction removing ChatGPT from the author list. Multiple other papers faced similar requests.
Nature, Science, and several other publishers issued explicit policy statements in direct response to these cases, clarifying that AI tools cannot appear in the author byline. Conference program committees at NeurIPS, ICML, and ACL also addressed the question head-on.
The backlash was not about the use of AI itself. It was about the mismatch between what the author byline signifies and what an AI system can actually deliver. The researchers involved were generally not criticized for using ChatGPT. They were criticized for using the author list as the mechanism to disclose that usage, because the author list carries implications about accountability that no AI can fulfill.
What the Style Guides Say
APA 7th Edition guidance. The American Psychological Association published guidance stating that ChatGPT and other AI tools should not be listed as authors. Instead, AI usage should be described in the Method section, and if AI-generated text is quoted directly, it should be cited in a specific format that identifies the AI tool without placing it in the author position. The recommended approach treats AI output as a type of source material, not as an author contribution.
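To make the format concrete, APA's published guidance treats the company as the "author" of record and the model as the titled work. A sketch of the reference entry and matching in-text citation, with the version date shown here purely as an illustration:

OpenAI. (2023). ChatGPT (Mar 14 version) [Large language model]. https://chat.openai.com/chat

In-text citation: (OpenAI, 2023)

Check the current APA Style guidance for the exact version label and URL your tool requires, since these details change as models are updated.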
ICMJE recommendations. As described above, the ICMJE authorship criteria explicitly require accountability and final approval, which AI cannot provide. The ICMJE position is that AI-assisted content should be disclosed in the manuscript but that the AI tool should not be listed as an author.
CRediT taxonomy. The Contributor Roles Taxonomy, widely used by publishers to describe each author's contribution, does not include a role for AI tools. Some publishers have begun adding separate fields for AI contributions outside the CRediT framework, which signals that the community sees AI assistance as something to document alongside authorship, not within it.
The Better Alternative
The question "should AI be a co-author?" often arises because researchers want to be honest about AI's role in their work and are not sure how else to do it. The author list is the most visible place to credit a contribution, so it feels natural to put AI there.
But there is a better way. Documenting AI contributions through a structured disclosure gives readers more useful information than a co-author listing ever could. When you see "ChatGPT" in an author list, you learn almost nothing. Which version was used? For which parts of the paper? How was the output verified? A name in the byline answers none of these questions.
An AI Usage Card answers all of them. It specifies the exact tools and versions used, describes which tasks they performed, explains how the authors verified the outputs, and addresses ethical considerations. This is far more informative than a co-author listing, and it avoids the accountability problems entirely.
You can generate an AI Usage Card for free at ai-cards.org. It takes about ten minutes and produces a professional document that satisfies every major publisher's disclosure requirements.
Practical Recommendations
If you have used AI significantly in your research, here is what to do.
Do not list AI as a co-author. Doing so can get your paper rejected or sent back for correction. Even if a journal does not explicitly prohibit it yet, the consensus is clear and growing stronger.
Do disclose AI usage thoroughly. Write a clear disclosure statement for your manuscript and consider attaching a full AI Usage Card as supplementary material. See our guide on how to disclose ChatGPT usage for ready-to-use templates.
Be specific about the AI's contribution. "AI assisted with this paper" tells reviewers nothing useful. Describe which tools you used, for which tasks, and how you verified the results. Specificity builds trust.
Credit the AI fairly without misrepresenting its role. The goal is to be transparent about what the AI contributed without implying it can be held accountable for that contribution. A detailed disclosure achieves exactly this.
The Ongoing Conversation
The question of AI authorship is not fully settled in a philosophical sense. As AI systems become more capable, the arguments for crediting them will become harder to dismiss. It is possible that the academic community will develop new categories between "author" and "tool" that better capture the role of AI in research.
For now, though, the practical answer is clear. Do not list AI as a co-author. Document its contribution properly instead. This protects your paper from policy conflicts, gives your readers the information they actually need, and positions you well regardless of how the authorship conversation evolves.
For more on the broader question of when disclosure is required, see our page on whether you need to disclose AI usage.
Generate Your AI Usage Report
Create a standardized AI Usage Card for your research paper in minutes. Free and open source.