AI Usage Reporting in Grant Proposals
How to proactively document AI usage in research grant proposals for NSF, ERC, DFG, and other funding bodies.
Why Funders Are Starting to Care About AI
Funding agencies have always cared about research integrity. They want to know that the work they fund is original, that the methodology is sound, and that the researchers have the capacity to deliver on their promises. AI tools raise new questions on all three fronts.
When a PI uses ChatGPT to draft sections of a grant proposal, who is generating the ideas? When a proposed methodology relies on AI tools for data collection or analysis, how reproducible is that approach? When a team says they will complete a five-year project with three postdocs, but they are actually planning to offload significant work to AI systems, does the budget still make sense?
These are the questions funders are starting to ask. Even when explicit policies have not yet caught up, program officers are paying attention. Being proactive about AI disclosure in your proposals positions you as a researcher who takes these questions seriously.
Where Major Funders Stand Today
Most major funding bodies have not yet issued binding requirements for AI disclosure in grant proposals. But the trajectory is unmistakable, and several agencies have already made their expectations known.
NSF (National Science Foundation)
The NSF has issued guidance stating that AI-generated text in proposals must be disclosed. Program officers have emphasized in public statements that proposals should reflect the intellectual contribution of the investigators. While the formal policy is still evolving, the expectation is clear. If you used AI tools to help write your proposal, say so. And if your proposed research methodology relies on AI tools, describe those tools with the same rigor you would apply to any other methodological choice.
The NSF has also signaled increasing interest in how AI tools affect research reproducibility. If your proposed work depends on commercial AI APIs that could change or disappear, reviewers may want to see that you have considered this risk.
ERC (European Research Council)
The ERC has taken a characteristically European approach, framing AI tool usage within the broader context of research ethics and responsible innovation. ERC proposals already require an ethics self-assessment, and the use of AI tools in the research process is increasingly expected to be addressed there.
For ERC Starting, Consolidator, and Advanced Grants, the evaluation criteria emphasize the PI's intellectual contribution. Reviewers want to be confident that the groundbreaking ideas in the proposal come from the researcher, not from an AI system. A brief, honest disclosure of how AI tools were used in preparing the proposal can actually strengthen your case by showing that the ideas are yours and the AI was just a tool.
DFG (Deutsche Forschungsgemeinschaft)
The DFG, Germany's largest research funding organization and particularly relevant given that ai-cards.org was developed at the University of Göttingen, has addressed AI usage in several position papers. The DFG emphasizes that researchers must take full responsibility for the content of their proposals and publications, regardless of whether AI tools were involved in their creation.
The DFG's guidelines on good research practice (the "Kodex") stress transparency as a foundational principle. While there is no separate checkbox for AI usage on the DFG proposal form yet, the spirit of the guidelines clearly extends to AI tools. Disclosing your AI usage in the proposal aligns directly with these principles.
UKRI (UK Research and Innovation)
UKRI, which oversees research councils across the UK, has updated its terms of grant to include expectations around AI transparency. Applicants are expected to use AI tools responsibly and to disclose any significant use in their proposals. UKRI has been particularly focused on ensuring that peer reviewers declare their own AI usage when writing reviews, which signals the direction of future applicant-facing requirements.
Where to Include AI Disclosure in a Proposal
Grant proposals do not typically have an "AI Disclosure" section, so you need to fit this information into the existing structure.
Methodology section. If your proposed research will use AI tools as part of the scientific workflow (for example, using LLMs for qualitative coding, using computer vision models for image analysis, or using AI for drug discovery), describe these tools in your methodology alongside your other methods. Treat them with the same level of detail. Name the model, explain why you chose it, and address potential limitations.
Resource and budget justification. If your budget includes costs for AI services (API fees, compute resources for running models, or licenses for AI tools), explain these costs and why they are necessary. Reviewers appreciate when you have thought through the practical requirements.
Ethics section. If your research involves AI tools that raise ethical considerations, address them here. This includes concerns about bias in AI-generated data, privacy implications of using cloud-based AI services with sensitive data, or the environmental impact of large-scale model training.
Proposal preparation statement. A brief statement at the end of the proposal acknowledging any AI assistance in writing the proposal itself. This is not yet widely required, but including it shows awareness of emerging norms.
Template Language for Your Proposal
Here are some adaptable passages you can incorporate into different sections of your proposal.
For the Methodology Section
The proposed analysis pipeline incorporates [Tool Name] ([Version/Model]) for [specific task]. We selected this tool based on [reason, e.g., "its demonstrated performance on similar tasks as reported in Smith et al. (2025)"]. To ensure reliability, all AI-generated outputs will undergo [verification process, e.g., "manual review by two trained annotators" or "validation against a gold-standard dataset"]. We have allocated [X] months of postdoc time for quality assurance of AI-assisted analysis.
For the Proposal Preparation Disclosure
In preparing this proposal, the investigators used [Tool Name] for [specific purpose, e.g., "improving the clarity of the English text" or "identifying relevant recent publications"]. All scientific content, including the research questions, proposed methodology, and expected outcomes, represents the original intellectual contribution of the investigators.
For the Ethics Section
Our use of [AI Tool] for [task] raises the following considerations, which we will address as part of our research ethics protocol. [Address specific concerns such as data privacy, bias, reproducibility, or environmental impact.]
Proactive Disclosure as a Competitive Advantage
There is a strategic argument for disclosing AI usage in grant proposals even when it is not required. Reviewers are human beings who read the news. They know that AI tools exist and they are wondering whether applicants are using them. An honest disclosure preempts suspicion and builds trust.
More importantly, demonstrating that you have a thoughtful approach to AI usage shows methodological sophistication. A proposal that says "we will use AI tools and here is exactly how we will validate their outputs" is stronger than one that pretends AI does not exist, only to have a reviewer wonder whether the polished prose was entirely human-written.
This is especially true in fields where AI tools are becoming standard parts of the research workflow. A computational biology proposal that does not mention AI tools at all in 2026 looks out of touch. One that describes a careful, well-validated AI-assisted pipeline looks modern and rigorous.
Documenting AI Usage Across a Funded Project
The disclosure question does not end when the grant is awarded. Funding agencies increasingly expect transparency throughout the life of a project, including in progress reports, publications, and final reports.
Consider establishing an AI usage documentation protocol at the start of any funded project. This can be as simple as a shared spreadsheet where team members log their AI tool usage, or as formal as a periodic AI Usage Card generated at ai-cards.org that is attached to progress reports.
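The spreadsheet-style log described above can also be maintained programmatically, which makes it easier to merge entries from multiple team members and to export the log for a progress report. The sketch below is a minimal illustration in Python; the field names and the helper functions are hypothetical choices, not a prescribed format, so adapt them to your funder's expectations and your team's workflow.

```python
import csv
import datetime
import io

# Hypothetical log fields; adjust to what your funder or institution asks for.
FIELDS = ["date", "team_member", "tool", "version", "task", "output_verified_by"]

def make_entry(team_member, tool, version, task, verified_by, date=None):
    """Build one usage-log entry as a dict; the date defaults to today."""
    return {
        "date": (date or datetime.date.today()).isoformat(),
        "team_member": team_member,
        "tool": tool,
        "version": version,
        "task": task,
        "output_verified_by": verified_by,
    }

def write_log(entries, fh):
    """Write entries as CSV so the log opens in any spreadsheet tool."""
    writer = csv.DictWriter(fh, fieldnames=FIELDS)
    writer.writeheader()
    writer.writerows(entries)

# Example: log one use of an LLM for qualitative coding.
entry = make_entry(
    "A. Researcher", "ExampleLLM", "2025-01",
    "first-pass qualitative coding of interview transcripts",
    "second annotator (manual review)",
)
buf = io.StringIO()
write_log([entry], buf)
print(buf.getvalue())
```

Because the output is plain CSV, the same file can serve as the shared team log during the project and as an appendix to a progress report, or as the raw input for generating a periodic AI Usage Card.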
This approach has three benefits. First, it makes writing the AI disclosure sections of your publications much easier, since the information is already collected. For more on publication-level disclosure, see our guide on AI transparency in journal submissions. Second, it satisfies any reporting requirements the funder may introduce during the grant period. Third, it creates a record that protects the team if questions about AI usage arise after publication.
Looking Ahead
The conversation around AI usage in grant proposals is just beginning. As AI tools become more capable and more deeply integrated into the research process, funding bodies will develop more specific requirements. Researchers who start documenting their AI usage now will be well positioned when those requirements arrive.
The University of Göttingen's AI Usage Cards project at ai-cards.org was built with exactly this trajectory in mind. The standardized format captures the information that funders, publishers, and institutions need to see, in a way that is easy to generate and easy to update as policies evolve. Whether you are writing your first grant proposal or managing a multi-year funded project, building AI transparency into your workflow from the start saves time and builds credibility.