News

NIH vs. AI: How New Rules Are Redefining Grant Writing

NIH takes a firm stance on how AI can (and can’t) be used, joining a growing list of agencies committed to responsible AI use.

Adele Barlow
· 5 min read

Summary:

  • Starting September 25, 2025, NIH will limit Principal Investigators (PIs) to six applications per year and reject proposals “substantially developed by AI.” This signals a shift in the conversation around AI: human-led creativity is acceptable, while AI-generated proposals are not.
  • NIH joins other agencies like NSF and DOE in setting its own definition of responsible AI use.
  • This move demonstrates how quickly AI is changing the academic landscape. GPTZero adds a layer of accountability by helping to determine how much of a text was AI-generated.

The National Institutes of Health (NIH) has taken a firm stance on how AI can (and can’t) be used in the research application process. Starting September 25, 2025, Principal Investigators (PIs) will be limited to six applications per year, and any proposal “substantially developed by AI or containing sections substantially developed by AI” will be rejected.

So why is NIH drawing the line now? In short, the agency has seen a tidal wave of applications from some researchers, as AI makes it possible to put out dozens of proposals at once. NIH reported that some PIs would “submit more than 40 distinct applications in a single application submission round.” While this might masquerade as efficiency, in reality it simply puts more strain on reviewers.

The new policy won’t affect most researchers: only 1.3 percent of applicants submitted more than six proposals in 2024. The cap is squarely aimed at a small group of high-volume submitters, some of whom were clearly leaning on AI to get proposals out.

However, it marks a change in the conversation around AI, as NIH joins an ever-expanding list of organizations committed to responsible AI use. A major funding body is now differentiating between legitimate and illegitimate authorship: rejecting proposals that have been “substantially developed by AI” means that human-led creativity is acceptable, while AI-generated proposals are not.

National Agencies and AI 

NIH is one of a growing number of institutions grappling with the questions raised by the ubiquity of generative AI tools. In 2023, the U.S. National Science Foundation (NSF) announced that proposers must disclose if and how they used generative AI in preparing proposals, and that uploading proposal or review information to open AI systems would be treated as public disclosure, which could undermine confidentiality and create legal liability.

The U.S. Department of Energy (DOE) requires that “all content generated with AI assistance meets all federal requirements, including Section 508 and the Plain Writing Act of 2010,” and that staff provide a “valid business justification” to access OpenAI’s ChatGPT tool from a DOE computer.

In that sense, each agency is defining for itself what responsible AI use looks like: NIH emphasizes originality, NSF emphasizes transparency, and DOE emphasizes compliance. Researchers will therefore need to navigate an ever-shifting regulatory landscape around AI use.

Frontline response to NIH decision on AI 

When Crystal Herron, PhD, ELS(D), a Scientific & Medical Writing Consultant, shared the news, most commenters felt the new AI policy was too vague and potentially unhelpful. Much of the concern centered on whether AI detection tools are reliable enough to enforce the new rules without producing false positives that unfairly penalize legitimate applicants.

Resource: How AI Detection at GPTZero works

“I understand and agree with the intent, but it's unclear what qualifies as substantial and how exactly they are assessing whether sections of the applications are AI-generated. I'm just worried about false-positive cases. Legitimate applicants spend a lot of effort, and their work should not be buried,” said Niladri Chakraborty, PharmD, Senior Medical Writer at Cactus Life Sciences. 

At the same time, some saw the changes as encouraging researchers to be more strategic and to prioritize quality over volume, and as a step toward protecting originality in the research process.

FAQs about NIH AI ruling

  • Does this mean researchers cannot use AI at all? No. NIH has made it clear that AI can still be used in limited, administrative ways, such as formatting and summarizing background research. What is now prohibited is submitting applications in which the majority of the content was generated by AI.
  • How will NIH enforce this? That remains an open question. No official AI detection tool has been named so far, but NIH has committed to monitoring applications, so it is expected to use external software to trace AI-generated sections.

Resource: 7 Best AI Detectors With The Highest Accuracy in 2025

  • Won’t this slow down innovation? Arguably it could, but NIH’s position seems to be that mass-produced, AI-generated proposals create noise rather than progress, and that keeping originality at the heart of proposals will improve the overall quality of applications and innovation.

GPTZero’s perspective 

“We’re not anti-AI, we’re AI-forward,” says Edward Tian, CEO of GPTZero. “Using AI responsibly starts with making sure we don’t outsource critical thinking to generative AI tools. Reviewers need to be able to trust that they are reading original work from a researcher.” 

“This move demonstrates how quickly AI is changing the academic landscape and forcing us to answer new questions around integrity in innovation. We’re proud that GPTZero provides a way to check writing patterns and reveal how much text has been created by AI, which adds a layer of accountability that is fundamental to responsible AI use.”  

How to future-proof your research applications 

To stay ahead of these changes, researchers will need to:

  • Be upfront about when and how AI tools have been used.
  • Focus on originality of ideas rather than volume (quality over quantity).
  • Anticipate scrutiny from AI-detection tools and be prepared to defend the intellectual integrity of their submissions.

Why use GPTZero 

This move underscores how quickly AI is changing the academic landscape and introducing new questions around trust. Reviewers need to know that what they are reading is the original intellectual property of a researcher, not text churned out by generative AI. This is where tools like GPTZero can be critical.

GPTZero provides a means to check writing patterns and reveal how much of a text was written with AI, adding a level of transparency that is fundamental to responsible AI use.
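For researchers who want to pre-screen their own drafts before submission, the sketch below shows one way that might look in Python using GPTZero’s public API. It assumes the v2 text-prediction endpoint and the completely_generated_prob response field as documented at the time of writing; verify both against the current API documentation, and note that the API key and file name are placeholders.

```python
# Minimal sketch: pre-screening a draft with GPTZero's public API.
# Assumes the v2 endpoint and response fields documented at the time of
# writing; check the current GPTZero API docs before relying on either.
import requests

API_KEY = "your-gptzero-api-key"  # placeholder; issued via a GPTZero account

def check_draft(text: str) -> float:
    """Return the estimated probability that the document is AI-generated."""
    response = requests.post(
        "https://api.gptzero.me/v2/predict/text",
        headers={"x-api-key": API_KEY, "Content-Type": "application/json"},
        json={"document": text},
        timeout=30,
    )
    response.raise_for_status()
    result = response.json()["documents"][0]
    return result["completely_generated_prob"]

if __name__ == "__main__":
    with open("specific_aims.txt") as f:  # hypothetical proposal section
        draft = f.read()
    prob = check_draft(draft)
    print(f"Estimated probability the text is AI-generated: {prob:.0%}")
```

A low score is not a guarantee of acceptance, and a high score on genuinely human-written text is exactly the false-positive scenario commenters worried about, so any automated check should prompt a closer review rather than serve as a verdict.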

Conclusion 

The NIH’s decision to reject AI-generated proposals is a sign that the future of research funding will require accountability alongside innovation. As the social contract of scientific research and funding is rewritten, researchers must make sure AI is not used to outsource originality, and agencies will face a new chapter of their own in enforcing these rules. Overall, this news signals that grant writing will be one of the first areas where responsible AI use comes under the spotlight.