How to Establish AI Policies for Universities

Some universities have started sharing AI policy templates. We’ve seen how much educators value these shared frameworks, so here, we explore why AI policies matter and offer practical steps to building one at your university.

Adele Barlow
· 10 min read

Exactly how many students are using AI in their studies? Around 86%, according to a survey from the global alliance Digital Education Council. Institutions grapple with this ongoing tension: should students be allowed (or even encouraged) to use AI, to prepare them for the future workforce? Or would it be more ethical to ban AI tools on campus? 

Some universities have begun publishing AI policy templates and examples, but these approaches vary widely. At GPTZero, we’ve seen firsthand how much educators care about getting this right and how valuable shared frameworks can be. In this article, we’ll look at why AI policies matter and outline practical steps to create an AI use policy that’s both ethical and forward-thinking.

What is a University AI Policy? 

At its core, a university AI policy is a campus-wide agreement around how AI can and should be used in the institution. As Raffi DerSimonian and Christa Montagnino explain in Faculty Focus, a strong policy should strike the “right balance between governance and innovation”, ideally “framing governance as an enabler rather than a constraint”. 

In practice, this means that a “one-size-fits-all” approach to AI doesn’t work. Rather than copying and pasting another institution’s playbook, each campus now has to come up with a policy reflecting its own values while staying adaptable as technology and expectations evolve. 

Importance of AI policies

While most people agree that AI policies are required, in practice it can be difficult to get everyone on the same page at the same time. As GPTZero user and tutor Maria Alejandra shared: “My syllabus clearly states that students can use AI tools like ChatGPT as learning aids, but they must cite any AI material. The main rule is: AI can help you learn, but it can’t do the learning for you.”

Meanwhile, the Chronicle of Higher Education quotes Daniel James O’Neill: “I would compare this to using steroids in baseball. If you don’t ban steroids in baseball, then the reality is every player has to use them. Even worse than that, if you ban them but don’t enforce it, what you actually do is create a situation where you weed out all of the honest players.” 

In other words, students can face mixed messages in these early days of AI in the classroom: what’s encouraged in one class could be considered misconduct in another. The ideal AI policy (easier said than done, as we’ve found from our discussions with educators) balances firm expectations that protect integrity with space for innovation. 

Why Universities Need AI Policies Now

While AI used to seem futuristic, the reality is that it is already embedded in day-to-day student life, and increasingly so. Students are already using ChatGPT to help with their workload, and faculty are doing the same (e.g. experimenting with grading automation). The longer universities wait, the more these practices become normalized, which makes it a lot harder to course-correct later. 

There’s also a trust issue, with students and faculty waiting to see if leadership actually grasps the realities of AI. If institutions are slow to act, it sends a message that they’re out of touch, or worse, don’t care. Acting now means universities can set the tone for responsible use and show they’re prepared to shepherd their communities through this transition.

Key areas of AI policy in universities

Governance, Pedagogy, and Operations

AI policies need to be comprehensive and should cover several domains, as shown in a framework by DerSimonian and Montagnino:

  1. Governance: Providing oversight and ethical review processes to ensure AI use aligns with institutional values and legal requirements.
  2. Pedagogy: Supporting faculty and students by defining acceptable AI use in teaching and assessment.
  3. Operations: Managing AI’s role in data management and other behind-the-scenes functions.

A holistic approach requires addressing all three. Otherwise, a university might, for example, write strong rules for academic integrity but forget to address privacy concerns in student data systems, or encourage innovative teaching tools without setting guardrails against bias. 

Who should be involved in creating AI policies at universities?

David Hatami, writing for Harvard Business Publishing Education, emphasizes that AI policies cannot be handed down from the top without input from those most affected. Successful policies, he argues, are built collaboratively, starting with an “AI task force”:

  • Faculty from various disciplines should ideally make up around two-thirds of the committee. 
  • Students must also have a seat at the table to share how they’re using AI and where they are coming up against confusion or challenges.
  • Administrators can steer policies to make sure they fit broader institutional goals as well as compliance requirements.
  • Skeptics and champions of AI should both be included to balance voices of excitement with those of caution; bringing in a full spectrum of viewpoints early (including, in Hatami’s words, “committee members who know nothing whatsoever about AI”) helps policies win adoption later on.

Step-by-Step Framework for Establishing AI Policies in Universities

So where do you start? We’ve combined our own discussions with teachers and students in the GPTZero community with insights from DerSimonian and Montagnino, as well as Hatami, into the following steps to help you design an institutional policy.

Gather the right people

Start by bringing together a task force with (as mentioned above) a mix of voices. As Antonio Byrd shared with us in a GPTZero webinar, educators don’t need to rewrite their entire curriculum or become AI experts overnight; they just need to centre reflection, relationships, and values. “In a world where generative AI is everywhere, it’s our job to ensure that learning still happens, and that students see the value in doing things thoughtfully, ethically, and with intention.”

From the very first AI Task Force meeting, make it clear why this group exists and how it connects to the university’s broader mission. This shared sense of purpose helps keep discussions grounded and productive. Also, agree on how decisions will be made and how the group will communicate with the wider campus community. This is also a good time for light-touch discovery, like short conversations or quick surveys, to capture initial perspectives. 

Understand What’s Already Happening

Once the task force is in place, take a structured look at how AI is already being used across campus. This stage is about discovery and gathering information. Look at how AI is being used in teaching and assessment, and where it’s embedded in behind-the-scenes functions. For instance, some staff might be actively encouraging their students to use AI, as Geri Sawicki shared, to “get students more ready for the increasingly complex information landscape they will encounter long after graduation.”

This mapping process often uncovers areas where faculty are experimenting with AI in their classrooms, or where students are using it informally to collaborate. By surfacing these patterns early, the task force can identify both opportunities to build on and risks that need attention; most importantly, it also shows that the policy is being shaped by the lived experiences of those who will ultimately use and be affected by it.

Define Your Principles and Priorities

With a better picture of how AI is currently used, the group can now set its north star: the purpose of the policy and the principles that will guide it. A strong purpose statement links directly to the university’s mission, signalling that the policy is about steering the future of the academic community, as opposed to laying down a set of rules for the sake of it. 

Many universities anchor their policies around core values like integrity, privacy, equity, transparency, and accountability. These shared principles provide a framework for tackling tricky questions later and help build trust across campus. At this stage, also decide on priorities. For example, you might choose to focus first on academic integrity and classroom use before expanding into research ethics or data privacy.

Draft, Test, and Refine

With the groundwork laid, the task force can begin drafting the policy itself. The goal is to create a campus-wide framework that sets clear expectations while leaving room for individual departments to adapt it to their specific contexts. Transparency is pivotal here: share early drafts widely and invite honest feedback. When students and faculty can see their voices reflected in the language of the policy, they’re more likely to embrace it. 

Before rolling the policy out across the whole university, pilot it in a smaller setting, so you can see how it works in practice and make adjustments. Aim for a mix of disciplines (like engineering and history) to stress-test the policy across very different teaching needs. 

Launch with Support and Keep Iterating

When it’s time to launch, focus on making the policy visible and practical. Create a dedicated online hub that includes both plain-language summaries and the full document, so anyone can easily find and understand it. Offer training and resources to help people feel confident about putting the policy into practice. For faculty, this might include workshops on integrating AI into assessments, while students could benefit from drop-in sessions. 

Even after launch, treat the policy as a living document, and set up a standing review committee or governance structure to keep it updated. Schedule regular reviews, ideally annually, to make sure the policy adjusts alongside technology and the needs of the campus community.

Practical Examples of AI Policies in Higher Education

It’s helpful to see how leading universities are taking different approaches to AI governance. Here are some examples: 

Stanford University

Stanford has taken a relatively firm stance on academic integrity, stating that students should generally steer away from using AI tools to complete assignments or exams:

“Absent a clear statement from a course instructor, use of or consultation with generative AI shall be treated analogously to assistance from another person. In particular, using generative AI tools to substantially complete an assignment or exam (e.g. by entering exam or assignment questions) is not permitted. Students should acknowledge the use of generative AI (other than incidental use) and default to disclosing such assistance when in doubt.”

(Stanford Guidance)

Duke University 

Duke University decentralizes its AI policies, letting individual instructors determine what is appropriate for their courses:

“We suggest that faculty clarify their expectations regarding the use of AI at the outset of their course. Instructors have discretion in setting specific AI policies to fit their course and individual assignments. There is no one-size-fits-all policy.”

(Duke AI & Teaching)

Princeton University 

Princeton has given faculty resources instead of issuing strict directives, providing templates for syllabus language and prompts that instructors can adapt to craft course-specific policies: 

“We encourage faculty to experiment with generative AI (GAI) tools, which can be used to generate ideas, summarize articles, develop computer code, create images, and compose music. These tools are increasingly sophisticated and powerful.”

(Princeton Templates)

UCLA 

The UC Instructional Design & Faculty Support Community of Practice’s AI Working Group has created a chart of guiding questions for instructors on AI:

“The chart asks you to reflect on your experience with GenAI, its relevance to your course, your ethical concerns, and how and where you might rethink your course to incorporate GenAI.”

(UCLA Guidance)

Yale University 

Yale takes a collaborative approach, bringing faculty and students into discussions around AI with a focus on learning from each other before officially charting a path forward:

“Rather than wait to see how AI will develop, we encourage our colleagues across Yale to proactively lead the development of AI by utilizing, critiquing, and examining the technology.”

(Yale Guidance)

FAQs

How to Balance Policy and Innovation in Universities

Setting boundaries without smothering innovation is a challenge: policies that are too rigid can stifle creativity, while too little structure leads to confusion, leaving each department or faculty member to interpret AI use in their own way. The sweet spot is flexible governance, with firm ethical guardrails that still leave room for exploration. Pilot programmes can be incredibly helpful: by testing AI policies on a small scale first, universities can see what works, gather feedback, and refine their approach before rolling it out more broadly.

What Are the Non-Negotiables Every AI Policy Should Cover?

While every institution will tailor its policy to its needs, there are certain core elements that are typically part of every AI policy: 

  • Integrity: Policies should define acceptable and unacceptable uses of AI, making sure original thought and research standards remain protected. In other words, keeping academic integrity sacred.
  • Privacy: Universities hold vast amounts of sensitive data about students and faculty, and policies must safeguard this, with directives on how AI systems collect and use data. 
  • Equity: Not all students or faculty have equal access to AI tools or training, and this should ideally be taken into account when drafting policies. 
  • Transparency: Stakeholders need to understand how and why AI is being used, with communication that is upfront and prevents misunderstandings. 
  • Enforcement: Without consequences, policies lose their power, and universities should make it very clear what happens if rules are broken. 

Best KPIs for University AI Initiatives: Measuring Effectiveness

To understand whether policies are actually working, universities need to keep measuring the following KPIs (a minimal tracking sketch follows the list): 

  • Learning outcomes: Student success is the bedrock of higher education, so metrics like retention rates, graduation rates, course performance improvements, and student satisfaction reveal whether AI is improving the learning experience or not. 
  • Operational efficiency: AI can streamline administrative tasks, and efficiency can be measured through reduced workloads or cost savings in areas like admissions and advising. 
  • Ethical impact: Fairness and integrity must be monitored, and a decrease in reported instances of academic misconduct signals that ethical guidelines are being followed. 
  • Stakeholder satisfaction: Surveys and focus groups will show how AI tools are experienced day-to-day, and high satisfaction rates indicate that policies are serving their intended purpose.
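
To make these reviews concrete, here is a minimal sketch (in Python) of how a review committee might compare two semesters across these four KPI areas. Everything in it is an assumption made for illustration: the SemesterSnapshot fields, the figures, and the idea of reducing each area to a single semester-over-semester delta are ours, not part of any university’s or GPTZero’s actual reporting.

```python
# Illustrative sketch only: field names and figures are hypothetical,
# not drawn from any real institution's data.
from dataclasses import dataclass

@dataclass
class SemesterSnapshot:
    term: str
    retention_rate: float        # share of students continuing (0-1)
    misconduct_reports: int      # reported academic-integrity cases
    advising_hours_saved: float  # staff hours saved via AI-assisted workflows
    satisfaction_score: float    # mean survey rating (1-5)

def kpi_deltas(before: SemesterSnapshot, after: SemesterSnapshot) -> dict:
    """Compare two semesters; positive values mean the later semester improved."""
    return {
        "learning_outcomes": after.retention_rate - before.retention_rate,
        "ethical_impact": before.misconduct_reports - after.misconduct_reports,
        "operational_efficiency": after.advising_hours_saved - before.advising_hours_saved,
        "stakeholder_satisfaction": after.satisfaction_score - before.satisfaction_score,
    }

# Example comparison of a pre-policy and post-policy semester.
pre = SemesterSnapshot("Fall 2024", 0.88, 42, 0.0, 3.6)
post = SemesterSnapshot("Fall 2025", 0.90, 31, 120.0, 3.9)
print(kpi_deltas(pre, post))
```

In practice, each area would draw on several indicators and survey instruments rather than a single number, but even a simple semester-over-semester comparison gives a review committee a shared starting point.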

AI isn’t going anywhere, and institutions will have to come up with policies around its use sooner or later. As we’ve explored, the most resilient policies aren’t a one-off set of rules but a living framework that protects academic integrity while allowing innovation to breathe. By piloting first and refining the policy over time, universities can build trust across campus during a highly uncertain period.