Education

How can teachers reduce implicit bias in AI detection among students?

An overview of how teachers can reduce implicit bias when reviewing student written work for potential AI usage.

Edward Tian, Vivienne Chen
· 5 min read

A recent study by Common Sense Media has revealed a disturbing trend: Black teenagers are approximately twice as likely as their white and Latino peers to be incorrectly accused of using AI tools in their homework. With AI use only set to become more widespread, the study underscores the urgent need for educators to actively work to mitigate implicit bias in the classroom. But how?

Understand the core problem with AI models

Black students already face the highest rates of disciplinary measures (in both private and public schools), despite being no more likely to misbehave. The Common Sense Media study raises concerns about the heightened vulnerability historically marginalized groups face when they are flagged for academic dishonesty.

Many assume the core issue is that AI applications, both detectors and generative models, rely on pattern matching against their training data and can therefore carry over biases from those datasets. However, an AI content detector like GPTZero can be trained to avoid these pitfalls: we are rigorously committed to continuing our research in de-biasing AI detection. The core problem starts earlier, and runs deeper, than detection.

Sadly, the real issue is that the popular AI models students rely on, like ChatGPT and Gemini, are growing more covertly racist as they learn from past human content, according to Valentin Hoffman, a researcher at the Allen Institute for Artificial Intelligence. Even the well-intentioned creation of ethical AI systems and guardrails can perpetuate biases, or simply teach models to mask racial bias more effectively instead of eliminating it.

So what practical steps can teachers take right now to reduce this kind of implicit bias from AI tools and AI detectors in the classroom?

How to use AI detection tools responsibly

Using AI detection tools fairly means applying them consistently across the board, and using them alongside your own human judgment. Here are a few ways to use AI detection tools effectively and responsibly as an educator:

Run scans for essays in the entire classroom

Rather than targeting specific students, run AI detection scans across the entire classroom. Applying the scans to everybody means everybody is held to the same standard. A practical way to do this is to randomly sample essays for scanning, which reduces the room for implicit bias to influence whose work gets checked.
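As a purely illustrative sketch, here is one way the random sampling step could be scripted in Python. The file names and sample size are placeholders, and in practice the selection might instead be handled by your LMS or detection tool.

```python
import random

# Placeholder list of submissions; in practice this might come from an
# LMS export or a folder of files.
submissions = [
    "essay_01.docx", "essay_02.docx", "essay_03.docx",
    "essay_04.docx", "essay_05.docx", "essay_06.docx",
]

SAMPLE_SIZE = 3  # how many essays to scan this round

# Pick the sample before looking at any names or content, so the selection
# cannot be influenced by assumptions about individual students.
to_scan = random.sample(submissions, k=min(SAMPLE_SIZE, len(submissions)))

print("Selected for AI scanning:", sorted(to_scan))
```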

Administrators can also turn on automatic scanning integrations with learning management systems (LMS) to monitor the quantity of scans and stop human bias from unfairly targeting particular groups. Most modern LMS platforms like Canvas or Moodle offer AI detection integrations, meaning that administrators can make sure every piece of work is scanned without having to manually intervene. 

Doing this consistently and in an automated way means the tool is applied equally across the board, rather than relying on individual judgment that may carry bias.

Run your AI detector on sample essays, including your own

Before committing to any AI detection software, test it first. This means running your AI detector on sample essays – including your own writing as a teacher. It could be a sample essay you write in response to the question being asked, or something else you’ve been working on.

Regardless, this little experiment is the fastest and most effective route to fully understanding the strengths and limitations of AI detectors. They are impressive tools, but they are also best used when coupled with informed human decision-making. 

The reality is that some AI detectors are less accurate in certain subject areas or on certain types of writing than in others. Testing the software on different kinds of writing, across a variety of subjects, shows where it might need more human oversight.
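If you want to make that experiment a little more systematic, the sketch below shows one possible way to structure it. The detect_ai function is a hypothetical placeholder for whichever detector you are evaluating (replace it with a call to that tool), and the 0.8 flag threshold is an assumption, not a recommendation.

```python
from collections import defaultdict

def detect_ai(text: str) -> float:
    """Placeholder: swap in a call to the detector you are evaluating.

    Assumed to return a probability (0.0 to 1.0) that the text is AI-generated.
    """
    return 0.0  # dummy value so the sketch runs end to end

# Essays you know are human-written, grouped by subject (your own writing included).
samples = {
    "history": ["My own practice essay on the causes of WWI...", "A past student exemplar..."],
    "biology": ["A lab write-up I drafted last term...", "Another known human-written sample..."],
}

FLAG_THRESHOLD = 0.8  # treat scores above this as a flag (assumption)

false_flags = defaultdict(int)
for subject, essays in samples.items():
    for essay in essays:
        if detect_ai(essay) >= FLAG_THRESHOLD:
            false_flags[subject] += 1  # every flag here is a false positive

for subject, essays in samples.items():
    print(f"{subject}: {false_flags[subject]} false flags out of {len(essays)} human-written essays")
```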

Compare AI-flagged language with students' previous writing

Each student has their own unique writing style and voice. When a student’s work is flagged by AI detection software, the software might simply be misinterpreting their vocabulary or wording. Rather than automatically assuming the software is correct, compare the flagged language with that student’s previous writing samples to gauge the following:

Vocabulary and phrasing

Does the flagged writing use words or phrases the student has used before? Every student has words and phrases they lean on more than others, and if the word choices align with previous work, the flag may well be a false positive.

As students progress, so too does their vocabulary – especially as they become more fluent in the subject matter. This is why it is crucial to balance the results from the software with your own independent understanding of how the student is progressing. 
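This kind of comparison can also be sketched in code. The function below is a simple, hypothetical heuristic (the share of the flagged essay's distinct words that also appear in the student's earlier work) and is meant only as an illustration, not a substitute for reading the writing yourself.

```python
import re

def vocabulary(text: str) -> set:
    """Lowercased set of words in a piece of writing."""
    return set(re.findall(r"[a-z']+", text.lower()))

def vocab_overlap(flagged: str, previous: list) -> float:
    """Share of the flagged essay's distinct words that also appear in prior work."""
    flagged_words = vocabulary(flagged)
    prior_words = set().union(*(vocabulary(p) for p in previous))
    if not flagged_words:
        return 0.0
    return len(flagged_words & prior_words) / len(flagged_words)

# Placeholder texts: a higher overlap suggests the wording is consistent
# with how this student already writes.
prior_essays = [
    "Photosynthesis allows plants to convert sunlight into usable energy...",
    "In my view, the experiment showed that light intensity matters most...",
]
flagged_essay = "This essay argues that light intensity is the key factor in photosynthesis..."

print(f"Vocabulary overlap with prior work: {vocab_overlap(flagged_essay, prior_essays):.0%}")
```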

Tone of voice

Is the overall voice comparable to their previous work? Tone can be a tricky thing to decipher, as changes in tone don’t always mean a student has used AI. They could be trying out a different approach, or adapting the way they describe an argument as part of their learning process. 

Of course, if the work that has been flagged is a massive departure from their usual work, then it warrants taking a closer look. But keep in mind that AI tools can sometimes misread natural variation in a student’s style as a sign of AI use.

Complexity

Is the level of sophistication comparable to their previous work? You know what the student has produced in the past, and – much like tone – a flagged piece that is suddenly, hugely different from what they’ve submitted before warrants a closer look.

However, it’s also worth remembering that complexity evolves as the student does. As they grow more comfortable with or interested in a topic, they might naturally come up with more complex arguments. 

How to teach students to use AI responsibly

While some school districts have banned ChatGPT on school devices and networks, the reality is that AI tools are only set to become more prevalent in education over the coming years. Students need to be taught how to use AI tools responsibly as part of their lifelong learning journey. Instead of defaulting to punishment, treat a flagged assignment as a teaching opportunity, covering the following topics:

Teach students when and how to cite AI help

Teach students how to properly cite AI assistance, just as they would cite any other source. They are already taught to credit ideas that are not their own; the same principle should apply to AI.

Help students understand the importance of developing critical thinking

Emphasize to students that AI is a tool to enhance, rather than replace, their own critical thinking skills – the ability to analyze information and reach the best decision possible. Explain that if they rely too much on AI without using their own critical thinking abilities, they miss out on the chance to exercise that muscle, which in the long run enables them to create meaning for themselves.

Foster academic integrity discussions in the classroom

Discuss the ethical questions around using AI in academic work; presenting AI-generated content as their own, for instance, is a form of plagiarism. Talking about academic integrity helps show students that education is about more than just grades or how they perform. It’s about building skills and knowledge that will serve them long after they’ve left school.

Show students the potential for bias

Since AI systems are trained on datasets that often reflect society’s biases, AI tools are not entirely neutral. Share with students that bias in AI tools like ChatGPT is, unfortunately, a reality at the moment. This helps students view these tools through a more critical and rigorous lens.

Be mindful of fairness of access

The digital divide in education, the gap between students who have access to learning technologies and those who do not, has been discussed for decades. AI presents a new form of this divide, and you can use AI flagging incidents as a chance to address this important social and economic issue. Low-income students, for instance, have less access to AI tools (such as ChatGPT subscriptions), which only exacerbates the divide.

By taking the above steps, you can create a more genuinely equitable learning environment while preparing students to better navigate tomorrow’s world. Instead of simply policing students for potential AI misuse, a proactive approach means building your own understanding of AI and its implications in the classroom. This shift does more than address implicit bias; it also equips students for a world in which AI will play an increasingly prominent role.