Does AI Accelerate Thinking Skills or Atrophy Them?
A growing body of literature shows that if we don't use our critical thinking skills, we lose them.

New research raises one of the thornier questions around AI adoption: Will tools designed to help us think instead cause our critical-thinking skills to atrophy? Recently, SBS Swiss Business School professor Michael Gerlich found a negative correlation between frequent AI tool usage and critical thinking ability, especially in younger users.
I read about the research in a great article from The Globe and Mail. As Oliver Hardt, an associate professor at McGill University who researches neuroscience and memory, put it: “It’s really killing the ability to learn how to think, if they use it all the time.”
Younger participants exhibited higher dependence on AI tools and lower critical thinking scores compared to older participants. In other words, as the reliance on AI for everyday tasks grows, users may incrementally surrender the mental effort involved in reasoning and analysis. As the paper puts it: “It inadvertently fosters dependence, which can compromise critical thinking skills over time.”
Of course, it's important to note that the study found correlation, not causation. Still, it’s a dangerous loop when the people least able to critically evaluate AI output are the ones most dependent on it.
This research is highly relevant for GPTZero, and it adds to a growing body of literature showing that when we don't use our critical thinking skills, we lose them.
Many of us believe that educators should be able to gradually expand student access to AI capabilities as their skills improve, so students grow their ability to verify AI output instead of taking it for granted. It’s essential to find the right research-based balance before allowing tech companies to push maximum-AI solutions to young students.
At the same time, these are all such new questions, and Prof. Gerlich said he does not want the research to come across as alarmist. Instead, he suggests a more productive route is to “treat chatbots like intellectual sparring partners – push for evidence, ask for alternative views, look for logical gaps.”
Through this lens, humans are actively engaged rather than passively accepting outputs. While the skill is teachable, it is still a very new concept to many students, and arguably to those teaching them as well. In this unfamiliar territory, the open question is whether AI adoption can exist alongside building the intellectual muscles young people need to think critically.