AI tools are incredible: they speed up writing, pull together tons of data, even help us brainstorm. But new studies from MIT and Cornell are raising a red flag about what we're giving up in exchange: critical thinking, memory, and cognitive engagement.
Let’s dig into the science and explore how to use AI like a smart assistant and not as an intellectual crutch.
A recent MIT Media Lab study found striking results: people writing essays with ChatGPT showed the least brain activity, produced fewer original ideas, and felt less ownership over their own work. In other words: mental laziness. The study was published under the title "Your Brain on ChatGPT: Accumulation of Cognitive Debt." It's worth a closer look because it offers a good illustration of exactly the kind of thinking that AI overuse can diminish. The study followed people over several writing sessions. Participants were split into three groups:

- an LLM group, which wrote with ChatGPT
- a search engine group, which could look things up online but not use AI
- a brain-only group, which wrote with no tools at all
Using EEG, researchers measured how engaged each group's brain was. The contrast was stark: the brain-only group showed the most distributed neural connectivity, search engine users fell in the middle, and AI users showed the weakest engagement. Over time, "LLM users consistently underperformed at the neural, linguistic, and behavioral levels." They wrote faster, but with less originality, recalled fewer details, and even felt less ownership over their work.
In practical terms, it's like switching off half your brain and asking AI to do the thinking for you. That's not just productivity; it's cognitive drift. Brain-only writers, in short, engaged more deeply with the assignment, producing writing that was both creative and distinctive. Afterwards, they could recall lines from their essays and expressed a stronger sense of ownership over their work.
By contrast, ChatGPT users grew less engaged as time went on. They became more dependent on the tool, felt less ownership of what they produced, and their writing came across as less original. Many couldn't recall lines from their own essays, hinting that the experience never made it into long-term memory.
Researchers referred to this phenomenon as "metacognitive laziness": not just a great name for a prog-rock band, but also a perfect label for the hazy middle ground between autopilot and copilot, where participants disengage and let the AI do the thinking for them.
But it was the fourth session that yielded the most worrying results. When the LLM and brain-only groups traded places, the group that had relied on AI failed to bounce back to its pre-LLM baseline. That's the scary part. It's one thing to decide that a given task isn't worth our brain power; it's another to learn that off-loading the thinking for an inconsequential task can hinder us when we want to use our brains later.
Remember the phrase “use it or lose it”? When applied to the brain, the concept is illustrated by neuroplasticity — our brain’s ability to rewire and strengthen itself through use and challenge. So it makes sense that if we stop doing the thinking, our brains start to lose that ability.
Generative AI might be encouraging cognitive atrophy by enabling us to defer thinking, recalling, and reasoning. Instead of engaging our prefrontal cortex (involved in creativity and logic), we let the algorithm do it. The risk: over time, the brain’s capacity for independent thought weakens.
A new paper from Cornell University titled "The Memory Paradox: Why Our Brains Need Knowledge in an Age of AI" supports the MIT study. The scientists make the point that reliance on powerful AI for writing or calculation can shortcut deep memory formation: we skip the mental circuits that help us master knowledge and retain skills. As with muscles, if you don't engage memory retrieval or problem-solving, those pathways can atrophy. Instead, users become overly dependent on AI systems they don't fully understand. Our brains crave convenience, but they grow on challenge, with cognitive function being the ultimate use-it-or-lose-it deal.
The University of Pennsylvania’s Wharton School published a study that showed high school students in Turkey with access to a ChatGPT-style tutor performed significantly better at solving practice math problems. But when the program was taken away, they performed worse than students who had never used an AI tutor.
Some AI-powered tasks can support brain function rather than replace it. The key distinction: automating labor-intensive busywork frees mental space for higher-order thinking. But if AI does the higher-order thinking itself, your brain just sits idle.
There are two easy ways to turn the tables and start using AI to strengthen your brain: debate and challenge.
Pick a debate prompt and make your chatbot your opponent. This will force you to hone your critical thinking. In a recent Psychology Today article, T. Alexander Puutio Ph.D. suggested, “Think of it as a gym for the mind where we’re sparring, not brawling. Treat every AI draft as an opening statement, not a final word, and your AI use will sharpen your cognitive skills instead of dulling them.”
Similar results come from pushing your AI partner into increasingly creative problem solving. When you use AI to design a strategy or tackle a difficult task, keep challenging it to propose alternative solutions. Weighing those scenarios forces you to use your own brain, helping you grow your cognitive skills while steering you toward the best solution to your problem.
AI's convenience is seductive, and productive. But when overused, it risks turning our complex thought muscles into couch potatoes. The antidote is mindful use: lean on AI for low-level tasks, but fight for your thinking time on high-level tasks. When you can, try a battle of wits against AI; you may be surprised at how intelligent you really are.