There is a particular kind of convenience that arrives without warning labels. You do not feel it taking anything from you. You feel lighter, faster, more capable — until one day, quietly, you are not. That is roughly the picture emerging from a cluster of new scientific studies examining what happens to the human brain when it leans heavily and habitually on artificial intelligence tools like ChatGPT. The findings are not catastrophic, but they are consistent, and for anyone who has folded AI into their daily cognitive life — writing, planning, problem-solving, remembering — they deserve more than a passing glance.
The most widely discussed piece of new research comes from MIT's Media Lab, where researcher Nataliya Kosmyna and her colleagues spent four months tracking what AI does to the brain in real time. They recruited 54 university students between the ages of 18 and 39 from five Boston-area institutions and fitted them with EEG headsets to monitor brain activity as they wrote essays. The participants were divided into three groups: one wrote with the help of ChatGPT, one used a standard search engine, and a third group — the "brain-only" condition — worked with no external tools at all. The results, published on arXiv in June 2025, showed something that went beyond mere academic curiosity. EEG scans revealed significant differences in brain connectivity: participants in the brain-only group showed the strongest and most distributed neural networks, search engine users displayed moderate engagement, and those who relied on the large language model demonstrated the weakest connectivity of all.
That alone might seem unremarkable. Of course your brain works less hard when a machine is doing the work. But the MIT team pushed further, and what they found in the fourth session is where things get genuinely interesting. Participants who exclusively used AI to help write essays showed not just weaker brain connectivity, but lower memory retention and a fading sense of ownership over their own work — and even when they stopped using AI tools later on, the effects lingered. This is the part that researchers have begun to call "cognitive debt" — a slow accumulation of mental passivity that does not reverse the moment you put down the tool. The brain, like any muscle, adapts to the demands placed on it. Reduce those demands consistently enough and the adaptation runs in the wrong direction.
It is important to be clear about what this study is and is not. As of its upload to arXiv in June 2025, the paper had not yet been peer-reviewed, and the researchers themselves caution that conclusions should be treated as preliminary. The sample was small, geographically concentrated, and skewed toward young adults in academic settings. A single institution's preprint does not rewrite neuroscience. But it does not exist in isolation, either. Several other lines of research, conducted by different teams at different institutions with different methodologies, are pointing in the same uncomfortable direction.
Consider what researchers at Microsoft and Carnegie Mellon University found when they turned their attention to working professionals rather than students. Their study, published in February 2025, surveyed 319 knowledge workers and found that the more humans lean on AI tools to complete their tasks, the less critical thinking they do — making it more difficult to call upon those skills when they are actually needed. The mechanism is almost deceptively simple. Higher confidence in generative AI was associated with less critical thinking, while higher self-confidence in one's own abilities was associated with greater critical thinking. Trust the machine more, think less. Trust yourself more, think more. It is not a paradox so much as a self-reinforcing loop, and it raises real questions about where that loop ends.
What makes the Microsoft-Carnegie Mellon findings particularly striking is what happens to the diversity of outputs when AI enters the picture. Workers relying on generative AI tended to produce a less diverse set of outcomes for the same task compared to those who worked without AI assistance. In other words, as individuals outsource their thinking, they do not just think less — they think more similarly. The rough edges, the idiosyncratic angles, the genuinely unexpected solutions that emerge from a mind working through friction — these get smoothed away. What is left is polished, serviceable, and increasingly interchangeable.
This homogenization concern is echoed in creativity research coming out of Canada. A study published in Scientific Reports and led by researchers at Université de Montréal found that while large language models outperform people of average creativity on divergent thinking tasks, highly creative individuals surpassed AI by a clear margin — with the gap actually widening among the top 25 and top 10 percent of participants. The study's lead author, psychologist Jay Olson, drew a direct line from these results to a practical warning: if highly creative people consistently use these models, they may begin generating less creative ideas. His conclusion was pointed — "maybe our creative thinking isn't something we should be offloading onto these models." The irony is that the people most at risk of this creative erosion are not those who lack imagination. They are the writers, designers, strategists, and problem-solvers who have the most to lose.
To understand why all of this happens, it helps to understand a concept that cognitive scientists have been studying for decades: cognitive offloading. Humans have always used external tools to reduce mental load — writing things down, using calculators, relying on GPS. The argument used to be that this was fine, even good, because it freed up cognitive resources for higher-order thinking. And to some degree, that remains true. But AI is different from a calculator or a notepad in ways that matter. Using a calculator does not fundamentally alter your ability to think. Using AI to generate email responses, answer questions on your behalf, or supply ideas for projects can fundamentally alter your ability to think through and carry out those tasks yourself. The distinction is between a tool that augments a specific, bounded function and one that substitutes for the thinking process itself.
Memory may be the clearest casualty in this picture. An experimental study had participants write essays with AI support and then perform recall tasks. Most participants who used a large language model struggled with the recall tasks, while only a small portion of those who wrote without AI encountered similar difficulty — results that point toward shallow encoding and an erosion of retrieval when AI handles cognitive tasks. The underlying neuroscience here is not complicated. Memory is consolidated through the effort of encoding — through the mental friction of struggling to find words, make connections, and organize thoughts. When AI eliminates that friction, it also eliminates the encoding. The information passes through you without leaving a mark.
A study involving 73 information science undergraduates at a Pennsylvania university divided participants into two groups: one engaged in pretesting before using AI, while the control group used AI directly. Results showed that pretesting improved retention and engagement, but prolonged AI exposure led to memory decline. The researchers concluded that AI works best as a scaffold rather than a replacement — a distinction that matters enormously in practice but gets lost in the daily rush to produce faster, with less effort.
Attention and focus tell a similar story. Research published in 2025 found that while AI-assisted solutions may lead to augmented productivity, they can concurrently undermine cognitive functions including critical thinking, creative problem-solving, and instinctual discernment — with prolonged AI use potentially imposing cognitive strain, attention depletion, information overload, and decision fatigue. The pattern will be familiar to anyone who has noticed how it feels to read a long article or sit with a difficult problem after weeks of consuming AI summaries and quick answers. The mental stamina required for sustained focus is, like any other capacity, something that grows with use and shrinks without it.
The education sector is arguably where these effects are most consequential, and most urgently being debated. Students who use AI tools for instant answers are not just getting their homework done faster — they are potentially bypassing the very cognitive processes that education is designed to develop. Research on LLM use in educational essay writing found that participants using AI showed weaker memory recall, less ownership of their work, and poorer neural and linguistic performance compared to other groups — and that session data revealed lingering effects of tool reliance even when participants switched to working without AI. The fact that these effects persist after the tool is removed is what elevates this from a minor academic concern to something worth taking seriously at a policy level.
None of this means that AI tools are inherently destructive, or that using ChatGPT to draft a work email is quietly dismantling your cognition. The picture is more nuanced than the alarming headlines suggest, and some researchers are careful to push back against the most sweeping interpretations of this evidence. Nature noted that scientists warn against reading too much into a small experiment that has generated widespread buzz — a fair caution in a media environment prone to turning preliminary findings into definitive verdicts. The MIT study's sample size, its demographic concentration, and its specific focus on essay writing all limit how broadly its conclusions can be applied. Real life is not a controlled experiment, and people use AI in wildly different ways, for wildly different purposes, with wildly different levels of critical engagement.
What the evidence does support is a more specific and tractable claim: that passive reliance on AI — the kind that involves accepting outputs without questioning, generating without thinking, and reading without encoding — carries measurable cognitive costs. The Microsoft-Carnegie Mellon research makes this distinction implicitly when it notes that workers with lower confidence in their AI tools actually engaged more critically with the work — they were more likely to interrogate the output, refine it, and apply their own judgment. Skepticism, it turns out, is cognitively protective. It keeps the brain in the loop.
Researchers writing in the peer-reviewed literature have introduced the concept of "AICICA" — AI chatbot-induced cognitive ability change — to describe a potential pattern of broader cognitive decline arising from overreliance on AI systems. Unlike calculators, which impact only specific cognitive areas like arithmetical computation, AI chatbots exhibit versatile applications across many domains, potentially influencing a broader range of cognitive abilities. This breadth is precisely what makes the current moment different from previous technological shifts. A GPS erodes spatial navigation. Social media reshapes attention. But a tool that substitutes for writing, reasoning, remembering, and deciding operates at a different level of cognitive penetration.
There are things individuals can do with this information that do not require abandoning every AI tool they use. The research consistently suggests that using AI as a starting point rather than an endpoint preserves more cognitive engagement — writing your own draft first, then consulting AI for refinement, rather than letting it generate the whole thing. Pretesting knowledge before turning to an AI tool, as the Pennsylvania study demonstrated, improves memory retention significantly. Maintaining a habit of questioning AI outputs, looking for errors, pushing back against conclusions, and integrating information with your own prior knowledge keeps the critical faculties exercised. It is not a question of using AI or not using it. It is a question of whether you remain the thinker, or gradually become the editor of someone else's thinking.
The deeper question, and one that the research has not yet fully addressed, is what happens over years rather than months. The MIT study tracked participants for four months. The Microsoft survey captured a snapshot of current workers. What cognitive science has not yet produced is a longitudinal portrait of a generation that has offloaded thinking to AI from early education onward — people for whom the friction of unaided cognition may never have been the norm in the first place. That study has not been done yet, because the generation it would need to study is still growing up with these tools in hand.
What seems increasingly clear, across the accumulating evidence, is that the brain responds to its environment with merciless efficiency. It allocates resources to what is demanded of it and withdraws them from what is not. A tool that reliably handles your hard thinking will, over time, be reflected in a brain that is less practiced at hard thinking. This is not a moral failing or a technological conspiracy. It is simply how neurological adaptation works, and it has been true for every cognitive prosthetic humans have ever invented. The difference now is the scope — the sheer range of cognitive tasks that AI can plausibly handle — and the speed with which those tools have become embedded in daily life.
The scientists are not saying ChatGPT is making you stupid. What they are saying is more precise, and in some ways more unsettling: it may be making you less practiced at being smart. The gap between those two things is worth paying careful attention to — which, given everything, may be the most important recommendation of all.
Further reading: MIT Media Lab — Your Brain on ChatGPT | Microsoft Research — AI and Critical Thinking | PMC — Cognitive Health and AI Chatbots
