MIT Study Finds AI Use Reprograms the Brain, Leading to Cognitive Decline

(publichealthpolicyjournal.com)

Comments

goalieca 3 September 2025
Anecdote here, but when I was in grad school, I was talking to a PhD student I respected a lot. Whenever he read a paper, he would try to write the code out and get it working. It would take me a couple of months, but he could whip it up in a few days. He explained to me that it was just practice, and the more you practice the better you become. He not only coded things quickly, he started analyzing papers quicker too and became really good at synthesizing ideas, knowing what worked and what didn't, and built up a phenomenal intuition.

These days, I'm fairly senior and don't touch code much anymore but I find it really really instructive to get my hands dirty and struggle through new code and ideas. I think the "just tweak the prompts bro" people are missing out on learning.

tomrod 3 September 2025
A few things to note.

1. This is arxiv - before publication or peer review. Grain of salt.[0]

2. 18 participants per cohort

3. 54 participants total

Given the low N and the likelihood that this is drawn from 18-22 year olds attending MIT, one should expect an uphill battle for replication and for generalizability.

Further, they are brain scanning during the experiment, which is an uncomfortable, out-of-the-norm experience, and the object of the study is easy for the participants to infer, if not directly known to them (each person knows whether they are using an LLM, search tools, or no tools).

> We thus present a study which explores the cognitive cost of using an LLM while performing the task of writing an essay. We chose essay writing as it is a cognitively complex task that engages multiple mental processes while being used as a common tool in schools and in standardized tests of a student's skills. Essay writing places significant demands on working memory, requiring simultaneous management of multiple cognitive processes. A person writing an essay must juggle both macro-level tasks (organizing ideas, structuring arguments), and micro-level tasks (word choice, grammar, syntax). In order to evaluate cognitive engagement and cognitive load as well as to better understand the brain activations when performing a task of essay writing, we used Electroencephalography (EEG) to measure brain signals of the participants. In addition to using an LLM, we also want to understand and compare the brain activations when performing the same task using classic Internet search and when no tools (neither LLM nor search) are available to the user.

[0] https://arxiv.org/pdf/2506.08872

LocalPCGuy 3 September 2025
This is a bad and sloppy regurgitation of a previous (and more original) source[1], and the headline and article explicitly ignore the paper authors' plea[2] to avoid using the paper to draw the exact conclusions this article says the paper draws.

The comments (some, not all) are also a great example of how cognitive bias can cause folks to accept information without doing a lot of due diligence into the actual source material.

> Is it safe to say that LLMs are, in essence, making us "dumber"?

> No! Please do not use the words like “stupid”, “dumb”, “brain rot”, "harm", "damage", "passivity", "trimming" and so on. It does a huge disservice to this work, as we did not use this vocabulary in the paper, especially if you are a journalist reporting on it

> Additional vocabulary to avoid using when talking about the paper

> In addition to the vocabulary from Question 1 in this FAQ - please avoid using "brain scans", "LLMs make you stop thinking", "impact negatively", "brain damage", "terrifying findings".

1. https://www.brainonllm.com/

2. https://www.brainonllm.com/faq

TheAceOfHearts 3 September 2025
Personally, I don't think you should ever allow the LLM to write for you or to modify / update anything you're writing. You can use it to get feedback when editing, to explore an idea-space, and to find any topical gaps. But write everything yourself! It's just too easy to give in and slowly let the LLM take over your brain.

This article is focused on essay writing, but I swear I've experienced cognitive decline when using AI tools a bit too much to help solve programming-related problems. When dealing with an unfamiliar programming ecosystem it feels so easy and magical to just keep copy / pasting error outputs until the problem is resolved. Previously solving the problem would've taken me longer but I would've also learned a lot more. Then again, LLMs also make it way easier to get started and feel like you're making significant progress, instead of getting stuck at the first hurdle. There's definitely a balance. It requires a lot of willpower to sit with a problem in order to try and work through it rather than praying to the LLM slot machine for an instant solution.

sudosteph 3 September 2025
Meanwhile my main use cases for AI outside of work:

- Learning how to solder

- Learning how to use a multimeter

- Learning to build basic circuits on breadboards

- Learning about solar panels, MPPT, battery management systems, and different variations of Li-ion batteries

- Learning about the LoRa band / Meshtastic / how to build my own antenna

And every single one of these things I've learned I've also applied practically to experiment and learn more. I'm doing things with my brain that I couldn't do before, and it's great. When something doesn't work like I thought it would, AI helps me understand where I may have gone wrong, I ask it a ton of questions, and I try again until I understand how it works and how to prove it.

You could say you can learn all of this from YouTube, but I can't stand watching videos. I have a massive textbook about electronics, but it doesn't help me break down different paths to what I actually want to do.

And to be blunt: I like making mistakes and breaking things to learn. That strategy works great for software (not in prod obviously...), but now I can do it reasonably effectively for cheap electronics too.

Jimmc414 3 September 2025
There are a considerable number of methodology issues here for a study with this much traction. Only 54 participants split three ways into groups of 18, with just 9 people per condition in the crossover. Far too small for claims about "brain reprogramming."

The study shows different brain patterns during AI-assisted writing, not permanent damage. Lower EEG activity when using a tool is expected, just as one shows less mental-math activity when using a calculator.

The article translates temporary, task-specific neural patterns into "cognitive decline" and "severe cognitive harm." The actual study measured brain activity during essay writing, not lasting changes.

Plus, surface electrical measurements can't diagnose "cognitive debt" or deep brain changes. The authors even acknowledge this. Also, "83.3% couldn't quote their essay" equates to 15 out of 18 people?

planetmcd 3 September 2025
This article was probably written by AI, because no one with half a brain could read the study and come to the same conclusions.

Basically, participants spent less than half an hour, 4 times, over 4 months, writing some bullcrap SAT type essay. Some participants used AI.

So to accept the premise of the article, using an AI tool once a month for 20 minutes caused noticeable brain rot. It is silly on its face.

What the study actually showed is that people don't have an investment in, or strong memory of, output they didn't produce. Again, this is a BS essay written (mostly by undergrads) in 20 minutes, so not likely to be deep in any capacity. So to extrapolate: if you have a task that requires you to understand the output, you are less likely to have a grasp of it if you didn't help produce it. This would also be true of work some other person did.

epolanski 3 September 2025
I can't but think this has to be tied to _how_ AI is used.

I actively use AI to research, question and argue a lot, this pushes me to reason a lot more than I normally would.

Today's example: recognize docs are missing for a feature; have AI explore the code to figure out what's happening; go back and forth for hours trying to find how to document, rename, refactor, improve, write Mermaid charts; stress over naming to keep it as simple as possible.

The only step I'm doing less of is exploration/search, because an LLM can process a lot more text than I can at the same time. But for every other step I am pushing myself to think more, and more profoundly, than I would without an LLM, because gathering the same amount of information would have been too exhausting otherwise.

Sure, it may have spared me from digging into Mermaid too, for what it's worth.

So yes, lose some, win others, albeit in reality no work would've been done at all without the LLM enabling it. I would've moved on to another mundane task such as "update i18n date formatting for Swiss German customers".

eviks 3 September 2025
No, vibe science is not powerful enough to determine "long-term cognitive harm", especially when it leans on such "technical wonders" as being "measurable through EEG brain scans."

> 83.3% of LLM users were unable to quote even one sentence from the essay they had just written

Not sure why you need to wire up an EEG; it's pretty obvious that they simply did _not_ write the essay (the LLM did it for them), and they likely didn't even read it, so it's no surprise that they don't remember what never properly passed through their own thinking apparatus.

puilp0502 3 September 2025
Isn't this a duplicate of https://news.ycombinator.com/item?id=44286277 ?

gandalfgeek 3 September 2025
The coverage of this has been so bad that the authors have had to put up an FAQ[1] on their website, where the first question is the following:

> Is it safe to say that LLMs are, in essence, making us "dumber"?

> No! Please do not use the words like “stupid”, “dumb”, “brain rot”, "harm", "damage", "brain damage", "passivity", "trimming", "collapse" and so on. It does a huge disservice to this work, as we did not use this vocabulary in the paper, especially if you are a journalist reporting on it.

[1]: https://www.media.mit.edu/projects/your-brain-on-chatgpt/ove...

misswaterfairy 3 September 2025
I can't say I'm surprised by this. The brain is, figuratively speaking, a muscle. Learning through successes and (especially) failures is hard work, though not without benefit, in that the trials and exercises your brain works through strengthen the 'muscle'.

Using LLMs to replace the effort we would've otherwise put into completing a task short-circuits that exercising function, and I would suggest it is potentially addictive, because it's a near-instant reward for little work.

It would be interesting to see a longitudinal study on the effect of LLMs on collective attention spans and academic scores, where testing is conducted with pen and paper.

infecto 3 September 2025
Everyone is different. I don't have a good grasp on the distribution of HN readers these days, but I know for myself, as a heavy user of LLMs, I am not sold on this. I am asking more questions than ever. I use it for proofreading and editing. But I can see the risk as a software engineer. I really appreciate tools like Cursor; I give it bite-size chunks and review. Using tools like Claude Code, though, it becomes a black box and I no longer feel at the helm of the ship. I could see that if you outsourced all thinking to an LLM there could be consequences. That said, I am not sold on the paper and suspect it's mostly hyperbole.

jennyholzer 3 September 2025
> In post-task interviews:

> 83.3% of LLM users were unable to quote even one sentence from the essay they had just written.

> In contrast, 88.9% of Search and Brain-only users could quote accurately.

> 0% of LLM users could produce a correct quote, while most Brain-only and Search users could.

Reminds me of my coworkers who have literally no idea what ChatGPT put into their PR from last week.

NiloCK 3 September 2025
Every augmentation is also an amputation.

Calculators reduced our capabilities in mental and pencil-paper arithmetic. Graphing calculators later reduced our capacity to sketch curves, and in turn, our intuition in working directly with equations themselves. Power tools and electric mixers reduced our grip strength. Cheap long distance plans and electronic messaging reduced our collective abilities in long-form letter writing. The written word decimated the population of bards who could recite Homer from memory.

It's not that there aren't pitfalls and failure modes to watch out for, but the framing as a "general decline" is tired, moralizing, motivated, clickbait.

colincooke 3 September 2025
It is worth noting that this study was tbh pretty poorly performed from a psychology/neuroscience perspective and the neuro community was kind of roasting their results as uninterpretable.

Their trial design and interpretation of results is not properly done (i.e. they are making unfair comparison of LLM users to non-LLM users), so they can't really make the kind of claims they are making.

This would not stand up to peer review in its current form.

I'm also saying this as someone who generally does believe these declines exist, but this is not the evidence it claims to be.

ticulatedspline 3 September 2025
Cognitive offload is nothing new, if you've been around for even a little while you've likely personally experienced it.

Just like a muscle will atrophy from disuse, skills and cognitive assets, once offloaded, will similarly atrophy. People don't memorize phone numbers, GPS gets you where you want to go, your IDE helps you along so seamlessly you could never code in a bare text editor, your TI-89 will do most of your math homework, and as a manager you direct people to do work and no longer do the work yourself.

We of course never really lower our absolute cognitive load by much; we just shift it. Each of those tools has its own knowledge base that is needed to use it, but sometimes we lose general skills in favor of esoteric ones.

While I may now possess esoteric skills in operating my GPS, setting waypoints, saving locations, entering coordinates, if I use it a lot I find I need it to get back to the hotel from just a few miles away, even if I've driven the route multiple times. I'm offloading learning the route to the GPS. My father, on the other hand, struggles to use one, so when he's away he pays a lot of attention to where he's going and remembers routes better.

Am I dumber than him? With respect to operating the device, certainly not. But if we both drove separately to a new location and you took the GPS from me once I got there, I'd certainly look a lot dumber getting lost trying to get back without my mental crutch. I didn't have to remember the route, so I didn't: I offloaded that to the machine. And some people offload a LOT; pretty sure nobody ever drove into a lake because a paper map told them to.

Modern AI is only interesting insofar as it subsumes tasks that until now we would consider fundamental. Reading, writing, basic comprehension. If you let it, AI will take over these things and spoon feed you all you want. Your cognitive abilities in those areas will atrophy and you will be less cognizant of task elements where you've offloaded mental workload to the AI.

And we'll absolutely see more of this: people who are a whiz at using AI, know every app, get things done by reflex, but never learned or completely forgot how to do basic shit, like read a paper, order a salad off a menu in person, or book a flight. It'll be both funny and sad when it happens.

kawfey 3 September 2025
The "your brain on ChatGPT" is giving the same feel as DARE's "your brain on drugs" campagign, and we now see how that went. It immediately loses any credibility for me.

It wasn't immediately clear what they actually had the subjects do. It seems like they wrote an essay, which... duh? I would bet an LLM user's brain activity would look similar, if not identical, if the subjects were asked to have someone from another cohort write their essay for them.

owisd 3 September 2025
I was reading The Shallows recently, about how the Internet affects your brain. It's from 2009, so a bit out of date re: smartphones and very out of date re: LLMs, but it makes the case that the Internet, and hypertext generally, is 'bad' for you cognitively as a tool because it puts additional load on your working memory while offloading tasks from the parts of your brain that are useful for higher-level tasks and abstract thinking, so those more valuable skills atrophy. It contrasts this with the calculator, which makes you "smarter" because it does the opposite: it frees up your working memory so you have more time to focus on high-level thought. Found it quite striking, because LLMs and smartphones seem most likely to fit in the hypertext category and not the calculator category, yet the calculator is exactly what Sam Altman likes to use as an analogy for LLMs.

variadix 3 September 2025
Seems obvious. If you don’t use it you lose it. Same thing happened with mental arithmetic, remembering phone numbers, etc. Letting an LLM do your thinking will make you worse at thinking.

DrNosferatu 3 September 2025
If you blindly trust it instead of using it as an iterative tool, I guess…

But didn’t pocket calculators present the same risk / panic?

Brian_K_White 3 September 2025
It's probably an effect of the transition period, where today people are using AIs to meet the work expectations and metrics of yesterday.

At some point ai will probably be like calculators where once everyone is using them for everything, that will be a new and different normal from today, and the expectations and the way of judging quality etc will be different than today.

Once everyone is doing the same one weird trick as you, it's no longer useful. You can no longer pretend to be a developer or an artist etc.

There will still be a sea of bottom-feeders doing the same thing, but they will just be universally recognized as cheap junk. And that's actually fine, kinda. There is a place and a use for cheap junk that just barely does something, the same as a cheap junky screwdriver or whatever.

ergonaught 3 September 2025
No idea whether this holds up, but the human body is all about conditioning and maximizing energy efficiency, so it should at least be unsurprising if true.

My vehicle has a number of self-driving capabilities. When I used them, my brain rapidly stopped attending to the functions I'd given over, to the extent that there was a "gap" before I noticed it was about to do the wrong thing. On resumption of performing that work myself, it was almost as if I had forgotten some elements of it for a moment while my brain sorted it out.

No real reason to think that outsourcing our thinking/writing/etc will cause our brains to respond any differently. Most of the "reasoned" arguments I see against that idea seem based on false equivalences.

bgwalter 3 September 2025
I tried to see what the hype is about and translated one build system to another using "AI". The result was wrong, bloated and did not work. I then used smaller steps like the prompt geniuses recommend. It was exhausting, still riddled with errors, like a poor version of copy & paste.

Most importantly, I did not remember anything (which is a good thing because half of the output is wrong). I then switched to Stackoverflow etc. instead of the "AI". Suddenly my mental maps worked again, I recalled what I read, programming was fun again, the results were correct and the process much faster.

badbart14 3 September 2025
I remember this paper from when it came out a couple months ago. It makes a lot of sense: the use of tools like ChatGPT essentially offloads the thinking processes in your brain. I really like the analogy to time under tension they talk about in https://www.theringer.com/podcasts/plain-english-with-derek-... (they also discuss this study and some of its flaws/results).

rusbus 3 September 2025
Does anyone else find it incredibly ironic that this article summarizing the paper was obviously written with AI?

All the headings and bullets and phrases like "The findings are clear:" stick out like a sore thumb.

pjio 3 September 2025
First step out of this mess: use AI only to proofread or get a second opinion, but not to write the whole thing.

ramesh31 3 September 2025
I think like a lot of people here, my posture towards AI usage over the last 2 years has gone from:

"Won't touch it, I'd never infect my codebase with whatever garbage that thing could output" -> ChatGPT for a small function here or there -> Cursor/Copilot style autocomplete -> Claude Code fully automating 90% of my tasks.

It felt like magic at first once reaching that last (current) point. In a lot of ways for certain things it still is. But it's becoming clearer and clearer that this will never be a silver bullet, and I'm ready to evolve further to "It's another tool in the toolbox to be applied judiciously when and where it makes sense, which it usually does not.". I've also come to greatly distrust anything an LLM says that isn't verified by a domain expert.

I've also felt a great amount of the joy in my work go away over this time, much like the artisans of old who were forced to sit back and supervise as the automated machines taking over their craft churned out crappier versions of their work, faster. There's more to this than just being an old fart who doesn't want to change. We all got into this field for a reason, and a huge part of that reason is that it brings us joy. Without that joy we are going to burn out quickly, and quality is going to nosedive.

flanbiscuit 3 September 2025
> and diminished sense of ownership over their own writing.

Anecdotally, this is how I felt when I tried out AI agents to help me write code (vibe coding). I always review the code and ask it to break things down into smaller steps, but because I didn't actually write and think of the code myself, I don't have it all in my brain. Sure, I can spend a lot of time really going through it and building my mental model, but it's not the same (for me).

But this is also how I felt when I managed a small team once. When you start to manage more and code less, you have to let go of the fact that you have more intimate knowledge of the codebase and place that trust in your team. But at least you have a team of humans.

AI agentic coding is like shifting your job from developer to manager. Like the article that was posted yesterday said: 'treating AI like a "junior developer who doesn't learn"' [1,2].

One good thing I like about AI is that it's forcing people to write more documentation. No more complaining about that.

1. https://www.sanity.io/blog/first-attempt-will-be-95-garbage

2. https://news.ycombinator.com/item?id=45107962

SkyBelow 3 September 2025
The main issue I see is that the methodology section of the paper limited the total time to 20 minutes. Is this a study of using LLMs to write an essay for you, or of using LLMs to help you write an essay? To be fair, LLMs can't be explicitly switched between those two modes, so the distinction is left up to the user in how they engage.

Thinking about it myself, and looking at the questions and time limits, I'm not sure how I would navigate that distinction given only 20 minutes. The way I would use an LLM to aid me in writing an essay on the topic wouldn't fit within the time limit, so even with an LLM I would likely stick to brain-only except in a few specific cases (forgetting how to spell a word, or forgetting the name for a concept).

So this study is likely applicable to similarly timed settings, like letting students use LLMs on a test, but that's a use I would have already seen as extremely problematic for learning to begin with (granted, it is still worthwhile to find evidence backing even the 'obvious' conclusions).

lo_zamoyski 3 September 2025
Why is this surprising? "Use it or lose it" may be a cliche, but it's true; if you don't keep some faculty conditioned, it gets "rusty". That's the general principle, so it would be surprising if this were an exception.

The age of social media and constant distraction already atrophies the ability to maintain sustained focus. Who reads a book these days, never mind a thick book requiring struggle to master? That requires immersion, sustained engagement, persevering through discomfort, and denying yourself indulgence in all sorts of temptations and enticements to get a cheap fix. It requires postponed gratification, or a gratification that is more subtle and measured and piecemeal rather than some sharp spike. We become conditioned in Pavlovian fashion, more habituated to such behavior, the more we engage in such behavior.

The reliance on AI for writing is partly rooted in the failure to recognize that writing is a form of engagement with the material. Clear writing is a way of developing knowledge and understanding. It helps uncover what you understand and what you don't. If you can't explain something, you don't know it well enough to have clear ideas about it. What good does an AI do you - you as a knowing subject - if it does the "writing" for you? You, personally, don't become wiser or better. You don't become fit by watching others exercise.

This isn't to say AI has no purpose, but our attitude toward technology is often irresponsible. We think that if we have the power to do something, we are missing out by not using it. This is boneheaded. The ultimate measure is whether the technology is good for you in some particular use case. Sometimes, we make prudential allowances for practical reasons. There can be a place for AI to "write" for us, but there are plenty of cases where it is simply senseless to use. You need to be prudent, or you end up abusing the technology.

TYPE_FASTER 3 September 2025
I used to know a bunch of phone numbers by heart. I haven't done that since I got a cellphone. Has that had an impact on my ability to memorize things? I have no idea.

I have recently been finding it noticeably more difficult to come up with the word I'm thinking of. Is this because I've been spending more time scrolling than reading? I have no idea.

mansilladev 3 September 2025
“…our cognitive abilities and creative capacities appear poised to take a nosedive into oblivion.”

Don’t sugarcoat it. Tell us how you really feel.

pfisherman 3 September 2025
Big caveat here is how people are using the LLMs. Here they were using them for things like information recall and ideation. LLMs as producer and human as editor / curator. They did not test another (my preferred) mode of LLM use - human as producer and LLM as editor / curator.

In this mode of use, you write out all your core ideas as stream of consciousness, bullet points or whatever without constraints of structure or style. Like more content than will make it into the essay. And then have the LLM summarize and clean it up.

Would be curious to see how that would play out in a study like this. I suspect that the subjects would not be able to quote verbatim, but would be able to quote all the main ideas and feel a greater sense of ownership.

Insanity 3 September 2025
So, logically, I know this is the case. I can feel it happen to myself when I use an LLM to generate any kind of work. Although I rarely use it for coding, as my job is at a higher level (designs etc.), if I have the LLM write part of a trade-off analysis, I'll remember it less and be less engaged.

What's really bothering me though, is that I enjoy my job less when using an LLM. I feel less accomplished, I learn less, and I overall don't derive the same value out of my work.. But, on the flip side, by not adopting an LLM I'll be slower than my peers, which then also impacts my job negatively.

So it's like being stuck between a rock and a hard place - I don't enjoy the LLM usage but feel somewhat obligated to.

Eawrig05 4 September 2025
This study is so limited in scope that the title is really misleading: "AI Use Reprograms the Brain" is not a fair assessment of the study. The study focuses on one question: what is the effect of relying on an LLM to write your essay? The answer: it makes you forget how to write a good essay. I mean, I think it's obvious that if you rely on an LLM to write for you, you effectively lose the skill of writing. But what if you use an LLM to teach you a concept? Would this also lead to a cognitive decline? I don't know the answer, but I think that is a question that ought to be explored.

j45 3 September 2025
The gap I see is that the definition of "AI use" is not clearly delineated between passive use (similar to consumption) and active use.

Passive AI use, where you let something else think for you, will obviously cause cognitive decline.

Active use of AI as a thought partner, where you learn as you go, feels different.

The issue with studying 18-22 year olds is that the prefrontal cortex (a center of logic, willpower, focus, reasoning, and discipline) is not fully developed until around 26. But that probably doesn't matter if the study is trying to make a point about technology.

The skill of telling fake information from real could also increase cognitive capacity.

teekert 3 September 2025
Anybody who has tried to shortcut themselves into a report on something using an LLM, and was then asked to defend the plans contained within it knows that writing is thinking. And if you outsource the writing, you do less thinking and with less thinking there is less understanding. Your mental model is less complete, less comprehensive.

I wouldn't call it "cognitive decline", more "a less deep understanding of the subject".

Try solving bugs in your vibe-coded projects... It's painful: you haven't learned anything while building something, and as a result you don't fully grasp how your creation works.

LLM are tools, but also shortcuts, and humans learn by doing ¯\_(ツ)_/¯

This is pretty obvious to me after using LLMs for various tasks over the past years.

ETH_start 3 September 2025
When I'm really using AI, my mind is pushed to its very limits. I'm forced to maintain context that is much more complex than anything I had to keep in working memory pre-AI. But it also feels easier because you don't have to do nearly as much thinking to get every given task done. So maybe I get lazier, not in how much I accomplish, but in how much effort I put forth. So if my previous working intensity applied with AI would let me finish 10x as much work, now I'm content with exerting half as much effort and getting 5x as much work done as my pre-AI self.

blackqueeriroh 3 September 2025
I’d encourage folks to listen to this podcast[1], or read the transcript, from two incredibly respected people: Dr. Cat Hicks, a psychologist who studies software teams, and Dr. Ashley Juavinett, a practicing and teaching neuroscientist. They note the many flaws in the study and discuss what actually good brain research would look like.

1: https://www.changetechnically.fyi/2396236/episodes/17378968-...

r3trohack3r 3 September 2025
In the people around me I’ve observed:

AI solves the 2-sigma problem when used correctly.

AI is extremely neurodegenerative when used incorrectly.

The people using it as a research assistant to discover quality sources they can dive into, and as a tutor while working through those resources, are getting smarter.

The people using it as an “oracle made from magic talking sand” are getting dumber.

To be fair, the same thing is true of the web in general, but not to the extreme I’ve been seeing with AI.

I’m predicting the bell curve of IQ is going to flatten quite a bit over the next decade, as people shift two sigma in both directions.

sigbottle 3 September 2025
The obviously obvious caveats apply: intentional use is good, lazy use is bad, etc.

I've found it both helpful and dangerous. It's great for expanding scope, obviously: a greater search engine.

But I've also significantly noticed further some of the "harmful patterns" I guess that I would not have noticed about... myself? For example, AI is way too eager to "solve things" when given a prompt, even if you give it an abstract one. It's unable to take a step back and just.... think?

And hey, I notice that I do that too! Lol.

It's helped me realize more refined "stages" of thinking I guess, even beyond just "plan" and "solve".

But for sure a lot of the time I'm just lazy and ask AI to just "go do it" and turn off critical thinking, hoping that it can just 1 shot the problem instead of me breaking it down. Sometimes it genuinely works. Often it doesn't.

I think if I stay way more intentional with my thinking, I can use it to good use. Which will probably reduce AI usage - but it's the first principles of real critical thinking, not the usage of AI.

---

These kinds of studies remind me of when my parents told me "stop getting addicted to games" as a kid. Sure, anyone can observe the effects, but it takes real brains to try to understand them from first principles. Addiction went away in a flash once I understood the principles, lol.

whatamidoingyo 3 September 2025
I've been seeing people use LLMs to reply to people on Facebook. Like, they'll just be having a general discussion, and then reply as ChatGPT. I don't know if they think it makes them look smart; I think it has the complete opposite effect.

Not many people can perform mental arithmetic beyond single-digit numbers. Just plug it into a calculator...

We're at the point of people plugging their thoughts into an LLM and having it do the work for them... what's going to happen to thinking?

Mistletoe 3 September 2025
The future for humans worries me a lot. What evolutionary pressures will exist to keep us intelligent? We are already seeing IQ drop alarmingly across the world. Now AI comes in from the top rope with the steel chair?

https://www.ncbi.nlm.nih.gov/search/research-news/3283/

dns_snek 4 September 2025
That would not surprise me at all given what I've observed in a couple of people who outsource the thinking part to LLMs. One of them has dropped at least 20 IQ points and went from being able to grasp complex concepts with ease to needing an LLM to confirm that indeed, 2+2=4 (only somewhat hyperbolic).
digitcatphd 3 September 2025
So users are more detached from their work? How does this correspond with cognitive decline? Wouldn't it need to be cross-referenced in other areas besides the task at hand? Seems a bit of a headline-grabbing study to me. Personally I find thinking with an LLM helps me take a more structured and unbiased approach to my thought process.
babycheetahbite 3 September 2025
Does anyone have any suggestions for approaches they are taking to avoid the potential for this? Something I did recently in ChatGPT's 'Instructions' box (so far I have only used ChatGPT) is asking it to "Make me think through the problem before just giving me the answer," along with a few other similar notes.
hoppp 3 September 2025
Chatting with vibe coders on reddit, I can definitely tell.. although my hunch is that a lot of people "not smart" enough to learn to program will be entering the field calling themselves programmers.

I think maybe they are project managers, since the programming is outsourced to AI, but the idea doesn't seem to catch on there.

wslh 3 September 2025
Calculators either? [1]. To be fair, we can find articles in favor and against the same tool.

[1] https://www.cell.com/trends/cognitive-sciences/abstract/S136...

gandalfgeek 3 September 2025
The title of the study is provocatively framed and the actual findings don't live up to it. I made a short video explaining it-- https://www.youtube.com/watch?v=hLDCi0VwyiQ
grim_io 3 September 2025
I have never used LLM's to write essays, so I can't comment on that.

What I can comment on is how valuable and energizing it is for me to cooperatively code with LLM's using agents.

I find it sad to hear when someone finds this experience disappointing, and I wonder what could go wrong to make it so.

lif 3 September 2025
What are the costs of convenience? Surely most LLM use by consumers leans into that heavily.
rekrsiv 3 September 2025
I believe this is true for literally anything that replaces practice. We're meant to build muscle memory for things through repetition, but if we sidestep the repetition by farming it out to another process, we never build muscle memory.
vonneumannstan 3 September 2025
No different from Socrates complaining that students using writing would ruin their memory.
siliconc0w 3 September 2025
Isn't it obvious that you use your brain less to generate an essay with AI vs writing it manually?

I think what you'd want to measure is someone completing a task manually versus someone completing n times as many tasks with a copilot.

CuriouslyC 3 September 2025
This does not mesh with my personal experience. I find that AI reduces task noise that prevents me from getting in the flow of high level creative/strategic thinking. I can just plan algorithms/models/architectures and very quickly validate, test, iterate and always work at a high level while the AI handles syntax and arcane build processes.

Maybe it's my natural ADHD tendencies, but having that implementation/process noise removed from my workflow has been transformational. I joke about having gone super saiyan, but it's for real. In the last month, I've gotten 3 papers in pre-print ready state, I'm working on a new model architecture that I'm about to test on ARC-AGI, and I've gotten ~20 projects to initial release or very close (several of which concretely advance SOTA).

jugg1es 3 September 2025
All of the nay-sayers in the comments here are thinking about this from the POV of a person who reached intellectual maturity without LLMs and now use it as a force multiplier, and rightly so.

However, I think that take is too short-sighted and doesn't take into account the effect that these products have on minds that have not yet reached maturity. What happens when you've been using ChatGPT since grade school and have effectively offloaded all the hard stuff to AI through college? Those people won't be using it as a force multiplier - they will be using it to perform basic tasks. Ray-Ban sells glasses now with LLMs built in with a camera and microphone so you can constantly interact with it all day. What happens when everyone has one of these devices and use it for everything?

kelsey98765431 3 September 2025
Misleading title; the article explicitly says this applies when AI is used to cheat on essays.
m3kw9 3 September 2025
Is the calculator/Excel a bad thing? I'm OK with not having fast calculation (cognitive decline in that area) as it frees me to do other things that crop up as a result of the speed.
stevenjgarner 3 September 2025
This MIT study does not seem to address whether AI use causes true cognitive decline, or simply shifts the role of cognition from "doing the task" to "managing the task"?
j4hdufd8 3 September 2025
"The Court did recognize that divesting Chrome and Android would have gone beyond the case’s focus on search distribution, and would have harmed consumers and our partners."

Absolute idiots

tqwhite 3 September 2025
What a load of crap. I don't believe it for one second. Also, AI has only been an important influence for about twenty minutes.

Here's what I think: AI causes you to forget how to program but causes you to learn how to plan.

Also, AI enhances who you are. Dummies get dumber. Smarties get smarter.

But that's not proven. It's anecdote. And I don't believe anyone knows what is really happening and those that claim to are counterproductive.

m3kw9 3 September 2025
For those not critical of what AI says, this will be a bigger issue: they will just bypass their own decision-making and paste the AI response, likely atrophying that thought process.
yayitswei 3 September 2025
Management roles have always involved outsourcing cognitive work to subordinates. Are we seeing a cognitive decline there too? Maybe delegation was the original misalignment problem.
briandw 3 September 2025
This is the standard response to any new technology. Socrates called books the death of knowledge; in the 19th century there was a moral panic about girls reading novels, etc., etc.
tsoukase 3 September 2025
Cognitive decline in already grown-up brains. Decline in intelligence in growing brains, so the reverse Flynn effect carries on for a few more years.
bentt 3 September 2025
I believe this just based on my experience. I've also noticed that the rewards I feel from programming are stolen, and there's this conflicting feeling of accomplishment without the process. It's maybe a bit like taking mind-altering drugs in that they create reward artificially.

Much of what keeps me going with work is the reward loop. This changes it fundamentally and it's a bit frightening how compelling the actual productivity is, versus the psychological tradeoff of not getting the reward through the typical process of problem solving.

ChrisArchitect 3 September 2025
Paper from June.

Discussion then: https://news.ycombinator.com/item?id=44286277

shironandonon_ 3 September 2025
aren’t those with higher intellect at greater risk of depression?

I’m going to use 2x the amount of AI that I was planning to use today.

Kuinox 3 September 2025
Remember, they only measured that the less time you spend on a task, the less you remember it.
lawlessone 3 September 2025
Counterpoint: I just asked chatgpt and it says i'm the smrtest boy and very handsome
WalterBright 3 September 2025
The same thing with your body. Use a car instead of walking, and your body declines.
nperez 3 September 2025
I feel like this sort of thing will be referenced for comic relief in future talks about hysteria at the dawn of the AI era.

The article actually contains the sentence "The machines aren’t just taking over our work—they’re taking over our minds." which reminds me more of Reefer Madness than an honest critique of modern tech.

amelius 3 September 2025
Isn't intelligence -> asking the right questions?

Rather than coming up with the right answers?

rozab 3 September 2025
This study itself, and the media coverage of it, are shockingly bad. I wrote a bunch about it at the time and I don't really want to do that again, but here is the low-down:

- This is not a longitudinal study. Each participant did four 20-minute sessions; it just happens that the total study took 4 months.
- The paper does not imply long-term harm of any kind; they just measured brain connectivity during the short tasks.
- It is not surprising that when asked to use an LLM to write an essay, participants don't remember it. They didn't write it.
- It is not surprising they showed less brain activity. They were delegating the task to something else. They were asked to.
- I think the authors of the paper deliberately attempted to obscure this. Q7 on p30 is "LLM group: If you copied from ChatGPT, was it copy/pasted, or did you edit it afterwards?" This has been removed from the results section entirely, and other parts of the results do not match the supposed methodology.
- The whole paper is extremely sloppy, with grammar mistakes, inconsistencies, and nonsensical charts. Check out Figure 29...

darajava 3 September 2025
I'm really not advocating for people to push out reams of AI drivel and not learn anything while doing it, but of these three groups which ones are likely to be the most effective?

The ability to easily edit in word processors surely atrophied people's ability to really reason out what they wanted to write before committing it to paper. Is it sad that these traits are less readily available in the human populace? Sure. Do we still use word processors anyway because of the tremendous benefits they have? Of course. Similar could be said for spellcheckers, tractors, calculators, power tools, etc.

With LLMs, it's so much quicker to access a tremendous breadth of information, as well as drill down and get a pretty good depth on a lot of things too. We lose some things by doing it this way, and it can certainly be very misused (usually in a fairly embarrassing way). We need to keep it human, but AI is here to stay and I think the benefits far exceed the "cognitive decline" as mentioned in this journal.

FilosofumRex 4 September 2025
This study is by Media Lab, which along with Sloan School, Econ, and newly minted Schwarzman College of Computing are not on par with the old school MIT!

Besides, academics are bitter since LLMs are better at teaching than they are!

nzach 3 September 2025
> 0% of LLM users could produce a correct quote, while most Brain-only and Search users could

I think a better interpretation would be to say that LLMs gives people the ability to "filter out" certain tasks in our brains. Maybe a good parallel would be to point out that some drivers are able to drive long distances on what is essentially an "auto-pilot". When this happens they are able to drive correctly but don't really register every single action they've taken during the process.

In this study you are asking for information that is irrelevant (to the participant). So, I think it is expected that people would filter it out if given the chance.

[edit] Forgot to link the related xkcd: https://xkcd.com/1414/

hopelite 3 September 2025
What a rather ironic headline that generalizes across all "AI use", while the story is about a study specifically on "essay writing tasks". But that kind of slop is just par for the course for journalists, and always has been.

But it does highlight that this mind-slop decline is not new in any way even if it may have accelerated with the decline and erosion of standards.

Think of it what you want, but if the standards that produced a state everyone enjoys and benefits from are done away with, that state will inevitably start crumbling all around you.

AI is not really unusual in this manner, other than maybe that it is squarely hitting a group and population like public health policy journalists and programmers that previously thought they were immune because they were engaged in writing. Yes, programmers are essentially just writers.

asimovfan 3 September 2025
Writing long texts for school is stupid and it is a skill that is in practice purely developed in order to do homework. I am not surprised it immediately declines as soon as the necessity is removed.
MarkusWandel 3 September 2025
Muscles atrophy from lack of use - as an aging cyclist with increasing numbers of e-bikes all around, I think I may some day have to use one because of age, but what are all these younger people doing, cheating themselves out of exercise?

And so it is with many things. I wrote cursive right through the end of my high school years, but while I can type well on a computer, I have trouble even writing block lettering without mistakes now, and cursive is a lost cause.

Ubiquitous electronic calculators have eroded the heroic mental calculation skills of old. And now artificial "thinking machines" to do the thinking for you cause your brain to atrophy. Colour me surprised. The Whispering Earring story was mentioned here just recently but is totally topical.

https://croissanthology.com/earring

timhigins 3 September 2025
This was published by an anti-vaxxer/vaccine denier and also seems to be AI-generated. Would recommend linking to the original study instead. The homepage of the site includes articles like "Do Viruses Exist?" "(POLL) 96% support federal control of DC to fight crime" "Autism Spectrum Disorders: Is Immunoexcitotoxicity the Link to the Vaccine Adjuvants? The Evidence" and so does his twitter page: https://x.com/NicHulscher.
lowbloodsugar 3 September 2025
I mean, I felt the same way about people who built things with Visual Basic instead of C or assembly, back in the day. Then there were super smart people who were doing critical things in C/C++ and using VB to make a nice UI.

AI is no different. Most will use it and not learn the fundamentals. There's still lots of work for those people. Then some of us are doing things like looking at the state machines that Rust async code generation produces, or inspecting what the Java JIT is producing, and still others are hacking ARM assembly. I use AI to take care of the boring bits, just as writing a nice UI in C++ was tedious back in 1990, so we used VB for that.

footy 3 September 2025
there's going to be an avalanche of dementia for the generations that outsource all their thinking to LLMs
arzig 3 September 2025
Honestly the only use I’ve found for AI so far is for executing refactorings that are mechanical but don’t fit nicely into the rename/move or multi-cursor mode.

I’ll do it once or twice, tell the llm to do it and reference the changes I made and it’s usually passable. It’s not fit for anything more imo.

rogerkirkness 3 September 2025
This article is written by AI. The em dashes and 'Don't just X, but Y' logic is a classic ChatGPT writing pattern in particular.
tim333 3 September 2025
I'm not at all convinced that "AI Use Reprograms the Brain, Leading to Cognitive Decline".

Some of the points, like that LLM users couldn't remember what they wrote and felt disconnected from it, are kind of, well, duh. Obviously that applies to anything written by someone or something else. If that's the level of argument, I very much doubt it supports the "LLM leads to cognitive decline" hypothesis.

I mean, you won't learn as much having an LLM write an essay as writing it yourself, but you can use LLMs and still write essays or whatever. I doubt LLMs are any worse for your head than daytime TV or suchlike.

gowld 3 September 2025
This research is based on people being given 20 minutes to research and write an "essay"? (Or, in the Brain-only case, write an "essay" without doing any research.)

How is that not utter garbage? You're comparing text that is barely more than a forum comment, and noticing that people who spend the short time thinking and writing are engaged in a different activity from people who spend the time using research tools, and a different activity again from people who spend the time asking an AI (and waiting for it) to generate content.

agigao 3 September 2025
"Skill atrophy" will be the two words that very much define the tech industry in 2025.

And it is something we need to talk about loudly, but I guess it wouldn't crank up the number of followers or the valuation of AI grifters.

plutoh28 3 September 2025
Is it just me or does this paper read like it was run through ChatGPT? Kind of ironic if true.
iphone_elegance 3 September 2025
well now that explains HN
feverzsj 3 September 2025
"@gork Is this true?"
hnpolicestate 3 September 2025
I've stopped thinking to formulate content. I now think to prompt.

This makes complete sense, though. We're simply trying to automate the human thinking process, like we use technology to automate or hand off everything else.

ath3nd 3 September 2025
That explains a lot of Hacker News lately. /s

Like everything else in our lives, cognition is "use it or lose it". Outsourcing your decision-making and critical thinking to a fancy autocomplete with sycophantic tendencies and incapable of reasoning sure is fun, but as the study found, it has its downsides.

quotemstr 3 September 2025
"Studies" like this bite at the ankles of every change in information technology. Victorians thought women reading too many magazines would rot their minds.

Given that AI is literally just words on a monitor, just like the rest of the internet, I have a strong prior it's not "reprogram[ming]" anyone's mind, at least not in any manner beyond what, e.g., heavy Reddit use might.