Developing our position on AI (recurse.com)
218 points by jakelazaroff | 23 July 2025 | 69 comments
Comments
Kinda funny, but my current feeling about it is different from a lot of people's.
I did a lot of AI-assisted coding this week and felt that, if anything, it wasn't faster, but it led to higher quality.
I would go through discussions about how to do something, it would give me a code sample, I would change it a bit to "make it mine", ask if I got it right, get feedback, etc. Sometimes it would use features of the language or the libraries I didn't know about before so I learned a lot. With all the rubber ducking I thought through things in a lot of depth and asked a lot of specific questions and usually got good answers -- I checked a lot of things against the docs. It would help a lot if it could give me specific links to the docs and also specific links to code in my IDE.
If there is some library that I'm not sure how to use, I will load the source code into a fresh copy of the IDE and start asking questions in that IDE, not the one with my code. Given that it can take a lot of time to dig through code and understand it, having an unreliable oracle can really speed things up. So I don't see it as a way to get things done quickly, but more like pairing with somebody who has very different strengths and weaknesses from me, and as with pair programming, you get better quality. This week I walked away with an implementation that I was really happy with, and I learned more than if I'd done all the work myself.
This was a really fascinating project to work on because of the breadth of experiences and perspectives people have on LLMs, even when those people all otherwise have a lot in common (in this case, experienced programmers, all Recurse Center alums, all professional programmers in some capacity, almost all in the US, etc). I can't think of another area in programming where opinions differ this much.
> RC is a place for rigor. You should strive to be more rigorous, not less, when using AI-powered tools to learn, though exactly what you need to be rigorous about is likely different when using them.
This brings up an important point for a LOT of tools, which many people don't talk about: namely, with a tool as powerful as AI, there will always be a minority of people with a healthy and thoughtful attitude towards its use, but a majority who use it improperly, because its power is too seductive and human beings on average are lazy.
Therefore, even if you "strive to be more rigorous", you WILL be part of a minority helping to drive a technology that is just too powerful to make any positive impact on the majority. The majority will suffer, because they need an environment where they are forced not to cheat in order to learn and have basic competence, which I'd argue is far more crucial to a society than the top few having a lot of competence.
The individualistic will say that this is an inevitable price for freedom, but in practice, I think that's misguided. Universities, for example, NEED to monitor the exam room, because otherwise cheating would be rampant, even if there is a decent minority of students who would NOT cheat, simply because they want to maximize their learning.
With tools as powerful as AI, we need to think beyond our individualistic tendencies. The disciplined will often tout their balanced philosophy as justification for such tool use, as this Recurse post is doing here, but what they are forgetting is that by promoting such a philosophy, they lend more legitimacy to the use of AI, which the world at large is not capable of handling.
In a fragile world, we must take responsibility beyond ourselves, and not promote dangerous tools even if a minority can use them properly. This is why I am 100% against AI – no compromise.
> One particularly enthusiastic user of LLMs described having two modes: “shipping mode” and “learning mode,” with the former relying heavily on models and the latter involving no LLMs, at least for code generation.
Crazy that I agreed with the first half of the sentence and was totally thrown off by the end. To me, “learning mode” is when I want the LLM. I’m in a new domain and I might not even know what to google yet, what libraries exist, what key words or concepts are relevant. That’s where an LLM shines. I can see basic generic code that’s well explained and quickly get the gist of something new. Then there’s “shipping mode” where quality is my priority, and subtle sneaky bugs really ought to be avoided—the kind I encounter so often with AI-written code.
The e-bike analogy in the article is a good one. Paraphrasing: use it if you want to cover distance with low effort, but if your goal is fitness, the e-bike is not the way to go.
It’s a thin line to walk for me, but I feel that the whole “skill atrophy” aspect is the hardest thing not to slip into.
What I’ve personally liked about these tools is that they give me ample room to explore and experiment with different approaches to a particular problem because then translating a valid one into “the official implementation” is very easy.
I’m a guy who likes to DO things to validate assumptions: if there’s some task where something should be written concurrently to be efficient, and then we need some post-processing to combine the results, etc., well, before Claude Code, I’d write a scrappy prototype (think a single MVC “slice” through all the distinct layers, but all in a single Java file) to experiment, validate assumptions and uncover the unknown unknowns.
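For a concrete flavor of what such a throwaway single-file prototype might look like, here is a minimal sketch (class and method names are made up for illustration, not taken from any real project): fan the work out across a thread pool, then combine the results in one post-processing step.

    // Hypothetical single-file prototype: fan tasks out across a thread
    // pool, then combine the results in one post-processing step.
    import java.util.List;
    import java.util.concurrent.CompletableFuture;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.stream.Collectors;

    public class ScrappyPrototype {
        // Stand-in "work": in a real experiment this might hit a service
        // or parse a file.
        static String fetchChunk(int id) {
            try {
                Thread.sleep(100); // simulate I/O latency
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
            return "chunk-" + id;
        }

        public static void main(String[] args) {
            ExecutorService pool = Executors.newFixedThreadPool(4);
            try {
                // Fan out: each chunk becomes an independent task.
                List<CompletableFuture<String>> futures = List.of(1, 2, 3, 4, 5).stream()
                        .map(id -> CompletableFuture.supplyAsync(() -> fetchChunk(id), pool))
                        .collect(Collectors.toList());

                // Fan in: block for all results, then post-process.
                String combined = futures.stream()
                        .map(CompletableFuture::join)
                        .collect(Collectors.joining(", "));
                System.out.println("combined: " + combined);
            } finally {
                pool.shutdown();
            }
        }
    }

The value isn’t the code itself; it’s that running something like this surfaces the concurrency questions before anything gets formalized.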
It’s how I approach programming and always will. I think writing a spec as an issue or ticket about something without getting your hands dirty will always be incomplete and at odds with reality. So I write, prototype and build.
With a “validated experiment” in hand, I’d still need a lot of cleanup and post-processing to make it production-ready. Now it’s a prompt!
The learning is still the process of figuring things out and validating assumptions. But the “translation to formal code” part is basically solved.
Obviously, it’s also a great unblocking mechanism when I’m stuck on something, be it a complex query, or me FEELING that an abstraction is wrong but not seeing a better one, etc.
(Full disclosure: I have a lot of respect for RC and have thought about applying to attend myself. This will color my opinion.)
I really enjoyed this article. The numerous anecdotes from folks at RC were great. In particular, thanks for sharing the video of voice coding [1].
This line in particular stood out to me, and it matches how I think about LLMs myself:
"One particularly enthusiastic user of LLMs described having two modes: “shipping mode” and “learning mode,” with the former relying heavily on models and the latter involving no LLMs, at least for code generation."
Sometimes when I use Claude Code I either put it in Plan Mode or tell it not to write any code, and just rubber duck with it until I come up with an approach I like, and then write the code myself. It's not as fast as writing the plan with Claude and asking it to write the code, but it offers me more learning.
[1]: https://www.youtube.com/watch?v=WcpfyZ1yQRA
Is anyone John Henry-ing this question and having parallel teams build the same product at the same time?
Such a thoughtful and well-written article. One of my biggest worries about AI is its impact on the learning process of future professionals, and this feels like a window into the future, hinting at the effect on unusually motivated learners (a tiny subset of people overall, of course). I appreciated the even-handed, inquisitive tone.
I feel like John Holt, the author of Unschooling, who is quoted numerous times in the article, would not be too keen on seeing his name in a post that legitimizes a technology that uses inevitabilism to insert itself into all domains of life.
--
"Technology Review," the magazine of MIT, ran a short article in January called
"Housebreaking the Software" by Robert Cowen, science editor of the "Christian
Science Monitor," in which he very sensibly said: "The general-purpose home
computer for the average user has not yet arrived.
Neither the software nor the information services accessible via telephone are
yet good enough to justify such a purchase unless there is a specialized need.
Thus, if you have the cash for a home computer but no clear need for one yet,
you would be better advised to put it in liquid investment for two or three
more years." But in the next paragraph he says "Those who would stand aside
from this revolution will, by this decade's end, find themselves as much of an
anachronism as those who yearn for the good old one-horse shay." This is mostly
just hot air.
What does it mean to be an anachronism? Am I one because I don't own a car or a
TV? Is something bad supposed to happen to me because of that? What about the
horse and buggy Amish? They are, as a group, the most successful farmers in the
country, everywhere buying up farms that up-to-date high-tech farmers have had
to sell because they couldn't pay the interest on the money they had to borrow
to buy the fancy equipment.
Perhaps what Mr. Cowen is trying to say is that if I don't learn how to run the
computers of 1982, I won't be able later, even if I want to, to learn to run
the computers of 1990. Nonsense! Knowing how to run a 1982 computer will have
little or nothing to do with knowing how to run a 1990 computer. And what about
the children now being born and yet to be born? When they get old enough, they
will, if they feel like it, learn to run the computers of the 1990s.
Well, if they can, then if I want to, I can. From being mostly meaningless, or,
where meaningful, mostly wrong, these very typical words by Mr. Cowen are in
method and intent exactly like all those ads that tell us that if we don't buy
this deodorant or detergent or gadget or whatever, everyone else, even our
friends, will despise, mock, and shun us the advertising industry's attack on
the fragile self-esteem of millions of people. This using of people's fear to
sell them things is destructive and morally disgusting.
The fact that the computer industry and its salesmen and prophets have taken
this approach is the best reason in the world for being very skeptical of
anything they say. Clever they may be, but they are mostly not to be trusted.
What they want above all is not to make a better world, but to join the big
list of computer millionaires.
A computer is, after all, not a revolution or a way of life but a tool, like a
pen or wrench or typewriter or car. A good reason for buying and using a tool
is that with it we can do something that we want or need to do better than we
used to do it. A bad reason for buying a tool is just to have it, in which case
it becomes, not a tool, but a toy.
On Computers
Growing Without Schooling #29
September 1982