LLM-powered tools amplify developer capabilities rather than replacing them

(matthewsinclair.com)

Comments

skydhash 21 April 2025
> Traditionally, coding involves three distinct “time buckets”:

> Why am I doing this? Understanding the business problem and value

> What do I need to do? Designing the solution conceptually

> How am I going to do it? Actually writing the code

> For decades, that last bucket consumed enormous amounts of our time. We’d spend hours, days or weeks writing, debugging, and refining. With Claude, that time cost has plummeted to nearly zero.

That last part is actually the easiest, and if you're spending an inordinate amount of time there, that usually means the first two were not done well or you're not familiar with the tooling (language, library, IDE, test runner,...).

There's some drudgery involved in manual code editing (renaming variables, extracting functions,...) but those are already solved in many languages with IDEs and indexers that automate them. And many editors have programmable snippet support. I can genuinely say that in all of my programming projects, I've spent more time understanding the problem than writing code. I've even spent more time reading library code than writing my own.

The few roadblocks I've hit when writing code were solved by configuring my editor.

ttul 21 April 2025
I started my career as a developer in the 1990s and cut my teeth on C++, moving on to Python, Perl, Java, etc. in the early 2000s. Then I did management roles for about 20 years and was no longer working at the “coal face”, despite having learned some solid software engineering discipline in my early days.

As an old geezer, I appreciate very much how LLMs enable me to skip the steep part of the learning curve you have to scale to get into any unfamiliar language or framework. For instance, LLMs enabled me to get up to speed on using Pandas for data analysis. Pandas is very tough to get used to unless you emerged from the primordial swamp of data science along with it.

So much of programming is just learning a new API or framework. LLMs absolutely excel at helping you understand how to apply concept X to framework Y. And this is what makes them useful.

Each new LLM release makes things substantially better, which makes me substantially more productive, unearthing software engineering talent that was long ago buried in the accumulating dust pile of language and framework changes. To new devs, I highly encourage focusing on the big picture software engineering skills. Learn how to think about problems and what a good solution looks like. And use the LLM to help you achieve that focus.

Aperocky 21 April 2025
> Experience Still Matters

My personal opinion is that experience now matters a lot more.

A lot of the time, the subtle mistakes an LLM makes, or the wrong directions it takes, can only be corrected by experience. LLMs also don't tend to question their own past decisions, and will stick with them unless explicitly told.

This means LLM-based projects accumulate subtle bugs unless there is a human in the loop who can rip them out, and once a project has accumulated enough subtle bugs it generally becomes unrecoverable spaghetti.

scrlk 21 April 2025
> The developers who thrive in this new environment won’t be those who fear or resist AI tools, but those who master them—who understand both their extraordinary potential and their very real limitations. They’ll recognise that the goal isn’t to remove humans from the equation but to enhance what humans can accomplish.

I feel like LLMs are just the next step on the Jobs analogy of "computers are bicycles for the mind" [0]. And if these tools are powerful bicycles available to everyone, what happens competitively? It reminds me of a Substack post I read recently:

> If everyone has AI, then competitively no one has AI, because that means you are what drives the differences. What happens if you and LeBron start juicing? Do you both get as strong? Can you inject your way to Steph’s jumpshot? What’s the differentiator? This answer is inescapable in any contested domain. The unconventionally gifted will always be ascendant, and any device that’s available to everyone manifests in pronounced power laws in their favor. The strong get stronger. The fast get faster. Disproportionately so. [1]

[0] https://youtu.be/ob_GX50Za6c?t=25

[1] https://thedosagemakesitso.substack.com/p/trashbags-of-facts...

strict9 21 April 2025
A lot of good points here which I agree with.

Another way to think about it is SWE agents. About a year ago Devin was billed as a dev replacement, with the now common reaction that it's over for SWEs and that it's no longer useful to learn software engineering.

A year later, there have been a large number of layoffs that impacted software devs. There have also been a lot of fluff statements attributing the layoffs to increased efficiency as a result of AI adoption. But is there a link? I have my doubts, and think it's more related to interest rates and the business cycle.

I've also yet to see any AI solutions that negate the need for developers. Only promises from CEOs and investors. However, I have seen how powerful it can be in the hands of people that know how to leverage it.

I guess time will tell. In my experience the current trajectory is LLMs making tasks easier and more efficient for people.

And hypefluencers, investors, CEOs, and others will continue promising that just around the corner is a future in which human software developers are obsolete.

antirez 21 April 2025
I agree that AI-powered programming can give you a boost, and I would agree with the points made in the post if they were not made about Claude Code or other "agentic" coding tools. The human-LLM boosting interaction exists particularly when you use the LLM in its chat form, where you inspect and reshape what the LLM produced, both by editing the code and by explaining things in words, and where (this is my golden rule) you only move code from the LLM environment to your environment after inspecting it and manually cutting and pasting it.

Claude Code and similar systems have a different goal: to allow somebody to create a project without much coding at all, and the direction there is to observe mostly the result of the code rather than how it is written and the design decisions behind it. This is fine with me, I don't tell people what to do, and many people can't code; with systems like that they can build systems to a certain degree. But I believe that today, 21 April 2025 (tomorrow it may change), the strict human+LLM collaboration on the code, where the LLM is mostly a tool, is what produces the best results possible, assuming the human is a good coder.

So I would say there are three categories of programmers:

1. Programmers that just want to prompt, using AI agents to write the code.

2. Programmers, like me, that use LLM as tools, writing code by hand, letting the LLM write some code too, inspecting it, incorporating what makes sense, using the LLM to explore the frontier of programming and math topics that are relevant to the task at hand, to write better code.

3. Programmers that refuse to use AI.

I believe that today category "2" is what has a real advantage over the other two.

If you are interested in this perspective, a longer form of this comment is in a video on my YouTube channel. Enable the English subtitles if you can't understand Italian.

https://www.youtube.com/watch?v=N5pX2T72-hM

jillesvangurp 22 April 2025
Like all tool improvements in software engineering, LLMs simply increase the demand for software as fast as software engineers can step up to use the new capabilities the tools provide. This is not a closed world, and it's nothing new. It's not like we're all going to sit on our hands now. Improvements in tools (like LLMs) enable individuals to do more and more complicated things, so the minimum acceptable complexity simply goes up along with that. And this will allow a wider group of individuals to start messing around with software.

When the cost for something goes down, demand for that thing goes up. That fancy app that you never had time to build is now something that you are expected to ship. And that niche feature that wasn't really worth your time before, completely different story now that you can get that done in 30 minutes instead of 1 week.

Individual software engineers will simply be expected to be able to do a lot more than they can do currently without LLMs. And somebody that understands what they are doing will have a better chance of delivering good results than somebody that just asks "build me a thingy conforming to my vague and naive musings/expectations that I just articulated in a brief sentence". You can waste a lot of time if you don't know your tools. That too is nothing new.

In short, everything changes, and that will generate more work, not less.

bcrosby95 21 April 2025
This may depend on the individual, but for me "How am I going to do it" is not actually writing code. It's about knowing how I'm going to do it before I write the code. After that point, it's an exercise in typing speed.

If I'm not 100% sure something will work, then I'll still just code it. If it doesn't work, I can throw it away and update my mental model and set out on a new typing adventure.

pjmlp 21 April 2025
Keep believing it is augmentation.

The end game is outsourcing: instead of teammates doing the actual programming from the other side of the planet, it will be done from inside the computer.

Sure, LLMs and agents are rather limited today, just as optimizing compilers were still a far-off dream in the 1960s.

ivape 21 April 2025
If we go with this analogy, we don't have advanced mech suits for this yet. To think an IDE is going to be the "visor", and copy-and-pasting the jury-rigged weapons on the mech, is probably not it. The future really needs to be Jarvis and that Iron Man suit, whatever the programming equivalent is.

"Hey I need a quick UI for a storefront", can be done with voice. I got pretty far with just doing this, but given my experience I don't feel fully comfortable in building the mech-suit yet because I still want to do things by hand. Think about how wonky you would feel inside of a Mech, trying to acclimate your mind to the reality that your hand movements are in unity with the mech's arm movements. Going to need a leap of faith here to trust the Mech. We've already started attacking the future by mocking it as "vibe coding". Calling it a "Mech" is so much more inspiring, and probably the truth. If I say it, I should see it. Complete instant feedback, like pen to paper.

gigel82 22 April 2025
When working in mature codebases and coordinating across teams, I'd say the time I spend "coding" is less than 5%. I do use GitHub Copilot to make coding faster, and sometimes to shoot ideas around for debugging, but overall its impact on my productivity has been in the low single digits.

I'm wondering if I'm "holding it wrong", or if all of these anecdotes of 10x productivity are coming from folks building prototypes or simple tools for a living.

alganet 21 April 2025
Expectation: mech suit with developer inside.

Reality: a saddle on the developer's back.

They really want a faster horse.

dist-epoch 21 April 2025
> Chess provides a useful parallel here. “Centaur chess” pairs humans with AI chess engines, creating teams that outperform both solo humans and solo AI systems playing on their own. What’s fascinating is that even when AI chess engines can easily defeat grandmasters, the human-AI combination still produces superior results to the AI alone. The human provides strategic direction and creative problem-solving; the machine offers computational power and tactical precision.

Can we stop saying this? It hasn't been true for more than 15 years.

causal 21 April 2025
The article is correct about the current state of using LLMs, but I didn't see an explanation of WHY they are like this; just more "how".

I'm curious about the fundamental reason why LLMs and their agents struggle with executive function over time.

xbmcuser 22 April 2025
The biggest problem I have with all these articles about what LLMs are and are not is that LLMs are still improving rapidly; thousands if not hundreds of thousands of people are working on that. As LLMs pass a new threshold, we get another round of denial, anger, bargaining, depression, and acceptance from another group of writers.

sheepscreek 21 April 2025
Yep. It's the ultimate one-person team. With the human playing the role of team lead AND manager. Sometimes even the PM. You want to earn big bucks? Well, this is the way now. Or earn little bucks and lead a small but content life. The choice is yours.

mohsen1 22 April 2025
I’ve had the opposite experience from some of the skepticism in this thread—I’ve been massively productive with LLMs. But the key is not jumping straight into code generation.

Instead, I use LLMs for high-level thinking first: writing detailed system design documents, reasoning about architecture, and even planning out entire features as a series of smaller tasks. I ask the LLM to break work down for me, suggest test plans, and help track step-by-step progress. This workflow has been a game changer.

As for the argument that LLMs can’t deal with large codebases—I think that critique is a bit off. Frankly, humans can’t deal with large codebases in full either. We navigate them incrementally, build mental models, and work within scoped contexts. LLMs can do the same if you guide them: ask them to summarize the structure, explain modules, or narrow focus. Once scoped properly, the model can be incredibly effective at navigating and working within complex systems.

So while there are still limitations, dismissing LLMs based on “context window size” misses the bigger picture. It’s not about dumping an entire codebase into the prompt—it’s about smart tooling, scoped interactions, and using the LLM as a thinking partner across the full dev lifecycle. Used this way, it’s been faster and more powerful than anything else I’ve tried.
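
To make "scoped interactions" concrete, here is a minimal sketch of that summarize-then-narrow workflow in Python. It is only an illustration: `llm_complete` is a hypothetical stand-in for whatever LLM client you use, not a real API.

    from pathlib import Path

    def llm_complete(prompt: str) -> str:
        # Hypothetical stand-in: wire this up to whatever LLM client you use.
        raise NotImplementedError

    def summarize_structure(repo: Path) -> str:
        # Step 1: give the model only the file tree, not the whole codebase.
        tree = "\n".join(str(p.relative_to(repo)) for p in sorted(repo.rglob("*.py")))
        return llm_complete(f"Summarize the structure of this codebase:\n{tree}")

    def explain_module(repo: Path, module: str, summary: str) -> str:
        # Step 2: narrow focus to one module, carrying the summary as scoped context.
        source = (repo / module).read_text()
        return llm_complete(
            f"Project overview:\n{summary}\n\n"
            f"Explain what {module} does and how it fits in:\n{source}"
        )

The point is that each call carries only the context the step needs, which is exactly how a human navigates a codebase too.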

interpol_p 22 April 2025
> Why am I doing this? Understanding the business problem and value

> What do I need to do? Designing the solution conceptually

> How am I going to do it? Actually writing the code

This article claims that LLMs accelerate the last step in the above process, but that is not how I have been using them.

Writing the code is not a huge time sink — and sometimes LLMs write it. But in my experience, LLMs have assisted partially with all three areas of development outlined in the article.

I often dump a lot of context into Claude or ChatGPT and ask "what are some potential refactorings of this codebase if I want to add feature X + here are the requirements."

This leads to a back-and-forth session where I get some inspiration about possible ways to implement a large-scale change to introduce a feature that may be tricky to fit into an existing architecture. The LLM here serves as a notepad or sketchbook of ideas, one that can quickly read an existing API that I may have written a decade ago.

I also often use LLMs at the very start to identify problems and come up with feature ideas. Something like "I would really like to do X in my product, but here's a screenshot of my UI and I'm at a bit of a loss for how to do this without redesigning from scratch. Can you think of intuitive ways to integrate this? Or are there other things I am not thinking of that may solve the same problem."

The times when I get LLMs to write code are the times when the problem is tightly defined and it is an insular component. When I let LLMs introduce changes into an existing, complex system, no matter how much context I give, I always end up having to go in and fix things by hand (with the risk that something I don't understand slips through).

otabdeveloper4 21 April 2025
More a Halloween costume than a mech suit.

Like a toy policeman costume so you can pretend you have authority and you know what you're doing.

sdeframond 23 April 2025
Questions:

How far can you go with the free tiers? Do I need to invest much in order to develop a good feeling of what is possible and what is not?

Also, if experience matters, how do we help junior developers get the coding experience needed to master LLMs? While, as TFA says, this might not replace developers, it does seem like it will make things harder for inexperienced people.

(Edit: typos)

meander_water 22 April 2025
There's one point missing here - the gap between the speed at which code can be generated and the speed at which it can be read and understood. You can't skim or speed-read code. You may be able to generate an entire codebase in minutes, but it takes significantly longer than that to understand a large codebase's intricacies well enough to refactor it and add new features. This is why you see vibe-coded codebases with tons of dead code, inefficient or unsafe use of functions, etc. I think when you're working with LLMs the temptation is to go as fast as they allow, but this is a trap.

yieldcrv 22 April 2025
Although I hear that the junior-level market is in shambles, what I've seen so far is more demand for developers. (This isn't data-driven; maybe layoffs are happening and headcounts aren't growing, but my niche isn't having a problem at the moment.)

Basically a lot of projects that simply wouldn't have happened are now getting complex MVPs done by non-technical people, which gets them just enough buy-in to move it forward, and that's when they need developers.

nopinsight 22 April 2025
One way to think of this is as the Baumol effect* within software development.

Expert humans are still quite a bit better than LLMs at nuanced requirements understanding and architectural design for now. Actual coding will increasingly become a smaller and cheaper part of the process, while the parts where human input cannot be reduced as much will take up a larger proportion of time and cost.

* Not everything there applies, but much of it does. https://en.m.wikipedia.org/wiki/Baumol_effect

api 22 April 2025
That’s been my experience too. It’s like super autocomplete. Good for unit tests and boilerplate, but it does not do high level reasoning for you.

It also can't do the all-important thing: telling you what to build.

submeta 21 April 2025
> How LLM-powered programming tools amplify developer capabilities rather than replace them

This is my experience as well. You have to know what you want, how to intervene if things go in the wrong direction, and what to do with the result.

What I did years ago with a team of 3-5 developers I can now do alone using Claude Code or Cursor. But I need to write a PRD, break it down into features, epics, and user stories, let the LLM write code, and review the results. Vibe coding tools feel like half a dozen junior- to mid-level developers for a fraction of the cost.

crvdgc 22 April 2025
> The Centaur Effect

> even when AI chess engines can easily defeat grandmasters, the human-AI combination still produces superior results to the AI alone.

Is this still the case? I didn't find a conclusive answer, but intuitively it's hard to believe. With limitless resources, an AI can perform exhaustive search and thus cannot lose. Even with resource limits, something like AlphaZero can be very strong. Would AlphaZero + human beat pure AlphaZero?

agentultra 22 April 2025
I just like to know things and learn them.

If I’m encountering a new framework I want to spend time learning it.

Every problem I overcome on my own improves my skills. And I like that.

GenAI takes that away. Makes me a passive observer. Tempts me to accept convenience with a mask of improved productivity. When, in the long term, it doesn’t do anything for me except rob me of my skills.

The real productivity gains for me would come from better programming languages.

AlexCoventry 21 April 2025
Archive.org link (site is down for me): https://web.archive.org/web/20250421152532/https://matthewsi...

rpmisms 22 April 2025
I'm not great at actually writing code. I am a damn good software architect, though. Being able to pseudocode and make real code out of it has been amazing for me. It takes a lot of the friction out of writing really nice code. I love working in Ruby and Perl, but now I can write pseudo-Ruby and get excellent JS out of my input.

Workaccount2 21 April 2025
I question how much code, and what kind of code, is actually going to be needed when the world is composed entirely of junior engineers who can write 100 LOC a second.

Will it just be these functional cores that are the product, and users will just use an LLM to mediate all interaction with it? The most complex stuff, the actual product, will be written by those skilled in mech suits, but what will it look like when it is written for a world where everyone else has a mech suit (albeit less capable) on too?

Think of your mother running a headless Linux install with an LLM layer on top, and it being the least frustrating and most enjoyable computing experience she has ever had. I'm sure some are already thinking like this, and really it represents a massive paradigm shift in how software is written on the whole (and will ironically resemble the early days of programming).

_ink_ 22 April 2025
First it was Dev. Then it was DevOps. Soon it'll be DevOpsManQa.

Meneth 22 April 2025
Anything that amplifies a worker's speed will cause some layoffs if the amount of work needed doesn't change.

gyrovagueGeist 21 April 2025
How many people still play centaur chess?

sebastiennight 21 April 2025
In the current state of things, it's maybe more of a Justin Hammer mech suit than a Tony Stark mech suit.

th0ma5 22 April 2025
I've tried every way I can think of to get an LLM to generate valid code for the task below, but everything seems to require manual intervention. I've tried giving it explicit examples, I've tried begging, I've tried bribing, and I've tried agreeing on the prompt first, and there doesn't seem to be a way for me to get valid code out for this simple idea from any of Claude, Gemini, ChatGPT, etc.

> Write a concise Python function `generate_scale(root: int, scale_type: str) -> list[int]` that returns a list of MIDI note numbers (0-127 inclusive) for the given `root` note and `scale_type` ("major", "minor", or "major7"). The function should generate all notes of the specified scale across all octaves, and finally filter the results to include only notes within the valid MIDI range.

... So I typed all of the above in, and it basically said don't ever try to use an LLM for this, it doesn't know anything about music and is especially tripped up by it. And then it gave me an example that should actually work but didn't. It's wild, because it gets the actual scale patterns correct.
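
For reference, a straightforward hand-written take on that spec might look like the sketch below. The interval patterns are my assumed reading of the three scale types ("major7" taken as the chord tones), not a canonical answer.

    # Assumed interval patterns; "major7" is read here as the chord tones.
    SCALE_INTERVALS = {
        "major": [0, 2, 4, 5, 7, 9, 11],
        "minor": [0, 2, 3, 5, 7, 8, 10],  # natural minor
        "major7": [0, 4, 7, 11],
    }

    def generate_scale(root: int, scale_type: str) -> list[int]:
        pitch_class = root % 12
        # Build the pattern in every octave, then keep only valid MIDI notes.
        notes = [
            pitch_class + octave * 12 + interval
            for octave in range(11)  # 11 octaves more than cover 0-127
            for interval in SCALE_INTERVALS[scale_type]
        ]
        return sorted(n for n in notes if 0 <= n <= 127)

For example, generate_scale(60, "major7") yields every C, E, G, and B in the 0-127 range.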

m3kw9 22 April 2025
Except if you few-shot todo, streak, and exercise apps and calorie counters.

bionhoward 21 April 2025
Sounds great, as long as you don't make any product or service that competes with Claude. Can anyone name something in that category?

marstall 21 April 2025
Guessing the introduction of the mech suit reduced headcount on the loading deck ...

therebase 22 April 2025
I call BS. The way it is set up now is akin to digital dementia.

https://dev.to/sebs/agentic-dementia-5hdc