I stumbled upon Kidlin’s Law: “If you can write down the problem clearly, you’re halfway to solving it.”
This is a powerful guiding principle in today’s AI-driven world. As natural language becomes our primary interface with technology, clearly articulating challenges not only enhances our communication but also maximizes the potential of AI.
The async approach to coding has been most fascinating, too.
I will add, I've been using Repl.it *a lot*, and it takes everything to another level. Getting to focus on problem solving, and less futzing with hosting (granted it is easy in the early journey of a product) - is an absolute game changer. Sparking joy.
I personally use the analogy of a Mario Kart mushroom or star; that's how I feel using these tools. It's funny though, because when it goes off the rails, it really goes off the rails. It's also sometimes necessary to intercept decisions it will take; the babysitting can take a toll (because of the speed of execution). Having to deal with one stack was something; now we're dealing with potentially infinite stacks.
I'm loving the new programming. I don't know where it goes either, but I like it for now.
I'm actually producing code right this moment, where I would normally just relax and do something else. Instead, I'm relaxing and coding.
It's great for a senior guy who has been in the business for a long time. Most of my edits nowadays are tedious. If I look at the code and decide I used the wrong pattern originally, I have to change a bunch of things to test my new idea. I can skim my code and see a bunch of things that would normally take me ages to fiddle. The fiddling is frustrating, because I feel like I know what the end result should be, but there's some minor BS in the way, which takes a few minutes each time. It used to take a whole Stack Overflow search plus a think, recently it became a Copilot hint, and now... Claude simply does it.
For instance, I wrote a mock stock exchange. It's the kind of thing you always want to have, but because the pressure is on to connect to the actual exchange, it is often a leftover task that nobody has done. Now, Claude has done it while I've been reading HN.
Now that I have that, I can implement a strategy against it. This is super tedious. I know how it works, but when I implement it, it takes me a lot of time that isn't really fulfilling. Stuff like making a typo, or forgetting to add the dependency. Not big brain stuff, but it takes time.
Now I know what you're all thinking. How does it not end up with spaghetti all over the place? Well. I actually do critique the changes. I actually do have discussions with Claude about what to do. The benefit here is he's a dev who knows where all the relevant code is. If I ask him whether there's a lock in a bad place, he finds it super fast. I guess you need experience, but I can smell when he's gone off track.
So for me, career-wise, it has come at the exact right time. A few years after I reached a level where the little things were getting tedious, a time when all the architectural elements had come together and been investigated manually.
What junior devs will do, I'm not so sure. They somehow have to jump to the top of the mountain, but the stairs are gone.
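The mock exchange mentioned above isn't shown, but the idea is simple enough to sketch. Here's a hypothetical, minimal in-memory matching engine of the kind a strategy could be tested against (all names are illustrative, not from the commenter's code):

```python
from dataclasses import dataclass, field

@dataclass
class MockExchange:
    """Toy in-memory exchange: rests orders on a book and matches crossing ones."""
    bids: list = field(default_factory=list)   # (price, qty), best (highest) first
    asks: list = field(default_factory=list)   # (price, qty), best (lowest) first
    fills: list = field(default_factory=list)  # (price, qty) of executed trades

    def submit(self, side: str, price: float, qty: int) -> None:
        book, opposite = (self.bids, self.asks) if side == "buy" else (self.asks, self.bids)
        # Match against the opposite side of the book while prices cross.
        while qty and opposite:
            best_price, best_qty = opposite[0]
            crossed = price >= best_price if side == "buy" else price <= best_price
            if not crossed:
                break
            traded = min(qty, best_qty)
            self.fills.append((best_price, traded))
            qty -= traded
            if traded == best_qty:
                opposite.pop(0)
            else:
                opposite[0] = (best_price, best_qty - traded)
        if qty:  # rest any remainder on our side of the book, best price first
            book.append((price, qty))
            book.sort(key=lambda o: -o[0] if side == "buy" else o[0])

ex = MockExchange()
ex.submit("sell", 101.0, 5)
ex.submit("buy", 101.0, 3)   # crosses: fills 3 @ 101.0, leaves 2 resting on the ask
```

A real matching engine needs far more (order IDs, cancels, time priority, fill reports), but even a toy like this is enough to run a strategy against, which is the point of the anecdote.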
My theory on AI is it's the next iteration of google search, a better more conversational, base layer over all the information that exists on the internet.
Of course some people will lose jobs just like what happened to several industries when search became ubiquitous. (newspapers, phone books, encyclopedias, travel agents)
But IMHO this isn't the existential crisis people think it is.
It's just a tool. Smart, clever people can do lots of cool stuff with tools.
But you still have to use it,
Search has just become Chat.
You used to have to search, now you chat and it does the searching, and more!
Why are we counting the number of documents? It doesn't matter. What matters is putting together a plan and being able to articulate what you want. Then review and adjust and prompt again.
You have to know how software gets built and works. You can't just expect to get it right without a decent understanding of software architecture and product design.
This is something that's actually very hard. I'm coming to grips with that slowly, because it's always been part of my process. I'm both a programmer and a graphic designer. It took me a long while to recognize not everyone has spent a great deal of time doing both. Fewer yet decide to learn good software design patterns, or to study frameworks and open-source projects to understand the problems each of them is solving. It takes a LOT of time. It took me probably 10-15 years just to learn all of this. I've been building software for over 20 years. So it just takes time, and that's ok.
The most wonderful thing I see about AI is that it should help people focus on these things. It should free people from getting too far into the weeds and too focused on the code itself. We need more people who can apply critical thinking and design from a bird's eye perspective. We need people who can see the big picture.
I seem to have missed the part where he successfully prompted for security, internationalizability, localizability, accessibility, usability, etc., etc.
This is a core problem with amateurs pretending to be software producers. There are others, but this one is fundamental to acceptable commercial software and will absolutely derail vibe coded products from widespread adoption.
And if you think these aspects of quality software are easily reduced to prompts, you've probably never done serious work in those spaces.
A big part of this that people are not understanding is that a major part of the author's success is due to the fact that he clearly does not care at all how anything is implemented, mostly because he doesn't need to.
You get way farther when you have the AI drop in Tailwind templates or Shadcn for you and then just let it use those components. There is so much software outside that web domain though.
A lot of people just stop working on their AI projects because they don't realize how much work it's going to take to get the AI to do exactly what they want in the way that they want, and that it's basically going to be either you accept some sort of randomized variant of what you're thinking of, or you get a thing that doesn't work at all.
The input to output ratio is interesting. We are usually optimizing for volume of output, but now it’s inverted. I actually don’t want maximum output, I want the work split up into concrete, verifiable steps and that’s difficult to achieve consistently.
I've taken to co-writing a plan with requirements in Cursor, and it works really well at first. But as it makes mistakes and we use those mistakes to refine the document, eventually we are ready to “go”, and suddenly it's generating a large volume of code that directly contradicts something in the plan. Small annoyances, like its inability to add an empty line after markdown headings, have to be explicitly re-added and re-reminded.
I almost wish I had more control over how it was iterating. Especially when it comes to quality and consistency.
When I/we can write a test and it can grind on that is when AI is at its best. It’s a closed problem. I need the tools to help me, help it, turn the open problem I’m trying to solve into a set of discrete closed problems.
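One way to make that "closed problem" framing concrete: express the spec as a failing test and let the model grind until it passes. A hypothetical example, with `parse_duration` standing in for whatever function is being built:

```python
import re

def parse_duration(s: str) -> int:
    """Convert strings like '1h30m' or '45s' to a number of seconds."""
    units = {"h": 3600, "m": 60, "s": 1}
    return sum(int(n) * units[u] for n, u in re.findall(r"(\d+)([hms])", s))

def test_parse_duration():
    # The spec, written down as assertions: the model's only job
    # is to make these pass -- a closed problem with a clear signal.
    assert parse_duration("45s") == 45
    assert parse_duration("1h30m") == 5400
    assert parse_duration("2m5s") == 125
```

The test is the discrete, verifiable step the comment is asking for: the open question ("parse human-readable durations") becomes a loop the tool can run on its own.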
>I've been coding for long enough to remember when we carved HTML tables by hand. When CSS was a suggestion, not a lifestyle. When JavaScript was for mouseover effects and nothing else.
Cringe. The tech is half baked and the author is already fully committed to "this is the future, I am living in the future, I bake cookies while Claude codes."
Pure cringe. This confirms my earlier theories that everyone just wants to be a manager. You don't need to manage humans. You just want to be a manager.
The whole article could be summed down to I always wanted to be a manager and now I am a manager of bots.
I've been experimenting with model-based development lately, and this resonated strongly with me.
The section "What Even Is Programming Anymore?" hit on a lot of the thoughts and feels I've been going through. I'm using all my 25+ years of experience and CS training, but it's _not_ programming per se.
I feel like we're entering an era where we're piloting a set of tools, not hand crafting code. I think a lot of people (who love crafting) will be leaving the industry in the next 5 years, for better or worse. We'll still need to craft things by hand, but we're opening some doors to new methodologies.
And, right now, those methodologies are being discovered, and most of us are pretty bad at them. But that doesn't mean they're not going to be part of the industry.
I don’t know which models you guys are working with, but all of the SOTA ones I’ve tried in different configurations ended up producing mostly garbage.
I find myself having to spend more time guiding the model in the right direction and fixing its mistakes than I would’ve spent building it all myself.
Every time I read one of these stories I feel like maybe you guys have models from 2035, because the ones we have today seem to be useless outside of creating greenfield, simple React apps that just sort of work.
One thing I’ll say is that it’s been a real time saver for debugging. For coding, a huge waste of time. Even for tasks that are menial, repetitive, require no thinking etc. I find that it’s mostly crap.
I feel increasingly tired of reading excuse after excuse when you bring up that AI tools simply cannot solve anything beyond extremely basic problems.
It's always a mix of:
1. "Wait for the next models", despite models having all but plateaued for the past 3 years,
2. "It's so good for boilerplate code", despite libraries and frameworks being much better suited for this task, and boilerplate code being actually rare to write in the normal lifecycle of a project,
3. "You need to prompt it differently", glossing over the fact that to prompt it so it can do what you want it to do accurately it would take longer than not to use AI at all,
4. And the worst: "We don't know how to use those models yet"
Maybe the real reason it doesn't work is because IT JUST DOESN'T FUCKING WORK.
Why is it so unfathomable that a next token generator is gonna suck at solving complex problems? It is blindingly obvious.
Congratulations, you just passed project management class.
What you describe is exactly what a project manager does. Refines the technical, stories, organizes the development towards a goal.
This doesn’t feel like programming because it isn’t. It doesn’t feel like programming because you’re supervising. In the end, you are now a project manager.
this isn’t vibe coding. This is something completely new. I call it “flex coding.”
heck I built a full app in an afternoon AND I was a good dad?
> I'd wander into my office, check what Claude had built, test it real quick. If it worked, great! Commit and push. "Now build the server connection UI," I'd say, and wander back out.
Made breakfast. Claude coded.
Played with my son. Claude coded.
Watched some TV. Claude coded.
Every hour or so, I'd pop back in. Five minutes of testing. One minute of feedback.
There's a weird insecurity I've noticed cropping up. I want to design the codebase 'my way'. I want to decide on the fundamental data structures. But there's this worry that my preferred architecture is not massively better than whatever the machine comes up with. So by insisting on 'my way' I'm robbing everyone productivity.
I know most true programmers will vouch for me and my need to understand. But clients and project managers and bosses? Are they really gonna keep accepting a refrain like this from their engineers?
"either it gets done in a day and I understand none of it, or it gets done in a month and I fully understand it and like it"
To me it feels like I’m in the camp of people who has already figured it out. And I have now learned the hard way that it’s almost impossible to teach others (I organized several meetups on the topic).
The ability seems like pure magic. I know that there are others who have it very easy now, building even complex software with AI and delivering project after project to clients at record speed, at no less quality than before. But the majority of devs won’t even believe that it’s remotely possible, which also isn’t helping this style of building/programming mature.
I wouldn’t even call it vibe coding anymore. I think the term hurts what it actually is. For me it’s just a huge force multiplier, maybe 10-20x of my ability to deliver with my own knowledge and skills on a web dev basis.
- When I ask models to do defined tasks that I know how to do, and can tell them about the method but can't remember the details offhand, and then I check the answers, things work.
- When I attempt to specify things that I don't understand fully, the model creates rubbish 7 out of 10 times, and those episodes are irretrievable. About 30% of the time I get a hint of what I should do and can make some progress.
Not sure what the complaint is about. If the coding work has to be thrown away, we need to do that and move on. We did that many times earlier. We have thrown away hunting, farming, calculations by hand, cameras and so on. Coding work might get extinct for some use cases. Nothing wrong with it. Learn how to use your tools, assistants and godzillas.
The bigger issue: would there be a need for coding and software at all? Who would use them? Why are they using it? Are they buying something? Searching for info? The use case will see a revolution. The new use cases won't need the traditional kind of software. But AI can only produce traditional software.
Can I ask Claude to code up its clone for local use?
The "time dialation" is real. I mostly manage these days, yet my fun projects progress faster than they ever have, because I can prompt in the 2 minutes between meetings, and come back to significant progress.
> I'd wander into my office, check what Claude had built, test it real quick. If it worked, great! Commit and push.
Do people read the code? Or just test if it works and push?
To me, code is like a map that has to be clear enough so other humans can read it to navigate the territory (codebase). Even if it's just two – me and AI agent – working on the codebase, it's not much different from "me and another programmer". We both want to have updated mental model of how exactly code structured and how it works and why.
Using AI for coding and not reading the code sounds more like stopping being developer and self-promoting yourself to the manager of AI-programmers who trusts their craft completely.
The entire premise of using AI to build stuff is founded on the idea that building faster is somehow automatically better.
I’m starting to believe that’s not necessarily true. And if some study finds out later that stuff built slowly by hand is actually better in every way except time-to-market, then it means AI is not really a competitive edge, it’s just a Quality of Life improvement that allows software engineers to be even lazier. And at future price points of $200, $400, even $1000 a month per head, that becomes a hard sell for most companies. Might be easier to have engineers pay for their own AI if they want to be lazy. And of course whether they use AI or not, you can still measure productivity under the assumption that every engineer does…
"You’re using it wrong" arguments/hype articles showing up. Speculators love it. But in reality if you need to extol the benefits of AI, then is it really the user or the technology?
Honestly reminds me of the digital currency mania that busted a couple of years ago. Same types of articles popping up too.
Look I understand the benefits of AI but it’s clear ai is limited by the compute power of today. Maybe the dream this author has will be realized some day. But it won’t be today or in current generations lifespan.
> With enough AI assistants building enough single-purpose tools, every problem becomes shallow. Every weird edge case already has seventeen solutions. Every 2am frustration has been felt, solved, and uploaded.
> We're not drowning in software. We're wading in it. And the water's warm
Just sounds like GPT style writing. I’m not saying this blog is all written by GPT, but it sounds like it is. I wonder if those of us who are constantly exposed to AI writing are starting to adopt some of that signature fluffy, use-a-lot-of-words-without-saying-much kinda style.
Life imitates art. Does intelligence imitate artificial intelligence?? Or maybe there’s more AI written content out there than I’m willing to imagine.
(Those snippets are from another post in this blog)
So, is Protocollie actually any good? I don't have any means of evaluating the final product, but I feel like evaluating what ultimately stuck to the wall is a pretty important part of the tale.
How buggy is it? How long would it have taken to build something similar by hand?
I'm going to be overly picky about the subheading (which is an incidental aspect of TFA): “The future of software development might just be jazz. Everyone improvising. Nobody following the sheet music.”
That’s not jazz. Jazz being what it is, a lot of people in 2025 think it’s “everyone improvising,” but (outside of some free jazz) it’s quite structured and full of shared conventions.
Analogies work when you and your audience both understand the things being compared. In this case, the author doesn’t, and maybe some of the audience shares the same misperception, and so the analogy only works based on shared misunderstanding.
The analogy to jazz actually works better the more you know about it. But that’s accidental.
Eh, idk. First of all, the article is really wordy to say very few things. That just frustrated me a bit.
Second of all, it's easy to fart out some program in a few days vibe coding. How will that fare as more and more features need to be added on? We all used to say "Dropbox that's just FTP wrapped in a nice UI anyone can make that". This protocollie project seems to be a documentation viewer / postman for MCP. Which is cool, but is it something that would have taken a competent dev months to build? Probably not. And eventually the actual value of such things is the extensibility and integrations with various things like corporate SAML etc.
Will the vibe-code projects of today be extensible like that, enough to grab market share vs. the several similar versions and open-source versions anyone can make in a few days, as the author suggests? It can be hard to extend a codebase you don't understand because you didn't write it...
Based on how easy it is to trigger cool-down or throttle on Claude Code, I think people know how to build with AI. Or they’re trying really hard to figure it out. The race is on and it’s wide open.
There are a lot of gotchas with these new models. They get incredibly lazy if you let them. For example, I asked it to do a simple tally by year. I just assumed it's simple enough that I don't need to ask it to write code. It counted the first couple of years and just "guessed" the rest based on a pattern it noticed.
Sometimes, it feels like having a lazy coworker that you have to double check constantly and email with repeated details. Other times, I just sit there in awe of how smart it is in my weekly AGI moment and how it’s going to replace me soon.
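One way around the "guessing" failure mode described above is to have the model emit code for the tally rather than counting in its head, so the arithmetic is deterministic. A sketch of the kind of snippet you might ask for (the data here is hypothetical):

```python
from collections import Counter

# Hypothetical records: one date string per event.
events = ["2021-03-04", "2021-07-19", "2022-01-02", "2022-11-30", "2022-12-25"]

# Tally by year in code instead of letting the model eyeball a pattern.
by_year = Counter(date[:4] for date in events)
# by_year["2021"] == 2, by_year["2022"] == 3
```

The model writing five lines like this is cheap to verify; the model "counting" a long list itself is exactly where it gets lazy.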
I’m feeling limited by the tools right now. The software for using LLMs is an incongruent mess. For example, how can I have a global system prompt shared across all the apps I use my OpenRouter key with? I can’t, I have to set it in each application. How can I carry a conversation over from one app to another? How can I give all my apps access to the same set of MCP tools? Most of the apps I use don’t even properly support MCP yet.
It sounds like Claude Code is the best UX right now but I don’t want to be locked into a Claude subscription, I want to bring my own key and tap into whatever provider I want.
Reminds me of Sussman’s “We don’t know how to compute,” which essentially laments that we haven’t gone full Smalltalk with some Lisp sprinkled on top to get “soft computers.”
None of that is actually what makes my favorite tools work. It’s usually some nerds that never stopped using C/C++ and really know hardware.
The idea is to track all of the context of a project using git. It’s a CLI and MCP tool, the human guides it but the LLM contributes back to it as the project evolves
I used it to bootstrap the library itself, and have been using it more and more for context management of all sorts of things I care about
Building with LLMs is a bit like driving with GPS. You ever get that feeling the GPS is wrong or leading you along a difficult route? But for some reason you follow the GPS anyway? Maybe there's a name for the phenomenon. I notice it more and more.
The decision making parts of people's brains will atrophy. It will be interesting to see what will happen.
> There's this moment in every new technology where everyone pretends they know what they're doing.
Perhaps that is true, but without any examples I was immediately suspicious of this line.
> Either way, we're in this delicious middle ground where nobody can pretend expertise because the whole thing keeps changing under our feet.
Upon reflection this does in fact remind me of the early days of rocketry, when we were just reaching into the upper atmosphere and then orbit. Wild things were being tried because there were not yet any handrails: exploding a huge tank of water in the ionosphere just because, launching giant mylar balloons into orbit to try to bounce radar signals off of them, etc.
We are expecting a moment like when Sublime Text was born: the sidebar minimap, multiple cursors, blazing speed.
Right now it's all monetization, as if companies are ready to pour software-developer salaries into tools.
I imagine beginners will not have GPU-rich environments, and AI will not reach the mainstream the way traditional development did, unless something happens; I don't know what.
Right now, seniors love the complexity and the entry barrier, so they can occupy the top of the food chain. History has proven that does not last long.
In some scenarios, such as Airtable, AI is replacing docs and customer support, eliminating the learning curve.
Instead of walking away, I have worked on multiple projects at the same time. Prompt one project, let it run, prompt another project, let it run and go back to the first.
At what point does LLM output become more expensive than search? For example, the author made Protocollie. Could the LLM gather all similar apps that are already written to solve the prompt that birthed it, and then simply provide THOSE apps instead of figuring out how to code it anew? Sounds like this could be a moat that only hyperscalers could implement, and it would reduce their energy requirements drastically.
I like to use other parts of the codebase (or other codebases) as an example which helps a lot to maintain standards and consistency (though adds cost).
I am a bit disillusioned - I find mentoring humans satisfying, but I don't get the same satisfaction mentoring AI. I also think it's probably going to backfire by hamstringing the next generation and 'draining the competence' from the current.
> The Story Breakdown - Everything in 15-30 minute chunks. Why? Because that's roughly how long before Claude starts forgetting what we discussed ten minutes ago. Like a goldfish with a PhD.
This is interesting. Does Claude have a memory? Is this just a limit on the number of input tokens? It sounds like a fundamental misattribution of cause, but maybe I just don't understand the latest whizbang feature of Claude. Can anyone clarify?
How wonderful for all of us to be thrown back into “what are we doing” mode. Like when you were a teenager, hacking stuff you were just excited to build, with none of the “here’s how you should do this” we’ve burdened ourselves with over the years.
There is a fundamentally unfortunate reality here which is quite problematic.
Namely, you don’t deserve to be paid for working 8 hours if you only worked for 30 minutes over an eight hour period.
I don’t care if you personally agree with that or not, the reality is that businesses believe it.
That means, sooner or later there will be a great rebalancing where people will be required to do significantly more work; probably the work of other developers who will be fired.
It’s fun for home projects; but the somewhat depressing reality is that there is no chance in hell this (sitting around for 7 hours a day reading reddit while Claude codes) will fly in corporate environments; instead, you’re looking at mass layoffs.
So. Enjoy it while you can folks.
In the future you’ll be spending that 8 hours struggling to juggle the context and review 20 different tasks, not playing with your kids.
I want claude code on my phone running in a cloud vm so I can give feedback out on a trail somewhere and continue my three hour hike or bike ride with my family.
I find it's all about the joy of building with the subset you can be an expert in, and the AI telling you where the typo is and why it still won't work after fixing it.
Yawn. I wish HN would just ban AI slop articles until someone writes about an actual product that provides actual value to paying customers.
In fact what I really want to see is a successful product that no one realizes was built by AI vibes until after it was successful. Customers don’t give a shit how something was built.
Reading articles like this feels like being in a different reality.
I don't work like this, I don't want to work like this and maybe most importantly I don't want to work with somebody who works like this.
Also I am scared that any library that I am using through the myriad of dependencies is written like this.
On the other hand... if I look at this as some alternate universe where I don't need to directly or indirectly touch any of this... I am happy that it works for these people? I guess? Just keep it away from me
Nobody knows how to build with AI yet, and the corollary (which TFA concludes with) is that everybody should figure out for themselves how to best work with AI.
I've said it before and I'll say it again: there likely isn't a "golden workflow" or "generally accepted best practices" on how to code with AI. The new models and agentic capabilities seem to be very powerful, and they will conform to whatever methodologies you currently use with whatever project you're working on, but that may still be under-utilizing what they are truly capable of.
A true optimum may even require you to adjust the way you work, down to structuring your code and projects differently. In fact you may need to figure out different approaches based on the project, the language, the coding style, the model, the specific task at hand, even your personality. I am convinced this aspect is what's causing the bimodal nature of AI coding discussions: people who stuck at it and figured it out, or just got lucky with the right mix of model / project / task / methodology, are amazed at their newfound superpowers -- whereas people who didn't, are befuddled by the hype.
This may seem like a lot of work, but it makes sense if you stop thinking of this as just a tool and more like working with a new team-mate.
I have a local LLM router app with profiles that set up the right system prompts and the right MCPs so I can swap between toolsets as I work.
This would take time to write if I’m doing it myself so I decided to vibe code it entirely. I had this idea that a compiled language is less likely to have errors (on account of the compiler giving the LLM quicker feedback than me) and so I chose Tauri with TS (I think).
The experience has been both wonderful and strange. The app was built by Claude Code with me intermittently prompting it between actual work sessions.
What’s funny is the bugs.
If you ever played Minecraft during the Alpha days you know that Notch would be like “Just fixed lighting” in one release. And you’d get that release and it’d be weird like rain would now fall through glass.
Essentially the bugs are strange. At least in the MC case you could hypothesize (transparency bit perhaps was used for multiple purposes) but this app is strange. If the LLM configuration modal is fixed, suddenly the MCP/tool tree view will stop expanding. What the heck, why are these two related? I don’t know. I could never know because I have never seen the code.
The compile time case did catch some iterations (I let Claude compile and run the program). But to be honest, the promise of correctness never landed.
Some people have been systematic and documented the prompts they use but I just free flowed it. The results are outstanding. There’s no way I could have had this built for the $50 in Claude credits. But also there’s no way I could interpret the code.
> Maybe all methodology is just mutually agreed-upon fiction that happens to produce results?
Good news! *All of computer science is this way.* There’s nothing universally fundamental about transistors, or Turing machines, or OOP, or the HTTP protocol. They’re all just abstractions that fit, because they worked.
———
When should I stop learning and start building? My coworker wrote absolutely ATROCIOUS code, alone, for the past decade. But he was BUILDING that whole time. He’s not up to date on modern web practices, but who cared? He built.
I know it's an analogy that's probably been done to death already, but it truly feels like Bitcoin 2.0.
Back in the Bitcoin hype days, there were new posts here every single day about the latest and greatest Bitcoin thing. Everyone was using it. It was going to take over the world. Remember all the people on this very site that sincerely thought fiat currency was going away and we'd be doing all of our transactions with Bitcoin? How'd that work out?
It feels exactly the same. Now the big claims are that coding jobs are going away, or if you at least don't use it you'll be left behind. People are posting AI stories every day. Everyone is using it. People say it's going to transform the industry.
Back then there was greater motivation to evangelize Bitcoin, as you could get rich by convincing people to buy in, and it's just to a lesser degree now. People who work for AI companies (like the author), posting AI stuff, trying to drum up more people to give them views/clicks, buy their products.
And of course you'll have people replying to this trying to make the case for why AI coding is already a thing, when in reality those posts are once again going to be carbon copies of similar comments from the Bitcoin days "hey, you're wrong, I bought pizza with Bitcoin last night, it's already taking over, bud!"
It’s amazing to me all the Luddite developers who are “against” all this.
Completely new ways of programming are forming, completely new ways of computing and the best the luddites can do is be “against it”.
A revolution came along, a change in history and instead of being excited by the possibilities, joining in, learning, discovering, creating …… the luddites are just “against it all”.
I feel sorry for them. Why be in computing at all if you don’t like new technology?
Nobody knows how to build with AI yet
(worksonmymachine.substack.com) | 524 points by Stwerner | 19 July 2025 | 403 comments
Comments
I had stumbled upon Kidlin’s Law—“If you can write down the problem clearly, you’re halfway to solving it”.
This is a powerful guiding principle in today’s AI-driven world. As natural language becomes our primary interface with technology, clearly articulating challenges not only enhances our communication but also maximizes the potential of AI.
The async approach to coding has been most fascinating, too.
I will add, I've been using Repl.it *a lot*, and it takes everything to another level. Getting to focus on problem solving, and less futzing with hosting (granted it is easy in the early journey of a product) - is an absolute game changer. Sparking joy.
I personally use the analogy of a Mario Kart mushroom or star; that's how I feel using these tools. It's funny, though, because when it goes off the rails, it really goes off the rails lol. It's also sometimes necessary to intercept decisions it's about to make... babysitting can take a toll (because of the speed of execution). Having to deal with 1 stack was something... now we're dealing with potentially infinite stacks.
I'm actually producing code right this moment, where I would normally just relax and do something else. Instead, I'm relaxing and coding.
It's great for a senior guy who has been in the business for a long time. Most of my edits nowadays are tedious. If I look at the code and decide I used the wrong pattern originally, I have to change a bunch of things to test my new idea. I can skim my code and see a bunch of things that would normally take me ages to fiddle. The fiddling is frustrating, because I feel like I know what the end result should be, but there's some minor BS in the way, which takes a few minutes each time. It used to take a whole Stack Overflow search plus a think, recently it became a Copilot hint, and now... Claude simply does it.
For instance, I wrote a mock stock exchange. It's the kind of thing you always want to have, but because the pressure is on to connect to the actual exchange, it is often a leftover task that nobody has done. Now, Claude has done it while I've been reading HN.
Now that I have that, I can implement a strategy against it. This is super tedious. I know how it works, but when I implement it, it takes me a lot of time that isn't really fulfilling. Stuff like making a typo, or forgetting to add the dependency. Not big brain stuff, but it takes time.
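A mock exchange like the one described can start very small. Here is a hypothetical sketch (all names and the matching logic are invented for illustration; a real mock would mirror the actual exchange's protocol): a toy limit-order book that matches incoming orders against resting ones.

```python
from dataclasses import dataclass, field

@dataclass
class Order:
    side: str    # "buy" or "sell"
    price: float
    qty: int

@dataclass
class MockExchange:
    """Toy limit-order book: cross incoming orders against resting ones."""
    bids: list = field(default_factory=list)   # resting buy orders
    asks: list = field(default_factory=list)   # resting sell orders
    fills: list = field(default_factory=list)  # (price, qty) of each trade

    def submit(self, order: Order) -> None:
        # Pick which side of the book this order rests on, and which it trades against.
        book, opposite = (
            (self.bids, self.asks) if order.side == "buy" else (self.asks, self.bids)
        )
        # Naive matching: sweep any opposite order at an acceptable price.
        for resting in list(opposite):
            crosses = (
                order.price >= resting.price
                if order.side == "buy"
                else order.price <= resting.price
            )
            if crosses and order.qty > 0:
                traded = min(order.qty, resting.qty)
                self.fills.append((resting.price, traded))
                order.qty -= traded
                resting.qty -= traded
                if resting.qty == 0:
                    opposite.remove(resting)
        if order.qty > 0:
            book.append(order)  # rest the unfilled remainder
```

A strategy can then be tested against `submit` and `fills` without touching a live venue; for example, submitting a sell at 100.0 for 5 and then a buy at 101.0 for 3 produces one fill of 3 at 100.0 and leaves 2 resting on the ask side.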
Now I know what you're all thinking. How does it not end up with spaghetti all over the place? Well. I actually do critique the changes. I actually do have discussions with Claude about what to do. The benefit here is he's a dev who knows where all the relevant code is. If I ask him whether there's a lock in a bad place, he finds it super fast. I guess you need experience, but I can smell when he's gone off track.
So for me, career-wise, it has come at the exact right time. A few years after I reached a level where the little things were getting tedious, a time when all the architectural elements had come together and been investigated manually.
What junior devs will do, I'm not so sure. They somehow have to jump to the top of the mountain, but the stairs are gone.
Of course some people will lose jobs just like what happened to several industries when search became ubiquitous. (newspapers, phone books, encyclopedias, travel agents)
But IMHO this isn't the existential crisis people think it is.
It's just a tool. Smart, clever people can do lots of cool stuff with tools.
But you still have to use it,
Search has just become Chat.
You used to have to search, now you chat and it does the searching, and more!
Man, I'm going to make so much money as a Cybersecurity Consultant!
You have to know how software gets built and works. You can't just expect to get it right without a decent understanding of software architecture and product design.
This is something that's actually very hard. I'm coming to grips with that slowly, because it's always been part of my process. I'm both a programmer and a graphic designer. It took me a long while to recognize that not everyone has spent a great deal of time doing both. Fewer yet decide to learn good software design patterns, or study frameworks and open-source projects to understand the problems each of them is solving. It takes a LOT of time. It took me probably 10-15 years just to learn all of this. I've been building software for over 20 years. So it just takes time, and that's ok.
The most wonderful thing I see about AI is that it should help people focus on these things. It should free people from getting too far into the weeds and too focused on the code itself. We need more people who can apply critical thinking and design from a bird's eye perspective. We need people who can see the big picture.
This is a core problem with amateurs pretending to be software producers. There are others, but this one is fundamental to acceptable commercial software and will absolutely derail vibe coded products from widespread adoption.
And if you think these aspects of quality software are easily reduced to prompts, you've probably never done serious work in those spaces.
You get way farther when you have the AI drop in Tailwind templates or Shadcn for you and then just let it use those components. There is so much software outside that web domain though.
A lot of people just stop working on their AI projects because they don't realize how much work it's going to take to get the AI to do exactly what they want in the way that they want, and that it's basically going to be either you accept some sort of randomized variant of what you're thinking of, or you get a thing that doesn't work at all.
I've taken to co-writing a plan with requirements with Cursor, and it works really well at first. But as it makes mistakes and we use those mistakes to refine the document, eventually we're ready to "go" and suddenly it's generating a large volume of code that directly contradicts something in the plan. Small annoyances, like its inability to add an empty line after markdown headings, have to be explicitly re-added and re-reminded.
I almost wish I had more control over how it was iterating. Especially when it comes to quality and consistency.
When I/we can write a test and it can grind on that is when AI is at its best. It’s a closed problem. I need the tools to help me, help it, turn the open problem I’m trying to solve into a set of discrete closed problems.
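One concrete form of that "closed problem": express the spec as a test the agent can grind against until it passes. A minimal sketch (the `slugify` target and its cases are invented for illustration; the point is the shape of the loop, not this particular function):

```python
def slugify(title: str) -> str:
    """Target function the agent iterates toward until the test passes."""
    # Lowercase, replace every non-alphanumeric run with a single hyphen.
    cleaned = "".join(c if c.isalnum() else " " for c in title.lower())
    return "-".join(cleaned.split())

def test_slugify():
    # The test IS the closed problem: pass/fail, no judgment calls.
    assert slugify("Hello, World!") == "hello-world"
    assert slugify("  spaces   everywhere ") == "spaces-everywhere"
    assert slugify("Already-Clean") == "already-clean"
```

With a test like this, "run pytest, fix the failure, repeat" is a loop an agent can execute unattended; the open design question (what counts as a valid slug) was settled up front by the human writing the assertions.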
Cringe. The tech is half baked and the author is already fully committed to this is the future, I am living in the future, I bake cookies while Claude codes.
Pure cringe. This confirms my earlier theories that everyone just wants to be a manager. You don't need to manage humans. You just want to be a manager.
The whole article could be summed down to I always wanted to be a manager and now I am a manager of bots.
The section "What Even Is Programming Anymore?" hit on a lot of the thoughts and feels I've been going through. I'm using all my 25+ years of experience and CS training, but it's _not_ programming per se.
I feel like we're entering an era where we're piloting a set of tools, not hand crafting code. I think a lot of people (who love crafting) will be leaving the industry in the next 5 years, for better or worse. We'll still need to craft things by hand, but we're opening some doors to new methodologies.
And, right now, those methodologies are being discovered, and most of us are pretty bad at them. But that doesn't mean they're not going to be part of the industry.
I find myself having to spend more time guiding the model in the right direction and fixing its mistakes than I would’ve spent building it all myself.
Every time I read one of these stories I feel like maybe you guys have models from 2035, because the ones we have today seem to be useless outside of creating greenfield, simple React apps that just sort of work.
One thing I’ll say is that it’s been a real time saver for debugging. For coding, a huge waste of time. Even for tasks that are menial, repetitive, require no thinking etc. I find that it’s mostly crap.
It's always a mix of:
1. "Wait for the next models", despite models having all but plateaued for the past 3 years,
2. "It's so good for boilerplate code", despite libraries and frameworks being much better suited for this task, and boilerplate code being actually rare to write in the normal lifecycle of a project,
3. "You need to prompt it differently", glossing over the fact that to prompt it so it can do what you want it to do accurately it would take longer than not to use AI at all,
4. And the worst: "We don't know how to use those models yet"
Maybe the real reason it doesn't work is because IT JUST DOESN'T FUCKING WORK.
Why is it so unfathomable that a next token generator is gonna suck at solving complex problems? It is blindingly obvious.
Why are these chatbots that mangle data 1/3 to 1/2 of the time getting their budgets 10x over and over again?
This is irrational. If the code mangles data this bad, it's garbage.
What you describe is exactly what a project manager does. Refines the technical, stories, organizes the development towards a goal.
This doesn’t feel like programming because it isn’t. It doesn’t NOT feel like programming because you’re supervising. In the end, you are now a project manager.
heck I built a full app in an afternoon AND I was a good dad?
> I'd wander into my office, check what Claude had built, test it real quick. If it worked, great! Commit and push. "Now build the server connection UI," I'd say, and wander back out.
Made breakfast. Claude coded.
Played with my son. Claude coded.
Watched some TV. Claude coded.
Every hour or so, I'd pop back in. Five minutes of testing. One minute of feedback.
I know most true programmers will vouch for me and my need to understand. But clients and project managers and bosses? Are they really gonna keep accepting a refrain like this from their engineers?
"either it gets done in a day and I understand none of it, or it gets done in a month and I fully understand it and like it"
The ability seems like pure magic. I know there are others who now have it very easy building even complex software with AI, delivering project after project to clients at record speed with no less quality than before. But the majority of devs won't even believe that it's remotely possible to do so, and that's also not helping this style of building/programming mature.
I wouldn’t even call it vibe coding anymore. I think the term hurts what it actually is. For me it’s just a huge force multiplier, maybe 10-20x of my ability to deliver with my own knowledge and skills on a web dev basis.
- when I ask models to do defined things that I know how to do, and can tell them about the method but can't remember the details offhand, and then I check the answers, things work.
- when I attempt to specify things that I don't understand fully the model creates rubbish 7 out of 10 times, and those episodes are irretrievable. About 30% of the time I get a hint of what I should do and can make some progress.
The bigger issue: will there even be a need for coding and software? Who would use them? Why are they using it? Are they buying something? Searching for info? The use case will see a revolution. The new use cases won't need the traditional kind of software. But AI can only produce traditional software.
Can I ask Claude to code up its clone for local use?
Do people read the code? Or just test if it works and push?
To me, code is like a map that has to be clear enough so other humans can read it to navigate the territory (codebase). Even if it's just two – me and AI agent – working on the codebase, it's not much different from "me and another programmer". We both want to have updated mental model of how exactly code structured and how it works and why.
Using AI for coding and not reading the code sounds more like stopping being developer and self-promoting yourself to the manager of AI-programmers who trusts their craft completely.
I’m starting to believe that’s not necessarily true. And if some study finds out later that stuff built slowly by hand is actually better in every way except time-to-market, then it means AI is not really a competitive edge, it’s just a Quality of Life improvement that allows software engineers to be even lazier. And at future price points of $200, $400, even $1000 a month per head, that becomes a hard sell for most companies. Might be easier to have engineers pay for their own AI if they want to be lazy. And of course whether they use AI or not, you can still measure productivity under the assumption that every engineer does…
Honestly reminds me of the digital currency mania that busted a couple of years ago. Same types of articles popping up too.
Look, I understand the benefits of AI, but it's clear AI is limited by the compute power of today. Maybe the dream this author has will be realized some day. But it won't be today, or in the current generation's lifespan.
> With enough AI assistants building enough single-purpose tools, every problem becomes shallow. Every weird edge case already has seventeen solutions. Every 2am frustration has been felt, solved, and uploaded.
> We're not drowning in software. We're wading in it. And the water's warm
Just sounds like GPT style writing. I’m not saying this blog is all written by GPT, but it sounds like it is. I wonder if those of us who are constantly exposed to AI writing are starting to adopt some of that signature fluffy, use-a-lot-of-words-without-saying-much kinda style.
Life imitates art. Does intelligence imitate artificial intelligence?? Or maybe there’s more AI written content out there than I’m willing to imagine.
(Those snippets are from another post in this blog)
The most precise way to express your desire is by giving the computer commands; you may call it programming.
https://metr.org/blog/2025-07-10-early-2025-ai-experienced-o...
Developers believe they complete tasks 25% faster with AI but when measured they are 19% slower when using AI.
How buggy is it? How long would it have taken to build something similar by hand?
That’s not jazz. Jazz being what it is, a lot of people in 2025 think it’s “everyone improvising,” but (outside of some free jazz) it’s quite structured and full of shared conventions.
Analogies work when you and your audience both understand the things being compared. In this case, the author doesn’t, and maybe some of the audience shares the same misperception, and so the analogy only works based on shared misunderstanding.
The analogy to jazz actually works better the more you know about it. But that’s accidental.
Second of all, it's easy to fart out some program in a few days vibe coding. How will that fare as more and more features need to be added on? We all used to say "Dropbox that's just FTP wrapped in a nice UI anyone can make that". This protocollie project seems to be a documentation viewer / postman for MCP. Which is cool, but is it something that would have taken a competent dev months to build? Probably not. And eventually the actual value of such things is the extensibility and integrations with various things like corporate SAML etc.
Will the vibe code projects of today be extensible like that, enough to grab market share vs the several similar versions and open source versions anyone can make in a few days, as the author suggests? It can be hard to extend a codebase you don't understand because you didn't write it...
There are a lot of gotchas with these new models. They get incredibly lazy if you let them. For example, I asked one to do a simple tally by year. I just assumed it was simple enough that I didn't need to ask it to write code. It counted the first couple of years and just "guessed" the rest based on a pattern it noticed.
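Having the model write the tally as code instead makes the result deterministic rather than guessed. A minimal sketch (the `dates` list is invented for illustration):

```python
from collections import Counter

# Deterministic tally-by-year: count in code instead of letting the model eyeball it.
dates = ["2019-03-01", "2019-11-20", "2020-05-05",
         "2021-07-07", "2021-08-08", "2021-12-31"]
by_year = Counter(d[:4] for d in dates)  # ISO dates: first 4 chars are the year
print(sorted(by_year.items()))  # → [('2019', 2), ('2020', 1), ('2021', 3)]
```

The model only has to get the code right once; the counting itself is then exact for any number of years, which is exactly the "closed problem" shape these tools handle well.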
Sometimes, it feels like having a lazy coworker that you have to double check constantly and email with repeated details. Other times, I just sit there in awe of how smart it is in my weekly AGI moment and how it’s going to replace me soon.
It sounds like Claude Code is the best UX right now but I don’t want to be locked into a Claude subscription, I want to bring my own key and tap into whatever provider I want.
None of that is actually what makes my favorite tools work. It’s usually some nerds that never stopped using C/C++ and really know hardware.
https://github.com/jerpint/context-llemur
The idea is to track all of the context of a project using git. It’s a CLI and MCP tool, the human guides it but the LLM contributes back to it as the project evolves
I used it to bootstrap the library itself, and have been using it more and more for context management of all sorts of things I care about
The decision making parts of people's brains will atrophy. It will be interesting to see what will happen.
Perhaps that is true, but without any examples I was immediately suspicious of this line.
> Either way, we're in this delicious middle ground where nobody can pretend expertise because the whole thing keeps changing under our feet.
Upon reflection this does in fact remind me of the early days of rocketry, when we were just reaching into the upper atmosphere and then orbit. Wild things were being tried because there were not yet any handrails. Exploding a huge tank of water in the ionosphere just because, launching giant mylar balloons into orbit to try and bounce radar signals off of them, etc.
And also to help me troubleshoot my old yacht, it taught me to be an amateur marine electrician
I do not let it into my entire codebase tho. Keep the context small and if I dont get what I want in one or two prompt I dont use it
Right now it's all monetization gravity. As if companies are ready to pour software-developer salaries into tools.
I imagine beginners will not have GPU-rich environments, and AI will not reach the mainstream the way traditional development did, unless something happens; I don't know what.
Right now, seniors love the complexity and entry barrier to it, so they can occupy the top of the food chain. History has proven that that does not last long.
In some scenarios, such as Airtable, AI is replacing docs and customer support, eliminating the learning curve.
Maybe walking away is a better choice hah.
I am a bit disillusioned - I find mentoring humans satisfying but I don't get the same satisfaction mentoring AI. I also think it's a probably going to backfire by hamstringing the next generation and 'draining the competence' from the current.
This is interesting. Does Claude have a memory? Is this just a limit on the number of input tokens? It sounds like a fundamental misappropriation of cause, but maybe I just don't understand the latest whizbang feature of Claude. Can anyone clarify?
Namely, you don’t deserve to be paid for working 8 hours if you only worked for 30 minutes over an eight hour period.
I don’t care if you personally agree with that or not, the reality is that businesses believe it.
That means, sooner or later there will be a great rebalancing where people will be required to do significantly more work; probably the work of other developers who will be fired.
It’s fun for home projects; but the somewhat depressing reality is that there is no chance in hell this (sitting around for 7 hours a day reading reddit while Claude codes) will fly in corporate environments; instead, you’re looking at mass layoffs.
So. Enjoy it while you can folks.
In the future you’ll be spending that 8 hours struggling to juggle the context and review 20 different tasks, not playing with your kids.
In fact what I really want to see is a successful product that no one realizes was built by AI vibes until after it was successful. Customers don’t give a shit how something was built.
I don't work like this, I don't want to work like this and maybe most importantly I don't want to work with somebody who works like this.
Also I am scared that any library that I am using through the myriad of dependencies is written like this.
On the other hand... if I look at this as some alternate universe where I don't need to directly or indirectly touch any of this... I am happy that it works for these people? I guess? Just keep it away from me
I've said it before and I'll say it again: there likely isn't a "golden workflow" or "generally accepted best practices" on how to code with AI. The new models and agentic capabilities seem to be very powerful, and they will conform to whatever methodologies you currently use with whatever project you're working on, but that may still be under-utilizing what they are truly capable of.
A true optimum may even require you to adjust the way you work, down to structuring your code and projects differently. In fact you may need to figure out different approaches based on the project, the language, the coding style, the model, the specific task at hand, even your personality. I am convinced this aspect is what's causing the bimodal nature of AI coding discussions: people who stuck at it and figured it out, or just got lucky with the right mix of model / project / task / methodology, are amazed at their newfound superpowers -- whereas people who didn't, are befuddled by the hype.
This may seem like a lot of work, but it makes sense if you stop thinking of this as just a tool and more like working with a new team-mate.
This would take time to write if I’m doing it myself so I decided to vibe code it entirely. I had this idea that a compiled language is less likely to have errors (on account of the compiler giving the LLM quicker feedback than me) and so I chose Tauri with TS (I think).
The experience has been both wonderful and strange. The app was built by Claude Code with me intermittently prompting it between actual work sessions.
What’s funny is the bugs. If you ever played Minecraft during the Alpha days you know that Notch would be like “Just fixed lighting” in one release. And you’d get that release and it’d be weird like rain would now fall through glass.
Essentially the bugs are strange. At least in the MC case you could hypothesize (transparency bit perhaps was used for multiple purposes) but this app is strange. If the LLM configuration modal is fixed, suddenly the MCP/tool tree view will stop expanding. What the heck, why are these two related? I don’t know. I could never know because I have never seen the code.
The compile time case did catch some iterations (I let Claude compile and run the program). But to be honest, the promise of correctness never landed.
Some people have been systematic and documented the prompts they use but I just free flowed it. The results are outstanding. There’s no way I could have had this built for the $50 in Claude credits. But also there’s no way I could interpret the code.
———
> Maybe all methodology is just mutually agreed-upon fiction that happens to produce results?
Good news! *All of computer science is this way.* There’s nothing universally fundamental about transistors, or Turing machines, or OOP, or the HTTP protocol. They’re all just abstractions that fit, because they worked.
———
When should I stop learning and start building? My coworker wrote absolutely ATROCIOUS code, alone, for the past decade. But he was BUILDING that whole time. He’s not up to date on modern web practices, but who cared? He built.
Really helped my understanding of how apps work.
I call it 'Orchestratic Development'.
Edit: Seriously, down voted twice when just commenting on an article? God I hate this arrogant shithole.
Back in the Bitcoin hype days, there were new posts here every single day about the latest and greatest Bitcoin thing. Everyone was using it. It was going to take over the world. Remember all the people on this very site that sincerely thought fiat currency was going away and we'd be doing all of our transactions with Bitcoin? How'd that work out?