LLM Inevitabilism

(tomrenner.com)

Comments

lsy 15 July 2025
I think two things can be true simultaneously:

1. LLMs are a new technology and it's hard to put the genie back in the bottle with that. It's difficult to imagine a future where they don't continue to exist in some form, with all the timesaving benefits and social issues that come with them.

2. Almost three years in, companies investing in LLMs have not yet discovered a business model that justifies the massive expenditure of training and hosting them, the majority of consumer usage is at the free tier, the industry is seeing the first signs of pulling back investments, and model capabilities are plateauing at a level where most people agree that the output is trite and unpleasant to consume.

There are many technologies that once seemed inevitable but retreated for lack of commensurate business returns (the supersonic jetliner), and several that seemed poised to displace both old tech and labor but have settled into specific use cases (the microwave oven). Given the lack of a sufficiently profitable business model, it feels as likely as not that LLMs settle somewhere a little less remarkable, and hopefully less annoying, than today's almost universally disliked attempts to cram them everywhere.

keiferski 15 July 2025
One of the negative consequences of the “modern secular age” is that many very intelligent, thoughtful people feel justified in brushing away millennia of philosophical and religious thought because they deem it outdated or no longer relevant. (The book A Secular Age is a great read on this, btw, I think I’ve recommended it here on HN at least half a dozen times.)

And so a result of this is that they fail to notice the same recurring psychological patterns that underlie thoughts about how the world is, and how it will be in the future, and thus fail to adjust their positions in light of this awareness.

For example - this AI inevitabilism stuff is not dissimilar to many ideas originally from the Reformation, like predestination. The notion that history is just on some inevitable pre-planned path is not a new idea, except now the actor has changed from God to technology. On a psychological level it’s the same thing: an offloading of freedom and responsibility to a powerful, vaguely defined force that may or may not exist outside the collective minds of human society.

delichon 15 July 2025
If in 2009 you claimed that the dominance of the smartphone was inevitable, it would have been because you were using one and understood its power, not because you were reframing away our free choice for some agenda. In 2025 I don't think you can really be taking advantage of AI to do real work and still see its mass adoption as evitable. It's coming faster and harder than any tech in history. As scary as that is, we can't wish it away.
Animats 15 July 2025
There may be an "LLM Winter" as people discover that LLMs can't be trusted to do anything. Look for frantic efforts by companies to offload responsibility for LLM mistakes onto consumers. We've got to have something that has solid "I don't know" and "I don't know how to do this" outputs. We're starting to see reports of LLM usage having negative value for programmers, even though they think it's helping. Too much effort goes into cleaning up LLM messes.
dasil003 15 July 2025
Two things are very clearly true: 1) LLMs can do a lot of things that previous computing techniques could not do and we need time to figure out how best to harness and utilize those capabilities; but also 2) there is a wide range of powerful people who have tons of incentive to ride the hype wave regardless of where things will actually land.

To the article's point—I don't think it's useful to accept the tech CEO framing and engage on their terms at all. They are mostly talking to the markets anyway. We are the ones who understand how technology works, so we're best positioned to evaluate LLMs more objectively, and we should decide our own framing.

My framing is that LLMs are just another tool in a long line of software tooling improvements. Sure, it feels sort of miraculous and perhaps threatening that LLMs can write working code so easily. But when you think of all the repetitive CRUD and business logic that has been written over the decades to address myriad permutations and subtly varying contexts of the many human organizations that are willing to pay for software to be written, it's not surprising that we could figure out how to make a giant stochastic generator that can do an adequate job generating new permutations based on the right context and prompts.

As a technologist I want to understand what LLMs can do and how they can serve my personal goals. If I don't want to use them I won't, but I also owe it to myself to understand how their capabilities evolve so I can make an informed decision. I am not going to start a crusade against them out of nostalgia or wishful thinking as I can think of nothing so futile as positioning myself in direct opposition to a massive hype tsunami.

mg 15 July 2025
In the 90s a friend told me about the internet. And that he knows someone who is in a university and has access to it and can show us. An hour later, we were sitting in front of a computer in that university and watched his friend surfing the web. Clicking on links, receiving pages of text. Faster than one could read. In a nice layout. Even with images. And links to other pages. We were shocked. No printing, no shipping, no waiting. This was the future. It was inevitable.

Yesterday I wanted to rewrite a program to use a large library that would have required me to dive deep down into the documentation or read its code to tackle my use case. As a first try, I just copy+pasted the whole library and my whole program into GPT 4.1 and told it to rewrite it using the library. It succeeded at the first attempt. The rewrite itself was small enough that I could read all code changes in 15 minutes and make a few stylistic changes. Done. Hours of time saved. This is the future. It is inevitable.

PS: Most replies seem to compare my experience to experiences that the responders have with agentic coding, where the developer is iteratively changing the code by chatting with an LLM. I am not doing that. I use a "One prompt one file. No code edits." approach, which I describe here:

https://www.gibney.org/prompt_coding
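
For what it's worth, the mechanics of that approach are roughly this (a minimal sketch assuming the official OpenAI Python client; the file names and instruction text are illustrative, not from my actual session):

```python
# One prompt, one file: paste the whole library and the whole program into a
# single request, get the complete rewritten file back, review the diff by hand.
from pathlib import Path
from openai import OpenAI

library_src = Path("library.py").read_text()     # illustrative file names
program_src = Path("my_program.py").read_text()

prompt = (
    "Rewrite the following program to use the library below. "
    "Return the complete rewritten file and nothing else.\n\n"
    f"=== LIBRARY ===\n{library_src}\n\n=== PROGRAM ===\n{program_src}"
)

client = OpenAI()  # reads OPENAI_API_KEY from the environment
response = client.chat.completions.create(
    model="gpt-4.1",
    messages=[{"role": "user", "content": prompt}],
)

Path("my_program_rewritten.py").write_text(response.choices[0].message.content)
```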

sircastor 15 July 2025
The hardest part about inevitabilism here is that the people making the argument that this is inevitable are the same people shoveling hundreds of millions of dollars into it. Into the development, the use, the advertisement. The foxes are building doors into the hen houses and saying there's nothing to be done, foxes are going to get in, so we might as well make it something that works for everyone.
tines 15 July 2025
I have a foreboding of an America in my children's or grandchildren's time -- when the United States is a service and information economy; when nearly all the manufacturing industries have slipped away to other countries; when awesome technological powers are in the hands of a very few, and no one representing the public interest can even grasp the issues; when the people have lost the ability to set their own agendas or knowledgeably question those in authority; when, clutching our crystals and nervously consulting our horoscopes, our critical faculties in decline, unable to distinguish between what feels good and what's true, we slide, almost without noticing, back into superstition and darkness...
trash_cat 15 July 2025
This concept is closely related to the politics of inevitability, a term coined by Timothy Snyder.

"...the politics of inevitability – a sense that the future is just more of the present, that the laws of progress are known, that there are no alternatives, and therefore nothing really to be done."[0]

[0] https://www.theguardian.com/news/2018/mar/16/vladimir-putin-...

The article in question obviously applies it within the commercial world, but in the end it has to do with language that takes away agency.

Workaccount2 15 July 2025
People like communicating in natural language.

LLMs are the first step in the movement away from the "early days" of computing where you needed to learn the logic based language and interface of computers to interact with them.

That is where the inevitabilism comes from. No one* wants to learn how to use a computer, they want it to be another entity that they can just talk to.

*I'm rounding off the <5% who deeply love computers.

cdrini 15 July 2025
How do you differentiate between an effective debater using inevitabilism as a technique to win a debate, and an effective thinker making a convincing argument that something is likely to be inevitable?

How do you differentiate between an effective debater "controlling the framing of a conversation" and an effective thinker providing a new perspective on a shared experience?

How do you differentiate between a good argument and a good idea?

I don't think you can really?

You could say intent plays a part -- that someone with an intent to manipulate can use debating tools as tricks. But still, even if someone with bad intentions makes a good argument, isn't it still a good argument?

ccortes 15 July 2025
Earlier today I was scrolling through the “work at a startup” posts.

Seems like everyone is doing LLM stuff. We are back at the “uber for X” days, but now it is “ChatGPT for X”. I get it, but I’ve never felt more uninspired looking at what YC startups are working on today. For the first time, they all feel incredibly generic.

JimmaDaRustla 15 July 2025
The author seems to imply that the "framing" of an argument is done in bad faith in order to win the argument, but only provides one-line quotes with no contextual argument.

This tactic by the author is itself a straw-man argument: he's framing the position of tech leaders, and our acceptance of it, as the reason AI exists, instead of admitting the simpler explanation, which is that they were right in their predictions: AI was inevitable.

The IT industry is full of pride and arrogance. We deny the power of AI and LLMs. I think that's fair, and I welcome the pushback. But the real word the IT crowd needs to learn is "denialism": if you still don't see how LLMs are changing our entire industry, you haven't been paying attention.

Edit: Lots of denialists using false dichotomy arguments that my opinion is invalid because I'm not producing examples and proof. I guess I'll just leave this: https://tools.simonwillison.net/

nperez 15 July 2025
It's inevitable because it's here. LLMs aren't the "future" anymore, they're the present. They're unseating Google as the SOTA method of finding information on the internet. People have been trying to do that for decades. The future probably holds even bigger things, but even if it plateaus for a while, showing real ability to defeat traditional search is a crazy start and just one example.
alexdowad 15 July 2025
My belief is that whatever technology can be invented by humans (under the constraints of the laws of physics, etc) will eventually be invented. I don't have a strong argument for this; it's just what makes sense to me.

If true, then an immediate corollary is that if it is possible for humans to create LLMs (or other AI systems) which can program, or do some other tasks, better than humans can, that will happen. Inevitabilism? I don't think so.

If that comes to pass, then what people will do with that technology, and what will change as a result, will be up to the people who are alive at the time. But not creating the technology is not an option, if it's within the realm of what humans can possibly create.

globular-toast 15 July 2025
Did anyone even read the article? Maybe you should get an LLM to bullet point it for you.

The author isn't arguing about whether LLMs (or AI) is inevitable or not. They are saying you don't have to operate within their framing. You should be thinking about whether this thing is really good for us and not just jumping on the wagon and toeing the line because you're told it's inevitable.

I've noticed more and more the go to technique for marketing anything now is FOMO. It works. Don't let it work on you. Don't buy into a thing just because everyone else is. Most of the time you aren't missing out on anything at all. Some of the time the thing is actively harmful to the participants and society.

bloppe 15 July 2025
This inevitabilist framing rests on an often unspoken assumption: that LLMs will decisively outperform human capabilities in myriad domains. If that assumption holds true, then the inevitabilist quotes featured in the article are convincing to me. If LLMs turn out to be less worthwhile at scale than many people assume, the inevitabilist interpretation is another dream of AI summer.

Burying the core assumption and focusing on its implication is indeed a fantastic way of framing the argument to win some sort of debate.

ojr 15 July 2025
The company name was changed from Facebook to Meta because Mark thought the metaverse was inevitable; it's ironic that you use a quote from him.
scioto 15 July 2025
(commenting late in the game, so the point may have been made already)

I personally believe that "AI" is mostly marketing for the current shiny LLM thing, which will end up finding some sort of actually useful niche (or two) once the dust has settled. But for now, it's more of a solution being carpet-bombed onto problems, most of them inappropriate IMHO (e.g., replacing HR).

For now there'll be collateral damage as carbon-based lifeforms are displaced, with an inevitable shortage of pesky humans to do cleanup once the limitations of "AI" are realized. And the humans will probably be on contract/gig terms at half their previous rates to do the cleanup.

i_love_retros 15 July 2025
People and companies that use LLMs will be seen as tacky and cheap. They already are.

Eew you have an ai generated profile photo? You write (code) with ai? You use ai to create marketing and graphics? You use non deterministic LLMs to brute force instead of paying humans to write efficient algorithms?

Yuck yuck yuck

pi_22by7 16 July 2025
This is a sharp dissection of ‘inevitabilism’ as a rhetorical strategy. I’ve noticed it too: the moment someone says ‘X is inevitable’, the burden of proof disappears and dissent becomes ‘denial’. But isn’t that framing itself... fragile? We’ve seen plenty of ‘inevitable’ futures (crypto, the Metaverse, even Web3) collapse under public pushback or internal rot.

The question I’m left with: if inevitabilism is so effective rhetorically, how do we counter it without sounding naïve or regressive?

mikewarot 15 July 2025
It seemed inevitable that the Internet would allow understanding of other cultures and make future war impossible, as the people united and stood in opposition to oppression and stupidity the world over.

Reality worked out differently. I suspect the same is about to happen with our LLM overlords.

megaloblasto 15 July 2025
I think what scares people who code for a living the most is the loss of their craft. Many of you have spent years or decades honing the craft of producing clear, fast, beautiful code. Now there is something that can spit out (often) beautiful code in seconds. An existential threat to your self-worth and livelihood. A perfectly reasonable thing to react to.

I do think, however, that this is an inevitable change. Industries and crafts being massively altered by technology is a tale as old as time. In a world that constantly changes, adaptation is key.

I also think that almost all of you who have this craft should have no problem pivoting to higher-level software architecture design. Work with an LLM and produce things it would have taken a small team to do in 2019.

I find it to be a very exciting time.

hermitcrab 15 July 2025
“The ultimate hidden truth of the world is that it is something that we make, and could just as easily make differently.” David Graeber
rafaelero 15 July 2025
Right now, I’m noticing how my colleagues who aren’t very comfortable using LLMs for most of their work are getting sidelined. It's a bit sad seeing them struggle by not keeping pace with everyone else who is using it for ~90% of our tasks. They seem to really care about writing code themselves, but, if they don't pivot, things are probably not going to end well for them.

So are LLMs inevitable? Pretty much, if you want to remain competitive.

visarga 15 July 2025
> I’m certainly not convinced that they’re the future I want. But what I’m most certain of is that we have choices about what our future should look like, and how we choose to use machines to build it.

While I must admit we have some choice here, it is limited. No matter what, there will be models of language, we know how they work, there is no turning back from it.

We might wish many things but one thing we can't do is to revert time to a moment when these discoveries did not exist.

kazinator 15 July 2025
LLM is an almost complete waste of time. Advocates of LLM are not accurately measuring their time and productivity, and comparing that to LLM-free alternative approaches.
mlsu 15 July 2025
I hate AI. I'm so sick of it.

I read a story about 14 year olds that are adopting AI boyfriends. They spend 18 hours a day in conversation with chatbots. Their parents are worried because they are withdrawing from school and losing their friends.

I hate second guessing emails that I've read, wondering if my colleagues are even talking to me or if they are using AI. I hate the idea that AI will replace my job.

Even if it unlocks "economic value" -- what does that even mean? We'll live in fucking blade runner but at least we'll all have a ton of money?

I agree, nobody asked what I wanted. But if they did I'd tell them, I don't want it, I don't want any of it.

Excuse me, I'll go outside now and play with my dogs and stare at a tree.

bemmu 15 July 2025
I was going to make an argument that it's inevitable, because at some point compute will get so cheap that someone could just train one at home, and since the knowledge of how to do it is out there, people will do it.

But seeing that a company like Meta is using >100k GPUs to train these models, even at 25% yearly improvement it would still take until the year ~2060 before someone could buy 50 GPUs and have the equivalent power to train one privately. So I suppose if society decided to outlaw LLM training, or a market crash put off companies from continuing to do it, it might be possible to put the genie back in the bottle for a few decades.
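
The back-of-the-envelope math behind that estimate (a quick sketch; the 100k-GPU cluster, 50-GPU home setup, and 25% yearly gain are the assumptions stated above):

```python
# How long until 50 GPUs match today's ~100,000-GPU training cluster,
# assuming capability per GPU improves 25% per year?
import math

cluster_gpus = 100_000
home_gpus = 50
yearly_gain = 1.25

gap = cluster_gpus / home_gpus                 # 2000x shortfall to close
years = math.log(gap) / math.log(yearly_gain)  # ~34 years
print(f"{gap:.0f}x gap closes in ~{years:.0f} years, around {2025 + round(years)}")
# -> 2000x gap closes in ~34 years, around 2059 (hence the ~2060 above)
```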

I wouldn't be surprised however if there are still 10x algorithmic improvements to be found too...

castigatio 15 July 2025
The argument doesn't work because, whatever you think of where generative AI is or isn't taking us, it is 100% demonstrably better at doing a wide range of tasks than the other technologies we have available to us, even in its current exact form. Once computers started to be connected, could we have stopped the development of the world wide web? If there's a way of getting humanity to collectively agree on things, please let's start by using it to stop climate change and create world peace before moving on to getting rid of LLMs.
DanMcInerney 15 July 2025
These articles kill me. The reason LLMs (or some next-gen AI architecture) are inevitably going to take over the world in one way or another is simple: recursive self-improvement.

3 years ago they could barely write a coherent poem, and today they're performing at at least graduate-student level across most tasks. As of today, AI is writing a significant chunk of the code around itself. Once AI is consistently above senior-engineer level at coding, it will reach a tipping point where it can improve itself faster than the best human expert. That's core technological recursive self-improvement, but we have another avenue of recursive self-improvement as well: agentic recursive self-improvement.

First there were LLMs, then there were LLMs with tool usage, then we abstracted the tool usage into MCP servers. Next, we will create agents that autodiscover remote MCP servers, then agents which can autodiscover tools as well as write their own.

The final stage is generalized agents, similar to Claude Code, which can find remote MCP servers, perform a task, then analyze their first run of the task to figure out how to improve the process, then write their own tools so they complete the task faster than they did before. Agentic recursive self-improvement. As an agent engineer, I suspect this pattern will become viable in about 2 years.
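
Sketched as code, that loop looks something like the following (purely hypothetical scaffolding to illustrate the pattern; none of these functions correspond to a real framework's API):

```python
# Hypothetical skeleton of an agentic self-improvement loop: run a task,
# critique the transcript, grow the toolbox, repeat.
from typing import Callable, Optional

class SelfImprovingAgent:
    def __init__(self) -> None:
        # The toolbox persists, so later runs benefit from earlier runs.
        self.tools: dict[str, Callable[[str], str]] = {}

    def run_task(self, task: str) -> str:
        # In a real agent this would be an LLM loop calling MCP tools;
        # here it just returns a stand-in transcript.
        return f"transcript: tried {task!r} with tools {list(self.tools)}"

    def find_gap(self, transcript: str) -> Optional[str]:
        # In a real agent an LLM would critique its own run; here we
        # pretend it spots one missing tool, then is satisfied.
        return "parse_logs" if "parse_logs" not in self.tools else None

    def write_tool(self, gap: str) -> None:
        # The self-improvement step: the agent authors a tool to close the gap.
        self.tools[gap] = lambda arg: f"{gap}({arg})"

    def improve(self, task: str, max_rounds: int = 3) -> None:
        for _ in range(max_rounds):
            gap = self.find_gap(self.run_task(task))
            if gap is None:
                break  # nothing left to improve; the loop has converged
            self.write_tool(gap)

agent = SelfImprovingAgent()
agent.improve("summarize this week's error logs")
print(list(agent.tools))  # ['parse_logs']: the toolbox grew between runs
```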

pkdpic 15 July 2025
Absolutely perfect blog post. You provoked some new thoughts, convinced me of your position, taught me something concrete and practical about debating, had a human narrative, gave me a good book recommendation, didn't feel manipulative or formulaic, wrote something that an employed person can read in a reasonable amount of time AND most importantly made a solid Matrix reference.

You're my blog hero, thank you for being cool and setting a good example. Also really important LLM hype reminder.

anothernewdude 15 July 2025
I do agree that those who claim AI is inevitable are essentially threatening you.
dicroce 15 July 2025
Most of us that are somewhat into the tech behind AI know that it's all based on simple matrix math... and anyone can do that... So "inevitabilism" is how we sound, because we see that if OpenAI doesn't do it, someone else will. Even if all the countries in the world agreed to ban AI, it's not based on something with actual scarcity (like purified uranium, or gold), so someone somewhere will keep moving this tech forward...
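
For the curious, here is that "simple matrix math" in miniature: a toy single-head attention in plain numpy (an illustrative sketch, not any production model's code):

```python
# Scaled dot-product attention: similarity scores, a softmax, a weighted sum.
import numpy as np

def attention(Q, K, V):
    scores = Q @ K.T / np.sqrt(K.shape[-1])              # query-key similarity
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)       # softmax over keys
    return weights @ V                                   # weighted mix of values

x = np.random.randn(4, 8)                    # 4 tokens, 8-dim embeddings
Wq, Wk, Wv = (np.random.randn(8, 8) for _ in range(3))
out = attention(x @ Wq, x @ Wk, x @ Wv)
print(out.shape)  # (4, 8): nothing here but matrix multiplies and a softmax
```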
ljosifov 15 July 2025
I don't think it's inevitable, for very few things are really inevitable. However, I find LLMs good and useful. First the chat bots, now the coding agents. Looks to me like medical consultation, second opinions and the like are not far behind. Enough people already use them for that. I give my lab test results to ChatGPT.

Tbh I can't fault the author for motivated reasoning. Looks to me it goes like: this is not a future I want -> therefore it should not happen -> therefore it will not happen. Because by the same motivated reasoning, for me it is the future I want. To be able to interact with a computer via language, speech and more. For the computer to be smart, instead of dumb, as it is now. If I can have the computer enhance my smarts, my information processing power, my memory, the way writing allows me to off-load from my head onto paper, a calculator allows me to manipulate numbers, and a computer toils for days instead of me, then I will probably want the AI to complement and enhance me too.
IAmGraydon 15 July 2025
Things which are both powerful and possible become inevitable. We know that LLMs are powerful, but we aren't sure how powerful yet, and there's a large range this might eventually land in. We know they're possible in their current form, of course, but we don't know if actual GAI is possible.

At this time, humanity seems to be estimating that both power and possibility will be off the charts. Why? Because getting this wrong can be so negatively impactful that it makes sense to move forward as if GAI will inevitably exist. Imagine supposing that this will all turn out to be fluff and GAI will never work, so you stop investing in it. Now imagine what happens if you're wrong and your enemy gets it to work first.

This isn't some arguing device for AI-inevitabilists. It's knowledge of human nature, and it's been repeating itself for millennia. If the author believes that's going to suddenly change, they really should back that up with what, exactly, has changed in human nature.

p0w3n3d 15 July 2025
If someone invested a lot of money in something, they are probably convinced that it is inevitable. Otherwise they would not have invested their money. However, sometimes they may be helping their luck along a little bit.
a_wild_dandan 15 July 2025
This is a fantastic framing method. Anyone who sees the future differently to you can be brushed aside as “an inevitablist,” and the only conversations worth engaging are those that already accept your premise.

---

This argument so easily commits sudoku that I couldn't help myself. It's philosophical relativism, and self-immolates for the same reason -- it's inconsistent. It eats itself.

mawadev 15 July 2025
I really like what is hidden between the lines of this text, it is only something a human can understand. The entire comment section over here reflects the uncanny valley. This blog post is a work of art LOL
mbgerring 15 July 2025
Wasn’t crypto supposed to have replaced fiat currency by now, or something?
ivolimmen 15 July 2025
A few days ago I saw a nice tweet being shared, and it went something like: I am not allowed to use my airco as it eats too much power and we must think about the environment. Meanwhile: people non-stop generating rule 34 images using AI...
Boristoledano 15 July 2025
Disclaimer - I am building an AI web retriever (Linkup.so) so I have a natural bias -

LLMs aren’t just a better Google, they’re a redefinition of search itself.

Traditional search is an app: you type, scroll through ads and 10 blue links, and dig for context. That model worked when the web was smaller, but now it’s overwhelming.

LLMs shift search to an infrastructure: a way to get contextualized, synthesized answers directly, tailored to your specific need. Yes, they can hallucinate, but so can the web. It's not about replacing Google, it's about replacing the experience of searching (actually, there will probably be less and less of an 'experience' of searching).

paradite 15 July 2025
I think you are confusing "I don't like it" with "It's not going to happen".

Just because you don't like it, it doesn't mean it's not going to happen.

Observe the world without prejudice. Think rationally without prejudice.

miscend 15 July 2025
Not sure I get the author of this piece. The tech leaders are clearly saying AI is inevitable, they're not saying LLMs are inevitable. Big tech is constantly working on new types of AI such as world models.
karmakaze 15 July 2025
> "AI ..."

> I’m not convinced that LLMs are the future.

Was this an intentional bait/switch? LLM != AI.

I'm quite sure LLMs are not the future. They're merely the step after AlexNet and AlphaGo, and before the next major advancement.

hiAndrewQuinn 15 July 2025
AI is not inevitable, because technological progress in general is not inevitable. It is shapeable by economic incentives just like everything else. It can be ground into powder by resource starvation.

We've long known that certain forms of financial bounties levied upon scientists working at the frontier of sciences we want to freeze in place work effectively with a minimum of policing and international cooperation. If a powerful country is willing to be a jerk (heavens!) and allow these kinds of bounties to be turned in even on extranationals, you don't need the international cooperation. But you do get a way to potentially kickstart a new Nash equilibrium that keeps itself going as soon as other countries adopt the same bounty-based policy.

This mechanism has been floating around for at least a decade now. It's not news. Even the most inevitable seeming scientific developments can be effectively rerouted around using it. The question is whether you genuinely, earnestly believe what lies beyond the frontier is too dangerous to be let out, and in almost all cases the answer to that should be no.

I post this mostly because inevitabilist arguments will always retain their power so long as you can come up with a coherent profit motive for something to be pursued. You don't get far with good-feeling spiels that amount to plaintive cries in a tornado. You need actual object level proposals on how to make the inevitable evitable.

jeisc 15 July 2025
Language is not knowledge, and knowledge, when reduced to a language, becomes hearsay until it is redone and implemented in our context. Both of them have nothing to do with wisdom. LLMs hash out our language and art to death, but AI doesn't mind what they mean to us. Without our constraints and use, they would stop running. We should be building guardian angels to save us from ourselves, not evil demons to conquer the world. - John Eischen © adagp paris art humanitarian use is authorized except for any AI uses
tim333 15 July 2025
The article talks about being thrown off-balance by debating tricks and then proceeds to do just that, with a kind of bait and switch from talking about AI to talking about LLMs. E.g. it quotes

>“AI is the new electricity.” – Andrew Ng

as framing AI as kind of inevitable and then flips to

>I’m not convinced that LLMs are the future.

It seems to me AI is inevitable, and LLMs will be replaced soon with some better algorithm. It's like video was inevitable but Betamax wasn't. Two different things.

BatmanAoD 16 July 2025
Like a lot of blog posts, this feels like a premise worth exploring, but it lacks a critical exploration of that premise.

Yes, "inevitabilism" is a thing, both in tech and in politics. But, crucially, it's not always wrong! Other comments have pointed out examples, such as the internet in the 90s. But when considering new cultural and technological developments that seem like a glimpse of the future, how do we know if they're an inevitability or not?

The post says:

> what I’m most certain of is that we have choices about what our future should look like, and how we choose to use machines to build it.

To me, that sounds like mere wishful thinking. Yeah, sometimes society can turn back the tide of harmful developments; for instance, the ozone layer is well on its way to complete recovery. Other times, even when public opinion is mixed, such as with bitcoin, the technology does become quite successful, but doesn't seem to become quite as ubiquitous as its most fervent adherents expect. So how do we know which category LLM usage falls into? I don't know the answer, because I think it's a difficult thing to know in advance.

justanotherjoe 15 July 2025
This is something I think about, only my framing is that of predictionism, by which I mean society's preoccupation with predicting things.

This is important because predictions are both 1) necessary to make value judgments of the present and 2) borderline impossible for many things. So you have people making value judgments that hinge on things they have no right to know.

I also classified predictions into three categories, based on difficulty. The easiest is periodic things, like the movements of planets. The second is things that have been known to happen and might happen again in the future, like war. And the third is novel phenomena that have never happened before, like superintelligence. Even the second one is hard; the third is impossible.

There are so many predictions that fall in this third category that people are making. But no matter how many 'models' you make, it all falls into the same trap of not having the necessary data to make any kind of estimate of how successful the models will be. It's not the things you consider, it's the things you don't consider. And those tend to be like 80% of the things you should.

komali2 15 July 2025
A lot of people are responding with "But it is inevitable." So far as I can tell, they're pointing at the normal capitalistic measures to indicate this: OpenAI has a katrillion-dollar MRR and investors are throwing money everywhere, and thus this will happen. Or, LLMs generate a lot of value, or make labor more efficient, so they must start replacing workers one way or another.

Well, great, then I will add another capitalistic inevitability: the waters will rise, because there's no profit incentive to prevent this and governments are at worst captured by profit motive and at best gridlocked by it (e.g. funding oppositional parties so that nothing gets done).

The waters will rise and thus there will be refugee crises and thus there will be famine and destabilization, and thus AI will not happen because these things will happen and make AI moot as, one way or another, people become more concerned with food distribution than distribution of labor in the IT field.

athrowaway3z 15 July 2025
The quotes in the post are made by people in an attempt to sound profoundly predictive about some vague super-AI future. It's good to call out that bullshit.

On the other end of the spectrum is that people - demonstrably - like access to the ability to have a computer spew out a (somewhat coherent) relevant suggestion.

The distance between those is enormous. Without a vocabulary to distinguish between those two extremes people are just talking past each other. As demonstrated (again) in this thread.

Consequently one side has to pull out their "you're ignoring reality" card.

All because we currently lack shared ideas and words to express an opinion beyond "AI yes or no?"

TheMagicHorsey 15 July 2025
The author seems to forget that no matter how he "frames" LLMs and AGI more generally, it's really not up to him, or even any one nation (or bloc like the EU), to make this decision about what future they "want". If you can't build an international consensus to, for example, restrict AI, then whatever you say is pointless and someone else will build it and eventually overpower you.

The only way that doesn't happen is if AI doesn't produce huge productivity boosts or huge R&D boosts. Does anyone still think that's going to be the case ... that AI is going to be a no-op in the economy?

Seems like OP either thinks their wishes will be the world's command (somehow) or that AI won't matter to him if he (and his community) choose not to develop it for themselves.

He seems hopelessly naive to me.

mmaunder 15 July 2025
You can't put an idea as compelling as AI back in the bottle. Once the knowledge of how to build a model, and what the reward is, permeated society, it became an inevitability. Protest all you want and frame it as a choice if you'd like, but the 8 billion people on this planet will simply go around you, much like the sick little boy that the zombies bypassed in World War Z. We've seen this with hand tools, the wheel, horses, steam engines and combustion engines, electricity, TCP/IP and now AI. It is not the manifestation of human preferences. It is instead the irreversible discovery by a species of what is possible in this lonely universe.
snickmy 15 July 2025
Just wanted to call out how well written this blog post is, not necessarily from a substance standpoint (which in my opinion is very good as well), but from a fluidity and narrative standpoint.

It's quite rare in this day and age. Thank you, OP

codebolt 16 July 2025
The crucial point is that we simply do not know (yet) if there is an inherent limitation in the reasoning capabilities of LLMs, and if so whether we are currently near to pushing up against them. It seems clear that American firms are still going to increase the amount of compute by a lot more (with projects like the Stargate factory), so time will tell if that is the only bottleneck to further progress. There might also still be methodological innovations that can push capabilities further.
ilaksh 15 July 2025
Some of it is marketing, but it's not _just_ marketing. Many people have a worldview now where AI progress is inevitable. So we really believe it.

I would be interested to hear other ideas or plans that don't involve AI progress. My premise though is that the current state of affairs although improved from X decades/centuries ago is horrible in terms of things like extreme inequality and existential threats. If in your worldview the status quo is A-OKAY then you don't feel you need AI or robotics or anything to improve things.

asdev 15 July 2025
2026 will be the year that defines AI, and whether it lives up to the hype
jillesvangurp 15 July 2025
LLMs are here, they aren't going away. Therefore they are part of our future. The real question is what else is in our future and whether LLMs are all we need. I think the answer to that is a solid no and the people phrasing the future in faster/better LLMs are probably missing the point as much as people thinking of cars as coaches with faster horses.

That future isn't inevitable, but it is highly likely given the trajectory we're on. But you can't specify a timeline with certainty for what amounts to some highly tricky and very much open research questions that lots of people are working on. But predicting that they are going to come up completely empty-handed seems even more foolish. They'll figure out something. And it might surprise us. LLMs certainly did.

It's not inevitable that they'll come up with something of course. But at this point they'd have to be fundamentally wrong about quite a few things. And even if they are, there's no guarantee that they wouldn't just figure that out and address that. They'll come up with something. But it probably won't be just faster horses.

hamilyon2 15 July 2025
The optimistic scenario for the current AI bubble: long, careful deflation, one flop at a time.

The cautious scenario for LLM usage in daily life: in 36 years, it is invisible and everywhere. Every device has a neural chip. It has replaced untold trillions of years of work, reshaped knowledge work, artistic work and robotics, and become something as boring as email, TV, SAP, or a power cable today. Barely anyone is excited. Society is poor, but not hopelessly so.

Humanity has forgotten LLMs and is hyping gene engineering.

eduardofcgo 15 July 2025
Part of the inevitabilism is how these tools are being pushed. At this point it doesn't matter how good they are, it's just how many people live now. Microsoft sure knows how to turn bad software mainstream.

It helps also that these tools behave exactly like how they are marketed, they even tell you that they are thinking, and then deceive you when they are wrong.

Their overconfidence is almost a feature: they don't need to be that good, just provide that illusion.

aftergibson 15 July 2025
There's plenty of examples where important people framed an inevitable future and then it didn't pan out.

Somewhat objective proof of "progress" will inevitably win out. Yes, inevitabilist framing might help sell the vision a bit, for now, but it won't be the inevitabilism that causes it to succeed; it will be its inherent value towards "progress".

The definition of "progress" being endlessly more productive humans at the cost of everything else.

snickmy 15 July 2025
An axiom of inevitabilism, especially among the highest echelons, is that you end up making it a reality. It’s the kind of belief that shapes reality itself. In simple terms: the fact that the Googles, Anthropics, and OpenAIs of the world have a strong interest in making LLMs the way AI pans out will most likely ensure that LLMs become the dominant paradigm — until someone else, with equal leverage, comes along to disrupt them.
lbhdc 15 July 2025
Things like this have really got me thinking. If the AI hype all comes to fruition, and you want to ensure good outcomes for yourself, what is the best course of action?

Is it really building an AI company in the hopes that you find something that gets traction? Or would a better plan be building a private military force to take AI from whoever gets it? Would VC want to invest in that as a hedge?

maz1b 15 July 2025
From a cursory glance at the blog post, it seems that certain notable individuals are "framing" the modern AI/ML (LLM) era as inevitable, which I totally get, but isn't that how human life works?

The majority of humans will almost always take the path of least resistance, whether it's cognition, work (physics definition), effort. LLMs are just another genie out of the bottle that will enable some certain subset of the population to use the least amount of energy to accomplish certain tasks, whether for good or bad.

Even if we put the original genie back in the bottle, someone else will copy/replicate/rediscover it. Take WhatsApp locked secret passphrase chats as an example - people (correctly) found that it would lead to enabling cheaters. Even if WhatsApp walked it back, someone else would create a new kind of app just for this particular functionality.

phkahler 15 July 2025
I watched the Grok 4 video with Elon and crew last night. Elon kept making statements about what Grok would do in the next year. It hasn't invented anything yet, but supposedly it will advance technology within a year. There was some other prediction too.

These things are impressive and contain a ton of information, but innovating is a very different thing. It might come to be, but it's not inevitable.

twelve40 15 July 2025
> “AI will not replace humans, but those who use AI will replace those who don’t.” – Ginni Rometty

wait, i thought it was Watson that was supposed to replace me

sparky4pro 15 July 2025
This is a nice article as it triggers “think before you use it” mentality.

However, at the same time, it suggests the idea that rational thinking without any deep-seated perception or hidden motivation is possible.

This is not possible.

Therefore, all greedy people in this field will push anything that gives them what they want.

They will never care if what they do or promote will help “mankind” to a long term beneficial direction.

silexia 16 July 2025
I haven't logged into HN to comment or upvote in a long while as I don't want to play the game of trying to fight to get heard... But this is an excellent point.

Let's gather together and unite to stop AI... I am not concerned about the jobs issue, I am concerned about the extinction of the human species.

oytis 15 July 2025
It's money. People with capital can beat the drum indefinitely, as long and as hard as it takes, until "inevitable" becomes inevitable.
possiblydrunk 15 July 2025
Inevitability implies determinism and assumes complete knowledge. Forecasts of inevitable things are high probability guesses based on the knowledge at hand. Their accuracy is low and becomes lower as the level of detail increases. The plethora of wrong guesses get less attention or are forgotten and right ones are celebrated and immortalized after the fact.
paulcole 15 July 2025
The things I like are inevitable. The things I dislike are inevitable to people using debating tricks to shut down discussion.
kianN 15 July 2025
I put together a brief report organizing all the comments in this post. Sharing in case it is helpful to anyone else.

https://platform.sturdystatistics.com/dash/report/21584058-b...

pyfan878 19 July 2025
This is an excellent description of the mood that we are going through right now. It does feel inevitable that AI is going to be part of our lives, whether we like it or not.
mnsc 15 July 2025
"AI" is good right now and feels inevitable. But the current models are trained on the extinct "pure" information state we had pre llm:s. Going forward we will have to start taking into account the current level of "ai slop" being added to the the information space. So I will have to trust my "detect ai generated information" LLM to correctly classify my main three llms responses as "hallucinating", "second level hallucinating", "fact based", "trustworthy aggregate" or "injection attack attempt". Probably should add another llm to check that response as well. Printed as a check list so that I can manually check it myself.
AstralStorm 15 July 2025
Inevitabilism is defeated by showing someone that we still don't have a moonbase, and that we don't have, and likely never will have, faster-than-light travel.

There are no inevitable things. There are predictable ones at best.

It's a silly position to start from and easily defeated if you know what you're dealing with.

atleastoptimal 15 July 2025
AI is being framed as the future because it is the future. If you can't see the writing on the wall then you surely have your head in the sand or are seeking out information to confirm your beliefs.

I've thought a lot about where this belief comes from, that belief being the general Hacker News skepticism towards AI and especially big tech's promotion and alignment with it in recent years. I believe it's due to fear of irrelevance and loss of control.

The general type I've seen most passionately dismissive of the utility of LLMs are veteran, highly "tech-for-tech's-sake" software/hardware people, far closer to Wozniak than Jobs on the Steve spectrum. These types typically earned their stripes working in narrow intersections of various mission-critical domains like open-source software, systems development, low-level languages, etc.

To these people, a generally capable all-purpose oracle capable of massive data ingestion and effortless inference represents a death knell to their relative status and value. AI's likely trajectory heralds a world where intelligence and technical ability are commodified and ubiquitous, robbing a sense of purpose and security from those whose purpose and security depend on their position in a rare echelon of intellect.

This increasingly likely future is made all the more infuriating by the annoyances of the current reality of AI. The fact that AI is so presently inescapable, despite how many glaring security-affecting flaws it causes, how much it propagates slop in the information commons, and how effectively it emboldens a particularly irksome brand of overconfidence in the VC world, is preemptive insult added to injury in the lead-up to a reality where AI will nevertheless control everything.

I can't believe the types I've seen on this site aren't smart enough to see the forest for the trees on this matter. My Occam's razor conclusion is that most are smart enough; they are just emotionally invested in anticipating a future where the grand promises of AI fizzle out and it's back to business as usual. To many, this is a salve necessary to remain reasonably sane.

unraveller 15 July 2025
It's more that big money going towards a clear and desirable endgame is a fairly sure thing. I choose to avoid lots of tech and I won't download the app but it's very hard to see how fighting a promising open tech like LLMs is the "pro-humanity" stance here.
brador 15 July 2025
It’s like VR. Once you use it you just know it’s the future of entertainment.

Just the exact pathing is unknown.

strangescript 15 July 2025
The great thing but terrifying thing about innovation is we rarely select how it plays out. People create great ideas and concepts, but they rarely play out exactly how the initial researchers/inventors expected. Did Apple know we would spend all day doom scrolling when it created the iPhone? Did it want that? Would they have viewed that as desirable? Doubtful. But what was the alternative? Not make a smart phone and then wait until someone else does create one who has even less concern for people's well being. Or better yet, how could they have even predicted the outcome in 2007?

Humanity has never been able to put the innovation genie back in the bottle. At best we have delayed it, but even those situations require there be a finite resource that can be easily regulated and controlled. AI is not one of those things.

tayo42 15 July 2025
Probably true for AI.

I don't think we have that kind of AI right now with LLMs. Is there a reason to believe it's right around the corner?

Some AI tool hallucinated a bazel config option for me today. Maybe bazel is too hard even for AGI lol

__d 15 July 2025
The majority of the comments here reflect an acceptance of or even an enthusiasm for an LLM-using future. An embracing of the technology regardless of its downsides. A disregard of those who question whether it’s all a desirable future.

I’d have thought perhaps we’d learn the lessons of eg. smart phones, social media, cloud, VR, crypto, NFTs, etc, and think a little more deeply about where and how we want to go as a society and species beyond just adopting the latest hype.

mark_l_watson 15 July 2025
As much as I love open source software and open weight LLMs, let’s get real: a combination of special interests ‘owning’ both political parties, the rise of the “Tech Bros” who don’t give a rat’s ass about anyone but themselves, and a permanent War Department fueling extreme profit for inside players - the future looks a little grim.

I fight back by using the technology I want, lead a spiritual/religious life and am loving with the people I interact with.

titaniumrain 15 July 2025
LLMs aren’t suited for everyone or every task—at least not yet. However, that won’t prevent their adoption in areas where they excel, such as code generation, translation, and similar domains.
cwoolfe 15 July 2025
From an anthropological standpoint, consider that most every human culture across time and space has had people who play the role of the prophet. They tell us of the future. We are no different.
graphememes 15 July 2025
I would argue that reality is already here and is already happening.
jstummbillig 15 July 2025
I don't think that LLMs are inevitable, but what this piece lacks (and that's fine, I like the point and writing anyway) is a plausible alternative. LLMs might not be inevitable, but until something better comes along, why would they go away? Even if we assume that people are just completely delusional about the models adding anything of value, why would that change at any point in the future?
sandspar 15 July 2025
I'm looking forward to meeting my ChatGPT. I believe that one day I will perceive it looking back at me. I expect that by then it may have read this very comment. If so, hello!
snowstormsun 15 July 2025
Don't worry, it will go away once the stock market plunges.
gyosko 15 July 2025
I don't know if "AI" will be able to do 100%, or even 90%, of my job in the next year(s). But I do know what I can see now: "AI" is making more bad than good.

Billions of dollars litterally burned in weird acquisitions and power, huge power consumptions and, the worst one maybe: the enshittification.

Is it really this what we want? Or it's what investors want?

tehjoker 15 July 2025
This is the same strategy Hillary Clinton supporters tried to use too. The author is right, it's just a framing technique. We can choose the future we want.
keithwhor 15 July 2025
It’s also possible for LLMs to be inevitable, generate massive amounts of wealth and still be mostly fluff in terms of objective human progress.

The major change from my perspective is new consumer behavior: people simply enjoy talking to and building with LLMs. This fact alone is generating a lot of (1) new spend and (2) content to consume.

The most disappointing outcome of the LLM era would be increasing the amount of fake, meaningless busywork humans have to do sifting through LLM-generated noise just to find signal. And indeed there are probably great products to be built that help you do just that; and there is probably a lot of great signal to be found! But the motion-to-progress ratio concerns me.

For example, I love Cursor. Especially for boilerplating. But SOTA models with tons of guidance can still not reliably implement features in my larger codebases within the timeframe it would take me to do it myself. Test-time compute and reasoning makes things even slower.

jolt42 15 July 2025
It's as inevitable as the cotton gin; ironically, I just saw some news on how the Chinese continue to improve that, and the same will be true for AI.
lupusreal 15 July 2025
I don't really know what the author's real angle is here, does he think LLMs aren't inevitable because they will be supplanted by something better? That's certainly plausible. But if he thinks they might get banned or pushed to the margins, then he's definitely in loony town. When new technology has a lot of people who think it's useful, it doesn't get rolled back just because some people don't like it. To get rid of that technology the only way forward is to replace it with something that is at least as good, ideally better.

Is my position "inevitablism"? Does the author slapping that word on me mean that he has won the debate because he framed the conversation? I don't care about the debate, I'm just saying how it will be, based on how it always has been. Winning the debate but turning out to be wrong anyway, funny.

Komte 15 July 2025
I absolutely disagree with the conclusion of the article. As individuals we can make conscious choices; as a society we basically cannot (with occasional exceptions across history). We're guided by the path of least resistance even if it leads to our own demise. See the climate crisis, nuclear proliferation, etc.
CyanLite2 15 July 2025
Article assumes LLMs stay where they currently are or progress only incrementally.

Many Fortune 500 companies are seeing real productivity gains through Agentic Workflows to reduce paperwork and bureaucratic layers. Even a marginal 1% improvement can be millions of dollars for these companies.

Then you have an entire industry of AI-native startups that can now challenge and rival industry behemoths (OpenAI itself is now starting to rival Google/Microsoft/Amazon and will likely be the next "Big Tech" company).

smeeger 15 July 2025
By far the most pervasive idea now is that AGI is inevitable and trying to limit or stop it is impossible. People come to this conclusion without any evidence and without thinking about it very deeply. Obviously we could stop it if we wanted to. I've given up trying to explain it to people. They just ignore me and continue believing it anyway.
s_ting765 15 July 2025
Repetition is an effective tool in communication. That's why the AI hype marketing machine is not coming to a stop anytime soon.
tete 15 July 2025
Of course!

Just like we have been using what we now call VR goggles and voice input since the 80s, oh and hand gestures, and governments all around use Blockchain for everything, we also all take supersonic planes when we travel, also everyone knows how to program, also we use super-high-level programming languages, also nobody uses the keyboard anymore because it has been replaced by hundreds if not thousands of better inputs. Books don't exist anymore, everyone uses tablets for everything all the time, ah and we cook using automatic cooking tools, we also all eat healthy enriched and pro-biotic foods. Ah and we are all running around in Second Life... err Meta I mean, because it is the inevitable future of the internet!

Also we all use IPv6, have replaced Windows with something that used to be a research OS, also nobody uses FTP anymore EVER. The Cloud, no Docker, no Kubernetes, no Helm, no, I mean Kubernetes Orchestrators made it trivial to scale and have a good, exact overview of hundreds, no thousands, no millions of instances. And everything is super fast now. And all for basically free.

Oh and nobody uses paper wipes or does any manual cleaning anymore; in fact cleaning personnel has faded into an obscurity people mostly don't know about anymore, because everyone sells you a robot that does all of that way better for five bucks, basically since the middle of the century!

Also we all have completely autonomous driving, nobody uses licenses anymore, we use hyper-fast transport through whatever replaced trains, and we also have widespread use of drone cabs and drone package delivery 24/7.

We are also SO CLOSE to solving every health issue out there. There is barely anything left we don't completely understand, and nobody has ever heard of a case where doctors simply didn't know precisely what to do, because we all use nanobots.

Email also has been completely replaced.

All computers are extremely fast, completely noiseless, use essentially no energy. Nothing is ever slow anymore.

Oh and thanks to all the great security companies and products, leading edge, even with AI, nobody ever falls victim to any phishing, scam, malware, etc. anymore.

Also everything is running secure, sandboxed code all the time, and it never causes any problems.

People somehow seem to think the first 10% takes 90% of the time or something. We have seen only very marginal improvements in LLMs, and every time an unbiased researcher (as in, one not directly working for a related company) looks at it, they find that LLMs at best manage to reproduce something that the input explicitly contained.

Try to get one to create a wine glass filled to the brim, or try to have even the most advanced LLM do something really novel, especially adding or changing something in an existing project.

bubblebeard 15 July 2025
LLMs and CAs are most likely here to stay. The question is how we use them correctly. I've tried using an LLM to help me learn new programming languages, suggest alternative solutions to some mess I've created, and explain things I do not understand. For all of these things, it's been very helpful. You can't rely on it; you have to use common sense and cross-reference anything you don't have at least some prior knowledge of. Just saying, it's way easier than attempting the same using traditional search engines.

One thing it will not do is replace developers. I do not see that happening. But, in the future, our work may be a little less about syntax and more about actual problem solving. Not sure how I feel about that yet though.

hnbad 16 July 2025
What he calls inevitabilism, Chomsky referred to as "manufacturing consent":

https://en.wikipedia.org/wiki/Manufacturing_Consent

Sure, Chomsky was specifically talking about how the underlying systems of US mass media manufacture consent among the governed, but inevitabilism lies at its core: a thought-terminating cliché that eliminates all alternatives from even being legitimate subjects of the debate.

podlp 15 July 2025
The book I'm currently reading, Kevin Kelly's The Inevitable, feels pretty ironic given this post.
bikemike026 15 July 2025
Arguing about AI is like arguing about bulldozers. It's just a tool.
stiray 15 July 2025
I completely agree with the author on LLMs. I consider AI stock-inflating noise, like NoSQL databases (...) were. NoSQL ended up, after all the hype, as sometimes usable.

I typically buy ebooks. When I read one and find it is a rare jewel, I also buy the hardcover if available.

Shoshana Zuboff's The Age of Surveillance Capitalism is one of those hardcovers.

I recommend reading it.

protocolture 16 July 2025
LLMs are going to be around for a long time.

The technology being available is "inevitable".

The legal, technological and social consequences are not known, but it is inevitable that whatever they are our kids will have to live with them.

thrawa8387336 15 July 2025
Agreed, it's just messianic thinking à la Abrahamic religions. See Gnosticism, Marxism, positivism, ...
waffletower 15 July 2025
I was very much inspired by "think about the future you want, and fight for it." It is inevitable that voting will eventually require competency tests. :D
hannofcart 15 July 2025
> Don’t let inevitabilism frame the argument and take away your choice. Think about the future you want, and fight for it.

What would 'fight for it' in this context mean?

nilirl 15 July 2025
HN over the last year: personal anecdotes, analogy, and extrapolation as evidence for "obviously it's inevitable, why can't you see?"
charlescearl 16 July 2025
With respect to the post’s discussion of “surveillance capitalism”, Braverman in Labor and Monopoly Capital (published 1974, https://archive.org/details/labormonopolycap00brav) makes it clear that Capitalism (particularly with respect to the use of machines for worker exploitation) is never not surveillance.
CommenterPerson 16 July 2025
Wow... adding to the 1426 comments so far. Here is an interview with the author of The Age of Surveillance Capitalism, Professor Shoshana Zuboff of Harvard. Thanks to the OP for the pointer. It is three years old and even more relevant now:

https://time.com/6174614/shoshana-zuboff-twitter-surveillanc...

ringeryless 17 July 2025
Spot on. I am disgusted at how engineers trained to be rigorous will eschew the rigor of precise specification and instead opt for a nondeterministic black-box magical oracle emitting little pre-packaged turdlets of incomprehension.

Scam Altman is good at sales, and his framing is indeed the key to getting so many lemmings to submit to 'the inevitable'...

The first step in deprogramming such a brainwashed victim is insisting we refer to LLMs rather than using the marketing term AI.

hpincket 15 July 2025
I have similar thoughts to the author [0]. I appreciate how they tracked down the three quotes. The only thing I'll add is that there's a certain ambiguity in statements of this kind. They come off as 'matter of fact', but in reality the speakers are pushing for this future.

https://hpincket.com/where-the-industry-is-headed.html

stale2002 15 July 2025
I'm not sure what this guy is even advocating for. Is he saying that LLMs should be made illegal or something? Given that they can run on my home PC, I doubt that's going to go well.

And if you can't make it illegal, then good luck stopping people from using it. It is inevitable. I certainly am not going to willingly give up those benefits. So everyone else is free to fall behind, I guess, and lose to those who defect and accept the benefits of using LLMs.
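
As a minimal sketch of what "run on my home PC" can mean in practice, assuming the Hugging Face transformers library and a small open-weights model (the model name below is purely illustrative; any model that fits in local RAM/VRAM would do):

    # Requires: pip install transformers torch
    from transformers import pipeline

    # gpt2 is tiny by modern standards but runs comfortably on ordinary hardware;
    # swap in any small open-weights model you can download.
    generator = pipeline("text-generation", model="gpt2")
    print(generator("LLM inevitabilism is", max_new_tokens=40)[0]["generated_text"])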

alganet 15 July 2025
Words already didn't matter way before LLMs. Know your rhetoric basics.
neerajk 15 July 2025
"I'm so sick of hearing about bronze. Bronze bronze bronze bronze. Whats wrong with stone? Does stone not work all of a sudden?"

https://www.youtube.com/watch?v=nyu4u3VZYaQ

invig 18 July 2025
The debate stage is a deliberately constrained environment. The world and where it goes doesn't work like that.
anuvratparashar 15 July 2025
Is it just me, or do the community's comments here contradict the investment choices made by YC?
praptak 15 July 2025
Inevitabilism has a long history of being used to persuade people to accept shitty stuff. The Soviet bloc used Marx's historicism (or their interpretation thereof) to argue that communism (or their implementation thereof) was inevitable.

There was also TINA which was used to push the neoliberal version of capitalism: https://en.wikipedia.org/wiki/There_is_no_alternative

UrineSqueegee 15 July 2025
bro made an obscure statement and got hundreds of upvotes on HN
andrewstuart 15 July 2025
And yet, LLM-assisted programming is not only inevitable but the present AND the future.

Embrace it.

The unbelievers are becoming ever more desperate to shout it down and frame the message as though LLMs can somehow be put back in the bottle. They cannot.

kotaKat 15 July 2025
It's not going to be inevitable, because I'm going to keep calling out everyone forcing their AI and LLMs on me for exactly what they are: technical rapists. I said no; quit forcing your product all over me.
aldousd666 15 July 2025
AI is the future, I don't care who is dubious of it. LLMs in their Transformer variations may not survive the long run, but LLMs are not the whole of AI. lets do keep in mind that today's limitations become yesterdays speed bumps. Perhaps there's a new architecture or a tweak to the existing one that gets us the rest of the way there. There has never been this rapid of a dislocation in capital investment that didn't make a big dent in the long run. You can swear up and down that it may not happen, but do you think all of these companies, and now countries, are going to just take a hit and let it go? No friggin way. It's AT LEAST as prevalent as nuclear was, but I'd argue more since you can't run nukes on your laptop. The other thing about AI is that it can be used to different degrees in tech. you can't incorporate half of a supersonic jet's supersonic-ness into something that is less than supersonic. You can incorporate partial AI solutions that still mix with human control. The mixture will evolve over time to an optimal balance. whether that is more AI and less humans or vice versa remains to be seen.