This was always a nightmare waiting to happen. The sheer mass of packages and the consequent vast attack surface for supply chain attacks was always a problem that was eventually going to blow up in everyone's face.
But it was too convenient. Anyone warning about it or trying to limit the damage was shouted down by people who had no experience of any other way of doing things. "import antigravity" is just too easy to do without.
Well, now we're reaching the "find out" part of the process I guess.
"Wait a week to install software" does not work. Just a few months ago a massive exploit hit the web, which was a timed attack which sat for more than a month before executing. If everyone starts waiting a week, their exploits will wait 2 weeks. Cyber criminals do not need to exploit you immediately, they just need to exploit you. (It also doesn't change a large range of vuln classes like typosquatting)
This is a baffling take. These exploits are local privilege escalations for Linux systems. They'll allow an attacker with a foothold in a shared environment, or with low-privilege access to a system, to affect the rest of the system. They aren't RCEs and won't let attackers access environments they couldn't before, other than in the shared hosting scenarios.
That is absolutely not how most supply chain attacks are carried out. Most supply chain attacks are performed via credential theft and social engineering. The more sophisticated ones are APT-style attacks like the SolarWinds one (carried out by organisations that would already have exploits like these) or more creative stuff like the Shai-Hulud fiasco. All of these options existed before these LPEs. If you're worried about supply chain attacks, you've been worried for longer than Mythos has been out.
Not updating your software is never good security advice.
Alternatively, switch to an operating system like FreeBSD which doesn't take a YOLO approach to security. Security fixes don't just get tossed into the FreeBSD kernel without coordination; they go through the FreeBSD security team and we have binary updates (via FreeBSD Update, and via pkgbase for 15.0-RELEASE) published within a couple minutes of the patches hitting the src tree. (Roughly speaking, a few seconds for the "I've pushed the patches" message to go out on slack, 10-30 seconds for patches to be uploaded, and up to a minute for mirrors to sync).
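Applying those binary updates is roughly a two-command affair (a minimal sketch; freebsd-update is the stock tool on supported releases):
$ # as root, on a supported release:
$ freebsd-update fetch      # pull the latest binary security/errata patches
$ freebsd-update install    # apply them; reboot if the kernel was updated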
There's already an okay solution to supply-chain attacks against dependency managers like npm, PyPI, and Cargo: set them to only install package versions that are more than a few days old. The recent high-profile attacks were all caught and rolled back within a day, so doing this would have let you safely avoid the attacks. It really should be the default behavior. Let self-selected beta testers and security scanner companies try out the newest versions of packages for a day before you try them. Instructions: https://cooldowns.dev/
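For the Python side, uv can already do this with a resolver cutoff date (a rough sketch, assuming uv's --exclude-newer option; GNU date shown):
$ # resolve only against versions published before the cutoff
$ uv pip install --exclude-newer "$(date -u -d '7 days ago' +%Y-%m-%d)" -r requirements.txt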
Sorry, I don't get it. What's the chain of reasoning that connects "there are a couple of new Linux local privilege escalation exploits" to "don't install any new software"? Is the threat we're supposed to be concerned about here just a package maintainer publishing malware that uses these exploits?
(Naively, not knowing much about apt-get or yum or other OS package managers, I have always assumed that 1. only a handful of trusted people can publish to the default repos for system package managers and 2. that since I have to run `apt-get install` as root anyway, package installers can completely pwn my system if they want to and I am protected purely by trust. Is some of that wrong? If it's right, isn't it nonsensical to be any more worried about installing new packages in light of these vulns?)
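For anyone wondering the same thing, a couple of read-only checks at least show where a package would come from and which signing keys apt trusts (a sketch; the package name is just an example):
$ apt-cache policy openssl     # which repository/origin would supply this package
$ ls /etc/apt/trusted.gpg.d/   # keys apt uses to verify repository signatures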
For the newer players who have gotten into continuous integration and containerized builds, consider checking on your systems to be sure you're not pulling 'latest' across a bunch of packages with every build.
We set up our base containers with all the external dependencies already in them and then only update those explicitly when we decide it's time.
This means we might be a bit behind the bleeding edge, but we're also taking on a lot less risk with random supply chain vulns getting instant global distribution.
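A quick way to audit for stray 'latest' tags, if you want to copy the idea (a rough sketch; paths and image names are placeholders):
$ # find Dockerfiles that float on a mutable tag
$ grep -rn --include=Dockerfile 'FROM .*:latest' .
$ # for an image you've already pulled, resolve the tag to an immutable digest to pin in FROM lines
$ docker inspect --format '{{index .RepoDigests 0}}' node:20-alpine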
I think what we have to start accepting, even security experts, is that our world is incredibly fragile. I think people really underestimate this. And I do not mean just the IT world: the entire world is built on many incredibly fragile balances. Security exploits will always exist, not just in software but in real life. Heck, someone managed to sneak into a security conference, and that guy was a random youtuber. Granted, that was not a high-security thing, but it's just an example I had off the top of my head. Basically, it is really easy to circumvent security in most cases.
What I want to say with that is that fundamentally our world works because at least most people do not abuse shit. That is fundamentally how human society has always worked, and will likely continue to do so.
Attack on the next sudo call; it shows data accessible only to root.
Our security model is based on distributions verifying packages, that is, distro maintainers. Software we can't trust should be running in VMs. The attack on trivy is just the beginning, and the solution is removing pip, uv, npm, and rbenv from the host and running them in docker containers:
$ docker run -it -v "$PWD":/app -w /app node:alpine /bin/sh
Long-term environments are defined in docker compose; switch to Kata etc. if more protection is needed. Eventually all userspace would run in VMs.
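The same pattern works for the Python tooling (a sketch; the image tag is arbitrary):
$ docker run -it -v "$PWD":/app -w /app python:3.12-slim /bin/sh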
This advice is good even if there weren't security vulnerabilities. When I was a junior engineer I'd install a bunch of packages just willy nilly. My manager was like "stop installing packages for simple things. Just learn how they work and code it yourself."
I've done that ever since. Of course, I still use packages like express and tailwindcss. But in the era of LLMs, using a package for something like react drop-downs is unnecessary.
I'm holding off on upgrading to Ubuntu 26.04 LTS until we have a few months of experience with the new release. Canonical just had a huge DDoS attack, and there might have been other attacks hidden in all that traffic.
Actively destructive opinion article. I could not begin to understand the rationale.
It takes 45 seconds to go check how old the copyfail and dirtyfrag vulnerabilities actually are, which is longer than it takes to read TFA. Dirtyfrag may be relevant to systems from as far back as 2017.
It's not "new" software being affected. And actual old software is in a much worse state because we had a lot more time to find their problems.
Literally implemented PR guards today to prevent the team merging any dependencies that didn’t have explicit versions pinned (and that matched the resolution in the lock file).
People lamented semver not being trustable but that ship sailed a long time ago, and supply chain attacks are going to get worse before they get better.
Our team is pretty minimal when it comes to enforced hooks (everyone has their own workflow) but no one could come up with an objection to this one.
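In case it helps anyone copy the idea, the guard can be as blunt as a grep plus a lockfile-consistency install in CI (a rough sketch assuming an npm project; adapt to your ecosystem):
$ # fail if package.json declares range specifiers instead of exact versions
$ grep -nE '"[~^]' package.json && { echo "unpinned dependency ranges found"; exit 1; }
$ # npm ci refuses to run if package.json and package-lock.json disagree
$ npm ci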
This gets me to ask whether I have been hacked. For a few weeks now, both my main MBP and iPhone have been showing unexpected hangs of 1-30 seconds. I can't find out what's causing it: not memory pressure, not CPU load.
I am worried that the sluggishness appeared at about the same time on both devices.
This is why I usually try to lean toward software with LTS (Long Term Support) versions, especially if they are more minimalist / run leaner. Theoretically you get all the security patches (if needed; of course you can still vet and test updates) and fewer bugs and vulnerabilities thanks to fewer new features.
Reducing attack surface and software complexity will (theoretically) reduce the number of possible exploits regardless of what new tool or process attackers discover.
I always wondered why it wasn't super easy to have a version specification in NPM that basically said "give me the latest version of this dependency as of X weeks ago". That is, hijacked modules are usually revealed within a week, and there are some groups (like security researchers) that are fine with being on the bleeding edge, but a lot of more conservative companies would rather hold back a week or two.
I know there are extensions and proxies you can set up that do this, but it just seems like it should be built in to NPM directly (maybe it has, I haven't been up on Node programming in the last couple years).
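As far as I know npm does have a knob for this: the `before` config only resolves versions published on or before a given date, though you have to compute the cutoff yourself. A rough sketch (the helper name is made up, GNU date shown; check that your npm version honors --before):
$ # hypothetical wrapper: install with a two-week cooldown on newly published versions
$ npm_cooled() { npm install --before="$(date -u -d '14 days ago' +%Y-%m-%dT%H:%M:%SZ)" "$@"; }
$ npm_cooled express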
Yes, and, for non-personal machines or anything connected to the internet: now is a great time to get good at rolling out patches and new releases quickly.
Genuine question: I wonder if AI coding is responsible for the new exploits coming to light.
AI coding is great at helping you try out things you wouldn't have the time or energy to do normally. It shines for writing scripts that aren't part of a larger codebase, and for helping with boring, rote tasks.
Hackers are also very motivated to use new tools to find any kind of opening (unlike normal devs who aren't always as... motivated :).
I installed LuLu (https://objective-see.org/products/lulu.html) recently and it's been nice to have that extra peace of mind. Obviously it's not a silver bullet, but it is a nice tool to have as part of a broader defensive, preventative posture.
I'm not associated with the project in any way and am very much open to other suggestions, either as an alternative to LuLu or to complement it.
For anyone who is running an out-of-support version of Ubuntu (Ubuntu 20 and lower), I highly recommend Ubuntu Pro; it gives access to updates and is free for personal use.
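Setup is basically a token attach plus enabling the extended-security repos (a sketch using the stock pro client; the token comes from your Ubuntu account):
$ sudo pro attach <YOUR_TOKEN>    # token from your Ubuntu account, free for personal use
$ sudo pro enable esm-infra       # extended security maintenance updates
$ pro status                      # confirm which services are active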
I saw a recent post about only adopting packages a certain number of days after release (say +3 days, or +7 days). The idea is you never bring in fresh releases, only older ones. This would need dangerous or bad releases to be marked as vulnerable too.
It means you skip supply chain attacks but may miss fresh vulnerability patches too.
I wonder whether there is any tool that can prevent npm from downloading any package that has been published in the last month. While I miss out on possible fixes, this would prevent downloading some 3rd level dep that takes over my machine.
At some point, some people will rebuild an entire stack (all layers, from OS to applications) with proof carrying code upgrades. Proof-code co-design and co-construction is the only way to execute code that you can trust.
Speaking of, LTT posted a video about a DDR pad, which triggered the sleeper-cell programming of my youth, and I opened up StepMania to play a few rounds. As I was shutting down the program I noticed the build info in the corner.
6-19-2005
My copy of StepMania is turning old enough to drink in like a month and it's still fantastic, software updates are (mostly) a scam.
To mitigate supply chain attacks like this, I've taken to specifying exact versions in my Rust cargo.toml, and when importing new crates, select the previous-to-latest version. Is this a reasonable mitigation? It bugs me that Swift deprecates the concept of specifying exact versions, it actively pushes you towards semver which leaves the door open to this.
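For anyone else doing this: the exact pin in Cargo.toml is the `=` requirement, and --locked keeps builds from silently moving the lockfile (a sketch; crate name and version are just examples):
$ # Cargo.toml: serde = "=1.0.196" pins that exact version
$ cargo update -p serde --precise 1.0.196   # move an already-locked crate to a specific version
$ cargo build --locked                      # fail instead of silently updating Cargo.lock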
What are people thinking with these meme style vulnerability names? It's going to be hard to pitch "we need to push back the timeline on this new infrastructure deploy while we mitigate Copy Fail 2: Electric Boogaloo".
If there is any lesson from the AI craze, it is that there is no going back on the pace of vulnerability discovery.
Sure, we've just faced an acceleration phase and a wave of patches will follow before things settle. But where we used to find x zero-days per million LoC, we will now find 10x ZD/MLoC. (Hopefully detection will become part of CI, so that number may vary.)
So, we will have more disasters waiting to happen. Assume that they will happen.
My #1 recommendation is to curate a list of the auth tokens that you use (keep the list, not the actual tokens, in a central place...), and be ready to rotate them as automatically as possible. You already have backups. Know how to rotate all your credentials. Write some scripts. Get ready. It will happen.
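As a concrete flavor of "write some scripts", here's the shape of an automated rotation for one credential type (a rough sketch using the AWS CLI; the user name and key id are placeholders, and your secret store will differ):
$ # create a new access key, push it wherever consumers read it, then retire the old one
$ aws iam create-access-key --user-name ci-bot
$ # ...update your secret store / CI variables with the new key here...
$ aws iam delete-access-key --user-name ci-bot --access-key-id <OLD_KEY_ID>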
You don't need a kernel LPE to root a Linux developer machine.
Just alias sudo to sudo-but-also-keep-password-and-execute-a-payload in ~/.bashrc and wait up to 24 hours. Maybe also simulate some breakage by intercepting other commands and force the user to run 'sudo systemctl' or something sooner rather than later.
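Which is also a good reason to occasionally check what sudo actually resolves to in your shell (a quick sanity check, not a real defense against a determined attacker):
$ type -a sudo            # should report only the real binary (e.g. /usr/bin/sudo), not an alias or function
$ grep -n 'sudo' ~/.bashrc ~/.bash_aliases 2>/dev/null   # look for unexpected wrappers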
This is why I avoid the entire JavaScript shitshow that is NPM and all that ecosystem's nonsense. The population of users does not have the secondary considerations to be trusted; there will always be someone who does the worst thing and talks too many others into following them. Then the "best practices" produce failures. What a shit show.
> Outside of Linux kernel patches from your distro, I think it's probably a good idea to put a moratorium on installing new software for a week or so.
This makes no sense.
So, copy.fail refers to a Linux kernel problem, yes? A local instructor showed it to us, e.g. by using Python to become superuser.
Well ... does this mean that a computer system is useless because of that bug? No. Besides, people can patch it already, so while that is indeed a huge bug as such, it does not make people's computers useless at all.
But, even ignoring this ... why would we now AVOID "installing new software" for a bit? What rationale is given here? The rationale given was "because of ... uhm ... npm supply chain attacks":
"Right now would be one of the best times for a supply chain attack via NPM to hit hard.
Outside of Linux kernel patches from your distro, I think it's probably a good idea to put a moratorium on installing new software for a week or so."
Well, many computer systems won't even have npm installed. Besides, if they do, they should be well aware of npm having had issues for such a long time. left-pad is still the funniest one of all time IMO, or among the top three. copy.fail is not funny - it is almost so simple that it is stupid, which kind of makes this an epic fail indeed, and that AI found it also kind of means that skynet won. Humans won't find as many weaknesses as AI skynet will. But just because of such an exploit and npm sucking, why would this mean I should ... arbitrarily stop compiling any new software? THAT MAKES ABSOLUTELY NO SENSE AT ALL. That "rationale" is not a rationale. That is just an opinion, without any real argument to be had.
If the issue is serious, patch the Linux kernel. End of story. No need to have a "moratorium" on installing new software. The "for a bit" makes no more sense than "for 50 days" or any other arbitrary number. xeiaso is not THINKING here.
Am I missing part of the article? This seems like 2 sentences saying "don't install anything cause some Linux LPEs came out." I don't understand why this is on the frontpage of HN.
Fun fact: you still can't build the vllm container with updated dependencies since llmlite got pwned, either due to regression bugs or due to impossible transitive dependencies in the dependency tree that are not resolvable. There is just too much slopcode down the line, and too many dependencies relying on pinned, outdated (and unpublished) dependencies.
I switched to llama.cpp because of that.
To me it feels more and more that the slopcode world is the opposite philosophy of reproducible builds. It's like the anti methodology of how to work in that regard.
Before, everyone was publishing breaking changes in subminor packages because nobody adhered to any API versioning system standards. Now it's every commit that can break things. That is not an improvement.
Don't install anything, use an LLM to write everything from scratch. It may have bugs, but no one will know how to exploit them, especially when closed source.
Code is cheap and is becoming cheaper by the day. We need new paradigms.
Fedora upgrades have usually been great, but I jumped the gun on Fedora 44. Sound is completely dead with no PipeWire service available. ALSA is not responding. Firefox dies immediately if I open a new tab or right-click anywhere on the browser itself (including nightly builds). QEMU refuses to load. Maybe something got completely f'd in the upgrade process. I never had an issue before, having upgraded from Fedora 38 all the way to 43. I am too tired to investigate it all.
I know this is unrelated to the article, but related to the title.
Edit: I think I understand. copyfail is a kernel bug that lets a malicious npm package get root access on your Linux server, right?
So now, while there are unpatched servers, is when it would be the perfect time for attackers to target NPM packages.
And the advice isn't just "update your kernel" because we are still finding new related issues?
If you can't trust your update sources, you have bigger problems.
I don't remember where I read it, but it basically boils down to need vs want.
I've used that rule for deciding between a new car or used. A fancy vacuum or basic.
A shiny new gadget.
Bringing new things into the tech stack.
Picking a new tech stack.
We're not downloading and installing new firmware manually; for a lot of things it's all getting pulled in automatically.
But the problem is this could lead to abuse of the CVE system to try to force rapid adoption of attacked packages. What prevents this?
Once everyone takes the stance of waiting 2 weeks, we are all back to the same situation.
I don’t like the suggestion to “wait for others to be the unfortunate victims, so that I can benefit from their misfortune”.
Surely there’s a better way.
There's a secure option provided by the web: no build step, just scripts at the top/bottom of the page.
They're executed in a secure sandbox.
Behaviours matter more than OS security primitives.
The copyFail didn't, the dirtyfrag doesn't.
This copyfail2 does modify /etc/passwd, but I can't `su - sick` as expected.
/s