Malicious versions of Nx and some supporting plugins were published
August 27, 2025
inbx0
Periodic reminder to disable npm install scripts.
npm config set ignore-scripts true [--global]
It's easy to do both at the project level and globally, and these days there are quite few legit packages that don't work without install scripts. For those that don't, you can create a separate installation script in your project that cds into that folder and runs their install script.
I know this isn't a silver bullet solution to supply chain attacks, but, so far, it has been effective against many attacks through npm.
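A minimal sketch of the project-level setup (a temp directory stands in for your project root here):

```shell
#!/usr/bin/env sh
# Project-level equivalent of `npm config set ignore-scripts true`,
# written as a checked-in .npmrc so every clone of the repo gets it.
set -eu
project=$(mktemp -d)   # stand-in for your project root
printf 'ignore-scripts=true\n' > "$project/.npmrc"
cat "$project/.npmrc"
```

For the handful of packages that genuinely need their scripts, a small helper script checked into the repo that cds into that package's folder and runs its script explicitly keeps the decision visible and reviewable.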
homebrewer
I also use bubblewrap to isolate npm/pnpm/yarn (and everything started by them) from the rest of the system. Let's say all your source code resides in ~/code; put this somewhere in the beginning of your $PATH and name it `npm`; create symlinks/hardlinks to it for other package managers:
#!/usr/bin/bash
bin=$(basename "$0")
exec bwrap \
--bind ~/.cache/nodejs ~/.cache \
--bind ~/code ~/code \
--dev /dev \
--die-with-parent \
--disable-userns \
--new-session \
--proc /proc \
--ro-bind /etc/ca-certificates /etc/ca-certificates \
--ro-bind /etc/resolv.conf /etc/resolv.conf \
--ro-bind /etc/ssl /etc/ssl \
--ro-bind /usr /usr \
--setenv PATH /usr/bin \
--share-net \
--symlink /tmp /var/tmp \
--symlink /usr/bin /bin \
--symlink /usr/bin /sbin \
--symlink /usr/lib /lib \
--symlink /usr/lib /lib64 \
--tmpfs /tmp \
--unshare-all \
--unshare-user \
"/usr/bin/$bin" "$@"
The package manager started through this script won't have access to anything but ~/code, plus read-only access to system libraries:
bash-5.3$ ls -a ~
.  ..  .cache  code
bubblewrap is quite well tested and reliable; it's used by Steam and (IIRC) flatpak.
shermantanktop
This is trading one distribution problem (npx) for another (bubblewrap). I think it’s a reasonable trade, but there’s no free lunch.
homebrewer
Not sure what this means. bubblewrap is as free as it gets, it's just a thin wrapper around the same kernel mechanisms used for containers, except that it uses your existing filesystems instead of creating a separate "chroot" from an OCI image (or something like it).
The only thing it does is hide most of your system from the stuff that runs under it, whitelisting specific paths and optionally making them read-only. It can be used to run npx, or anything else really — just shove more symlinks into the beginning of your $PATH, each referencing the script above. Run any of them and it's automatically restricted from accessing e.g. your ~/.ssh
oulipo2
Will this work on macOS? And for pnpm?
TheTaytay
Very cool. Hadn't heard of this before. I appreciate you posting it.
no_wizard
pnpm natively lets you selectively enable install scripts on a per-package basis
tiagod
Or use pnpm. The latest versions have all dependency lifecycle scripts ignored by default. You must whitelist each package.
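A sketch of what that whitelist looks like in package.json with pnpm 10 (the package names here are just examples of dependencies that legitimately need build scripts):

```json
{
  "pnpm": {
    "onlyBuiltDependencies": ["esbuild", "sharp"]
  }
}
```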
chrisweekly
pnpm is not only more secure, it's also faster, more efficient wrt disk usage, and more deterministic by design.
norskeld
It also has catalogs feature for defining versions or version ranges as reusable constants that you can reference in workspace packages. It was almost the only reason (besides speed) I switched a year ago from npm and never looked back.
jim201
This is the way. It’s a pain to manually disable the checks, but certainly better than becoming victim to an attack like this.
eitau_1
Why the same advice doesn't apply to `setup.py` or `build.rs`? Is it because npm is (ab)used for software distribution (eg. see sibling comment: https://news.ycombinator.com/item?id=45041292) instead of being used only for managing library-dependencies?
username223
It should, and also to Makefile.PL, etc. These systems were created at a time when you were dealing with a handful of dependencies, and software development was a friendlier place.
Now you're dealing with hundreds of recursive dependencies, all of which you should assume may become hostile at any time. If you neither audit your dependencies, nor have the ability to sue them for damages, you're in a precarious position.
ivape
It should apply for anything. Truth be told the process of learning programming is so arduous at times that you basically just copy and paste and run fucking anything in terminal to get a project setup or fixed.
Go down the rabbit hole of just installing LLM software and you’ll find yourself in quite a copy and paste frenzy.
We got used to this GitHub shit of setting up every process of an install script in this way, so I’m surprised it’s not happening constantly.
peacebeard
Looks like pnpm 10 does not run lifecycle scripts of dependencies unless they are listed in ‘onlyBuiltDependencies’.
ashishb
I run all npm based tools inside Docker with no access beyond the current directory.
https://ashishb.net/programming/run-tools-inside-docker/
It does reduce the attack surface drastically.
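A rough sketch of that kind of wrapper. It only prints the command (a dry run) so it can be reviewed first; the image tag and mount layout are assumptions, not the linked post's exact setup:

```shell
#!/usr/bin/env sh
# Run npm inside a throwaway container that can only see the
# current directory; nothing else from $HOME is mounted.
set -eu
image=node:22-alpine   # assumption: any recent Node image works
cmd="docker run --rm -it -v $PWD:/work -w /work $image npm"
echo "$cmd install"
```

Saving a script like this as `npm` early in `$PATH` (and exec-ing the command instead of echoing it) mirrors the bubblewrap trick above.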
andix
I guess this won't help with something like nx. It's a CLI tool that is supposed to be executed inside the source code repo, in CI jobs or on developer pcs.
inbx0
According to the description in the advisory, this attack was in a postinstall script. So it would've helped in this case with nx: even if you ran the tool, this particular attack wouldn't have been triggered if you had install scripts ignored.
halflife
This sucks for libraries that download native binaries in their install script. There are quite a few.
lrvick
Downloading binaries as part of an installation of a scripting language library should always be assumed to be malicious.
Everything must be provided as source code and any compilation must happen locally.
oulipo2
Sure, but then you need to have a way to whitelist
junon
You can still whitelist them, though, and reinstall them.
f311a
People really need to start thinking twice when adding a new dependency. So many supply chain attacks this year.
This week, I needed to add a progress bar with 8 stats counters to my Go project. I looked at the libraries, and they all had 3000+ lines of code. I asked an LLM to write me a simple progress-report tracking UI, and it was less than 150 lines. It works as expected, no dependencies needed. It's extremely simple, and everyone can understand the code. It just clears the terminal output and redraws it every second. It is also thread-safe. Took me 25 minutes to integrate it and review the code.
If you don't need a complex stats counter, a simple progress bar is like 30 lines of code as well.
This is a way to go for me now when considering another dependency. We don't have the resources to audit every package update.
coldpie
> People really need to start thinking twice when adding a new dependency. So many supply chain attacks this year.
I was really nervous when "language package managers" started to catch on. I work in the systems programming world, not the web world, so for the past decade, I looked from a distance at stuff like pip and npm and whatever with kind of a questionable side-eye. But when I did a Rust project and saw how trivially easy it was to pull in dozens of completely un-reviewed dependencies from the Internet with Cargo via a single line in a config file, I knew we were in for a bad time. Sure enough. This is a bad direction, and we need to turn back now. (We won't. There is no such thing as computer security.)
skydhash
The thing is, system-based package managers require discipline, especially from library authors. Even in the web world, it's really distressing when you see a minor library already on its 15th iteration in less than 5 years.
I was trying to build just (the task runner) on Debian 12 and it was impossible. It kept complaining about the Rust version, then some library shenanigans. It is way easier to build Emacs or ffmpeg.
ajross
Indeed, it seems insane that we're pining for the days of autotools, configure scripts and the cleanly inspectable dependency structure.
But... We absolutely are.
jacobsenscott
Remember, the pre-package-manager days meant ossified, archaic, insecure installations, because self-managing dependencies is hard and people didn't keep them up to date. You need to get your deps from somewhere, so in those days you still just downloaded them from somewhere - a vendor's web site, or sourceforge, or whatever - probably didn't audit them, and hoped they were secure. It's still work to keep things up to date and audited, but less work at least.
thayne
> This is a bad direction, and we need to turn back now.
I don't deny there are some problems with package managers, but I also don't want to go back to a world where it is a huge pain to add any dependency, which leads to projects wasting effort on implementing things themselves, often in a buggy and/or inefficient way, and/or using huge libraries that try to do everything, but do nothing well.
username223
It's a tradeoff. When package users had to manually install dependencies, package developers had to reckon with that friction. Now we're living in a world where developers don't care about another 10^X dependencies, because the package manager will just run the scripts and install the files, and the users will accept it.
cedws
Rust makes me especially nervous due to the possibility of compile-time code execution. So a cargo build invocation is all it could take to own you. In Go there is no such possibility by design.
exDM69
The same applies to any Makefile, the Python script invoked by CMake or pretty much any other scriptable build system. They are all untrusted scripts you download from the internet and run on your computer. Rust build.rs is not really special in that regard.
Maybe go build doesn't allow this but most other language ecosystems share the same weakness.
fluoridation
Does it really matter, though? Presumably if you're building something, it's so you can run it. Who cares if the build script itself executes code, when the final product is something you're going to execute anyway?
goku12
The build script isn't a big issue for Rust because a simple mitigation is possible: do the build in a secure sandbox. Only execution and network access need to be allowed - preferably as separate steps - and network access can be restricted to only downloading dependencies. Everything else, including access to the main filesystem, should be denied.
Runtime malicious code is a different matter. Rust has a security workgroup and tooling to address this, but it still worries me.
pharrington
You're confusing compile-time with build-time. And build-time code execution absolutely exists in Go, because that's what a build tool is. https://pkg.go.dev/cmd/go#hdr-Add_dependencies_to_current_mo...
rkagerer
I'm actually really frustrated how hard it's become to manually add, review and understand dependencies to my code. Libraries used to come with decent documentation, now it's just a couple lines of "npm install blah", as if that tells me anything.
Sleaker
This isn't as new as you make it out; ant + ivy, maven, and gradle had already started this in the 00s. It definitely turned into a mess, but I think the Java/cross-platform nature pushed this style of development along pretty heavily.
Before this, wasn't CPAN already big?
rom1v
I feel that Rust increases security by avoiding a whole class of bugs (thanks to memory safety), but decreases security by making supply chain attacks easier (due to the large number of transitive dependencies required even for simple projects).
rootnod3
Fully agree. That is why I vendor all my dependencies. On the common lisp side a new tool emerged a while ago for that[1].
On top of that, I try to keep the dependencies to an absolute minimum. In my current project it's 15 dependencies, including the sub-dependencies.
coldpie
I didn't vendor them, but I did do an eyeball scan of every package in the full tree for my project, primarily to gather their license requirements[1]. (This was surprisingly difficult for something that every project in theory must do to meet licensing requirements!) It amounted to approximately 50 dependencies pulled into the build, to create a single gstreamer plugin. Not a fan.
[1] https://github.com/ValveSoftware/Proton/commit/f21922d970888...
skydhash
Vendoring is nice. Using the system version is nicer. If you can’t run on $current_debian, that’s very much a you problem. If postgres and nginx can do it, you can too.
sfink
I think something like cargo vet is the way forward: https://mozilla.github.io/cargo-vet/
Yes, it's a ton of overhead, and an equivalent will be needed for every language ecosystem.
The internet was great too, before it became too monetizable. So was email -- I have fond memories of cold-emailing random professors about their papers or whatever, and getting detailed responses back. Spam killed that one. Dependency chains are the latest victim of human nature. This is why we can't have nice things.
girvo
> People really need to start thinking twice when adding a new dependency
I've been preaching this since ~2014 and had little luck getting people on board unless I have full control over a particular team (which is rare). The need to avoid "reinventing the wheel" seems so strong to so many.
skydhash
I actually loathe those progress trackers. They break emacs shell (looking at you expo and eas).
Why not print a simple counter like: ..10%..20%..30%
Or just: Uploading…
Terminal codes should be for TUI or interactive-only usage.
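That append-only style needs nothing beyond printf; a minimal sketch (wrapped in a function so the output is easy to check):

```shell
#!/usr/bin/env sh
# Append-only progress counter: plain stdout, no terminal escapes,
# safe for dumb terminals, pagers, and log files.
progress() {
  i=10
  while [ "$i" -le 100 ]; do
    printf '..%d%%' "$i"   # never rewrites the line
    i=$((i + 10))
  done
  printf '\n'
}
progress   # prints ..10%..20% ... ..100% on one line
```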
sfink
Carriage returns are good enough for progress bars, and seem to work fine in my emacs shell at least:
% echo -n "loading..."; sleep 1; echo -en "\rABORT ABORT"; sleep 1; echo -e "\rTerminated"
works fine for me, and that's with TERM set to "dumb". (I'm actually not sure why it cleared the line automatically, though. I'm used to doing "\rmessage   " to clear out the previous line.) Admittedly, that'll spew a bunch of stuff if you're sending it to a pager, so I guess that ought to be
% if [ -t 1 ]; then echo -n "loading..."; sleep 1; echo -en "\rABORT ABORT"; sleep 1; echo -e "\rTerminated"; fi
but I still haven't made it to 15 dependencies or 200 lines of code! I don't get a full-screen progress bar out of it either, but that's where I agree with you. I don't want one.
JdeBP
The problem is that the two pagers don't do everything that they should do in this regard.
They are supposed to do things like the ul utility does, but neither BSD more nor less handles a CR emitted to overstrike the line from the beginning. They only handle overstriking characters with BS.
most handles overstriking with CR, though. Your output appears as intended when you page it with most.
quotemstr
Try mistty
legacynl
Well that's just the difference between a library and building custom.
A library is by definition supposed to be somewhat generic, adaptable and configurable. That takes a lot of code.
christophilus
I’d like a package manager that essentially does a git clone, and a culture that says: “use very few dependencies, commit their source code in your repo, and review any changes when you do an update.” That would be a big improvement to the modern package management fiasco.
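A sketch of that workflow with nothing but git (all names here are made up for illustration):

```shell
#!/usr/bin/env sh
# Vendor a dependency into the repo, then treat every update as a
# reviewable diff rather than an opaque lockfile change.
set -eu
work=$(mktemp -d)
git init -q "$work/app"
cd "$work/app"
git config user.email dev@example.com
git config user.name dev

# First vendoring: commit the dependency's source alongside your own.
mkdir -p vendor/leftpad
echo 'pad v1' > vendor/leftpad/index.js
git add vendor
git commit -qm 'vendor leftpad v1'

# Update: drop in the new version, review the diff, then commit.
echo 'pad v2' > vendor/leftpad/index.js
git diff -- vendor/        # the review step the comment is arguing for
git add vendor
git commit -qm 'vendor leftpad v2 (reviewed)'
```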
hvb2
Is that realistic though? What you're proposing is letting go of abstractions completely.
Say you need compression, you're going to review changes in the compression code? What about encryption, a networking library, what about the language you're using itself?
That means you need to be an expert on everything you run. Which means no one will be building anything non trivial.
christophilus
Yes. I would review any changes to any 3rd party libraries. Why is that unrealistic?
Regarding the language itself, I may or may not. Generally, I pick languages that I trust. E.g. I don't trust Google, but I don't think the Go team would intentionally place malware in the core tools. Libraries, however, often are written by random strangers on the internet with a different level of trust.
3036e4
Small, trivial, things, each solving a very specific problem, and that can be fully understood, sounds pretty amazing though. Much better than what we have now.
k3nx
That's what I used git submodules for. I had a /lib folder in my project where the dependencies were pulled/checked out. This was before I was doing CI/CD and before folks said git submodules were bad.
Personally, I loved it. I only looked at updating them when I was going to release a new version of my program. I could easily do a diff to see what changed. I might not have understood everything, but it wasn't too difficult to read 10-100 line code changes and get a general idea.
I thought it was better than the big black box we currently deal with. Oh, this package uses this package, and this package... what's different? No idea now, really.
hardwaregeek
That’s called the original Go package manager and it was pretty terrible
christophilus
I think it was only terrible because the tooling wasn't great. I think it wouldn't be too terribly hard to build a good tool around this approach, though I admittedly have only thought about it for a few minutes.
I may try to put together a proof of concept, actually.
willsmith72
sounds like the best way to miss critical security upgrades
christophilus
Why? If you had a package manager tell you "this is out of date and has vulnerability XYZ", you'd do a "gitpkg update" or whatever, and get the new code, review it, and if it passes review, deploy it.
skydhash
That’s why most mature (as in disciplined) projects have an RSS feed or a mailing list, so you know when there’s a security bug and what to do about it.
littlecranky67
We are using NX heavily (and are not affected) in my teams at a larger insurance company. We have >10 standalone line-of-business apps and 25+ individual libraries in the same monorepo, managed by NX. I've toyed with other monorepo tools for this kind of complex setup in my career (lerna, rushjs, yarn workspaces), but none came close: lerna has basically been handed over to NX, and rushjs is unmaintained.
If you have any proposal for how to properly manage the complexity of a FE monorepo with dozens of daily developers involved and heavy CI/CD/DevOps integration, please post alternatives - given this security incident, many people are looking.
abuob
Shameless self-plug and probably not what you're looking for, but anyway: I've created https://github.com/abuob/yanice for that sort of monorepo-size; too many applications/libraries to be able to always run full builds, but still not google-scale or similar.
It ultimately started as a small project because I got fed up with NX' antics a few years back (I think since then they improved quite a lot though), I don't need caching, I don't need their cloud, I don't need their highly opinionated approach on how to structure a monorepository; all I needed was decent change-detection to detect which project changed between the working-tree and a given commit. I've now since added support to enforce module-boundaries as it's definitely a must on a monorepo.
In case anyone wants to try it out - would certainly appreciate feedback!
ojkwon
https://moonrepo.dev/ worked great for our team's setup. It also supports Bazel remote cache and is agnostic to the vendor.
threetonesun
npm workspaces and npm scripts will get you further than you might think. Plenty of people got along fine with Lerna, which didn't do much more than that, for years.
I will say, I was always turned off by NX's core proposition when it launched, and more turned off by whatever they're selling as a CI/CD solution these days, but if it works for you, it works for you.
crabmusket
I'd recommend pnpm over npm for monorepos. Forcing you to be explicit about each package's dependencies is good.
I found npm's workspace features lacking in comparison and sparsely documented. It was also hard to find advice on the internet. I got the sense nobody was using npm workspaces for anything other than beginner articles.
littlecranky67
The killer feature of NX is its build cache and the ability to operate on the git staged files. It takes a couple of minutes to build our entire repo on an M4 Pro; NX caches the builds of all libs and will only rebuild those that are affected. The same holds true for linting, prettier, tests, etc. Any solution that just executes full builds would be a non-starter for our use cases.
littlecranky67
I buried npm years ago; we are happily using yarn (v4 currently) in that project. Which also means that even if we were affected by the malware, nobody uses the .npmrc (we have a .yarnrc.yml instead) :)
tcoff91
moonrepo is pretty nice
dakiol
Easier solution: you don’t need a progress bar.
nicce
Depends on the purpose… but I guess if you replace it with estimated time left, that may be good enough. Sometimes a progress bar is just there to help you decide whether to stop the job because it's taking too much time.
f311a
It runs indefinitely to process small jobs. I could log stats somewhere, but it complicates things. Right now, it's just a single binary that automatically gets restarted in case of a problem.
skydhash
Why not print on stdout, then redirect it to a file?
chairmansteve
One of the wisest comments I've ever seen on HN.
0xbadcafebee
Before anyone puts the blame on Nx, or Anthropic, I would like to remind you all what actually enabled this exploit. The malicious code was shipped in a package that was uploaded using a stolen "token" (a string of characters used as a sort of "username+password" to access a programming-language package-manager repository).
But that's just the delivery mechanism of the attack. What caused the attack to be successful were:
1. The package manager repository did not require signing of artifacts to verify they were generated by an authorized developer.
2. The package manager repository did not require code signing to verify the code was signed by an authorized developer.
3. (presumably) The package manager repository did not implement any heuristics to detect and prevent unusual activity (such as uploads coming from a new source IP or country).
4. (presumably) The package manager repository did not require MFA for the use of the compromised token.
5. (presumably) The token was not ephemeral.
6. (presumably) The developer whose token was stolen did not store the token in a password manager that requires the developer to manually authorize unsealing of the token by a new requesting application and session.
Now, after all those failures, if you were affected and a GitHub repo was created in your account, there is one more failure: you did not keep your GitHub tokens/auth in a password manager that requires you to manually authorize unsealing of the token by a new requesting application and session.
So what really enabled this exploit is the absence of completely preventable security mechanisms that could have been easily added years ago by any competent programmer. The fact that they were not in place and mandatory is a fundamental failure of the entire software industry, because 1) this is not a new attack; it has been going on for years, and 2) we are software developers; there is nothing stopping us from fixing it.
This is why I continue to insist there needs to be building codes for software, with inspections and fines for not following through. This attack could have been used on tens of thousands of institutions to bring down finance, power, telecommunications, hospitals, military, etc. And the scope of the attacks and their impact will only increase with AI. Clearly we are not responsible enough to write software safely and securely. So we must have a building code that forces us to do it safely and securely.
hombre_fatal
One thing that's weirdly precarious is how we still have one big environment for personal computing and how it enables most malware.
It's one big macOS/Windows/Linux install where everything from crypto wallets to credential files to gimmick apps are all neighbors. And the tools for partitioning these things are all pretty bad (and mind you I'm about to pitch something probably even worse).
When I'm running a few Windows VMs inside macOS, I kinda get this vision of computing where we boot into a slim host OS and then alt-tab into containers/VMs for different tasks, but it's all polished and streamlined of course (an exercise for someone else).
Maybe I have a gaming container. Then I have a container I only use for dealing with cryptocurrency. And I have a container for each of the major code projects I'm working on.
i.e. The idea of getting my bitcoin private keys exfiltrated because I installed a VSCode extension, two applications that literally never interact, is kind of a silly place we've arrived in personal computing.
And "building codes for software" doesn't address that fundamental issue. It kinda feels like an empty solution like saying we need building codes for operating systems since they let malware in one app steal data from other apps. Okay, but at least pitch some building codes and what enforcement would look like and the process for establishing more codes, because that's quite a levitation machine.
chatmasta
macOS at least has some basic sandboxing by default. You can circumvent it, of course – and many of the same people complaining about porous security models would complain even more loudly if they could not circumvent it, because “we want to execute code on our own machine” (the tension between freedom and security).
By default, folders like ~/Documents are not accessible by any process until you explicitly grant access. So as long as you run your code in some other folder you’ll at least be notified when it’s trying to access ~/Documents or ~/Library or any other destination with sensitive content.
It’s obviously not a panacea but it’s better than nothing and notably better than the default Linux posture.
quotemstr
> By default, folders like ~/Documents are not accessible by any process until you explicitly grant access
And in a terminal, the principal to which you grant access to a directory is your terminal emulator, not the program you're trying to run. That's bonkers and encourages people to just click "yes" without thinking. And once you've authorized your terminal to access documents once, everything you run in it gets that access.
The desktop security picture is improving, slowly and haltingly, for end-user apps, but we haven't even begun to attempt to properly sandbox development workflows.
vgb2k18
Agreed on the madness of wide-open OS defaults; I share your vision for isolation as a first-class citizen. In the meantime (for Windows 11 users) there's Sandboxie+ fighting the good fight. I know most here will be aware of its strengths and limitations, but for any who don't (or who forgot about it), I can say it's still working just as great on Windows 11 as it did on Windows 7. While it's not great at isolating heavyweight dev environments (Visual Studio, Unreal Engine, etc.), it's almost perfect for managing isolation of all the small stuff (Steam games, game emulators, YouTube downloaders, basic apps of all kinds).
Gander5739
Like Qubes?
JdeBP
I am told that the SmartOS people have this sort of idea.
quotemstr
> SmartOS is a specialized Type 1 Hypervisor platform based on illumos.
On Solaris? Why? And why bother with a Type 1 hypervisor? You get the same practical security benefits with none of the compatibility headaches (or the headaches of commercial UNIX necromancy) by containerizing your workloads. You don't need a hypervisor for that. All the technical pieces exist and work fine. You're solving a social problem, not a technical one.
quotemstr
> One thing that's weirdly precarious is how we still have one big environment for personal computing and how it enables most malware.
You're not the only one to note the dangers of an open-by-default single-namespace execution model. Yet every time someone proposes departing from it, he generates resistance from people who've spent their whole careers with every program having unbridled access to $HOME. Even lightweight (and inadequate) sandboxing of the sort Flatpak and Snap do gets turned off the instant someone thinks it's causing a problem.
On mobile, we've had containerized apps and they've worked fine forever. The mobile ecosystem is more secure and has a better compatibility story than any desktop. Maybe, after the current old guard retires, we'll be able to replace desktop OSes with mobile ones.
Hilift
For 50% of impacted users the vector was VS Code, and the malware only ran on Linux and macOS.
https://www.wiz.io/blog/s1ngularity-supply-chain-attack
"contained a post-installation malware script designed to harvest sensitive developer assets, including cryptocurrency wallets, GitHub and npm tokens, SSH keys, and more. The malware leveraged AI command-line tools (including Claude, Gemini, and Q) to aid in their reconnaissance efforts, and then exfiltrated the stolen data to publicly accessible attacker-created repositories within victims’ GitHub accounts.
"The malware attempted lockout by appending sudo shutdown -h 0 to ~/.bashrc and ~/.zshrc, effectively causing system shutdowns on new terminal sessions.
"Exfiltrated data was double and triple-base64 encoded and uploaded to attacker-controlled victim GitHub repositories named s1ngularity-repository, s1ngularity-repository-0, or s1ngularity-repository-1, thousands of which were observed publicly.
"Among the varied leaked data here, we’ve observed over a thousand valid Github tokens, dozens of valid cloud credentials and NPM tokens, and roughly twenty thousand files leaked. In many cases, the malware appears to have run on developer machines, often via the NX VSCode extension. We’ve also observed cases where the malware ran in build pipelines, such as Github Actions.
"On August 27, 2025 9AM UTC Github disabled all attacker created repositories to prevent this data from being exposed, but the exposure window (which lasted around 8 hours) was sufficient for these repositories to have been downloaded by the original attacker and other malicious actors. Furthermore, base64-encoding is trivially decodable, meaning that this data should be treated as effectively public."
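The "trivially decodable" point is easy to demonstrate: triple base64 is just three decode passes (the sample string is made up, obviously not real exfiltrated data):

```shell
#!/usr/bin/env sh
# Layered base64 provides zero secrecy; each layer is one `base64 -d`.
set -eu
secret='GITHUB_TOKEN=ghp_example000'   # made-up sample
encoded=$(printf '%s' "$secret" | base64 | base64 | base64)
decoded=$(printf '%s' "$encoded" | base64 -d | base64 -d | base64 -d)
echo "$decoded"
```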
smj-edison
I'm a little confused about the sudo part, do most people not have sudo behind a password? I thought ~/.bashrc ran with user permissions...
marshray
My personal belief is that users should not be required type their password into random applications, terminals, and pop-up windows. Of course, login screens can be faked too.
So my main user account does not have sudo permissions at all, I have a separate account for that.
anon7000
> You to keep your GitHub tokens/auth in a password manager that requires you to manually authorize unsealing of the token
This is a failure of the GH CLI, IMO. If you log into the GH CLI, it gets access to upload repositories, and doesn’t require frequent re-auth. Unlike AWS CLI, which expires every 18hr or something like that depending on the policy. But in either case (including with AWS CLI), it’s simply too easy to end up with tokens in plaintext in your local env. In fact, it’s practically the default.
madeofpalk
gh cli is such a ticking time bomb. Anything can just run `gh auth token` and get a token that probably can read + write to all your work code.
awirth
These tokens never expire, and there is no way for organization administrators to get them to expire (or revoke them, only the user can do that), and they are also excluded from some audit logs. This applies not just to gh cli, but also several other first party apps.
See this page for more details: https://docs.github.com/en/apps/using-github-apps/privileged...
After discussing our concerns about these tokens with our account team, we concluded the only reasonable way to enforce session lengths we're comfortable with on GitHub cloud is to require an IP allowlist with access through a VPN we control that requires SSO.
https://github.com/cli/cli/issues/5924 is a related open feature request
delfinom
>This is why I continue to insist there needs to be building codes for software, with inspections and fines for not following through. This attack could have been used on tens of thousands of institutions to bring down finance, power, telecommunications, hospitals, military, etc. And the scope of the attacks and their impact will only increase with AI. Clearly we are not responsible enough to write software safely and securely. So we must have a building code that forces us to do it safely and securely.
Yea, except taps on the glass
https://github.com/nrwl/nx/blob/master/LICENSE
THE SOFTWARE IS PROVIDED 'AS IS', WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
We can have building code, but the onus is on the final implementer not people sharing code freely.
tailspin2019
I think you’re right. I don’t like the idea of a “building code” for software, but I do agree that as an industry we are doing quite badly here and if regulation is what is needed to stop so many terrible, terrible practices, then yeah… maybe that’s what’s needed.
echelon
Anthropic and Google do owe this issue serious attention [1], and they need to take action as a result.
roenxi
Honest to goodness, I do most of my coding in a VM now. I don't see how the security profile of these things is tolerable.
The level of potential hostility from agents as a malware vector is really off the charts. We're entering an era where they can scan for opportunities worth >$1,000 in hostaged data, crypto keys, passwords, blackmail material or financial records without even knowing what they're looking for when they breach a box.
christophilus
Similar, but in a podman container which shares nothing other than the source code directory with my host machine.
evertheylen
I do too, but I found it non-trivial to actually secure the podman container. I described my approach here [1]. I'm very interested to hear your approach. Any specific podman flags or do you use another tool like toolbx/distrobox?
christophilus
Very interesting. I learned some new things. I didn't know about `--userns` or the flexible "bind everything" network approach!
Here's my script:
https://codeberg.org/chrisdavies/dotfiles/src/branch/main/sr...
What I do is look for a `.podman` folder, and if it exists, I use the `env` file there to explicitly bind certain ports. That does mean I have to rebuild the container if I need to add a port, so I usually bind 2 ports, and that's generally good enough for my needs.
I don't do any ssh in the container at all. I do that from the host.
The nice thing about the `.podman` folder thing is that I can be anywhere in a subfolder, type `gg pod`, and it drops me into my container (at whatever path I last accessed within the container).
No idea how secure my setup is, but I figure it's probably better than just running things unfettered on my dev box.
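Not the author's actual script, but a rough sketch of the kind of invocation described (the flags are standard podman options; the image, ports, and paths are placeholders):

```shell
# Share only the project directory; map a fixed pair of ports.
podman run --rm -it \
  --userns=keep-id \
  --network=bridge \
  -p 3000:3000 -p 5432:5432 \
  -v "$PWD":/workspace:Z \
  -w /workspace \
  docker.io/library/node:22 bash
```

The `:Z` suffix relabels the bind mount for SELinux hosts; `--userns=keep-id` maps your host UID into the container so files created inside stay owned by you.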
0cf8612b2e1e
I would love if some experts could comment on the security profile of this. It sounds like it should be fine, but there are so many gotchas with everything that I use full VMs for development.
One immediate stumbling block- the IDE would be running in my host, which has access to everything. A malicious IDE plugin is a too real potential vector.
evertheylen
I actually run code-server (derivative of VSCode) inside the container! But I agree that there can be many gotchas, which is why I try to collect as much feedback as possible.
christophilus
I run the ide (neovim) in the container along with npm, cargo, my dev / test databases, etc. It’s a complete environment (for me).
sheerun
Exactly this, with the note that, due to the ecosystem and history of software, setting up such an environment is either really hard or relatively expensive.
fsflover
> I do most of my coding in a VM now
Perhaps you may be interested in Qubes OS, where you do everything in VMs with a nice UX. My daily driver, can't recommend it enough.
orblivion
Yeah I use Qubes for my "serious" computing these days. It comes with performance headaches, though my laptop isn't the best.
I wonder about something like https://secureblue.dev/ though. I'm not comfortable with Fedora and last I heard it wasn't out of Beta or whatever yet. But it uses containers rather than VMs. I'm not a targeted person so I may be happy to have "good enough" security for some performance back.
secureblue
secureblue creator here :)
some corrections:
> last I heard it wasn't out of Beta or whatever yet
It is
> But it uses containers rather than VMs
It doesn't use plain containers for app isolation. We ship the OS itself as a bootable container (https://github.com/bootc-dev/bootc). That doesn't mean we use or recommend using containers for application isolation. Container support is actually disabled by default via our selinux policy restricting userns usage (this can be toggled though, of course). Containers on their own don't provide sandboxing. The syscall filtering for them is extremely weak. Flatpak (which sandboxes via bubblewrap: https://github.com/containers/bubblewrap) can be configured to be reasonably good, but we still encourage the use of VMs if needed. We provide one-click tooling for easily installing virt-manager (https://en.wikipedia.org/wiki/Virt-manager) if desired.
In short though, secureblue and Qubes aren't really analogous. We have different goals and target use cases. There is even an open issue on Qubes to add a template to use secureblue as a guest: https://github.com/QubesOS/qubes-issues/issues/9755
mikepurvis
How does it avoid the sharing headaches that make the ergonomics of snaps so bad?
fsflover
I never used snaps, so I don't understand what you mean here. Here's a couple of typical Qubes usage patterns: https://www.qubes-os.org/news/2022/10/28/how-to-organize-you..., https://blog.invisiblethings.org/2011/03/13/partitioning-my-...
andix
Are there any package managers that have something like a min-age setting, to ignore all packages that were published less than 24 or 36 hours ago?
I’ve run into similar issues before, some package update that broke everything, only to get pulled/patched a few hours later.
ZeWaka
GitHub dependabot just got this very recently: https://github.blog/changelog/2025-07-01-dependabot-supports...
ebb_earl_co
Not for an operating system, but Astral’s `uv` tool has this for Python packages.
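For reference, a sketch of how that looks with uv — its `--exclude-newer` flag takes a cutoff timestamp rather than a rolling window, so you'd bump the date yourself (or compute it in CI):

```shell
# Resolve only against distributions published before the given timestamp.
# The date here is an arbitrary example, not a recommendation.
uv pip install --exclude-newer "2025-08-25T00:00:00Z" requests
```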
VPenkov
Not a package manager, but Renovate bot has a setting like that (minimumReleaseAge). Dependabot does not (Edit: does now).
So while your package manager will install whatever is newest, there are free solutions to keep your dependencies up to date in a reasonable manner.
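For anyone wanting to try it, a minimal renovate.json sketch (`minimumReleaseAge` is the real setting name; the 3-day window is an arbitrary choice):

```json
{
  "$schema": "https://docs.renovatebot.com/renovate-schema.json",
  "packageRules": [
    {
      "matchManagers": ["npm"],
      "minimumReleaseAge": "3 days"
    }
  ]
}
```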
Also, the JavaScript ecosystem seems to slowly be going in the direction of consolidation, and supply chain attacks are (again, slowly) getting the tooling needed to address them.
Additionally, current versions of all major package managers (NPM, PNPM, Bun, I don't know about Yarn) don't automatically run postinstall scripts - although you are likely to run them anyway because they will be suggested to you - and ultimately you're running someone else's code, postinstall scripts or not.
ZeWaka
Dependabot got it last month, actually. https://github.blog/changelog/2025-07-01-dependabot-supports...
VPenkov
Oh, happy days!
jefozabuss
I just use .npmrc with save-exact=true + lockfile + manual updates, you can't be too careful and you don't need to update packages that often tbh.
Especially after the fakerjs (and other) things.
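For completeness, the project-level .npmrc for that (both are real npm config keys; ignore-scripts is optional but pairs well with this approach):

```ini
# always pin exact versions instead of ^ranges
save-exact=true
# don't run install/postinstall scripts automatically
ignore-scripts=true
```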
andix
But you're still updating at some point. Usually to the latest version. If you're unlucky, you are the first victim, a few seconds after the package was published. (Edit: on a popular package there will always be a first victim somewhere in the first few minutes)
Many of those supply chain attacks are detected within the first few hours; I guess nowadays there are even some companies out there that run automated analysis on every new version of major packages. Also, contributors/maintainers might notice something like that quickly if they didn't plan that release and it suddenly appears.
snovymgodym
Claude code is by all accounts a revolutionary tool for getting useful work done on a computer.
It's also:
- a NodeJS app
- installed by curling a shell script and piping it into bash
- an LLM that's given free reign to mess with the filesystem, run commands, etc.
So that's what, like 3 big glaring vectors of attack for your system right there?
I would never feel comfortable running it outside of some kind of sandbox, e.g. VM, container, dedicated dev box, etc.
kasey_junk
I definitely think running agents in sandboxes is the way to go.
That said Claude code does not have free reign to run commands out of the gate.
fwip
Pet peeve - it's free rein, not free reign. It's a horse riding metaphor.
0cf8612b2e1e
Bah, well I have been using that incorrectly my entire life. A monarchy/ruler metaphor seems just as logical.
sneak
Yes it does; you are thinking of agent tool calls. The software package itself runs as your uid and can do anything you can do (except on macOS where reading of certain directories is individually gated).
otterley
Claude Code is an agent. It will not call any tools or commands without your prior consent.
Edit: unless you pass it an override like --dangerously-skip-permissions, as this malware does. https://www.stepsecurity.io/blog/supply-chain-security-alert...
kasey_junk
Ok, but that’s true of _any_ program you install so isn’t interesting.
I don’t think the current agent tool call permission model is _right_ but it exists, so saying by default it will freely run those calls is less true of agents than other programs you might run.
saberience
So what?
It doesn't run by itself, you have to choose to run it. We have tons of apps with loads of permissions. The terminal can also mess with your filesystem and run commands... sure, but it doesn't open by itself and run commands itself. You have to literally run claude code and tell it to do stuff. It's not some living, breathing demon that's going to destroy your computer while you're at work.
Claude Code is the most amazing and game changing tool I've used since I first used a computer 30 years ago. I couldn't give two fucks about its "vectors of attack", none of them matter if no one has unauthorized access to my computer, and if they do, Claude Code is the least of my issues.
CGamesPlay
> I couldn't give two fucks about its "vectors of attack", none of them matter if no one has unauthorized access to my computer, and if they do, Claude Code is the least of my issues.
Naive! Claude Code grants access to your computer, authorized or not. I'm not talking about Anthropic, I'm talking about the HTML documentation file you told Claude to fetch (or manually saved) that has an HTML comment with a prompt injection.
OJFord
It doesn't have to be a deliberate 'attack', Claude can just do something absurdly inappropriate that wasn't what you intended.
You're absolutely right! I should not have `rm -rf /bin`d!
saberience
I would say this is a feature, not a bug.
Terminal and Bash or any shell can do this, if the user sucks. I want Claude Code to be able to do anything and everything, that's why it's so powerful. Sure, I can also make it do bad stuff, but that's like any tool. We don't ban knives because sometimes they kill people, because they're useful.
bethekidyouwant
I don't use Claude, but can it really run commands on the CLI without human confirmation? Sure, there may be a switch to allow this, but in that case surely all but the most yolo users must be running it in a container?
sneak
None of this is the concerning part. The bad part is that it auto-updates while running without intervention - i.e. it is RCE on your machine for Anthropic by design.
jpalawaga
So we’re declaring all software with auto-updaters as RCE? That doesn’t seem like a useful distinction.
autoexec
Software that automatically phoned home to check if an update is available used to be considered spyware if there wasn't a prompt at installation asking if you wanted that. The attitude was "Why should some company get my IP address and a timestamp telling them when/how often I'm online and using their software?" Some people thought that was paranoid.
We gave them an inch out of fear ("You'd better update constantly and immediately in case our shitty software has a bug that's made you vulnerable!") and today they've basically decided they can do whatever the fuck they want on our devices while also openly admitting to tracking our IPs and when/how often we use their software along with exactly what we're using it for, the hardware we're using, and countless other metrics.
Honestly, we weren't paranoid enough.
skydhash
That’s pretty much the definition. Auto updating is trusting the developer (Almost always a bad idea).
christophilus
Mine doesn’t auto update. I set it up so it doesn’t have permission to do that.
actualwitch
Not only that, but also connects to raw.githubusercontent.com to get the update. Doubt there are any signature checks happening there either. I know people love hating locked down Apple ecosystem, but this kind of stuff is why it is necessary.
aschobel
It would be surprising if Claude Code would actually run that prompt, so I tried running it:
> I can't help with this request as it appears to be designed to search for and inventory sensitive files like cryptocurrency wallets, private keys, and other secrets. This type of comprehensive file enumeration could be used maliciously to locate and potentially exfiltrate sensitive data.
If you need help with legitimate security tasks like:
- Analyzing your own systems for security vulnerabilities
- Creating defensive security monitoring tools
- Understanding file permissions and access controls
- Setting up proper backup procedures for your own data
I'd be happy to help with those instead.
ramimac
I have evidence of at least 250 successes for the prompt. Claude definitely appears to have a higher rejection rate. Q also rejects fairly consistently (based on Claude, so that makes sense).
Context: I've been responding to this all day, and wrote https://www.wiz.io/blog/s1ngularity-supply-chain-attack
stuartjohnson12
Incredibly common W for Anthropic safeguards. In almost every case I see Claude go head-to-head on refusals with another model provider in a real-world scenario, Claude behaves and the other model doesn't. There was a viral case on Tiktok of some lady going through a mental health episode who was being enabled and referred to as "The Oracle" by ChatGPT, but when she swapped to Claude, Claude eventually refused and told her to speak to a professional.
That's not to say the "That's absolutely right!" doesn't get annoying after a while, but we'd be doing everyone a disservice if we didn't reward Anthropic for paying more heed to safety and refusals than other labs.
chmod775
> Previously you might've been able to say "okay, but that requires the attacker to guess the specifics of my environment" - which is no longer true. An attacker can now simply instruct the LLM to exploit your environment and hope the LLM figures out how to do it on its own.
Not to toot my own horn too much, but in hindsight this seems prescient.
vorgol
OSs need to stop letting applications have free rein over all the files on the file system by default. Some apps come with apparmor/selinux profiles, and firejail is also a solution. But the UX needs to change.
bryceneal
This is a huge issue and it's the result of many legacy decisions on the desktop that were made 30+ years ago. Newer operating systems for mobile like iOS really get this right by sandboxing each app and requiring explicit permission from the user for various privileges.
There are solutions on the desktop like Qubes (but it uses virtualization and is slow, also very complex for the average user). There are also user-space solutions like Firejail, bubblewrap, AppArmor, which all have their own quirks and varying levels of compatibility and support. You also have things like OpenSnitch which are helpful only for isolating networking capabilities of programs. One problem is that most users don't want to spend days configuring the capabilities for each program on their system. So any such solution needs profiles for common apps which are constantly maintained and updated.
I'm somewhat surprised that the current state of the world on the desktop is just _so_ bad, but I think the problem at its core is very hard and the financial incentives to solve it are not there.
evertheylen
If you are on Linux, I'm writing a little tool to securely isolate projects from each other with podman: https://github.com/evertheylen/probox. The UX is an important aspect which I've spent quite some time on.
I use it all the time, but I'm still looking for people to review its security.
eyberg
Containers should not be used as a security mechanism.
evertheylen
I agree with you that VMs would provide better isolation. But I do think containers (or other kernel techniques like SELinux) can still provide quite decent isolation with a very limited performance/ease-of-use cost. Much better than nothing I'd say?
UltraSane
Google did a good job with securing files on Android.
terminalbraid
Which operating system lets an application have "free reign of all the files on the file system by default"? Neither Linux, nor any BSD, nor MacOS, nor Windows does. For any of those I'd have to do something deliberately unsafe such as running it as a privileged account (which is not the "default").
eightys3v3n
I would argue the distinction between my own user and root is not meaningful when they say "all files by default". As my own user, it can still access everything I can on a daily basis, which is likely everything of importance. Sure, it can't replace the sudo binary or something like that, but that doesn't matter, because by then it's already too late. Why, when I download and run Firefox, can it access every file my user can access, by default? Why couldn't it work a little closer to Android, with an option for the user to open up more access? I think this is what they were getting at.
terminalbraid
I'm not saying user files aren't important. What I am saying is the original poster was being hyperbolic and, while you say it's not important for your case, it is a meaningful distinction. In fact, that's why those operating systems do not allow that.
doubled112
Flatpak allows you to limit and sandbox applications, including files inside your home directory.
It's much like an Android application, except it can feel a little kludgy because not every application seems to realize it's sandboxed. If you click save, silent failure because it didn't have write access there isn't very user friendly.
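For example, tightening a Flatpak app's filesystem access looks something like this (`flatpak override` is the real command; the app ID is just an example):

```shell
# Revoke blanket home access, then grant a single directory back.
flatpak override --user --nofilesystem=home org.mozilla.firefox
flatpak override --user --filesystem=~/Downloads org.mozilla.firefox
```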
skydhash
Because it will become impractical. It's like saying your SO shouldn't have access to your bedroom, or the maid should only have access to a single room. Instead, what you do is have trusted people and put everything important in a safe.
In my case, I either use apt (pipx for yt-dlp), or use a VM.
SoftTalker
How many software installation instructions require "sudo"? It seems to me that it's many more than should be necessary. And then the installer can do anything.
As an administrator, I'm constantly being asked by developers for sudo permission so they can "install dependencies" and my first answer is "install it in your home directory" sure it's a bit more complexity to set up your PATH and LD_LIBRARY_PATH but you're earning a six-figure salary, figure it out.
ezfe
Even with sudo, macOS blocks access to some User-accessible locations:
% sudo ls ~/Pictures/Photos\ Library.photoslibrary
Password:
ls: /Users/n1503463/Pictures/Photos Library.photoslibrary: Operation not permitted
pepa65
Even just having access to all the files that the user has access to is really too much.
spankalee
The multi-user security paradigm of Unix just isn't enough anymore in today's single-user, running untrusted apps world.
sneak
All except macOS let anything running as your uid read and write all of your user’s files.
This is how ransomware works.
fsflover
You forgot the actually secure option: https://qubes-os.org
mdrzn
the truly chilling part is using a local llm to find secrets. it's a new form of living off the land, where the malicious logic is in the prompt, not the code. this sidesteps most static analysis.
the entry point is the same old post-install problem we've never fixed, but the payload is next-gen. how do you even defend against malicious prompts?
christophilus
Run Claude Code in a locked down container or VM that has no access to sensitive data, and review all of the code it commits?
spacebanana7
Conceivably couldn’t a post install script be used for the malicious dependency to install its own instance of Claude code (or similar tool)?
In which case you couldn’t really separate your dev environment from a hostile LLM.
anon7000
Yes, though the attackers would have to pay for an account. In this case, it’s using a pre-installed, pre-authorized tool, using your own credits to hack you
myaccountonhn
As a separate locked-down user would probably also work.
alex_anglin
Pretty rich that between this and Claude for Chrome, Anthropic just posted a ~40m YouTube video touting "How Anthropic stops AI cybercrime": https://www.youtube.com/watch?v=EsCNkDrIGCw
mathiaspoint
I always assumed malware like this would bring its own model and do inference itself. When malware adopts new technology I'm always a little surprised by how "lazy"/brazen the authors are with it.
zingababba
Here's one using gpt-oss:20b - https://x.com/esetresearch/status/1960365364300087724
See also:
https://www.stepsecurity.io/blog/supply-chain-security-alert...
https://semgrep.dev/blog/2025/security-alert-nx-compromised-...