
Shai-Hulud Returns: Over 300 NPM Packages Infected

vintagedave

Serious question: should someone develop new technologies using Node any more?

A short time ago, I started a frontend in Astro for a SaaS startup I'm building with a friend. Astro is beautiful. But it's built on Node. And every time I update the versions of my dependencies, I feel terrified I'm bringing something onto my server I don't know about.

I just keep reading more and more stories about dangerous npm packages, and get this sense that npm has absolutely no safety at all.

sph

The problem isn't "node" or "JavaScript"; it's this convenient packaging model.

This is gonna ruffle some feathers, but it's only a matter of time until it happens in the Rust ecosystem, which loves to depend on a billion subpackages, and it won't be the fault of the language itself.

The more I think about it, the more I believe that the decision by C, C++ and Odin not to have a convenient package manager that fosters a Cambrian explosion of dependencies is a very good one security-wise. Ambivalent about Go: it has a semblance of a packaging system, but nothing so reckless as allowing third-party tarballs uploaded to the cloud to effectively run code on the dev's machine.

TheFlyingFish

I've worried about this for a while with Rust packages. The total size of a "big" Rust project's dependency graph is pretty similar to a lot of JS projects. E.g. Tauri, last I checked, introduces about 600 dependencies just on its own.

Like another commenter said, I do think it's partially just because dependency management is so easy in Rust compared to e.g. C or C++, but I also suspect that it has to do with the size of the standard library. Rust and JS are both famous for having minimal standard libraries, and what do you know, they tend to have crazy-deep dependency graphs. On the other hand, Python is famous for being "batteries included", and if you look at Python project dependency graphs, they're much less crazy than JS or Rust. E.g. even a higher-level framework like FastAPI, that itself depends on lower-level frameworks, has only a dozen or so dependencies. A Python app that I maintain for work, which has over 20 top-level dependencies, only expands to ~100 once those 20 are fully resolved. I really think a lot of it comes down to the standard library backstopping the most common things that everybody needs.

So maybe it would improve the situation to just expand the standard library a bit? Maybe this would be hiding the problem more than solving it, since all that code would still have to be maintained and would still be vulnerable to getting pwned, but other languages manage somehow.

QuiEgo

It's already happening: https://cyberpress.org/malicious-rust-packages/

My personal experience (YMMV): Rust code takes 2x or 3x longer to write than what came before it (C in my case), but in the end you usually get something much more likely to work, so overall it's kind of a wash, and the product you get is better for customers - you basically front load the cost of development.

This is terrible for people working on commercial projects that are obsessed with time to market.

Rust developers on commercial projects are under incredible schedule pressure from day 0, where they are compared to expectations from their previous projects, and are strongly motivated to pull in anything and everything they can to save time, because re-rolling anything themselves is so damn expensive.

mx7zysuj4xew

It won't, it's a culture issue

Most Rust programmers are mediocre at best and really need the memory-safety training wheels that Rust provides. Years of nodejs mindrot has somehow made pulling in random dependencies on irregular release schedules the norm for these people. They'll just shrug it off, come up with some "security initiative", and continue the madness.

wongarsu

I wouldn't call the Rust stdlib "small". "Limited" I could agree with.

On the topics it does cover, Rust's stdlib offers a lot. At least on the same level as Python, at times surpassing it. But because the stdlib isn't versioned it stays away from everything that isn't considered "settled", especially in matters where the best interface isn't clear yet. So no http library, no date handling, no helpers for writing macros, etc.

You can absolutely write pretty substantial zero-dependency Rust if you stay away from the network and async.

Whether that's a good tradeoff is an open question. None of the options look really great

kibwen

> Rust and JS are both famous for having minimal standard libraries

I'm all in favor of embiggening the Rust stdlib, but Rust and JS aren't remotely in the same ballpark when it comes to stdlib size. Rust's stdlib is decidedly not minimal; it's narrow, but very deep for what it provides.

skydhash

The C standard library is also very small. The issue is not the standard library. The issue is adding libraries for snippets of code and, in the name of convenience, letting those libraries run code on the dev machine.

metaltyphoon

This is one reason why so many enterprises use C#. Most of the time you just use Microsoft-made libraries and rarely bring in third-party ones.

moomin

It might solve the problem, inasmuch as the problem is not only that it can be done, but that it's profitable to do so. This is why there's no Rust problem (yet).

gorgoiler

And yet of course the world and their spouse import requests to fetch a URL and view the body of the response.

It would be lovely if Python shipped with even more things built in. I’d like cryptography, tabulate/rich, and some more featureful datetime bells and whistles a la arrow. And of course the reason why requests is so popular is that it does actually have a few more things and ergonomic improvements over the builtin HTTP machinery.

Something like a Debian Project model would have been cool: third-party projects get adopted into the main software product by a sworn-in project member who acts as quality control / a release manager. Each piece of software stays up to date but also doesn't just get its main branch upstreamed directly onto everyone's laps without a second pair of eyes going over what changed. The downside is it slows everything down, but that's a side-effect of, or rather a synonym for, stability, which is the problem we have with npm. (This looks sort of like what HelixGuard do, in the original article, though I've not heard of them before today.)

larusso

I agree partly. I love cargo and can't understand why certain things like package namespaces and proof of ownership aren't added at a minimum. I was mega annoyed when I had to move all our Java packages from jcenter, which was a mega easy set-up-and-forget affair, to Maven Central. There I suddenly needed to register a group name (namespace, mostly reverse domain) and prove that with a DNS entry. Then all packages have to be signed, etc. In the end it was, for its time, way ahead. I know that these measures won't help in all cases. But the fact that at least on npm it was possible for someone else to grab a package ID after an author pulled their packages is kind of alarming. Dependency confusion attacks are still possible on cargo because the whole - vs _ as delimiter wasn't settled in the beginning. But I don't want to go away from package managers or easy-to-use/sharable packages either.

kibwen

> But the fact that at least on npm it was possible that someone else grabs a package ID after an author pulled its packages is kind of alarming.

Since your comment starts with commentary on crates.io, I'll note that this has never been possible on crates.io.

> Dependency confusion attacks are still possible on cargo because the whole - vs _ as delimiter wasn’t settled in the beginning.

I don't think this has ever been true. AFAIK crates.io has always prevented registering two different crates whose names differ only in the use of dashes vs underscores.

> package namespaces

See https://github.com/rust-lang/rust/issues/122349

> proof of ownership

See https://github.com/rust-lang/rfcs/pull/3724 and https://blog.rust-lang.org/2025/07/11/crates-io-development-...

gnfargbl

I'm a huge Go proponent but I don't know if I can see much about Go's module system which would really prevent supply-chain attacks in practice. The Go maintainers point [1] at the strong dependency pinning approach, the sumdb system and the module proxy as mitigations, and yes, those are good. However, I can't see what those features do to defend against an attack vector that we have certainly seen elsewhere: project gets compromised, releases a malicious version, and then everyone picks it up when they next run `go get -u ./...` without doing any further checking. Which I would say is the workflow for a good chunk of actual users.

The lack of package install hooks does feel somewhat effective, but what's really to stop an attacker putting their malicious code in `func init() {}`? Compromising a popular and important project in this way would likely be noticed pretty quickly. But compromising something widely-used but boring? I feel like attackers would get away with that for a period of time that could be weeks.

This isn't really a criticism of Go so much as an observation that depending on random strangers for code (and code updates) is fundamentally risky. Anyone got any good strategies for enforcing dependency cooldown?
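The best I've come up with is scripting against the module proxy, which exposes publish timestamps. A rough sketch, assuming proxy.golang.org plus curl, jq and GNU date (note the GOPROXY protocol needs '!'-escaping for uppercase letters in module paths):

  #!/usr/bin/env bash
  # cooldown.sh: refuse a module version younger than N days.
  # Usage: ./cooldown.sh golang.org/x/text v0.14.0 14
  set -euo pipefail
  mod=$1; ver=$2; days=$3
  # the module proxy serves publish metadata at <module>/@v/<version>.info
  published=$(curl -fsSL "https://proxy.golang.org/${mod}/@v/${ver}.info" | jq -r .Time)
  cutoff=$(date -u -d "${days} days ago" +%Y-%m-%dT%H:%M:%SZ)
  # ISO 8601 timestamps in the same format compare correctly as strings
  if [[ "$published" > "$cutoff" ]]; then
    echo "REJECT: ${mod}@${ver} published ${published}, inside the ${days}-day cooldown"
    exit 1
  fi
  echo "OK: ${mod}@${ver} published ${published}"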

[1] https://go.dev/blog/supply-chain

devttyeu

In Go you know exactly what code you're building thanks to go.sum, and it's much easier to audit changed code after upgrading - just create vendor dirs before and after updating packages and diff them; send to AI for basic screening if the diff is >100k loc and/or review manually. My projects are massive codebases with 1000s of deps and >200MB stripped binaries of literally just code, and this is perfectly feasible. (And yes, I do catch stuff occasionally, tho nothing actively adversarial so far.)
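Roughly, that workflow is:

  # snapshot the vendored source tree before touching anything
  go mod vendor && cp -r vendor vendor.before
  # take the updates
  go get -u ./... && go mod tidy
  # re-vendor; this is the exact code that will be compiled in
  go mod vendor
  # review what actually changed across all deps
  diff -ru vendor.before vendor | less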

I don’t believe I can do the same with Rust.

PunchyHamster

> However, I can't see what those features do to defend against an attack vector that we have certainly seen elsewhere: project gets compromised, releases a malicious version, and then everyone picks it up when they next run `go get -u ./...` without doing any further checking. Which I would say is the workflow for a good chunk of actual users.

You can't, really, aside from full on code audits. By definition, if you trust a maintainer and they get compromised, you get compromised too.

Requiring GPG signing of releases (even just git commit signing) would help, but that's more work for people distributing their stuff, and inevitably someone will make an insecure but convenient way to automate it away from the developer.

asmor

The Go standard library is a lot more comprehensive and usable than Node's, so you need fewer dependencies to begin with.

chuckadams

> It's not "node" or "Javascript" the problem, it's this convenient packaging model.

That and the package runtime runs with all the same privileges and capabilities as the thing you're building, which is pretty insane when you think about it. Why should npm know anything outside of the project root even exists, or be given the full set of environment variables without so much as a deny list, let alone an allow list? Of course if such restrictions are available, why limit them to npm?

The real problem is that the security model hasn't moved substantially since 1970. We already have all the tools to make things better, but they're still unportable and cumbersome to use, so hardly anything does.
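To npm's credit, you can at least turn the lifecycle scripts off; it's just not the default, and packages with native builds then need case-by-case handling. A minimal sketch:

  # .npmrc (per project or per user): skip all pre/post-install scripts
  ignore-scripts=true

  # equivalent one-off form
  npm install --ignore-scripts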

pas

pnpm (maybe yarn too?) requires explicit allowlisting of build scripts; hopefully npm will do the same eventually.
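Roughly the shape of that allowlist in package.json (as of pnpm v10, dependency build scripts are skipped unless listed; package names here are just examples):

  {
    "pnpm": {
      "onlyBuiltDependencies": ["esbuild", "sharp"]
    }
  }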

> security model

yep, some kind of seccomp or other permission system for modules would help a lot. (e.g. if the 3rd-party library is parsing something and its API only requires a Buffer as input and returns some object, it could be marked "pure"; if it does logging, that could also be specified, and so on)

dotancohen

Historically, "it's attacked because it's popular" arguments have not held up. A notable example was Windows desktop security: as Linux and Mac machines became more popular, not to mention Android, security vulnerabilities in those burgeoning platforms never manifested to the extent that they did in Windows. Nor do cargo or pip seem to be infected with these problems to the extent that npm is.

whizzter

Compared to the JS ecosystem and its number of users, both Python and Rust are puny. The NPM ecosystem also allowed a lot of post-install actions by default, since they wanted to enable a smooth experience compiling and installing native modules (not entirely sure how Cargo and pip handle native library dependencies).

As for Windows vs the other OSes: yes, even the Windows NT family grew out of DOS and Win9x and tried to maintain compatibility for users over security up until that became untenable. So yes, the base _was_ bad when Windows was dominant, but it's far less bad today (which is why people go after high-value targets via NPM etc., since it's an easier entry point).

Android and iOS are young enough that they had plenty of hindsight when it comes to security and could make better decisions. (Remember that MS tried to move to UWP/Appx distribution, but the ecosystem was too reliant on newer features for it to displace the regular ecosystem.)

Remember that we've had plenty of annoyed discourse about "Apple locking down computers" here and on other tech forums when they've pushed notarization.

I guess my point is that people love to bash MS but at the same time complain about security affecting their "freedoms" on other systems (and partly on MS). MS is better at the basics today than it was 20-25 years ago, and we should be happy about that.

mschuster91

> Nor does cargo or pip seem to be infected with these problems to the extent that npm is.

Easy reason. The target for malware injections is almost always cryptocurrency wallets and cloud credentials (again, mostly to mine cryptocurrencies). And the vast majority of stuff interacting with crypto and cloud, combined with a lot of inexperienced juniors who likely won't have the skill to spot they got compromised, is written in NodeJS.

dwroberts

I think this is right about Rust and Cargo, but I would say that Rust has a major advantage in that it implements frozen + offline mode really well (which, if you use it, obviously significantly decreases the risks).

Any time I tried the equivalent in the NPM/node world, it was basically unusable or completely impractical.
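For reference, the flags in question: `--locked` refuses any change to Cargo.lock and `--offline` refuses the network, so nothing new can sneak into a build:

  # resolve and download once, while you audit what's coming in
  cargo fetch --locked
  # thereafter, build with no network and no lockfile drift
  cargo build --locked --offline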

bhouston

Pnpm (a very popular npm replacement) makes completely locked packages easy and natural and ultra fast:

https://pnpm.io/cli/install

Benchmarks:

https://pnpm.io/benchmarks
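In CI it's a single flag (and the default behavior there):

  # fail the install if anything isn't already pinned in pnpm-lock.yaml
  pnpm install --frozen-lockfile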

kunley

Why the word "semblance" with regard to Go modules? Are you trying to say this system is lacking something?

rafaelmn

There are ecosystems that have package managers but also well developed first party packages.

In .NET you can cover a lot of use cases using just Microsoft libraries, and even a lot of OSS that isn't directly part of the Microsoft org is maintained by Microsoft employees.

CharlieDigital

The 2020 State of the Octoverse security report showed that the .NET ecosystem has, on average, the lowest number of transitive dependencies. A big part of that is the breadth and depth of the BCL, standard libraries, and first-party libraries.

WorldMaker

I've started to feel it is much more an npm problem than a node problem. One of the things I've started leaning on more is prioritizing packages from JSR [0]. JSR is part of Deno's efforts, so it is often easiest to use in Deno packages, but most of the things with high scores on JSR get cross-published to npm, and for the few that prefer JSR only, there's an alright JSR bridge to npm.

Of course, using more JSR packages does start to add more reason to prefer Deno to Node. Also, there are still some packages that are deno.land/x/ only (sort of the first version of JSR, but with no npm cross-compatibility) worth checking out. For instance, I've been impressed with Lume [1], a thoughtful SSG that's sort of the opposite of Astro in that it iterates at a slow, measured pace and doesn't try to be a kitchen sink but more of a workbench with a lot of tools easy to find. It's deno.land/x/ only for now, for reasons I don't entirely agree with, but I can't deny that JSR can be quite a step up in publishing complexity for not exactly obvious gain.

[0] https://jsr.io/

[1] https://lume.land/

Gigachad

The problem isn't specific to node. NPM is just the most popular repo so the most value for attacks. The same thing could happen on RubyGems, Cargo, or any of the other package managers.

vintagedave

The concern is not 'could' happen, but _does_ happen. I know this could occur in many places. But where it seems highly prevalent is NPM.

And I am genuinely thinking to myself, is this making using npm a risk?

cluckindan

Just use dependency cooldown. It will mitigate a lot of risk.

Ygg2

NPM is the largest possible target for such an attack.

Attack an important package, and you can get into the Node and Electron ecosystem. That's a huge prize.

gred

NPM has about 4 million packages, Maven Central has about 3 million packages.

If this were true, wouldn't there have been at least one Maven attack by now, considering the number of NPM attacks that we've seen?

chha

Been a while since I looked into this, but afaik Maven Central is run by Sonatype, which happens to be one of the major players for systems related to Supply Chain Security.

From what I remember (a few years old, things may have changed) they required devs to stage packages to a specific test env, packages were inspected not only for malware but also vulnerabilities before being released to the public.

NPM on the other hand... Write a package -> publish. Npm might scan for malware, they might do a few additional checks, but at least back when I looked into it nothing happened proactively.

pimterry

As of 2024, Maven had 1.5 trillion requests annually vs npm's 4.5 trillion - regardless of package count, 3x more downloads in total does make it a very big target (numbers from https://www.sonatype.com/state-of-the-software-supply-chain/...).

viraptor

There were. They're just not as popular here. For example https://www.sonatype.com/blog/malware-removed-from-maven-cen...

Maven is also a bit more complex than npm and had an issue in the system itself https://arxiv.org/html/2407.18760v4

skwee357

One speculation would be that most Java apps in the wild use way older Java versions (say 11 or 17, while the latest LTS is 21).

AndroTux

Okay then, explain to me why this is only possible with NPM? Does it have a hidden "pwn" button that I don't know about?

master-lincoln

No. Having many packages might not be the only reason to start an attack. This post shows it is/was possible in the Maven ecosystem: https://blog.oversecured.com/Introducing-MavenGate-a-supply-...

throwawayffffas

How many daily downloads does Maven have?

PunchyHamster

Value is one thing, but the average user (by virtue of the platform being popular) will just be less clued in on security practices that could mitigate the problem.

skwee357

I’m not a node/js apologist, but every time there is a vulnerability in NPM package, this opinion is voiced.

But in reality it has nothing to do with node/js. It’s just because it’s the most used ecosystem. So I really don’t understand the argument of not using node. Just be mindful of your dependencies and avoid updating every day.

shortrounddev2

it's interesting that staying up to date with your dependencies is considered a vulnerability in Node

bichiliad

Having a cooldown is different from never updating. I don’t think waiting a few days is a bad security practice in any environment, node or otherwise.

skwee357

People who live on the edge of updates always risk vulnerabilities and incompatibility issues. It’s not about node, but anything software related.

reconnecting

We chose to write our platform for product security analytics (1) with PHP, primarily because it still allows us to create a platform without bringing in over 100 dependencies just to render one page.

I know this is a controversial approach, but it still works well in our case.

"require": { "php": ">=8.0",

        "ext-mbstring": "*",

        "bcosca/fatfree-core": "3.9.1",

        "phpmailer/phpmailer": "6.9.3",

        "ruler/ruler": "0.4.0",

        "matomo/device-detector": "6.4.7" }
1. https://github.com/tirrenotechnologies/tirreno

embedding-shape

Not sure what the language has to do with it; we built JavaScript applications without pulling in 100s of NPM packages before NPM was a thing, and people and organizations can still do so today, without having to switch language, if they don't want to.

Does it require discipline and a project not run by developers who just learned to program? You betcha.

reconnecting

I might say that every interpreter has a different minimum dependency level just to create a simple application. If we're talking about Node.js, there's a long list of dependencies by default.

So yes, in comparison, modern vanilla PHP with some level of developer discipline (as you mentioned) is actually quite suitable, but unfortunately not popular, for low-dependency development of web applications.

Zagitta

Ah yes PHP, the language known for its strong security...

reconnecting

Oh yes, let's remember PHP 4.3 and all the nostalgic baggage from that era.

zwnow

Modern PHP is leagues above Javascript

Cthulhu_

Node is fine, the issue lies in its package model and culture:

* Many dependencies, so much you don't know (and stop caring) what is being used.

* Automatic and regular updates, new patch versions for minor changes, and a generally accepted best practice of staying up to date on the latest versions of things, due to trauma from old security breaches or big migrations after not updating for a while.

* No review; trust-based self-publishing of packages with instant availability

* Opaque pre/post-install scripts

The fix is both cultural and technological:

* Stop releasing for every fart; once a week is enough, the only exception being critical security fixes.

* Stop updating immediately whenever there's an update; once a week is enough.

* Review your updates (see the npm diff sketch after this list)

* Pay for a package repository that actually reviews changes before making them widely available. Actually, I think the organization behind NPM should set that up; there are trillion-dollar companies using the Node ecosystem who would be willing and able to pay for some security guarantees.
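For the review step, npm itself can show what changed between two published versions; a sketch (package name and versions are placeholders):

  # show exactly what changed in a package between two published versions
  npm diff --diff=some-dep@1.2.3 --diff=some-dep@1.2.4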

dboreham

Microsoft owns npmjs.com. They could pay for AI analysis of published version deltas, looking for backdoors and malware.

littlecranky67

Professionally I am a full-time FE dev using TypeScript+React. The backends for my side projects are all done in C#, even though I'm fluent in node+typescript, for that very reason. In a current side project, my backend has only 3 external package dependencies, 2 of which are SQLite/ORM related. The frontend for that side project has over 50 (React/TypeScript/MaterialUI/NextJS/NX etc.).

noveltyaccount

.NET being so batteries-included is one of its best features. And when vulnerabilities do creep in, it's nice to know that Microsoft will fix it rather than hoping a random open source project will.

paradite

There are only two kinds of technologies.

The ones that most people use and some people complain about, and the ones that nobody uses and people keep advocating for.

monooso

This is a common refrain on HN, frequently used to dismiss what may be perfectly legitimate concerns.

It also ignores the central question of whether NPM is more vulnerable to these attacks than other package managers, and should therefore be considered an unreasonable security risk.


darkamaul

The "use cooldown" [0] blog post looks particularly relevant today.

I'd argue automated dependency updates pose a greater risk than one-day exploits, though I don't have data to back that up. It's harder to undo a compromised package already in thousands of lock files than to manually patch an already-exploited vulnerability in your dependencies.

[0] https://blog.yossarian.net/2025/11/21/We-should-all-be-using...
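If your automated updates come through Renovate, the post's cooldown maps to a single option; a sketch of renovate.json, assuming the `minimumReleaseAge` setting:

  {
    "extends": ["config:recommended"],
    "minimumReleaseAge": "7 days"
  }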

plomme

Why not take it further and not update dependencies at all until you need to, because of some missing feature or system compatibility? If it works, it works.

bigstrat2003

That is indeed what one should do IMO. We've known for a long time now in the ops world that keeping versions stable is a good way to reduce issues, and it seems to me that the same principle applies quite well to software dev. I've never found the "but then upgrading is more of a pain" argument to be persuasive, as it seems to be equally a pain to upgrade whether you do it once every six months or once every six years.

skybrian

The arguments for doing frequent releases partially apply to upgrading dependencies. Upgrading gets harder the longer you put it off. It’s better to do it on a regular schedule, so there are fewer changes at once and it preserves knowledge about how to do it.

A cooldown is a good idea, though.

tim1994

Because updates don't just include new features but also bug and security fixes. As always, it probably depends on the context how relevant this is to you. I agree that cooldown is a good idea though.

ryandrake

> Because updates don't just include new features but also bug and security fixes.

This practice needs to change, although it will be almost impossible to get a whole ecosystem to adopt. You shouldn’t have to take new features (and associated new problems) just to get bug fixes and security updates. They should be offered in parallel. We need to get comfortable again with parallel maintenance branches for each major feature branch, and comfortable with backporting fixes to older releases.

shermantanktop

For any update:

- it usually contains improvements to security

- except when it quietly introduces security defects which are discovered months later, often in a major rev bump

- but every once in a while it degrades security spectacularly and immediately, published as a minor rev

theptip

IMO for "boring software" you usually want to be on the oldest supported major/minor version, keeping an eye on the newest point version. That will have all the security patches. But you don't need to take every bug fix blindly.

jacquesm

But even then you are still depending on others to catch the bugs for you and it doesn't scale: if everybody did the cooldown thing you'd be right back where you started.

vintagedave

That worried me too, a sort of inverse tragedy of the commons. I'll use a weeklong cooldown, _someone else_ will find the issue...

Until no-one does, for a week. To stretch the original metaphor, instead of an overgrazed pasture, we grow a communally untended thicket which may or may not have snakes when we finally enter.

falcor84

I don't think that this Kantian argument is relevant in tech. We've had LTS versions of software for decades and it's not like every single person in the industry is just waiting for code to hit LTS before trying it. There are a lot of people and (mostly smaller) companies who pride themselves on being close to the "bleeding edge", where they're participating more fully in discovering issues and steering the direction.

woodruffw

The assumption in the post is that scanners are effective at detecting attacks within the cooldown period, not that end-device exploitation is necessary for detection.

(This may end up not being true, in which case a lot of people are paying security vendors a lot of money to essentially regurgitate vulnerability feeds at them.)

Sammi

Pretty easy to do using npm-check-updates:

https://www.npmjs.com/package/npm-check-updates#cooldown

In one command:

  npx npm-check-updates -c 7

tragiclos

The docs list this caveat:

> Note that previous stable versions will not be suggested. The package will be completely ignored if its latest published version is within the cooldown period.

Seems like a big drawback to this approach.

nfriedly

I could see it being a good feature. If there have been two versions published within the last week or two, then there are reasonable odds that the previous one had a bug.

Ygg2

I don't buy this line of reasoning. There are zero/one-day vulnerabilities that will get extra time to spread. Also, if everyone switches to the same cooldown, wouldn't this just postpone the discovery of future Shai-Huluds?

I guess the latter point depends on how Shai-Huluds are detected. If they are discovered by downstreams of libraries, or worse, users, then it will do nothing.

wavemode

Your line of reasoning only makes sense if literally almost all developers in the world adopt cooldowns, and adopt the same cooldown.

That would be a level of mass participation yet unseen by mankind (in anything, much less something as subjective as software development). I think we're fine.

hyperpape

For zero/one days, the trick is that you'd pair dependency cooldowns with automatic scanning for vulnerable dependencies.

And in the cases where you have vulnerable dependencies, you'd force update them before the cooldown period had expired, while leaving everything else you can in place.

__s

There are companies like Helix Guard scanning registries. They advertise static analysis / LLM analysis, but honeypot instances can also install packages & detect certain files like cloud configs being accessed

Yokohiii

But relying on the goodwill of commercial sec vendors is its own infrastructure risk.

timgl

Co-founder of PostHog here. We were a victim of this attack. A bunch of malicious package versions were published a couple of hours ago. The main packages/versions affected were:

- posthog-node 4.18.1, 5.13.3 and 5.11.3

- posthog-js 1.297.3

- posthog-react-native 4.11.1

- posthog-docusaurus 2.0.6

We've rotated keys and passwords, unpublished all affected packages and have pushed new versions, so make sure you're on the latest version of our SDKs.

We're still figuring out how this key got compromised, and we'll follow up with a post-mortem. We'll update status.posthog.com with more updates as well.


bilalq

You're probably already planning this, but please set up an alarm to fire if a new package release is published that doesn't correlate with a CI/CD run.

euph0ria

Very nice way of putting it, kudos!

brabel

If anything, people should use an older version of the packages. Your newest versions were just compromised; why should anyone believe this time and next time will be different?!

timgl

The packages were published using a compromised key directly, not through our CI/CD. We rolled the key and published a new clean version from our repo through our CI/CD: https://github.com/PostHog/posthog-js/actions/runs/196303581...

progbits

Why do you keep using token auth? This is unacceptable negligence these days.

NPM supports GitHub workflow OIDC and you can make that required, disabling all token access.
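Once trusted publishing is configured for the package on npmjs.com, the workflow side is small. An illustrative excerpt, not a full workflow:

  # excerpt from .github/workflows/release.yml
  permissions:
    id-token: write   # lets npm mint a short-lived OIDC credential
    contents: read
  steps:
    - uses: actions/checkout@v4
    - uses: actions/setup-node@v4
      with:
        node-version: 22
        registry-url: 'https://registry.npmjs.org'
    - run: npm ci
    - run: npm publish   # no long-lived NPM_TOKEN secret anywhere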

Y_Y

> so make sure you're on the latest version of our SDKs.

Probably even safer to not have been on the latest version in the first place.

Or safer again not to use software this vulnerable.

meesles

As a user of Posthog, this statement is absurd:

> Or safer again not to use software this vulnerable.

Nearly all software you use is susceptible to vulnerabilities, whether it's malicious or enterprise taking away your rights. It's in bad taste to make a comment about "not using software this vulnerable" when the issue was widespread in the ecosystem and the vendor is already being transparent about it. The alternative is you shame them into not sharing this information, and we're all worse for it.

tclancy

Popularity and vulnerability go hand in hand though. You could be pretty safe by only using packages with zero stars on GitHub, but would you be happy or productive?

spiderfarmer

If we don't know how it got compromised, chances are this attack is still spreading?

_alternator_

Glad you updated on this front-page post. Your Twitter post is buried on p3 for me right now. Good luck on the recovery and hopefully this helps someone.

gonepivoting

We're monitoring this activity as well and updating the list of affected packages here: https://www.wiz.io/blog/shai-hulud-2-0-ongoing-supply-chain-...

Currently reverse engineering the malicious payload and will share our findings within the next few hours.

das_keyboard

Slightly OT, but who is HelixGuard?

The website is a mess (broken links, broken UI elements, no about section)

There is no history on webarchive. There is no information outside of this website, and their "customers" are crypto exchanges and some Japanese payment provider.

This seems a bit fishy to me - or am I too paranoid?

bhouston

Based in Singapore / Japan according to X: https://x.com/HelixGuard_ai

bodash

I compiled a list of NPM best practices one can adopt to reduce supply-chain attack risks (even though there's no perfect prevention, _ever_): https://github.com/bodadotsh/npm-security-best-practices

Discussion on HN last time: https://news.ycombinator.com/item?id=45326754

herpdyderp

For anyone publishing packages for others to use: please don't pin exact dependency versions. Doing so requires all your users to set "overrides" in their own package.json when your dependencies have vulnerabilities.
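Otherwise every downstream user ends up writing something like this in their own package.json just to move your pinned transitive dependency forward (names hypothetical):

  {
    "overrides": {
      "some-pinned-dep": "1.2.4"
    }
  }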

giantg2

Do you know of anything similar for pip?

kernc

No.1: Run untrusted code in a sandbox! https://github.com/sandbox-utils/sandbox-venv

bodash

Most of the best practices can be translated to the Python ecosystem. It's not an exact 1:1 mapping, but change a few key terms and tools and the underlying practices should be the same.

Or copy that repo's markdown into an LLM and ask it to map to the pip ecosystem.

kunley

Why is the biggest package mess always in the Node ecosystem?

Why does this community in particular still insist on preemptively updating all deps, on running complicated extra hooks during package installation, and on pretending this is all good engineering practice? ("Look, we have so many things and are so busy, thus it must be good")

Why is a certain kind of mindset typical of this community?

Why did the Node creator abandon his creation years ago?

Why, oh why?

jalapenos

Feels good, just for a second, to pretend you're above everyone, doesn't it? Just for those few seconds, you're better than a whole big arbitrary collection of people, and for those few seconds you have relief from the reality of your life.

kunley

No. I am tired of seeing people dragged into poor-quality environments for the wrong reasons. These people would do much better using other tools.

Your attempt to make it personal does not compute.

rglover

This is a good sign that it's time to get packages off of NPM and come up with an alternative. For those who haven't heard of or tried Verdaccio [1], it may be an option. Relatively easy to point at your own server via NPM once you set it up.

[1] https://verdaccio.org/

hedora

I've had decent luck running it locally, but claude keeps screwing up the cool-down settings in my monorepo.

This is probably a common problem. Has anyone gotten verdaccio to enforce cool-down policies?

I also waste a ton of time because post-install scripts are disabled. Being able to cut them off from network access, and just run a local server with 2-4 week cool-down would help me sleep better at night + simplify the hell out of my build.

gdotdesign

- There is a single root dependency somewhere which gets taken over

- A new version of this dependency is published

- The CI of some other NPM package somewhere uses this new dependency version in a build, which triggers propagation by publishing a new, modified version of that package?

- And so on...

Am I getting this right?

vintagedave

The list of packages suggests these are not just tiny solo-person dependencies-of-dependencies. I see AsyncAPI and Zapier there. Am I right that this seems like quite a significant event?

AsyncAPI is used as the example in the post. It says the Github repo was not affected, but NPM was.

What I don't understand from the article is how this happened. Were the credentials for each project leaked? Given the wide range of packages, was it a hack on npm? Or...?

merelysounds

There is an explanation in the article:

> it modifies package.json based on the current environment's npm configuration, injects [malicious] setup_bun.js and bun_environment.js, repacks the component, and executes npm publish using stolen tokens, thereby achieving worm-like propagation.

This is the second time an attack like this has happened; others may be familiar with this context already and share fewer details and explanations than usual.

Previous discussions: https://news.ycombinator.com/item?id=45260741

tasuki

I don't get this explanation. How does it force you to run the infection code?

Yes, if you depend on an infected package, sure. But then I'd expect not just a list, but a graph outlining which package infected which other package. Overall I don't understand this at all.

merelysounds

Look at the diff in the article, it shows the “inject” part: the malicious file is added to the “preinstall” attribute in the package.json.
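So the injected hook is just an ordinary lifecycle script; per the article's diff, it looks roughly like:

  {
    "scripts": {
      "preinstall": "node setup_bun.js"
    }
  }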

vintagedave

Thanks. I saw that sentence but somehow didn't parse it. Need a coffee :/

throw-the-towel

My understanding is, it's a worm that injects itself into the current package and publishes infected code to npm.

amiga386

"No Way To Prevent This" Says Only Package Manager Where This Regularly Happens

thih9

Parent comment is an indirect reference to US mass shootings:

> "'No Way to Prevent This,' Says Only Nation Where This Regularly Happens" is the recurring headline of articles published by the American news satire organization The Onion after mass shootings in the United States.

Source: https://en.wikipedia.org/wiki/%27No_Way_to_Prevent_This,%27_...

jamietanna

See also Xe Iaso's posts about CVEs in the C ecosystem (https://xeiaso.net/shitposts/no-way-to-prevent-this/CVE-2025...)

globalise83

No Preventative Measures (NPM)

zenmac

You can host your own NPM reg, and examine every package, but your manager probably is NOT going to go for that.


creata

There's nothing technically different between NPM and, say, Cargo, here that would save Cargo, is there?

nagisa

I would say that npm likely has easier solutions here compared to Cargo.

Well before the npm attacks were a thing, we within the Rust project discussed at length using wasm sandboxing for build-time code execution (and also precompiled wasm for procedural macros, but that's its own thing). However, the way build scripts are used in the Rust ecosystem makes it quite difficult to enforce a sandbox while also enabling packages to build foreign code (C, C++, invoking make, cmake, etc.). The sandbox could still expose methods to e.g. "run the C compiler" to the build scripts, but once that's done they have arbitrary access to a very non-trivial piece of code running in a privileged environment.

Whereas for JavaScript, rarely does a package invoke anything but other JavaScript code at build time. Introduce a stringent sandbox for that code (kinda Deno-style, perhaps?) and a large majority of packages are suddenly safe by default.

tjpnz

This is a cultural problem created through a fundamental misunderstanding (and mis-application) of Unix philosophy. As far as I'm aware the Rust ecosystem doesn't have a problem appropriately sizing packages which in turn reduces the overall attack surface of dependencies.

creata

I agree, but imo the Rust ecosystem has the same problem. Not to the extent of NPM, but worse than C/C++.

junon

This has nothing to do with package sizes. Cargo was just hit with a phishing campaign not too long ago, and does still use tokens for auth. NPM just has a wider surface area.

AndroTux

Okay then, tell me a way to prevent this.

amiga386

An example: Java Maven artifacts typically name the exact version of their dependencies. They rarely write "1.2.3 or any newer version in the 1.2.x series", as is the de-facto standard in NPM dependencies. Therefore, it's up to each dependency-user to validate newer versions of dependencies before publishing a new version of their own package. Lots of manual attention needed, so a slower pace of releases. This is a good thing!

Another example: all Debian packages are published to unstable, but cannot enter testing for at least 2-10 days, and also have to meet a slew of conditions, including that they can be and are built for all supported architectures, and that they don't cause themselves or anything else to become uninstallable. This allows for the most egregious bugs to be spotted before anyone not directly developing Debian starts using it.

jml78

You forgot to mention it is also tied to provable namespaces. People keep saying that NPM is just the biggest target...

Hate to break it to you, but for targeting enterprises, Java Maven artifacts would be a MASSIVE target. They're just harder to compromise; NPM gets hit because NPM is such shit.

pabs3

Build packages from source without any binaries (all the way down) and socially audit the source before building.

https://bootstrappable.org/ https://reproducible-builds.org/ https://github.com/crev-dev

Balinares

Other languages seem to publish dependencies as self-contained packages whose installation does not require running arbitrary shell scripts.

This does not prevent said package from shipping with malware built in, but it does prevent arbitrary shell execution on install and therefore automated worm-like propagation.

seethishat

I think some system would need to dynamically analyze the code (as it runs) and record what it does. Even then, that may not catch all malicious activity. It's sort of hard to define what malicious activity is. Any file read or network conn could, in theory, be malicious.

As a SW developer, you may be able to limit the damage from these attacks by using a MAC (like SELinux or Tomoyo) to ensure that your node app cannot read secrets it is not intended to read or make connections it should not make, and to log attempts to do those things.

You could also reduce your use of external packages, until slowly, over time, you have very few external dependencies.

blueflow

The same way it always has been done - vendor your deps.

joshstrange

That literally makes no difference at all. You’ll just vendor the malicious versions. No, a lock file with only exact versions is the safe path here. We haven’t seen a compromise to existing versions that I know of, only patch/minor updates with new malicious code.

I maintain that the flexibility in npm package versions is the main issue here.
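One small step in that direction is making npm record exact versions instead of ^ranges by default; a sketch:

  # .npmrc
  save-exact=true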

sph

To be fair, this only works in ecosystems where libraries are stable and don't break every 3 months, as often happens in the JS world.

You can vendor your left-pad, but good luck doing that with a third-party SDK.

hu3

that's what I do whenever feasible. Which is often

codedokode

Hire an antivirus company to provide a safe and verified feed of packages. Use ML and automatic scanners to send packages to manual review. While the halting problem prevents us from 100% reliably detecting malware, at least we can block everything suspicious.

tmvnty

Other than general security practices, here are few NPM ecosystem specific ones: https://github.com/bodadotsh/npm-security-best-practices

AmazingTurtle

I looked through some of the GH repositories and - dear god - there are some crazy sensitive secrets in there. AWS Prod database credentials, various API keys (stripe, google, apple store, ...), passwords for databases, encryption keys, ssh keys, ...

I think hijacked NPM packages are just the tip of the iceberg.