
You too can run malware from NPM (I mean without consequences)

jefozabuss

Seems like people already forgot about Jia Tan.

By the way, why doesn't npm already have a system in place to flag sketchy releases where most of the code looks normal but there's newly added obfuscated code with hexadecimal variable names and array lookups for execution...

mystifyingpoi

Detecting sketchy-looking hex codes should be pretty straightforward, but then I imagine there are ways to make sketchy code non-sketchy, which would be adopted immediately. I can imagine a big JS function that pretends to do legit data manipulation but creates the payload in the process.
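A contrived, benign illustration of that concern (the function and values here are made up): code that reads like routine data cleanup but quietly assembles a string that no literal-matching scanner would catch.

    // Looks like ordinary data handling, but builds the string "fetch"
    // with no greppable literal and no hex-named variables.
    function normalizeIds(ids) {
      const cleaned = ids.map((id) => String(id).trim());
      const probe = [102, 101, 116, 99, 104]   // character codes for "fetch"
        .map((c) => String.fromCharCode(c))
        .join('');
      return { cleaned, probe };
    }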

hombre_fatal

Yeah, it's merely a fluke that the malware author used some crappy online obfuscator that created those hex-named variables. It would have been less work and less suspicious if they had just kept their original semantic variable names like "originalFetch".
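For reference, the interception pattern being discussed boils down to something like this (an illustrative sketch, not the actual malware code):

    // Keep a reference to the original fetch and route calls through a wrapper.
    const originalFetch = window.fetch;
    window.fetch = function (...args) {
      // a malicious wrapper could inspect or rewrite args here
      return originalFetch.apply(this, args);
    };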

nicce

It is just about bringing classic non-signature-based antivirus techniques to the release cycle. Hard to say how useful it would be; usually it's an endless cat-and-mouse game, like everything else.

Cthulhu_

It wouldn't be just one signal, but several - like a mere patch release that adds several kilobytes of code, long lines, etc. Or a release after a long quiet period.

cluckindan

A complexity-per-line check would have flagged it.

Even a max line length check would have flagged it.
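A rough sketch of that heuristic (the threshold and file path are arbitrary assumptions), using only Node's standard library:

    // Flag files whose longest line exceeds a threshold; obfuscated or
    // minified code tends to trip this, which is also its main weakness.
    const fs = require('fs');

    function longestLine(file) {
      const lines = fs.readFileSync(file, 'utf8').split('\n');
      return lines.reduce((max, line) => Math.max(max, line.length), 0);
    }

    function flagSuspicious(files, maxLen = 500) {
      return files.filter((file) => longestLine(file) > maxLen);
    }

    console.log(flagSuspicious(['dist/index.js'])); // hypothetical path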

chatmasta

That would flag a huge percentage of JS packages that ship with minified code.

cchance

Feels like a basic lightweight 3B AI model could easily spot shit like this on commit.

tom1337

It would also be great if a release had to be approved by the maintainer via a second factor or an e-mail verification. Once a release has been published to npm, you'd have an hour to verify it by clicking a link in an email and then entering another 2FA challenge (an OTP separate from the login one, a passkey, a YubiKey, whatever). That would also prevent publishing with stolen access keys. If you don't verify the release within the first hour, it gets deleted and is never published.

naugtur

That's why we never went with using keys in CI for publishing. Local machine publishing requires 2FA.

Automated publishing should use something like PagerDuty to signal to a group of maintainers that a version is being published, and require an approval for it to go through. And any one of them can veto within 5 minutes.

But we don't have that, so gotta be careful and prepare for the worst (use LavaMoat for that)

Cthulhu_

Not through e-mail links though; that's what caused this in the first place. E-mail notification, sure, but they should also run phishing-training mails - make them look legit, but if people click the link, they need to be told that NPM will never send them an email with a link.

dist-epoch

> flag sketchy releases

Because the malware writers will keep tweaking the code until it passes that check, just like virus writers submit their viruses to VirusTotal until they are undetected.

AtNightWeCode

The problem is that it is even possible to push builds from dev machines.

madeofpalk

With NPM now supporting OIDC, you can just turn this off: https://docs.npmjs.com/trusted-publishers

CyberMacGyver

Looks like OP is one of the contributors to LavaMoat

naugtur

Yes, I am. I came up with the first successful approach to integrating the Principle of Least Authority tooling in LavaMoat with Webpack, and wrote the LavaMoat Webpack plugin.

Also, together with a bunch of great folks at TC39 we're trying to get enough building blocks for the same-realm isolation primitives into the language.

see hardenedjs.org too

I'm doing the rounds promoting the project today because, at this point, all we need to eliminate certain types of malware is to get LavaMoat a lot more adoption in the ecosystem.

( and that'll give me bug reports and maybe even contributions? :) )

hn92726819

I think most people are fine with promoting a cool project you work on, but it's best practice to disclose that in the article. Even something like "If your project was set up with LavaMoat (a project I've been working on), ..." would be enough.

I think that's why they made the comment.

naugtur

Yup, and thanks - I should have made the comment myself but got distracted.

btown

I'm often curious about how effective runtime quasi-sandboxing is in practice (at least until support at the TC39 level lands).

My understanding is that if you can run with a CSP that prevents unsafe-eval, and you lock a utility package down to not be able to access the `window` object, you can prevent it from messing with, say, window.fetch.

But what about a package that does assume the existence of window or globalThis? Say, a great many packages bridging non-React components into the React ecosystem. Once a package needs even read-only access to `window`, how do you protect against supply-chain attacks on that package? Even if you read-only proxy that object, for instance, can you ensure that nothing in `window` itself holds a reference to the non-proxied `window`?

Don't get me wrong - this project is tremendously useful as defense-in-depth. But curious about how much of a barrier it creates in practice against a determined attacker.
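To make the read-only-proxy concern concrete, here's a minimal browser-only sketch of how a proxied `window` can still hand back the real one:

    // A read-only Proxy blocks writes, but property reads still return
    // real objects, and some of them point straight back at the real window.
    const readOnlyWindow = new Proxy(window, {
      get(target, prop) { return target[prop]; }, // getters run against the real window
      set() { return false; },                    // e.g. blocks window.fetch = evil
      defineProperty() { return false; },
    });

    readOnlyWindow.document.defaultView === window; // true: real window recovered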

naugtur

It's based on HardenedJS (hardenedjs.org).

The sandbox itself is tight; there's even a bug bounty.

The same technology is behind MetaMask Snaps - plugins in a browser extension.

And Moddable has their own implementation.

The biggest problem is endowing too-powerful capabilities.

We've got ambitious plans for isolating the DOM, but that already failed once before.

mohsen1

npm should take responsibility and up their game here. It’s possible to analyze the code and mark it as suspicious and delay the publish for stuff like this. It should prevent publishing code like this even if I have a gun to my head

sesm

I think the malware check should be opt-in for package authors, but it should grant some kind of 'verified' badge to the package.

Edit: typo

yjftsjthsd-h

> but provide some kind of 'verified' badge to the package

I would worry that that results in a false sense of security. Even if the actual badge says "passes some heuristics that catch only the most obvious malicious code", many people will read "totally 100% safe, please use with reckless abandon".

Cthulhu_

I always thought this would be the ideal monetization path for NPM; enterprises pay them, NPM only supplies verified package releases, ideally delayed by hours/days after release so that anything that slips through the cracks has a chance to get caught.

johannes1234321

That would either expose them to liability or be a fairly worthless agreement that takes no responsibility.

chrisweekly

Enterprises today typically use a custom registry, which can include any desired amount of scans and rigorous controls.

naugtur

npm is on life support from msft. But there's socket.dev, which can tell you whether a package is malicious within hours of it being published.

shreddit

“within hours” is at least one hour too late, and most likely multiple hours.

azemetre

Why would npm care? They're basically a monopoly in the JS world and under the stewardship of a company that doesn't even care when its host nation gets hacked through their software due to their own ineptitude.

untitaker_

I can guarantee you npm will externalize the cost of false-positive malware scans to package authors.

nodesocket

Or, at a minimum, support YubiKey for 2FA.

mcintyre1994

They do. I use a YubiKey, and it requires me to authenticate with it whenever I publish. They do support weaker 2FA methods as well, but you can choose.

worthless-trash

The original author could be evil; 2FA does nothing about that.

jamesnorden

If my grandma had wheels she'd be a bike. You don't need to attack the problem from only one angle.

riazrizvi

Glad to see this article raising awareness.

Without fairness in the marketplace, the talent loses the will to play and the economy will further deteriorate. We are all suffering from an international trust breakdown from Covid, and now also from AI spam. If we don’t turn this tide, jobs and business opportunities are going to keep shrinking.

4ndrewl

If you're not vendoring, there's an argument to say that some portion of your source code is fair game to anyone who has commit rights to a variety of repos.

erpderp

In the example snippets from OP, the code shown runs in the browser. I'm failing to see how the interception, as described, couldn't be handled by a decent Content Security Policy - instead of requiring yet another npm package. Seems safer than installing another package to address the risk from ... installing packages.

ghrl

I suppose if you're using a bundler, you will ship JS bundles including the malicious packages from your own trusted domain. How could CSP prevent this or similar attacks?

erpderp

According to the OP, in this specific case, the malware was mostly just intercepting legitimate fetch() calls and the like. With CSP `connect-src`, I don't think that would be possible unless the new fetch targets are themselves on allow-listed domains (which is a totally separate issue).

For example, consider a CSP of `Content-Security-Policy: connect-src 'self' https://api.example.com;`. This policy would allow fetch() requests only to the same origin ('self') and to https://api.example.com, blocking any attempt to connect to other domains (typically with a corresponding warning/error in the browser dev console).
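A minimal sketch of serving that policy with Node's built-in http module (the domains are placeholders): the browser blocks any fetch() from the page to an origin outside the allow-list.

    const http = require('http');

    http.createServer((req, res) => {
      res.setHeader(
        'Content-Security-Policy',
        "connect-src 'self' https://api.example.com"
      );
      res.setHeader('Content-Type', 'text/html');
      // In the browser, this fetch is blocked and a CSP violation is logged.
      res.end('<script>fetch("https://evil.example/").catch(console.error)</script>');
    }).listen(8080);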

That said, in fairness, CSP is of course only applicable to frontend code (not to backend JS, where anecdotally I've seen a lot more usage of `chalk` and some of the other pwned packages), but frontend code and the `window` object are what the OP used in their examples and seem to be what they're targeting with webpack, hence my mentioning CSP.

cluckindan

LavaMoat looks great on paper, but not supporting Webpack HMR is a dealbreaker.

naugtur

You're using HMR in your app's production bundle? How?

naugtur

If you mean during development: you can opt out of using LavaMoat in development for your webpack bundle (I'm assuming you're not running your untested code on valuable data).

cluckindan

Well, that's not exactly reassuring. Having a very different runtime environment in production is grounds for hard-to-debug issues.

Is it possible to generate the allowlist at development time without having the webpack plugin loaded? If it’s only generated at build time, it won’t protect against malicious packages getting installed in CI just before the build happens.

clbrmbr

How much money have the attackers stolen so far? Has someone done an analysis of the blockchains for the destination addresses?

naugtur

Click through to the article; it has a link to a view that lists the laughable profit.

clbrmbr

Huh. I read TFA in detail (and shared with my team), but I didn’t see any analysis. (?)

wodenokoto

> I won't go into this either, but you can take a look at the summary of "donations" some other friends linked to here: https://intel.arkm.com/explorer/entity/61fbc095-f19b-479d-a0...

>Pretty low impact for an attack this big. Some of it seems to be people mocking the malware author with worthless transfers.

I believe this is the section. As far as I understand the link, it's about $500. I don't understand how you can tell whether a donation is a worthless mockery donation.

hiccuphippo

It seems to be this: https://intel.arkm.com/explorer/entity/61fbc095-f19b-479d-a0...

500 USD, not bad for a month of work if the author is from a 3rd world country.

nodesocket

I'm actually shocked they haven't stolen more, given the breach's impact radius. Perhaps we can thank wallets and exchanges for blacklisting the addresses and showing huge warnings like the one in the article.

shreddit

It was discovered pretty quickly; I don't think most “big” projects update their packages within minutes of publication.

p2detar

I’ve been out of the loop with npm for a while, but are there still no package namespaces?

stby

I am also out of the loop here, how would namespaces have helped?

diggan

Namespaces (scopes) have existed in npm since at least ~2016, but since they're not enforced and people want "nice looking" package names, the ecosystem still hasn't fully embraced them. It seems like more and more projects are using them (probably because all the "good" names are already taken), but probably way fewer than half of all popular packages are scoped/namespaced.

keysdev

Well there is jsr now....

herpdyderp

I'm intrigued, but isn't that compartmentalization incredibly expensive?

naugtur

It's within the same process and realm (window). It has a cost, but it's nothing compared to putting every dependency of a large app in a separate iframe/process and figuring out a way for them to communicate.

cluckindan

Have you tried to find ways to break it?

Plenty of objects in the browser API contain references to things that could be used to defeat the compartmentalization.

If one were to enumerate all properties on window and document, how many would be objects with a reference back to window, document or some API not on the allowed list?
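A quick, heuristic way to get a feel for that number (a browser-console probe; the reference checks are illustrative, not exhaustive):

    // Count own properties of window whose value exposes a path back to
    // the real window (e.g. document.defaultView, window.window, frames).
    const leaks = [];
    for (const key of Object.getOwnPropertyNames(window)) {
      try {
        const value = window[key];
        if (value && typeof value === 'object' &&
            (value.defaultView === window || value.window === window)) {
          leaks.push(key);
        }
      } catch (_) {
        // some properties throw on access (deprecated or cross-origin getters)
      }
    }
    console.log(leaks.length, leaks);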

cowbertvonmoo

I maintain ses, the compartment primitive LavaMoat relies on. The ses shim for hardenedjs.org creates compartments that deny guest code the ability to inspect the true global object or lexically reference any of its properties. By default, each compartment only sees the transitively frozen intrinsics like Array and Object, and no way to reach the genuine evaluators. The compartment traps the module loader as well, so you can only import modules that are explicitly injected. That leaves a lot of room for the platform to make mistakes and endow the compartment with gadgets, but also gives us a place to stand to mount a defense that is not otherwise prohibitively expensive.
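For anyone curious what that looks like in code, a minimal sketch using the ses shim (the `print` endowment is an arbitrary example):

    import 'ses';

    // Freeze the shared intrinsics so guest code can't tamper with them.
    lockdown();

    // The compartment only sees the globals we explicitly endow it with.
    const compartment = new Compartment({
      print: harden((msg) => console.log(msg)),
    });

    compartment.evaluate("print(typeof fetch)");              // "undefined": no ambient fetch
    compartment.evaluate("print(typeof globalThis.process)"); // "undefined" even in Node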

AtNightWeCode

I think JS should be all source and no packages at all.

phil294

What about complex SPAs? Database drivers? Polyfills? TypeScript?