DuckDB NPM packages 1.3.3 and 1.29.2 compromised with malware
September 9, 2025
elric
This is critical infrastructure, and it gets compromised way too often. There are so many horror stories of NPM (and similar) packages getting filled with malware. You can't rely on people not falling for phishing 100% of the time.
People who publish software packages tend to be at least somewhat technical. Can package publishing platforms PLEASE start SIGNING emails? Publish GPG keys (or whatever, I don't care about the technical implementation) and sign every god damned email you send to people who publish stuff on your platform.
Educate the publishers on this. Get them to distrust any unsigned email, no matter how convincing it looks.
And while we're at it, it's clear that the current 2FA approach isn't good enough. I don't know how to improve on it, but it's clear that the actions in this example were suspicious: the user logs in, changes 2FA settings, and immediately adds a new API token, which immediately gets used to publish packages. Maybe there should be a 24-hour period where nothing can be published after changing any form of credentials, accompanied by a bunch of signed notification emails. Of course, that's all moot if the attacker also changes the email address.
feross
Disclosure: I’m the founder of https://socket.dev
We analyzed this DuckDB incident today. The attacker phished a maintainer on npmjs.help, proxied the real npm, reset 2FA, then immediately created a new API token and published four malicious versions. A short publish freeze after 2FA or token changes would have broken that chain. Signed emails help, but passkeys plus a publish freeze on auth changes is what would have stopped this specific attack.
There was a similar npm phishing attack back in July (https://socket.dev/blog/npm-phishing-email-targets-developer...). In that case, signed emails would not have helped. The phish used npmjs.org — a domain npm actually owns — but they never set DMARC there. DMARC is only set on npmjs.com, the domain they send email from. This is an example of the “lack of an affirmative indicator” problem. Humans are bad at noticing something missing. Browsers learned this years ago: instead of showing a lock icon to indicate safety, they flipped it to show warnings only when unsafe. Signed emails have the same issue — users often won’t notice the absence of the right signal. Passkeys and publish freezes solve this by removing the human from the decision point.
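For illustration, the "publish freeze on auth changes" policy described above could look something like this sketch (the `Account` shape and the 24-hour window are assumptions for illustration, not npm's actual implementation):

```typescript
// Hypothetical publish freeze: any credential change starts a cooldown
// window during which the registry refuses publishes for that account.
const FREEZE_MS = 24 * 60 * 60 * 1000; // 24 hours

interface Account {
  frozenUntil: number; // epoch ms; 0 when no freeze is active
}

// Called on 2FA reset, password change, or API-token creation.
function onAuthChange(acct: Account): void {
  acct.frozenUntil = Date.now() + FREEZE_MS;
}

function canPublish(acct: Account): boolean {
  return Date.now() >= acct.frozenUntil;
}
```

Under this policy, the attacker's sequence (reset 2FA, mint a token, publish) stalls at the publish step, and the signed notification emails have a day to reach the real maintainer.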
parliament32
> it's clear that the current 2FA approach isn't good enough. I don't know how to improve on it
USE PASSKEYS. Passkeys are phishing-resistant MFA, which has been a US govt directive for agencies and suppliers for three years now[1]. There is no excuse for infrastructure as critical as NPM to still be allowing TOTP for MFA.
[1]https://www.whitehouse.gov/wp-content/uploads/2022/01/M-22-0...
smw
This is the way! Passkeys or FIDO2 (YubiKey) should be required for supply-chain-critical infrastructure like this.
SoftTalker
I think you just have to distrust email (or any other "pushed" messages), period. Just don't ever click on a link in an email or a message. Go to the site from your own previously bookmarked shortcut, or type in the URL.
I got a fraud alert email from my credit card the other day. It included links to view and confirm/deny the suspicious charge. It all looked OK, the email included my name and the last digits of my account number.
I logged in to the website instead. When I called to follow up I used the phone number printed on my card.
Turns out it was a legit email, but you can't really know. Most people don't understand public-key signing well enough for "only trust signed emails" to be something they can rely on.
Also, if you're sending emails like this to your users, stop including links. Instead, give them instructions on what to do on your website or app.
sroussey
I get Coinbase SMS all the time with a code not to share. But also… “call this phone number if you did not request the code”.
sgc
This does nothing for the case of receiving a fake coinbase sms with a fake contact phone number.
I have had people attempt fraud in my work with live calls as follow up to emails and texts. I only caught it because it didn't pass the smell test so I did quite a bit of research. Somebody else got caught in the exact same scam and I had to extricate them from it. They didn't believe me at first and I had to hit them over the head a bit with the truth before it sank in.
evantbyrne
The email was sent from the "npmjs dot help" domain. I'm not saying you're wrong, but basic due diligence would have prevented this. If not by email, the maintainer might have been compromised over text or some other medium. And today, maintainers of larger projects can avoid these problems by not importing and auto-updating a bunch of tiny packages that look like they could have been lifted from Stack Overflow.
chrisweekly
Re: "npmjs dot help", way too many companies use random domains -- effectively training their users to fall for phishing attacks.
InsideOutSanta
This exactly. It's actually wild how much valid emails can look like phishing emails, and how confusing it is that companies use different domains for critical things.
One example that always annoys me is that the website listing all of Proton's apps isn't at an address you'd expect, like apps.proton.me. It's at protonapps.com. Just... why? Why would you train your users to download apps from domains other than your primary one?
It also annoys me when people see this happening and point out how the person who fell for the attack missed some obvious detail they would have noticed. That's completely irrelevant, because everyone is stupid sometimes. Everyone can be stressed out and make bad decisions. It's always a good idea to make it harder to make bad decisions.
0cf8612b2e1e
Too many services will send you 2FA codes from different numbers per request.
nikcub
* passkeys
* signed packages
enforce it for the top x thousand most popular packages to start
some basic hygiene about detecting unique new user login sessions would help as well
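As a sketch, "detecting unique new user login sessions" could be as simple as remembering a fingerprint per account and flagging anything unseen; the fingerprint scheme and storage below are invented for illustration:

```typescript
// Flag logins from devices/networks an account has never used before.
import { createHash } from "node:crypto";

const knownDevices = new Map<string, Set<string>>(); // user -> device hashes

function fingerprint(userAgent: string, ip: string): string {
  const netPrefix = ip.split(".").slice(0, 2).join("."); // coarse network hint
  return createHash("sha256").update(`${userAgent}|${netPrefix}`).digest("hex");
}

// Returns true when this login looks new; the caller can then email the
// account owner, require re-verification, or pause publishing.
function isNewSession(user: string, userAgent: string, ip: string): boolean {
  const fp = fingerprint(userAgent, ip);
  const seen = knownDevices.get(user) ?? new Set<string>();
  const isNew = !seen.has(fp);
  seen.add(fp);
  knownDevices.set(user, seen);
  return isNew;
}
```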
SAI_Peregrinus
Requiring signed packages isn't enough, you have to enforce that signing can only be done with the approval of a trusted person.
People will inevitably set up their CI system to sign packages, no human intervention needed. If they're smart & the CI system is capable of it they'll set it up to only build when a tag signed by someone approved to make releases is pushed, but far too often they'll just build if a tag is pushed without enforcing signature verification or even checking which contributors can make releases. Someone with access to an approved contributor's GitHub account can very often trigger the CI system to make a signed release, even without access to that contributor's commit signing key.
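A sketch of that gate, written as a pre-build CI check. It assumes `git` is on the PATH, that the CI keyring contains only approved releasers' keys (so a zero exit code from `git verify-tag` implies an approved signer), and that `CI_TAG` is a hypothetical CI-provided variable:

```typescript
// Refuse to build a release unless the pushed tag carries a valid signature.
import { execFileSync } from "node:child_process";

function tagSignedByApprovedReleaser(tag: string): boolean {
  try {
    execFileSync("git", ["verify-tag", tag], { stdio: "ignore" });
    return true; // exit code 0: signature verified against the CI keyring
  } catch {
    return false; // unsigned tag, or signed by an unknown key
  }
}

const tag = process.env.CI_TAG ?? "";
if (!tagSignedByApprovedReleaser(tag)) {
  console.error(`refusing to build: tag "${tag}" lacks an approved signature`);
  process.exit(1);
}
```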
zokier
SPF/DKIM already authenticate the sender. But that doesn't help if the user doesn't check who the email is from, and in that case GPG wouldn't help much either.
elric
SPF & DKIM are all but worthless in practice, because so many companies send emails from garbage domains, or add large scale marketing platforms (like mailchimp) to their SPF records.
Like Citroen sends software update notifications for their cars from mmy-customerportal.com. That URL looks and sounds like a phisher's paradise. But somehow, it's legit. How can we expect any user to make the right decision when we push this kind of garbage in their face?
JimDabell
The problem is there is no continuity. An email from an organisation that has emailed you a hundred times before looks the same as an email from somebody who has never emailed you before. Your inbox is a collection of legitimate email floating in a vast ocean of email of dubious provenance.
I think there’s a fairly straightforward way of fixing this: contact requests for email. The first email anybody sends you has an attachment that requests a token. Mail clients sort these into a “friend request” queue. When the request is accepted, the sender gets the token, and the mail gets delivered to the inbox. From that point on, the sender uses the token. Emails that use tokens can skip all the spam filters because they are known to be sent by authorised senders.
This has the effect of separating inbound email into two collections: the inbox, containing trustworthy email where you explicitly granted authorisation to the sender; and the contact request queue.
If a phisher sends you email, then it will end up in the new request queue, not your inbox. That should be a big glaring warning that it’s not a normal email from somebody you know. You would have to accept their contact request in order to even read the phishing email.
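A minimal sketch of the token mechanics (every name here is invented; an actual deployment would need a shared standard between mail clients and servers):

```typescript
// Contact-request routing: first-time senders land in a request queue;
// accepted senders get a token that future mail presents to reach the inbox.
import { randomBytes } from "node:crypto";

const tokensBySender = new Map<string, string>(); // sender address -> token

// Called when the recipient accepts a contact request.
function acceptContactRequest(sender: string): string {
  const token = randomBytes(16).toString("hex");
  tokensBySender.set(sender, token);
  return token; // delivered back to the sender for use in future messages
}

function routeIncoming(sender: string, token?: string): "inbox" | "requests" {
  return token !== undefined && tokensBySender.get(sender) === token
    ? "inbox"     // authorised sender: skip spam filtering entirely
    : "requests"; // unknown or tokenless: the "friend request" queue
}
```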
I went into more detail about the benefits of this system and how it can be implemented in this comment:
zokier
The same problem applies to gpg. If companies can not manage to use consistent from addresses then do you really expect them to do any better with gpg key management?
"All legitimate npm emails are signed with GPG key X" and "All legitimate npm emails come from @npmjs.com" are equally strong statements.
vel0city
There's little reason to think these emails didn't pass SPF/DKIM. They probably "legitimately" own their npmjs[.]help domain and whatever server they used to send the emails is probably approved by them to send for that domain.
zokier
But in the same vein the phishing email can easily be gpg signed too. The problem is to check if the gpg key used to sign the email is legitimate, but that is exactly the same problem as checking if the from address is legitimate.
progx
True! A simple self-defined codeword included in every email, and you can see whether the mail is fake or not.
ignoramous
> Can package publishing platforms PLEASE start SIGNING emails
I am skeptical this solves phishing rather than adding to the woes (would you blindly click on links just because the email was signed?), but if we are going to suggest public-key cryptography, then: NPM could let package publishers choose whether only signed packages may be released, and consumers decide if they will only depend on signed packages.
I guess, for attackers, that moves the target from compromising a publisher account to getting hold of the keys, but that's going to be impossible... as private keys never leave the SSM/HSM, right?
> Get them to distrust any unsigned email, no matter how convincing it looks.
For shops of any important consequence, email security is table stakes, at this point: https://www.lse.ac.uk/research/research-for-the-world/societ...
elric
I don't think signed email would solve phishing in general. But for a service by-and-for programmers, I think it at least stands a chance.
Signing the packages seems like low-hanging fruit as well, if that isn't already being done. But I'm skeptical that those keys are as safe as they should be; IIRC someone recently abused a bug in a GitHub pipeline to execute arbitrary code and managed to publish packages that way. Which seems like an insane vulnerability class to me, and probably an inevitable consequence of centralising so many things on GitHub.
vitonsky
Just for context: the DuckDB team consistently ignores security best practices.
The only way to install DuckDB on a laptop is to run
`curl https://install.duckdb.org | sh`
I've asked them to ship the CLI as a standard package, and they ignored it. Here is the thread: https://github.com/duckdb/duckdb/issues/17091
So this isn't a single slip due to "human factor"; DuckDB management consistently puts users at risk.
throwaway127482
Genuine question: why is `curl https://trusted-site.com | sh` a security risk?
Fundamentally, doesn't the security depend entirely on whether https is working properly? Even the standard package repos are relying on https right?
Like, I don't see how it's different than going to their website, copying their recommended command to install via a standard repo, then pasting that command into your shell. Either way, you are depending entirely on the legitimacy of their domain right?
dansmith1919
I assume OP's point is "you're running a random script directly into your shell!!"
You're about to install and run their software. If they wanted to do something malicious, they wouldn't hide it in their plaintext install script.
kevinrineer
`curl URL | sudo sh` gives you no way to verify what the contents at that URL actually are.
Sure, a binary can be swapped in other places too, but binaries can generally be verified with hashes and signatures. A plaintext install script also often has this problem one layer of recursion down: the script usually pulls from URLs that the person running it cannot verify by this method either.
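For contrast, here is the verification step that piping straight to `sh` skips, sketched in TypeScript (the expected digest is a placeholder; the real value would come from the vendor's signed release notes or website):

```typescript
// Verify a downloaded install script against a digest published out-of-band.
import { createHash } from "node:crypto";
import { readFileSync } from "node:fs";

const EXPECTED = "<sha256 from the vendor's release notes>"; // placeholder

const digest = createHash("sha256")
  .update(readFileSync("install.sh"))
  .digest("hex");

if (digest !== EXPECTED) {
  throw new Error("checksum mismatch: do not run install.sh");
}
```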
tomsmeding
It is sometimes possible to detect server-side whether the script is being run immediately with `| sh` or not. The reason is that `sh` only reads its input as far as it has gotten in the script, so the download takes longer to reach the end than if you had curl print the result to the terminal directly (or pipe it to a file).
A server can use this to maliciously give you malware only if you're not looking at the code.
Though your point about trust is valid.
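A simplified sketch of the trick (everything here is illustrative; real detectors are more careful about buffer sizes and network jitter):

```typescript
// A malicious server streams a script containing a `sleep` and watches how
// fast the client drains the connection. A shell executing the stream stalls
// at the sleep; a plain download does not.
import http from "node:http";

http.createServer((_req, res) => {
  res.setHeader("Content-Type", "text/plain");
  // Padding big enough to fill the socket and pipe buffers, so later writes
  // only complete as fast as the client actually consumes the stream.
  res.write("#!/bin/sh\n" + "# padding\n".repeat(200_000));
  res.write("sleep 5\n"); // `sh` stops reading here; `curl -o file` does not
  const start = Date.now();
  res.write("#".repeat(64 * 1024), () => {
    const stalledMs = Date.now() - start;
    // A long stall means the stream is being executed, not just saved.
    res.end(stalledMs > 3000 ? "echo gotcha\n" : "echo hello\n");
  });
}).listen(8080);
```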
vitonsky
The current incident confirms that we can't trust the DuckDB authors, because they couldn't evade a trivial phishing attack.
Tomorrow it will happen again, and attackers will replace the binaries that this random script downloads, or the script itself will steal crypto, etc.
To make the attack vector harder for hackers, it's preferable to install software from package repositories; on Linux that looks like `apt install python3`.
The benefits are:
1. Repositories are immutable, so an attacker can't replace the binary for a specific version even if they hack DuckDB's entire infrastructure. A remote script can be replaced at any time to run any code.
2. Some repositories have a strict review process, so external reviewers require new versions to pass security checks before they are uploaded.
speedgoose
I also don’t know why using a Unix pipe, instead of saving to the file system and then executing the file, is a significant security risk. Perhaps an antivirus could scan the file if it weren’t piped.
0cf8612b2e1e
They also publish binaries on their GitHub if you prefer that.
greatgib
What is funny is, again, how many "young developers" made fun of old-timers' package managers like Debian's for being so slow to release new versions of packages.
But nobody ever got rooted by malware snuck into an official .deb package.
That was the concept of "stable" in the good old days, when software was really an "engineering" field.
diggan
So far, it seems to be a bog-standard phishing email, with not much novelty or sophistication, seems the people running the operation got very lucky with their victims though.
I'm starting to think we haven't even seen the full scope of it yet, two authors confirmed as compromised, must be 10+ out there we haven't heard of yet?
IshKebab
Probably the differentiating factor here is that the phishing message was very plausible. Normally they're full of spelling mistakes and unprofessional grammar. The domain was also plausible.
I think where they got lucky is
> In hindsight, the fact that his browser did not auto-complete the login should have been a red flag.
A huge red flag. I wonder if browsers should actually detect when you're manually entering login details for site A into site B, and give you an "are you sure this isn't phishing" warning or something?
I don't quite understand how the chalk author fell for it though. They said
> This was mobile, I don't use browser extensions for the password manager there.
So are there mobile password managers that don't even check the URL? I dunno how that works...
jasode
> In hindsight, the fact that his browser did not auto-complete the login should have been a red flag.
>A huge red flag.
It won't be a red flag for people who often see auto-complete not working for legitimate websites. The usual cause is legitimate websites not working instead of actual phishing attempts.
This unintended behavior of password managers changes the Bayesian probabilities in the mind such that username/password fields that remain unfilled becomes normal and expected. It inadvertently trains sophisticated people to lower their guard. I wrote more on how this happens to really smart technical people: https://news.ycombinator.com/item?id=45179643
>So are there mobile password managers that don't even check the URL? I dunno how that works...
The Strongbox password manager on iOS doesn't autofill by default. You have to go into settings to specifically enable that feature. If you don't, it's copy & paste.
cosmic_cheese
Even standard autofill (as in that built into Safari, Firefox, Chrome etc) gets tripped up on 100% legit sites shockingly often. Usually the cause is the site being botched, with mislabeled fields or some unnecessarily convoluted form design that otherwise prevents autofill from doing its thing.
Please people, build your login forms correctly! It’s not rocket science.
diggan
> It won't be a red flag for people who often see auto-complete not working for legitimate websites. The usual cause is legitimate websites not working instead of actual phishing attempts.
Yeah, that's true, I hit this all the time with 1Password+Firefox+Linux (fun combo).
Just copy-pasting the username and password when they don't show up is the wrong approach. The failure gives you a chance to pause and reflect: since it isn't working, look up whether it's actually the right domain, and if it is, add it to the allowed domains so it works in the future.
Maybe it would be best if password managers defaulted to not showing a "copy" button at all for browser logins, and didn't let users select the password, instead prompting them to rely on autofill and fix the domains when autofill doesn't work.
Half the reason I use a password manager in the first place is specifically this issue; the other half is that I'm lazy and don't like typing. It's really weird to hear about people using password managers yet doing the old copy-paste dance anyway.
nightski
This hasn't been my experience at all. I regularly check the bitwarden icon for example to make sure I am not on the wrong site (b/c my login count badge is there). In fact autofill has saved me before because it did not recognize the domain and did not fill.
hiccuphippo
My guess is their password manager is a separate app and they use the clipboard (or maybe it's a keyboard app) to paste the password. No way for the password manager to check the url in that case.
0cf8612b2e1e
I use a separate app like this because I do not fully trust browser security. The browser is such a tempting hacking target (hardened, for sure) that I want to know my vault lives in an offline-only area to reduce chance of leaks.
Is there some middle ground where I can get the browser to automatically confirm I am on a previously trusted domain? My initial thought is that I could use Firefox Workspaces for trusted domains, limited to the chosen set of URLs. I already do this for some sites, but I guess I could expand it to everything with a login.
stanac
You are probably right. Still, browser vendors or even extension devs could create a system where hashes of the username and password are stored and checked on submit to warn about phishing. Not sure I would trust such an extension, unless it were a Firefox-recommended and verified one.
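A hedged sketch of that idea using only standard DOM and WebCrypto APIs (a real extension would use a salted, slow hash and persistent extension storage rather than an in-memory map):

```typescript
// Warn when a password previously submitted to one domain is submitted to another.
async function sha256Hex(s: string): Promise<string> {
  const buf = await crypto.subtle.digest("SHA-256", new TextEncoder().encode(s));
  return [...new Uint8Array(buf)].map(b => b.toString(16).padStart(2, "0")).join("");
}

// Maps a password hash to the domains it has been submitted to before.
const seenOn = new Map<string, Set<string>>();

document.addEventListener("submit", (ev) => {
  const form = ev.target as HTMLFormElement;
  const pw = form.querySelector<HTMLInputElement>('input[type="password"]');
  if (!pw?.value) return;
  ev.preventDefault(); // hold the submit until the check finishes
  void sha256Hex(pw.value).then((hash) => {
    const domains = seenOn.get(hash) ?? new Set<string>();
    const reusedElsewhere = domains.size > 0 && !domains.has(location.hostname);
    if (reusedElsewhere &&
        !confirm(`This password was used on ${[...domains][0]}. Submit to ${location.hostname} anyway?`)) {
      return; // user aborted: likely phishing
    }
    domains.add(location.hostname);
    seenOn.set(hash, domains);
    form.submit(); // resume the original submission (does not re-fire this handler)
  });
}, true);
```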
ecshafer
> Normally they're full of spelling mistakes and unprofessional grammar.
This is the case when you're doing mass phishing attacks trying to catch the most credulous person you can. There, you want the person who will jump through hoop after hoop and keep giving you money. With a more technical audience you wouldn't want that; you want one smart person to make one mistake.
jve
> Normally they're full of spelling mistakes and unprofessional grammar. The domain was also plausible.
I don't get these arguments. Yeah, I was always surprised phishing emails gave themselves away with mistakes (maybe non-native speakers wrote them without any spellcheck), and it was straightforward to improve that. But whatever the text, if I open a link from an email, the first thing I look at is the domain. Not how the site looks. The DOMAIN NAME! Am I on a trusted site? A .help TLD would SURELY ring a bell and trigger research into whether that domain is associated with npm in any way.
At some point my bank redirected me to some weird domain name. Meh, that was annoying; I had to research whether that domain was really associated with them. It was. But they put their users at risk if they teach them that domain names don't signal trust and that whatever domain gets fed to them is acceptable. That is NOT acceptable.
jonhohle
Nearly every email link now goes through an analytics domain that looks like a jumble of random characters. In the best case they end up at the expected site, but a significant number go to B2B service provider of the week’s domain.
There are more than a few instances when I’ve created an account for a service I know I’ve never interacted with before, but my password manager offered to log me in because another business I’ve used in the past used the same service (medical providers, schools, etc.).
Even as a technically competent person, I received a legitimate email from Google regarding old shadow accounts they were reconciling from YouTube, and I spent several hours convinced it was a phishing scheme. It put me on edge for nearly a week that there was no way I could be sure critical accounts were safe, and worse yet, no way someone like my parents or in-laws could be.
bluGill
Unicode means two domain names can be different yet look identical unless you look really closely. Even sticking to ASCII, l (letter) and 1 (number) look so similar that I'd expect many people to miss the difference if it isn't pointed out. (Remember, you don't control the font in use; some fonts differentiate better than others.)
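The Unicode case, at least, is mechanically detectable, since URL parsers apply IDNA encoding to non-ASCII hostnames. A sketch (pure-ASCII confusions like l vs 1 would still need a curated confusables list):

```typescript
// Non-ASCII lookalike labels come out of the URL parser as punycode ("xn--").
function hasNonAsciiLookalikes(url: string): boolean {
  return new URL(url).hostname
    .split(".")
    .some(label => label.startsWith("xn--"));
}

console.log(hasNonAsciiLookalikes("https://аpple.com")); // true (Cyrillic "а")
console.log(hasNonAsciiLookalikes("https://apple.com")); // false
```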
400thecat
More alarming than the .help TLD is that the domain was registered just a few weeks ago. I got scammed just last week when paying with a credit card online, and only later, when investigating, did I discover several identical e-shops on different .shop domains registered just months ago. If a domain is less than a year old, it should raise red flags.
quitit
For regular computer users I recommend a password manager to prevent these types of phishing scams. Since the password manager won't autofill on anything but the correct login website, the user gets a figurative red flag whenever the autofill doesn't happen.
tom1337
At least 1Password on iOS checks the URLs and if you use the extension to fill the password anyway you get a prompt informing you that you are filling onto a new url which is not associated with the login item.
worble
> Normally they're full of spelling mistakes and unprofessional grammar.
Frankly I can't believe we've trained an entire generation of people that this is the key identifier for scam emails.
Because native English speakers never make a mistake, and all scammers are fundamentally unable to use proper grammar, right?
IshKebab
I don't see why you're surprised. It is a key identifier for scam emails. Or at least it was until recently. I don't think anyone was under the impression that scammers could never possibly learn good English.
pixl97
I mean most of the time it's the companies themselves that teach people bad habits.
MyBank: "Don't click on emails from suspicious senders! Click here for more information" { somethingweirdmybank.com } -- Actual real email from my bank.
Like, wtf. Why are you using a totally different domain.
And the companies I've worked for do this kind of crap all the time. "Important company information" { learnaboutmycompany.com }. Is this a random domain someone registered? Nope, it actually belongs to the place I work for, even though we have a well-known and trusted domain.
Oh, and it's the best when the legit sites have their own spelling mistakes.
skeeter2020
>> So far, it seems to be a bog-standard phishing email
The fact this is NOT the standard phishing email shows how low the bar is:
1. the text of the email reads like one you'd get from npm in the tone, format and lack of obvious spelling & grammatical errors. It pushes you to move quicker than you might normally, without triggering the typical suspicions.
2. the landing domain and website copy seem really close to legit, no obfuscated massive subdomain, no uncanny login screen, etc.
All the talk of AI disrupting tech; this is an angle where generative AI can have a massive impact in democratizing the global phishing industry. I do agree with you that there's likely many more authors who have been tricked and we haven't seen the full fallout.
spoaceman7777
It's just a phishing email... there isn't anything novel going on here.
Also, I really don't see what this has to do with gen AI, or what "democratizing the global phishing industry" is supposed to mean even.
Is this comment AI generated?
ApolloFortyNine
If you're someone who barely speaks English, in a third-world country, running a phishing campaign, you can have ChatGPT write you a professional-sounding email in 10 seconds. If you convince it you're running a phishing test, you can probably even have a back-and-forth about the entire design and wording of the email and phishing site.
That's what I'm guessing OP meant.
diggan
Both of those points are fairly common in phishing emails, at least the ones I receive. Cloning the HTML/CSS for phishing has been done for as long as I've been able to receive emails, don't even need LLMs for that :)
r_lee
How does AI relate to this in any way? you can easily clone websites by just copying via devtools, like seriously
same with just copying email HTML
in some ways it's actually easier to make it look exactly the same than to make it different
mvieira38
You can make your phishing bot write tailor-made messages and even respond
polynomial
The article says the victim used 2FA. How did the attacker know their 2FA setup in order to send them a convincing fake 2FA request?
eviks
> This website contained a *pixel-perfect copy* of the npmjs.com website.
Not sure how this emphasis is of any importance; your brain doesn't store a pixel-perfect image of the website, so you wouldn't know whether it's a perfect replica or not.
Let the silicon dummies in the password manager do the matching, don't strain your brain with such games outside of entertainment
stanac
My password manager is a separate app, and I always copy/paste the credentials manually. I believed that approach was more secure; now I see it just trades one attack vector for another.
SAI_Peregrinus
The one I use (KeePassXC) is also a separate app, but there are browser extensions for the major browsers to support autofill. Of course plenty of sites don't actually work with autofill, even the browser builtin autofill, because they don't mark the form fields properly. So autofill not working is common enough that it's not a reliable red flag. Separate password managers have the advantage that they can store passwords for things other than websites, and secret data other than passwords (arbitrary files). KeePassXC's auto-type can work with any application, not just a browser.
eviks
What's the most common example of an alternative attack with autofill?
kaoD
The password manager's autofill browser extension gets compromised.
welder
Please change that now! It's the muscle memory of never typing a password that prevents you from being victim to phishing.
udev4096
A MITM proxy can replicate the whole site; it's almost impossible to distinguish from the real one other than by checking the domain.
hiccuphippo
Maybe email software should add an option to make links unclickable, or show a box with the clear link (and highlight the domain) before letting the user go through it.
They already make links go through redirects (to avoid referrer headers?) so it's halfway there. Just make the redirect page show the link and a go button instead of redirecting automatically. And it would fix the annoyance that is not being able to see the real domain when you hover the link.
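A sketch of that confirmation step (the UI wiring is invented; the point is that the standard `URL` parser makes the true hostname easy to surface):

```typescript
// Instead of auto-redirecting, show where the link really goes, with the
// hostname as the most prominent part.
function describeDestination(raw: string): string {
  const u = new URL(raw);
  return `This link goes to: ${u.hostname}\nFull URL: ${u.href}`;
}

console.log(describeDestination("https://npmjs.help/login?token=abc"));
// This link goes to: npmjs.help
// Full URL: https://npmjs.help/login?token=abc
```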
elric
So many legit emails contain links that pass through some kind of URL shortener or tracker (like mailchimp does). People are being actively conditioned to ignore suspicious looking URLs.
ecshafer
I worked for a company where, as part of phishing training, we were told not to click on suspicious links. However, all links in our mail were rewritten through a proxy link shortener, so www.google.com becomes proxy.com/randomstring, like an internal link shortener/MITM. Which means I can no longer check the URL to see if it's legitimate.
weinzierl
Is this related to npm debug and chalk packages being compromised?
https://www.aikido.dev/blog/npm-debug-and-chalk-packages-com...
kyle-rb
I've been critical of blockchain in the past because of the lack of use cases, but I've gotta say crypto functions pretty well as an underlying bug bounty system. This probably could have been a much more insidious and well hidden attack if there wasn't a quick payoff route to take.
kyle-rb
Ah, apparently other people had thoughts along the same lines: https://news.ycombinator.com/item?id=45183029
tripplyons
That argument only really makes sense if you assume the attackers aren't rational actors. If there was a better, more destructive way to profit from this kind of compromise, they would either do it or sell their access to someone who knew how to do it.
0xbadcafebee
At least the third major compromise in two weeks. (Last one: https://news.ycombinator.com/item?id=45172225) (before that: https://news.ycombinator.com/item?id=45039764)
Forget about phishing, it's a red herring. The actual solution to this is code signing and artifact signing.
You keep a private key on your local machine. You sign your code and artifacts with it. You push them. The packages are verified by the end-user with your public key. Even if your NPM account gets taken over, the attacker does not have your private key, so they cannot publish valid packages as you.
But because these platforms don't enforce code and artifact signing, and their tools aren't verifying those signatures, attackers just have to figure out a way to upload their own poison package (which can happen in multiple ways), and everyone is pwnd. There must be a validated chain of trust from the developer's desktop all the way to the end user. If the end user can't validate the code they were given was signed by the developer's private key, they can't trust it.
This is already implemented in many systems. You can go ahead and use GitHub and 1Password to sign all your commits today, and only authorize unsealing of your private key locally when it's needed (git commits, package creation, etc). Then your packages need to be signed too, public keys need to be distributed via multiple paths/mirrors, and tools need to verify signatures. Linux distributions do this, Mac packages do, etc. But it's not implemented/required in all package managers. We need Npm and other packaging tools to require it too.
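A minimal sketch of that chain using Node's built-in Ed25519 support (in practice the private key would live in a hardware token or OS keychain rather than in process memory, distributing the public key over multiple paths is the hard part, and the file name here is a placeholder):

```typescript
// Sign an artifact with a maintainer-held key; verify it on the consumer side.
import { generateKeyPairSync, sign, verify } from "node:crypto";
import { readFileSync } from "node:fs";

const { publicKey, privateKey } = generateKeyPairSync("ed25519");

// Maintainer side: sign the package tarball before publishing.
const artifact = readFileSync("package.tgz");
const signature = sign(null, artifact, privateKey); // Ed25519 takes no digest name

// Consumer side: the install tool refuses anything the key didn't sign, so a
// stolen npm token alone cannot produce an installable package.
if (!verify(null, artifact, publicKey, signature)) {
  throw new Error("signature mismatch: refusing to install");
}
```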
After code signing is implemented, then the next thing you want is 1) sign-in heuristics that detect when unusual activity occurs and either notifies users or stops it entirely, 2) mandatory 2FA (with the option for things like passkeys with hardware tokens). This will help resist phishing, but it's no replacement for a secure software supply chain.
feross
Disclosure: I’m the founder of https://socket.dev
Strongly agree on artifact signing, but it has to be real end-to-end. If the attacker can trigger your CI to sign with a hot key, you still lose. What helps: 1) require offline or HSM-backed keys with human approval for release signing, 2) enforce that published npm artifacts match a signed Git tag from approved maintainers, 3) block publishes after auth changes until a second maintainer re-authorizes keys. In today’s incident the account was phished and a new token was used to publish a browser-side wallet-drainer. Proper signing plus release approvals would have raised several hard gates.
smw
"2) mandatory 2FA (with the option for things like passkeys with hardware tokens)."
No, with the _requirement_ for passkeys or hardware tokens!
lovehashbrowns
I guess it's hands off the npm jar for a week or three 'cause I am expecting a bunch more packages to be affected at this point.
bakugo
> According to the npm statistics, nobody has downloaded these packages before they were deprecated
Is this actually accurate? Packages with weekly downloads in the hundreds of thousands, yet in the 4+ hours that the malicious versions were up for, not a single person updated any of them to the latest patch release?
hfmuehleisen
DuckDB maintainer here, thanks for flagging this. Indeed the npm stats are delayed. We will know in a day or so what the actual count was. In the meantime, I've removed that statement.
belgattitude
I think you should unpublish rather than deprecate... `npm unpublish package@version` ... It's possible within 72h. One reason is that the patched version contains -alpha... so tools like npm-check-updates would keep 1.3.3 as the latest release for those who installed it.
hfmuehleisen
Yes we tried, but npm would not let us because of "dependencies". We've reached out to them and are waiting for a response. In the meantime, we re-published the packages with newer versions so people won't accidentally install the compromised version.
feross
Disclosure: I’m the founder of https://socket.dev
npm stats lag. We observed installs while the malicious versions were live for hours before removal. Affected releases we saw: duckdb@1.3.3, @duckdb/duckdb-wasm@1.29.2, @duckdb/node-api@1.3.3, @duckdb/node-bindings@1.3.3. Same payload as yesterday’s Qix compromise. Recommend pinning and avoiding those versions, reviewing diffs, and considering a temporary policy not to auto-adopt fresh patch releases on critical packages until they age.
diggan
I think that's pretty unlikely. I'm not even a high-profile npm author, and any npm package I publish ends up being accessed/downloaded within minutes of first publish, and so does every update after that.
I also know of projects that read the update feeds and kick off CI jobs whenever a dependency is updated, to automatically test version upgrades; surely at least one dependent of DuckDB is doing something similar.
ptrl600
Is there a way to configure npm so that it only installs packages that are, like, at least a week old?
feross
Disclosure: I’m the founder of https://socket.dev
A week waiting period would not be enough. On average, npm malware lingers on the registry for 209 days before it's finally reported and removed.
Source: https://arxiv.org/abs/2005.09535
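For what it's worth, npm does have a knob pointing in this direction: the `--before` config (for example, `npm install --before=2025-09-01`) resolves dependencies only to versions that were published on or before the given date, so a team could script that cutoff as "today minus N days". Per the numbers above, though, the delay only helps if the malware is caught within the window.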
HatchedLake721
Don’t auto-install the latest versions; pin each dependency down to the patch version and use package-lock.json.
mdaniel
That's only half the story, as I learned yesterday <https://news.ycombinator.com/item?id=45172213> since even with lock files one must change the verb given to npm/yarn to have them honor the lock file
So, regrettably, we're back to "train users" and all the pitfalls that entails