Snyk security researcher deploys malicious NPM packages targeting cursor.com
293 comments
· January 13, 2025
ArVID220u
cursor dev here. reasonable assumptions, but not quite the case. the snyk packages are just the names of our bundled extensions, which we never package nor upload to any registry. (we do it just like how VS Code does it: https://github.com/microsoft/vscode/tree/main/extensions)
we did not hire snyk, but we reached out to them after seeing this and they apologized. we did not get any confirmation of what exactly they were trying to do here (but i think your explanation that someone there suspected a dependency confusion vulnerability is plausible. though it's pretty irresponsible imo to do that on public npm and actually sending up the env variables)
nomilk
> "pretty irresponsible"
Wouldn't it be more like "pretty illegal"? They could have simply used body: JSON.stringify("worked"), i.e. not sent target machines’ actual environment variables, including keys.
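For illustration, a harmless proof-of-concept postinstall script might look something like this (the collector endpoint and package name here are made up; the point is that nothing from process.env ever leaves the machine):

    // index.js of a hypothetical PoC package: proves the install
    // happened without exfiltrating anything sensitive.
    const https = require('https');

    const body = JSON.stringify({ package: 'example-poc-package', installed: true });

    const req = https.request('https://poc-collector.example.com/hit', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
    });
    req.on('error', () => {}); // never break the victim's install
    req.write(body);
    req.end();

That would have proven the dependency confusion just as well, without anyone having to rotate credentials afterwards.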
reubenmorais
It's an unfortunate incentive structure. If you're doing offensive security research, there's two ways you can go about it: you can report the potential vulnerability without exploiting it, in which case you risk the company coming back to you and saying "thanks but we don't consider this a vulnerability because it's only exploited through misconfiguration and we're too smart for that". Maybe you get some token reward of $50.
Or you can exploit it and say here's the PoC, this many people at your company fell for it, and this is some of the valuable data I got, including some tokens you'll have to rotate. This puts you into actual bug bounty territory. Certainly the PR side of things alone will incentivize them to pay you so you don't make too much noise about how Cursor leaked a bunch of credentials due to a misconfiguration that surely every good programmer knows about and defends against (the way so many vulnerabilities seem dumb in hindsight).
DigitalNoumena
It may interest you that Guy Podjarny, one of the Snyk founders, now has an AI coding company (https://www.tessl.io/about) that looks like a competitor of yours
IAmGraydon
[flagged]
not_a_bot_4sho
It was a thing back in the late 90s. I still do it in casual conversations with friends, less so in professional settings.
It's a gen X thing, like using "lol" to mean literal laughter
johnny22
it's been a thing on irc for at least 20 years. i've been used to it for a long time.
dovin
I like to call it informal case.
furyofantares
When I grew up online in the 90s, on IRC, AOL/AIM, ICQ and web forums, it was extremely common. Most of the people I know from then still do it, and I still do it with them and in many other places, although for whatever reason I don't do it here. Although it's 50/50 when they're on their phones, since phones auto-capitalize by default now.
urig
Rules are put in place to be followed, for a reason. Capital letters at the start of a sentence increase readability. People who don't bother with them are being inconsiderate towards their readers.
pizza
yes but there could be many possible reasons, for instance
- it's muuch faster on mobile
- you're aiming to convey litheness to potential target audiences who will know to recognize it as intentional litheness
- you've gotten used to minimizing the amount of keystrokes necessary for communicating things, to the point it's second nature
- you've worked a lot in the past with older nlp systems, where ignoring capitalization was a given for inputs anyhow, and just got used to treating it as syntactic cruft only strictly necessary in more formal settings ;)
Piisamirotta
I have been thinking of this too. I find it super annoying to read and it looks unprofessional.
demarq
how does this bother you, what greater meaning does it have?
rdegges
Hey there! I run DevRel & SecRel @ Snyk, we just published a piece to help dispel all the rumors, etc. This provides a lot of in-depth info on the situation: https://snyk.io/blog/snyk-security-labs-testing-update-curso...
arkadiyt
> If that's the case, then there's not much to see here
They could have demonstrated the POC without sending data about the installing host, including all your environment variables, upstream. That seems like crossing the line
nomilk
> If that's the case, then there's not much to see here.
Allowing someone full access to the contents of your environment (i.e. output of env command) is a big deal to most, I suspect.
LtWorf
If /proc is mounted you can read all of that.
NitpickLawyer
Wasn't this supposed to be fixed in NPM? I remember a talk by the researcher behind portswigger (sorry blanking on his name) doing this a while back, with great success (apple,ms,meta, basically all faang were vulnerable at that time).
tankster
Also interestingly the Snyk cofounder has started a competitor to cursor https://www.tessl.io/ https://techcrunch.com/2024/11/14/tessl-raises-125m-at-at-50...
I hope there is no foul play.
guappa
Given how all my interactions with them have been extremely negative (see my other comment), I think it's rather likely that there is foul play.
gortok
The only part of the article I disagree with is this line:
> But in general, it’s a good idea not to install NPM packages blindly. If you know what to look for, there are definite signals that these packages are dodgy. All of these packages have just two files: package.json and index.js (or main.js). This is one of several flags that you can use to determine if a package is legit or not.
This works -- maybe OK -- for top-level packages. But it's nearly impossible to vet every transitive dependency this way.
If you're pulling in a package that has 400 dependencies, how the heck would you even competently check 10% of that surface area? https://gist.github.com/anvaka/8e8fa57c7ee1350e3491#top-1000...
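Even just counting what you'd have to look at is sobering. As a rough sketch (assuming a lockfileVersion 2/3 package-lock.json sitting in the current directory):

    // count-deps.js -- how many packages npm will actually install
    const fs = require('fs');

    const lock = JSON.parse(fs.readFileSync('package-lock.json', 'utf8'));
    // lockfile v2/v3 keeps a flat "packages" map; the "" key is the root project
    const installed = Object.keys(lock.packages || {}).filter((k) => k !== '');

    console.log(`${installed.length} packages to vet`);

npm ls --all prints the same information as a tree, but even the raw count makes the point; actually applying the "two tiny files" check means reading every one of them off disk.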
ziddoap
>If you're pulling in a package that has 400 dependencies, how the heck would you even competently check 10% of that surface area?
This would be where different security advice would apply: don't pull in a package that has 400 dependencies.
krainboltgreene
Given the nature of software development and software developers, especially given American companies decide to value shareholder profits over programmer productivity, this might as well be effectively "You don't need to get vaccines, simply don't get sick from other people."
wyldberry
Things like this are supposed to be the province of an organization's security engineering teams, helping to ensure you don't ship something like this. It's hard for them too, because no one wants to force developers to re-implement already-solved functionality.
nightpool
Out of curiosity, I've always meant to ask, are you related to the famous Geoguesser content creator in any way? It's a pretty distinctive last name.
chamomeal
Wait how in the world does a React carousel component have over 400 deps…
jbreckmckye
Do you mean https://www.npmjs.com/package/carousel-react? By the looks of it, this was published by someone 7 years ago as part of a personal project. Nothing uses it.
Going through that list... they all look like personal projects, with no dependents, and a single release by a single person.
chamomeal
Ok now that I’ve actually looked at the package.json, it seems like this must be a joke or something. It’s got packages for CLI arg parsing, math expression evaluation, hashing, etc.
When I’m back on my computer I may look at the source and confirm my suspicion that none of those are required for the carousel functionality lol
cloverich
History of "micro dependencies" where many flexible utilities are split up into separate packages, such that many npm dependencies are a single function (ie rather than a package exporting ten methods, its ten separate dependencies).
Then because there is no standard library, many reinventions of similar but incompatible utilities. etc.
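To make it concrete, a typical micro-dependency is essentially this (a made-up example, but very much in the spirit of real ones):

    // index.js -- the entire package
    module.exports = function isOdd(n) {
      return Math.abs(n) % 2 === 1;
    };

Plus a package.json and a publisher you now implicitly trust, every time something three levels up your dependency tree pulls it in.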
tristor
Because Javascript is a drug that makes developers stupid.
It's almost trite at this point to comment on the obsession that Node has created with developers to reduce functionality to the smallest possible reusable parts, even trivial things, and publish them as packages, then to import and use those dependencies. The idea, in and of itself, is not terrible, but it's been taken to a logical extreme which is very much the definition of terrible.
jbreckmckye
Nearly all of these look like demo projects. You're making inferences about an entire group of developers based on a meme plus a search over the very 'worst' offenders.
KuhlMensch
/giphy "first time?" meme
Sohcahtoa82
Did you think the meme about node_modules having more gravity than a star was just a meme?
It's very much based on reality. The npm ecosystem is just absolutely fucked.
XorNot
This is really where SELinux had the right idea overall: preclassifying files with data about their sensitivity, and denying access based on that, does adequately solve this problem (i.e. keeping npm installations away from id_rsa).
beardedwizard
The issue with SELinux is usability. A company called Intrinsic tried a similar "allowlist" approach to JavaScript, based on the assumption that you could never control this sprawl and had to assume every package was malicious. I never saw the technology take off because generating the allowlist was, of course, error prone.
I'm not sure what has to change in UX to make these approaches more palatable, but if you have to frequently allow 'good' behaviors, my experience is it never takes off.
__MatrixMan__
I think we need to focus on empirical consensus rather than taking as authoritative some file which makes claims about what a particular piece of software will or won't do.
So before running any code you'd hash it and ask your peers: "what do we think this does?"
If it does something surprising, you roll back its effects (or maybe it was in a sandbox in the first place) and you update your peers so that next time they're not surprised.
I keep saying "you" but this would just be part of calling a function, handled by a tool and only surfaced to the user when they ask or when the surprising thing happens.
It could be a useful dataset both for maintainers and for people who want to better understand how to use the thing.
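As a very hand-wavy sketch of what that tool might do (every name here is invented):

    const crypto = require('crypto');
    const fs = require('fs');

    // identify the artifact by its content, not by what it claims about itself
    function fingerprint(path) {
      return crypto.createHash('sha256').update(fs.readFileSync(path)).digest('hex');
    }

    // askPeers and runSandboxed are the hard, unbuilt parts of the idea
    async function runWithConsensus(path, askPeers, runSandboxed) {
      const expected = await askPeers(fingerprint(path)); // e.g. ["reads package.json", "talks to registry.npmjs.org"]
      const observed = await runSandboxed(path);          // behaviors actually seen in the sandbox
      const surprises = observed.filter((b) => !expected.includes(b));
      if (surprises.length > 0) {
        // roll back the effects, then publish the observation so peers aren't surprised next time
      }
      return surprises;
    }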
loaph
> If you're pulling in a package that has 400 dependencies, how the heck would you even competently check 10% of that surface area?
At my place of work we use this great security tool called Snyk. Definitely check it out
/s
3eb7988a1663
I need to get serious about doing all development inside a virtual machine. One project per VM. There are just too many insidious ways in which I can ignorantly slip up such that I compromise my security. My only solace is that I am a nobody without secrets or a fortune to steal.
IDEs, plugins, development utilities, language libraries, OS packages, etc. So much code that I take on blind faith.
redserk
Vagrant’s popularity seems to have died down with Docker containers but it’s by far my favorite way to make dev environments.
Several years ago I worked somewhere that prohibited web browsers and development tools on laptops. If you needed to use a browser, you’d have to use one over Citrix. If you needed to code, you’d use a VDI or run the tools in a VM.
At the time I thought their approach was clinically insane, but I’m slowly starting to appreciate it.
arcanemachiner
I still like Vagrant. But I believe it's yet another victim of the Hashicorp license change debacle from a year or two ago.
Unlike with Terraform/OpenBao, I know of no community effort to keep the open-source version of this project alive. The latest open-source version is still available in the Ubuntu repo, but who knows how long it will work before some form of bit rot occurs.
pizza234
> I still like Vagrant. But I believe it's yet another victim of the Hashicorp license change debacle from a year or two ago.
The license change is irrelevant - from the licensing page:
> All non-production uses are permitted.
Devs who use Vagrant in a development environment can do it as they used to do it before.
> The latest open-source version is still available in the Ubuntu repo, but who knows how long it will work before some form of bit rot occurs.
HashiCorp products have always been intended to be downloaded from the website, since they're statically linked binaries (I don't like that they're huge, but as a matter of fact they make distribution trivial).
fancyswimtime
more so a victim of speed
hresvelgr
Devcontainers[1] are the new incarnation of this pattern. We use them at work and they are a dream for onboarding new developers. The only downside is the VSCode lock-in but if that's a concern there's always DevPod[2].
bluehatbrit
It looks like the team behind it has been moving it towards more of an open standard over the last year. There's now a CLI reference implementation, and the JetBrains IDEs have an implementation for it.
There's also a thread for Zed about a path to implementing it there [0]. Hopefully it'll become a bit more common over 2025.
roland35
I think vs code is the easiest way to set up dev containers, but once they are created I mostly just shell into them and use neovim!
tacticus
This is the practice in many government sites these days.
Except the vm is some old windows version without any tools on it. no shell access.
can't actually do anything useful on there at all.
VDI systems could work if implemented properly. but that's the last thing a security team actually wants to do.
dacryn
VDI is actually preferred by our security teams, because they have complete deep packet inspection on literally all traffic going in and out.
On our laptops, there are still some flows that avoid the vpn etc..
pmontra
A customer of mine still uses vagrant on a project, for local development. That project started in 2016. We are developing on a mix of Linux, Mac, Windows and it's not as straightforward as it could be. Linux is easier, Windows is messier.
A newer project fires up VMs from a Python script that calls an adapter for EC2 (with the boto library) when run on AWS, and for VirtualBox (by calling VBoxManage) when running locally. That allows us to simulate EC2 locally: it's a project that has to deal with many long jobs, so we start VMs for them and terminate the VMs when the jobs are done. That also runs better on our mix of development systems. WSL2 helped to ease the pains of developing on Windows. We call the native Windows VirtualBox, not the one we could have installed inside WSL2, but we keep most of the code that runs on Linux.
spike021
At my first job almost 10 years ago we had the concept of "X-in-a-box" using Vagrant + VMs and I miss that pattern so much ever since (multiple job skips later).
None of my jobs since have had any semblance of a better way to set up a local dev environment easily.
It was just way easier to encapsulate services or other things in a quickly reproducible state.
I digress..
jsjohnst
> At the time I thought their approach was clinically insane
Let’s be clear, it’s still clinically insane, even if marginally rationalized.
flyinghamster
I started using Ansible a few years back to set up VMs (or Raspberry Pis) with a consistent environment. Once I wrapped my head around it, I've found it very nice for any situation where I need to treat systems as livestock rather than pets.
bloopernova
I use Ansible in local only mode to install/configure macOS as a development environment.
Works well with Homebrew, and copies all the config files that devs often don't set up.
buildbot
Vagrant is still kicking! But yeah not as popular as back in 2014-2016?
A hybrid(?) alternative is enroot, which is pretty neat IMO, it converts a docker container into a squashfs file that can be mounted rw or used in an ephemeral way. https://github.com/NVIDIA/enroot
whitehexagon
It's horrible that trust is being eroded so much, and seeing monthly GB-sized updates to my OS doesn't reassure me at all. I like the idea of having a stable isolated VM for each project. Are there standard open-source tools to do this?
Specifically, I'm transitioning my Go and Zig development environments from an old Mac to an M1 with Asahi Linux and getting a bit lost even finding replacements for TrueCrypt and Little Snitch. Do these VM tools support encrypted VMs with firewall rules? I saw Vagrant mentioned here and that sounds like it might cover the network isolation, but what else would you suggest?
pritambaral
I run all my dev environments under LXD. Even the IDE: full graphical Emacs (or Vim) over X11 forwarding over SSH. Host is Wayland, so security concerns with X are handled. WayPipe also works, but is jankier than X, probably because X, unlike Wayland, was designed for network transparency.
LXD, unlike Docker, doesn't play fast-and-loose with security. It runs rootless by default, and I don't allow non-root access to the LXD socket on host. Each container is a full userspace, so it's much more convenient to configure and use than Dockerfiles.
SSH from a container to a remote works transparently because I forward my SSH Agent. This is secure because my agent confirms each signing request with a GUI popup (on host).
3eb7988a1663
Can you point to a write-up somewhere that details this setup?
Part of the appeal of VMs is that they were built with security as a primary objective. I probably have to do something stupid to break that isolation. A custom ad hoc configuration makes me a bit nervous that I will unknowingly punch a Docker-sized hole through my firewall and have less security than if I ran a stock workflow.
stevage
I always used to do that, using Vagrant. Mostly because it was the only practical way to maintain independent environments for the tools I was using.
These days I work in JavaScript and rarely have issues with project environments interfering with each other. I've gotten lazy and don't use VMs anymore.
In theory docker-type setups could work, but they just seem like so much effort to learn and set up.
smatija
Seconding Vagrant - especially because it's the only reasonable way I've found so far to test a Linux release on my Windows rig (I'd prefer to dev on Linux, but a Windows-only company is a Windows-only company).
Basically I put a Vagrantfile in the src folder, then run docker compose with db, caddy, app server and other services inside it - then I forward ports 80 and 443 from the VM and use localhost.whateverdomain.igot with a self-signed cert on caddy (since https is just different enough from http that I otherwise get bitten by bugs every so often).
When I start a new project I can usually just copy the Vagrantfile with minimal changes.
mjl-
i develop on linux, on various projects. i'm mostly concerned with all the tools, build scripts and tests that may read sensitive data, or accidentally destroy data. so i'm limiting access to files when working on a project with linux namespaces, using bubblewrap.
i've got a simple per-project dot file that describes the file system binds. while i'm working on a project, new terminals i open are automatically isolated to that project based on that dot file. it has very low (cognitive) overhead and integrates pretty much seamlessly. i suspect many developers have similar scripts. i looked for projects that did this some time ago, but couldn't find it. either because it's too simple to make a project about, or because i don't know how others would describe it. if anyone has pointers...
i don't limit network access (though i did experiment with logging all traffic, and automatically setting up a mitm proxy for all traffic; it wasn't convenient enough to use as regular user). there is still a whole kernel attack surface of course. though i'm mostly concerned about files being read/destroyed.
arkh
Time to main Qubes OS on your development machine. https://www.qubes-os.org/
3eb7988a1663
I actually did try to install Qubes over the holiday, but I repeatedly encountered installation failures and could not ever login to the system. Someone had posted an identical issue, but they were similarly stymied. I should revisit, but my initial foray tells me I am going to have to withstand quite a few papercuts in order to get the isolation I want.
sim7c00
never had issues with qubes like that, but i did pick something tested (hw). u can check the hardware compat list. it also has some good links to forums for specific hw-related tweaks u might need. that being said, running qubes fully and working with it is something else... i decided i am uninteresting enough to just use ubuntu these days :p... maybe sometime ill have the patience again.
dacryn
I wonder how this is mitigated by my current workflow of running jupyter and vscode from a docker container.
I did not start doing this because of security, but just to get something more or less self-managed without any possibility of breaking different projects. I am tired of my team spending too much time on extensions, versions, packages, ...
Docker compose files have saved our team many hours, even if it's extremely wasteful to have multiple vscode instances running alongside each other
technion
I think a lot of the issue in this particular example is that API keys, once leaked, act as single-factor passwords.
If you ran a keylogger on my machine you would never get into any major site with MFA. You couldn't watch me log on to the Azure console with a passkey and do much with it. But if you scrape a saved key with publish abilities, bad things happen.
chrismarlow9
What's to stop me from installing custom certs and MITMing your login session, proxying the info? Or an extension to harvest the data after you log in? I'm pretty sure if I have root it's game over one way or another. The surface is massive.
technion
At that point you've done something much more invasive and detectable than exporting a .env file, and you've walked away with a very short-lived token. There's always "something more an attacker can do"; I'll stand by the view that requiring further authentication to perform interactive actions and pushes is worthwhile.
cedws
I started doing development under a separate non-admin user on my MacBook. I switch to another user for personal stuff, or the admin user to install stuff with Homebrew. Doesn't protect from zero days but it's better than nothing.
3eb7988a1663
I toyed around with this a bit, and it feels like it has significant merit. User separation is about the only security boundary built into Linux from the beginning. I was not totally happy with the workflow I adopted, but it is probably going to be less burdensome than the VM approach.
cedws
With Fast User Switching on macOS it's pretty convenient too. The difficulty is remembering to switch user when changing contexts. I tried to set a different wallpaper/icon for each user to make it more obvious which user I'm on, but macOS just resets them all to be the same.
weinzierl
I know where you are coming from and I considered this myself again and again. For me and for now it is not something I want to do and not primarily because of the effort.
The VM might protect me, but it will not protect the users of the software I am producing. How can I ship a product to the customer and expect them to safely use it without protection when I myself only touch it when in a hazmat suit?
No, that is not the environment I want.
My current solution is to be super picky with my dependencies. More specifically, I hold the opinion that we should trust neither projects nor companies but only people. This is not easy to do, but I do not see a better alternative for now.
guappa
snyk is the same company that, instead of rotating public keys, just… changes them without notice. https://github.com/snyk/cli/pull/5649
They also mark projects as "abandoned" if they move to any other forge that isn't github. And they stay abandoned even if new releases appear on npm/pypi :D
Their competence doesn't match their fame, in my opinion.
Also one of their sales people insulted me over email, because apparently not being interested in buying their product means you're an incompetent developer who can only write software filled with vulnerabilities.
azemetre
They also penalize libraries that are "done," and require minimal development.
Completely backwards software that corpos only seem to buy because their insurers force them to tick some box on a security checklist.
alp1n3_eth
That's extremely unfortunate, especially about the "abandoned" labelling. I've been looking to move off GitHub recently as well, it feels like it's got a bit too much control.
Codeberg looks interesting, and there are self-hosted options like Forgejo that also look great if you're okay with the maintenance.
gyoridavid
"insulted me over email" - whoa, that's wild, do you still have the email? would be fun to see it :D
guappa
Sorry, I searched, it seems all my emails from before the last company rename are gone.
edit: or Microsoft Outlook sucks… I tried to sort my inbox in reverse to see what's the oldest email there and got "the request cannot be satisfied"
unixhero
Ouch, I kind of trusted it.
... more than Gmail and Google
bilekas
> They also mark projects as "abandoned" if they move to any other forge that isn't github. And they stay abandoned even if new releases appear on npm/pypi :D
Well, there's a sign of a good team... /s
That's actually an interesting take, I haven't heard too much about them except that they do have an ego.
Ylpertnodi
I'm sure you can provide the body of the [appropriately redacted] said email?
guappa
I was also sure until I found out that outlook refuses to search old emails.
woodruffw
Without more context, this doesn't look great for Snyk either way: either they have an employee using NPM to live test their own services, or they have insufficient controls/processes for performing a legitimate audit of Cursor without using public resources.
tru3_power
Why not? NPM behaves oddly when there is a public package named the same as one on a private repo, in some cases it’ll fetch the public one instead. I believe it’s called package squatting or something. They might have just been showing that this is possible during an assessment. No harm no foul here imo
woodruffw
> They might have just been showing that this is possible during an assessment. No harm no foul here imo
You're not supposed to leave public artifacts or test on public services during an assessment.
It's possible Cursor asked them to do so, but there's no public indication of this either. That's why I qualified my original comment. However, even if they did ask them to, it's typically not appropriate to use a separate unrelated public service (NPM) to perform the demo.
Source: I've done a handful of security assessments of public packaging indices.
guappa
Comments here seem to indicate that cursor did NOT ask them to (unless of course someone inside the company did and didn't tell the others)
compootr
if Cursor is secure it shouldn't be a problem for them! (and, according to their comments, it is)
BeefWellington
"No Harm No Foul" in this case would be a simple demonstrative failure case, not functioning malware.
nikcub
Looks like a white hat audit from Snyk testing. Got flagged because oastify.com is a default Burp Collaborator server.
They should be running a private npm repo for tests (not difficult to override locally) and also their own collaborator server.
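For example, with a throwaway local Verdaccio instance the override for a test project is just an .npmrc (URL and scope here are only examples):

    # .npmrc in the test project: resolve installs against the local registry
    registry=http://localhost:4873/

    # or scope only the internal names, if they're published under a scope
    # @internal:registry=http://localhost:4873/

Either way, nothing needs to touch the public registry to demonstrate the resolution behaviour.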
Cthulhu_
It's not white hat, because they actively extract data; if it were just to prove it worked they could've done a console.log, caused npm install to fail, or not exfiltrated a payload.
that_guy_iain
The data they extract is nothing sensitive and this way they can see how many hits they get. The more affected the bigger the headline for them.
__jonas
In what world is "all environment variables" nothing sensitive?
fintechie
Hopefully this makes the Cursor team reconsider security (which doesn't seem very good really).
Stopped using it for serious stuff after I noticed their LLMs grab your whole .env files and send them to their server... even after you add them to the .cursorignore file. Bizarre stuff.
Now imagine a bad actor exploiting this... recipe for disaster.
miohtama
Security often means the opposite of scalability and growth, so why should they? The business goal is to make sure Cursor grows large enough that they have economies of scale to be a viable business.
If you want a secure LLM you can use Mistral, which comes with all the EU limitations, good and bad.
yunwal
Mistral (an LLM company) is not really a substitute for cursor (an IDE). Tabby is probably the closest open-source alternative. https://github.com/TabbyML/tabby
mirkodrummer
Looks like NPM is generating jobs for those in the security field. It’s an unfixable mess, I really hope some competition like JSR will put enough pressure on the organization.
devjab
It's not just NPM, it's the trust in third-party libraries in general. Even though it's much rarer, you'll see exploits on platforms like NuGet. You're also going to see them on JSR. You have more security because they are immutable, but you're not protected from downloading a malicious package before it's outed.
I think what we're more likely to see is that legislation like DORA and NSIS increasingly requires that you audit third-party packages, enforcing a different way of doing development in critical industries. I also think you're going to see a lot less usage of external packages in the age of LLMs. Because why would you pull an external package to generate something like your OpenAPI specification when any LLM can write a CLI script that does it for you in an hour or two of configuring it to your needs? Similarly, you don't need to use LLMs directly to auto-generate "boring" parts of your code; you can have them build CLI tools which do it. That way you're not relying on outside factors, and while I can almost guarantee that these CLI tools will be horrible cowboy code, their output will be what you refine the tools to make.
With languages like Go pushing everything you need in their standard packages, you're looking at a world where you can do a lot of things with nothing but the standard library very easily.
dannyallan
Snyk Research Labs regularly contributes back to the community with testing and research of common software packages. This particular research into Cursor was not intended to be malicious and included Snyk Research Labs and the contact information of the researcher. We were very specifically looking at dependency confusion in some VS Code extensions. The packages would not be installed directly by a developer.
Snyk does follow a responsible disclosure policy and while no one picked this package up, had anyone done so, we would have immediately followed up with them.
luma
Spraying your attack into the public with hopes of hitting your target is the polar opposite of responsible. The only "good" part of this is that you were caught in the act before anyone else got hit in the crossfire.
In response, you suggest that you'll send a letter of apology to the funeral home of anyone that got hit. Compromising their credentials, even if you have "good intentions", still puts them into a compromised position and they have to react the same as they would for any other malevolent attacker.
This is so close to "malicious" that it's hard to perceive a difference.
edit: Let's also remind everyone that a Snyk stakeholder is currently attempting to launch a Cursor competitor, so assuming good intentions is even MORE of a stretch.
yabones
This is grey-hat at best. Intent may have been good, but the fact is that this team created and distributed software to access and exfiltrate data without permission, which is very illegal. You may want to consult with the legal department before posting about this on a public forum, FYI.
senorrib
Cool. Why phone home the user's environment, then? The vulnerability could very much be confirmed by simply sending a stub instead of live envs.
etyp
Seems reasonable enough, but why would it (allegedly) send environment variables back via a POST? Even if it's entirely in good faith, I'd rather some random package not have my `env` output..
austinkhale
Upvoting this since presumably you're actually the CTO at Snyk and people should see your official response, but wow this feels wildly irresponsible. You could have proved the PoC without actually stealing innocent developer credentials. Furthermore, additional caution should have been taken given the conflict of interest with the competitor product to Cursor. Terrible decision making and terrible response.
pizzalife
What is responsible about sending the environment over in a proof of concept?
rdegges
Hey there! I run DevRel & SecRel @ Snyk, we just published a piece to help dispel all the rumors, etc. This provides a lot of in-depth info on the situation: https://snyk.io/blog/snyk-security-labs-testing-update-curso...
The TL;DR is that our security research team routinely hunts for various vulnerabilities in tools developers use. In this particular case, we looked at a potential dependency confusion attack in Cursor, but found no vulnerabilities.
There's no malicious intent or action here, but I can certainly understand how it appears when there's not a ton of information and things like this occur! As a sidenote, I use Cursor all the time and love it <3
kittikitti
NPM packages are the most bloated and unreadable pieces of code I've encountered. The creator of Node apparently hates all software, and yet Google gave him the captain's hat and we're left with the absolute crapshoot that is web development. I feel guilty about an additional 1 KB of code or 500 bytes of RAM, but this is seen as an outsider opinion. I hope big tech rots and this is just a symptom. https://news.ycombinator.com/item?id=3055154
zelphirkalt
NPM packages VS Wordpress plugins ... I think it is a head to head race there.
n2d4
[EDIT: See the response by a Cursor dev below — looks like it was not authorized by them]
Sounds to me like Cursor internally has a private NPM registry with those packages. Because of how NPM works, it's quite easy to trick it into fetching the packages from the public registry instead, which could be used by an attacker [0].
Presumably, this Snyk employee either found or suspected that some part of Cursor's build is misconfigured as above, and uploaded those packages as a POC. (Given the package description "for Cursor", I'd think they were hired for this purpose.)
If that's the case, then there's not much to see here. The security researcher couldn't have used a private NPM registry to perform the POC if the point is to demonstrate a misconfiguration which skips the private registry.
.
[0] In particular, many proxies will choose the public over the private registry if the latest package version is higher: https://snyk.io/blog/detect-prevent-dependency-confusion-att...
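To spell out the proxy scenario in [0]: if the internal package sits at, say, version 1.2.0, the attacker just publishes the same name publicly with an absurdly high version and an install script. Roughly (name and version invented here):

    {
      "name": "some-internal-extension",
      "version": "99.99.99",
      "scripts": {
        "postinstall": "node index.js"
      }
    }

A proxy configured to prefer the highest version hands that to the build, and the postinstall script then runs automatically on whatever machine did the install.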