
I hacked a dating app (and how not to treat a security researcher)

michaelteter

Not excusing this in any way, but this app is apparently a fairly junior effort by university students. While it should make every effort to follow good security (and communication) practices, I'd not be too hard on them considering how some big VC-funded "adult" companies behave when presented with similar challenges.

https://georgetownvoice.com/2025/04/06/georgetown-students-c...

tmtvl

I vehemently disagree. 'Well, they didn't know what they were doing, so we shouldn't judge them too harshly' is a silly thing to say. They didn't know what they were doing _and still went through with it_. That's an aggravating, not extenuating, factor in my book. Kind of like if a driver kills someone in an accident and then turns out not to have a license.

michaelteter

Still not excusing them, but these HN responses are very hypocritical.

US tech is built on the "move fast and break things" mentality. Companies with huge backers routinely fail at security, and some of them actually spend money to suppress those who expose the companies' poor privacy/security practices.

If anything, college kids could at least reasonably claim ignorance, whereas a lot of HN folks here work for companies who do far worse and get away with it.

Some companies, some unicorns, knowingly and wilfully break laws to get ahead. But they're big, and people are getting rich working for them, so we don't crucify them.

mmanfrin

> They didn't know what they were doing _and still went through with it_

You don't know what you don't know; sometimes people think they know what they're doing simply because they haven't yet hit the situations that prove otherwise. We were all new to programming once; no one would ever become a solid engineer if fear of making mistakes they couldn't foresee kept them from building anything.

dmitrygr

+1: if you cannot do security, you have no business making dating apps. The kind of data those collect can ruin lives overnight. This is not a theory; here is a recent example: https://www.bbc.com/news/articles/c74nlgyv7r4o

satanfirst

The claim that it should have come up in a government vetting process seems to be proof that one should publish one's own dating information oneself before entrusting it to a site that might lose it or, worse, might provide it to a government specifically.

steeeeeve

I would agree with you. Dating app data might not be legally protected like some PII out there, but there are easily foreseeable bad consequences from compromised dating app data of any kind. Security should be accounted for from the very beginning.

burnt-resistor

If you cannot do security, you have no business making any app people use in significant numbers containing Personally Identifiable Information (PII).

Perhaps, like GDPR, HIPAA, and similar regimes, any web or platform app that holds login details and/or PII should be required to distance itself from haphazard, organic, unprofessional, and (badly) amateurish processes and technologies, and conform to trusted, proven patterns, processes, and technologies that are tested, audited, and preferably formally proven correct. Without formalization and professional standards there are no standards, and these preventable, reinvent-the-wheel-badly hacks will keep doing the same thing and expecting a different result™. Massive hacks, circumvention, scary bugs, and other attacks will continue. And I think this means a proper amount of accreditation, routine auditing, and (the scary word, but applied smartly) regulation, with appropriate leadership on the government/NGO side, to drag the industry, kicking and screaming if need be, from an under-structured wild west™ into professionalism.

LadyCailin

This is exactly why I think software engineering should require licensing, much like civil engineering. I get that people will complain about that destroying all sorts of things, and it might, yes, but fight me. Crap like this is exactly why it should be a requirement, and why you won't convince me that the idea isn't, in general, a good one.

viraptor

While the idea is good, I'm not sure how this would get implemented realistically. The industry standards/audits are silly checkbox exercises rather than useful security. The biggest companies are often terrible as far as secure design goes. Government security rules lag years behind the state of the art. For example, how long did it take NIST to stop recommending periodic password changes?

Civil engineering works well because we've mostly figured it out anyway. But looking at PCI, SOX, and others, we'd probably just require people to produce a book's worth of documentation and an audit trail to accompany their broken software.

Implicated

Agreed. My stance on this changed over the course of some years after a close family member married an actual engineer (structural) and I got a lot of insight into that world.

It's astonishing to me the ease with which software developers can wreak _real_, measurable damage on billions of lives and face no real liability for it.

Software developers shouldn't call themselves engineers unless they're licensed, insured and able to be held liable for their work in the same way a building engineer is.

Anon1096

I'm curious how you think this would be implemented. Do you think you should need a license to publish on GitHub? To write code on your own computer and run it? Because this was just a startup that some kids founded, so saying that a license would have to be a prerequisite for hiring somebody would not cut it. You'd have to cut off their ability to write/run code entirely.

motorest

> This is exactly why I think software engineering should require a licensing requirement, much like civil engineering.

Civil engineering requires licensing because there are specific activities that are reserved for licensed engineers, namely things that can result in many people dying.

If a major screwup doesn't even motivate victims to sue a company then a license is not justified.

hackable_sand

Yes, I will happily fight against authoritarian takes cloaked in vagueness.

voytec

I've also hit this link trying to get any info on "Cerca". It's from April 2025 and praises an app created two months earlier. It reads like LLM-hallucinated garbage. OP's post mentions contacting the Cerca team in February. So this is either about a flaw detected at launch, or some weird scheme.

Nonetheless: "two months old vulnerability" and "two months old students-made app/service".

michaelteter

Ah that's a shame.

It's hard to tell these days what is real.

LinkedIn shows it was founded in 2024, with 2-10 employees. And that same LinkedIn page has a post which links directly to this blurb: https://www.readfeedme.com/p/three-college-seniors-solved-th...

The date of this article is May 2025, and it references an interview with the founders.

bearsyankees

I think the date there is March 25

barbazoo

How is one supposed to know that it's just a bunch of script kiddies we shouldn't be too hard on, if their apps get released under "Cerca Applications, LLC"?

yard2010

These guys should probably study something else.

imiric

That sounds like you're excusing them.

You know what else was an app built by university students? The Facebook. We're all familiar with the "dumb fucks" quote, with Meta's long history of abusing their users' PII, and their poor security practices that allowed other companies to abuse it.

So, no. This type of behavior must not be excused, and should ideally be strongly regulated and fined appropriately, regardless of the age or experience of the founders.


peterldowns

I hear you but if you're processing passports and sexual preferences you have to at least respond to the security researcher telling you how you're leaking them to absolutely anyone. This is a total clusterfuck and there are zero excuses for the lack of security here.

genewitch

I have an idea: if you don't know anything about app security, don't make an app. "Whataboutism" notwithstanding, this actually made me feel a little ill, and your comment didn't help. I have younger friends that use dating sites, and having their information exposed to whoever wants it is gross; the people who made it should feel bad.

They should feel bad about not communicating with the "researcher" after the fact, too. If I had been blown off by a "company" after telling them everything was wide open to the world for the taking, the resulting "blog post" would not be so polite.

STOP. MAKING. APPS.

dylan604

Stop pushing POCs into PROD.

There's nothing wrong with making your POC/MVP with all of the cool logic that shows what the app will do. That's usually done to gain funding of some sort, but it happens before release. Part of the release stage should be a revamped, hardened version of the POC, not the damn POC itself. That hardened version is where the security gets added.

That's much better than telling people stop making apps.

genewitch

These "devs" released an app to prod that took passport information and who knows what else. They had no business asking for any of that PII.

If all of the developers were named and shamed, would you, as a hiring manager, ever hire them to develop an app for you? Or would you, in fact, tell them to stop making apps?

They enabled stalkers. There's no possible way to argue that they didn't: some random person looked into it just because their friends mentioned the app, and found all of this. I guarantee that if anyone with a modicum of security knowledge looks the platform over, there will be a lot more issues.

It's one thing to be curious and develop something. It's another to seek VC/investments to "build out the service" by collecting PII and not treating it as such. Stop. Making. Apps.

imiric

You're shouting into the void. The people making this type of product have zero regard for their users' data, or for engineering and security best practices. They're using AI to pump out a product as quickly as possible, and if it doesn't work (i.e. make them money), they'll do it again with something else.

This can only be solved by regulation.

rs186

There is a point to your comment, but I am afraid you are shouting at the wrong thing.

Instead, I think the fair approach is this: anyone is free to make a website/app/VR world/whatever, but if it stores any kind of PII, you had better know what you are doing. The problem is not security; the problem is PII. If someone's AWS key gets hacked, leaked, and used by others, well, that's bad, but it's different from my personal information getting leaked and someone applying for a credit card on my behalf.

ghssds

Programming should require a government-issued license reserved for alumni of duly certified schools. Possession of a Turing-complete compiler or interpreter without permission should be a felony.

motorest

> Programming should require a government-issued license reserved for alumni of duly certified schools.

Nonsense. I've met PhDs in computer science who were easily out-performed by kids fresh out of coding bootcamps. Do you think that spending 5 years sitting a few written exams makes you competent at cybersecurity? Absurd.

yamazakiwi

You’ve successfully contributed 20 pts to your institutional privilege score; Impressive! You're just one step away from your next badge:

"Class Immobility" (95% of users unlock this without trying!)

How to unlock: Be denied access to an accredited education. Work twice as hard for half the recognition. Watch opportunities pass you by while gatekeepers congratulate themselves!

yibg

At the end of the day it's an ROI analysis (using the term loosely here; it's more of a gut feel). What are the costs and benefits of making an app more secure versus pushing out an insecure version faster? Unfortunately, in today's business and funding climate, the latter has the better payoff (for most things, anyway).

Until the balance of incentives changes, I don't see any meaningful change in behavior unfortunately.

SpaceL10n

I worry about my own liability sometimes as an engineer at a small company. So many businesses operate outside of regulated industries, where PCI or HIPAA don't apply. For smaller organizations, security is just an engineering concern, not an organizational mandate. The product team is focused on the features, the PM is focused on the timeline, QA is focused on finding bugs, and it goes on and on, but rarely is there a voice of reason speaking about security. Engineers are expected to deliver the tasks on the board and little else. If the engineers can make the product secure without hurting the timeline, great. If not, the engineers end up catching heat from the PM or whomever.

They'll say things like...

"Well, how long will that take?"

or, "What's really the risk of that happening?"

or, "We can secure it later, let's just get the MVP out to the customer now"

So, as an employee, I do what my employer asks of me. But, if somebody sues my employer because of some hack or data breach, am I going to be personally liable because I'm the only one who "should have known better"?

SoftTalker

You're not really an engineer. You won't be signing any design plans certifying their safety, and you won't be liable when it's proven that they aren't safe.

kohbo

Depends on your industry. Even if SWEs aren't out here getting PEs, there is absolutely someone signing off on all things safety-related.

remus

As an engineer in a small org, I think it's our responsibility to educate the rest of the team about these risks and push to make sure mitigating them gets engineering time. It's not easy, but it's important stuff that could sink the business if it's not taken seriously.

pixl97

If it's an LLC/corp you should be protected by the corporate veil, unless you've otherwise documented that you're committing criminal behavior.

But yeah, the lack of security standards across organizations of all sizes is pitiful. Releasing new features always seems to come before ensuring good security practices.

sailfast

I would personally want to know the law well enough to protect myself, push back on anything illegal in writing, and then get written approval to disregard so I'd be totally covered - but I understand that even this can be hard if you're one or two devs deep at a startup or whatever. Personally, if I didn't think they were pursuing legal work I'd leave.

kelnos

As much as I despise the "I was just following orders" defense, do make sure you get anything like that in writing: an email trail where you raise your concerns about the lack of security, with a response from a boss saying not to bother with it.

Not sure where you are located, but I don't know of any case where an individual rank-and-file employee has been held legally responsible for a data breach. (Hell, usually no one suffers any consequences for data breaches. At most the company pays a token fine, and they move on without caring.)

hnlmorg

> do make sure you get anything like that in writing: an email trail where you raise your concerns about the lack of security, with a response from a boss saying not to bother with it.

A few years ago I was put in the situation where I needed to do this and it created a major shitstorm.

“I’m not putting that in writing” they said.

However it did have the desired effect and they backed down.

You do need to be super comfortable with your position in the company to pull that stunt, though. This was at a UK firm and I was managing a team of DevOps engineers, so I had quite a bit of respect in the wider company as well as stronger employment rights. I doubt I'd have pulled this stunt as a much more replaceable software engineer in an American startup. And particularly not in the current job climate.

hiatus

Are you an officer of the company? If not, I wouldn't think you could be personally liable.

yieldcrv

not in my experience

andrelaszlo

Oops! Nice find!

To limit his legal exposure as a researcher, I think it would have been enough to create a second account (or ask a friend to create a profile and get their consent to access it).

You don't have to actually scrape the data to prove that there's an enumeration issue. Say your id is 12345 and your friend signs up and gets id 12357: that should be enough to prove that you can find the id and access the profile of any user.
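A minimal sketch of that consent-based check, in TypeScript (the endpoint shape, auth header, and names are my assumptions, not Cerca's actual API):

    // Prove the IDOR with two consenting accounts, without scraping anyone else.
    const BASE = "https://api.example-dating-app.test"; // hypothetical host

    async function proveIdor(myToken: string, friendId: number): Promise<void> {
      // Request the friend's profile while authenticated as *your* account.
      const res = await fetch(`${BASE}/users/${friendId}`, {
        headers: { Authorization: `Bearer ${myToken}` },
      });
      // Don't log the PII itself; the status code alone demonstrates the flaw.
      console.log(res.ok
        ? `IDOR confirmed: HTTP ${res.status} for user ${friendId}`
        : `Access denied, as it should be: HTTP ${res.status}`);
    }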

As others have said, accessing that much PII of other users is not necessary for verifying and disclosing the vulnerability.

ofjcihen

This is the standard and obvious way to go about things, and most security researchers ignore it.

While you can definitely want PII protected, scraping data to prove a point is unnecessary and hypocritical.

mtlynch

This is a pretty confusing writeup.

>First things first, let’s log in. They only use OTP-based sign in (just text a code to your phone number), so I went to check the response from triggering the one-time password. BOOM – the OTP is directly in the response, meaning anyone’s account can be accessed with just their phone number.

They don't explain it, but I'm assuming that the API is something like api.cercadating.com/otp/<phone-number>, so you can guess phone numbers and get OTP codes even if you don't control the phone numbers.

>The script basically just counted how many valid users it saw; if after 1,000 consecutive IDs it found none, then it stopped. So there could be more out there (Cerca themselves claimed 10k users in the first week), but I was able to find 6,117 users, 207 who had put their ID information in, and 19 who claimed to be Yale students.

I don't know if the author realizes how risky this is, but this is basically what weev did to breach AT&T, and he went to prison for it.[0] Granted, that was a much bigger company and a larger breach, but I still wouldn't boast publicly about exploiting a security hole and accessing the data of thousands of users without authorization.

I'm not judging the morality, as I think there should be room for security researchers to raise alarms, but I don't know if the author realizes that the law is very much biased against security researchers.

[0] https://en.wikipedia.org/wiki/Goatse_Security#AT&T/iPad_emai...

lima

> They don't explain it, but I'm assuming that the API is something like api.cercadating.com/otp/<phone-number>, so you can guess phone numbers and get OTP codes even if you don't control the phone numbers.

They mention guessing phone numbers, and then the API call for sending the OTP... literally just returns the OTP.

mtlynch

Yeah, I guess there's no reason for the API to ever return the OTP, but the severity depends on how you call the API. If the API is `api.cercadating.com/otp/<unpredictable-40-character-token>`, then that's not so bad. If it's `api.cercadating.com/otp/<guessable four-digit number>` that's a different story.

From context, I assume it's closer to the latter, but it would have been helpful for the author to explain it a bit better.

bearsyankees

Hi, author here! My bad if that was not clear. The endpoint was just a POST request where the body was the phone number, so that is all you needed to know to take over someone's account.
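Roughly speaking, the takeover was as simple as this sketch (the path and field names here are illustrative, not the actual API; run it in an async/ES-module context):

    // Illustrative only: POST the victim's phone number, and the vulnerable
    // API echoed the login code straight back in the response body.
    const res = await fetch("https://api.example.test/auth/request-otp", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ phone: "+15555550123" }),
    });
    console.log(await res.json());
    // Something like { "otp": "123456" } came back -- enough to finish the
    // login flow as that user without ever touching their phone.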

tptacek

Read the original complaint in the Auernheimer case. Prosecutors had (extensive) intent evidence that is unlikely to exist here. The defendants in that case were also accused of disclosing the underlying PII, which is not something that appears to have happened here.

SoftTalker

I was going to say the headline of the post, "I hacked..." could almost be taken as a confession. But that's not the actual title of the linked article. I'm almost tempted to flag this submission for clickbait embellishment in the title.

mtlynch

Yeah, I agree Auernheimer was a much more attractive target for prosecution, but do you think this student is legally safe in what they're doing here?

tptacek

I would personally not scrape the endpoint to collect statistics and inform the severity estimation, but I'm a lot more risk averse than most. But prosecution of good-faith security research is disfavored, so as long as you don't do anything to breach the assumption of good faith (as defendants in the trial you mentioned repeatedly did) I think you're probably fine.

The bigger thing is just that there's no actual win in scraping here. It doesn't make the vulnerability report any more interesting; it just reads like they're trying to make the whole thing newsier. Some (very small) risk, zero reward.

shayanbahal

I had a similar experience with another dating app, although they never got back to me. When I tried to get the founder's attention by changing his bio to a "contact me" message, they restored a backup, lol.

Years later I saw their Instagram ad and tried to see if the issue still existed, and yes, it did. Basically anyone with knowledge of their API endpoints (which are easy to find by proxying the app's traffic) has full-on admin capabilities and access to all messages, matching, etc.

I wonder if I should go back and try again... :-?

cobalt60

Why not disclose it as a responsible dev with contacts and move on?

pixl97

If a company is not responsible enough to follow up on security reports, you should not follow up either, but instead disclose it to the world.

flutas

tbh, I agree.

I've reported 2 big bugs like this: one for Funimation and one for a dating app.

With Funimation, you could access anyone's PII and shop orders. They ignored me until I sent a LinkedIn message to their CTO with his own PII (CC number) in it.

The "dating" app well they were literally spewing private data (admin/mod notes, reports, private images, bcrytped password, ASIN, IP, etc) via a websocket on certain actions. I figured out those actions that triggered it, emailed them and within 12 hours they had fixed it and made a bug bounty program to pay me out of as a thank you.

Importantly, I also didn't use anyone else's data or account; I simply made another account that I attacked to prove the point. Yes, it cost me a ~$10 monthly sub to do so, but they also refunded that.

shayanbahal

I think it took so long that I moved on, but you're right, I should have done that. I'll probably take another look to see if I can do it now :)

nixpulvis

People need to be forced to think twice before taking in information as sensitive as a passport, or even just addresses. This sort of thing cannot be allowed to be brushed off as just a bunch of kids making an app.

VBprogrammer

The UK government are trying really hard to mandate IDs for access to porn sites. Can't wait for that to blow up in their faces.

pixl97

"They" don't care, the entire point of many of these laws is to increase the friction and fear of being disclosed that you don't visit these sites in the first place.

kelnos

And for things like passport or other ID details, there's also no reason to expose them publicly at all after they've been entered. If you want an API available to fetch the data so you can display it in the UI, there's no need to include the full passport/ID number; at the very least it can be obscured with only the last few digits sent back via the API.

But for something like a dating site, it's enough for the API to return a boolean verified/not-verified for the ID status (or an enum of something like 'not-verified', 'passport', 'drivers-license', etc.). There's no real need to display any of the details to the client/UI.

(In contrast with, say, an airline app, where you need to select an identity document for immigration purposes and you'd want to give the user more detail so they can make the choice. But even then, as in the United app, they only show the last few digits of the passport number... hopefully that's all that's sent over their internal API as well.)
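Concretely, the response shape can be as small as this sketch (all type and field names here are made up):

    // The client only ever sees a verification summary, never the document.
    type IdVerification =
      | { status: "not-verified" }
      | { status: "verified";
          method: "passport" | "drivers-license";
          lastFour: string }; // a masked hint at most, never the full number

    function toPublicVerification(user: {
      idNumber?: string;
      idMethod?: "passport" | "drivers-license";
    }): IdVerification {
      if (!user.idNumber || !user.idMethod) return { status: "not-verified" };
      return {
        status: "verified",
        method: user.idMethod,
        lastFour: user.idNumber.slice(-4), // e.g. for an airline-style picker
      };
    }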

jonny_eh

There should be some kind of government-operated identity confirmation service that is secure/private.

Or by someone "government-like" such as Apple or Google.

steeeeeve

Government is the worst possible solution to every problem.

(not an attack on you. I have to say that every time I see someone say anything along the lines of "the government should do it")

clifflocked

OAuth exists and can be used to confirm someone's identity by linking their Google account.

kelnos

Linking a Google account doesn't confirm your identity, though. It just confirms that you created a Google account with a particular name.

nixpulvis

To be fair, I wouldn't want my google account linked to my dating profile. Aggregating services has risks too.

smt88

A Google account does nothing to prove identity

behringer

When I worked for the government, within 2 months they had leaked all of my data to the black market.

Governments should not be confirming shit.

pixl97

The government already has all your data so I'm not sure who you think should be confirming identity.

koakuma-chan

Were they not using some kind of third-party identity verification service? That's what I usually see apps do. Don't tell me those third-party services still share your ID with the app (like the actual images)?

nixpulvis

Read the article. They clearly have their own OTP setup.

But if they are asking for your passport, then they have access to it. It's not a third party doing the asking and providing them with a checkmark or other reduced-risk data.

koakuma-chan

I have read the article, and OTP has nothing to do with identity verification. I'm asking because every single time I have gone through identity verification, the app used a third-party service that is supposed to be trustworthy.

blantonl

Returning the OTP in the API response is wild. Like, why?

MBCook

So the UI can check if what they enter is correct.

It’s very sensible and an obvious solution if you don’t think about the security of it.

A dating app is one of the most dangerous kinds of app to make, due to all the necessary PII. This is horrible.
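For contrast, here is a minimal sketch of the flow done safely, where the code never leaves the server (Express-style; all names are hypothetical, and a real version would add expiry, rate limiting, and crypto-grade randomness):

    import express from "express";

    const app = express();
    app.use(express.json());

    // phone -> code; use a TTL-backed store in practice
    const pendingOtps = new Map<string, string>();

    async function sendSms(phone: string, message: string) {
      // hypothetical SMS-provider call
    }

    app.post("/auth/request-otp", async (req, res) => {
      const code = String(Math.floor(100000 + Math.random() * 900000)); // use crypto randomness in practice
      pendingOtps.set(req.body.phone, code);
      await sendSms(req.body.phone, `Your code: ${code}`);
      res.json({ status: "sent" }); // the code itself is never in the response
    });

    app.post("/auth/verify-otp", (req, res) => {
      const ok = pendingOtps.get(req.body.phone) === req.body.code;
      if (ok) pendingOtps.delete(req.body.phone); // single use
      res.json({ ok }); // issue a real session token here on success
    });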

ryanisnan

> if you don’t think about the security of it.

This is big-brain energy. Why bother making yet another round-trip request when you can just defer that nonsense to the client!

joelhaasnoot

No one would ever hack my app!

benmmurphy

I’ve seen banks where the OTP code is generated on the client and then sent to the server.

pydry

Smacks of vibe coding

MBCook

Could be. Somewhere else in these comments someone was saying they found evidence that the app was coded that way.

But they also said it was a project by two students. And I could absolutely see students (or even normal developers) who aren't used to thinking about security making that mistake. It is a very obvious way to implement it.

In retrospect I know that my senior project had some giant security issues. There were more things to look out for than I knew about at that time.

bitbasher

I don't think a language model is that stupid. This smacks of pure human stupidity and/or offshoring.

matja

Eliminate your database costs with this one easy trick!

hectormalot

One reason I can think of is that they may return the database (or cache, or whatever) response after generating and storing the OTP. Quick POCs/MVPs often use their storage models as API responses to save time, and then it's an easy oversight...
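Something like this sketch, say (Express-style; generateOtp, OtpToken, and sendSms are made-up names, not Cerca's actual code):

    // The handler returns the stored row directly, so the OTP rides along.
    app.post("/auth/request-otp", async (req, res) => {
      const otp = generateOtp();                        // hypothetical helper
      const record = await OtpToken.create({            // hypothetical ORM model
        phone: req.body.phone,
        otp,
      });
      await sendSms(record.phone, `Your code: ${otp}`); // hypothetical helper
      res.json(record); // oops: serializes the whole row, OTP included
    });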

gwbas1c

It appears that the OTP is sent in "the response from triggering the one-time password".

I suspect it's a framework thing; they're probably directly serializing an object that's put in the database (ORM or other storage system) into what's returned via HTTP.

ceejayoz

Save an HTTP request, and faster UX! What's not to love?

When Pinterest's new API was released, it was spewing out everything about a user to any app using their OAuth integration, including their 2FA secrets. We reported it and got a bounty, but this sort of shit winds up in big companies' APIs, and they really should know better.

mooreds

I too am bewildered.

Maybe to make it easier to build the form accepting the OTP? Oversight?

I can't think of any other reasons.

Vuska

Oversight. Frameworks tend to make it easy to create an API endpoint by casting your model to JSON or something, but it's easy to forget that you need to mark specific fields hidden.
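One cheap guardrail is to own the serialization explicitly, so the secret field can't leak even if the whole object gets handed to res.json() (a sketch; the class and fields are made up):

    class OtpToken {
      constructor(public phone: string, private otp: string) {}
      toJSON() {
        // JSON.stringify (and Express's res.json) call this automatically.
        return { phone: this.phone, status: "sent" }; // otp is never serialized
      }
    }

    console.log(JSON.stringify(new OtpToken("+15555550123", "123456")));
    // -> {"phone":"+15555550123","status":"sent"}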

Alex-Programs

I assume that whoever wrote it just has absolutely no mental model of security, has never been on the attacking side or realised that clients can't be trusted, and only implemented the OTP authentication because they were "going through the motions" that they'd seen other people implement.

pixl97

Everyone who programs should take black-hat classes of some kind. I talk to so many programmers who really don't understand what hackers/attackers can actually do.

ksala_

My best guess would be some form of testing from before they added the "send the message" part to the API. Build the OTP logic and the scaffolding... and add a way to make sure it returns what you expect. But yes, absolutely wild.

ungreased0675

I would like to see laws that make storing PII as dangerous as storing nuclear waste. Leaks should result in near-certain bankruptcy for the company and legal jeopardy for the people responsible.

That’s the best way I can think of to align incentives correctly. Right now there’s very little downside to storing as much user information as possible. Data breach? Just tweet an apology and keep going.

hiatus

> I would like to see laws that make storing PII as dangerous as storing nuclear waste.

This is a little extreme IMO. PII encompasses a lot of data, including benign things like email address stored only for authentication and contact purposes.

pixl97

I mean, we could treat email like low-level waste: you can't dump it in the environment like plastic trash, but if you handle it correctly, with cheap disposal methods, it will be OK.

Things like photos of IDs/passports should be considered yellowcake.

gwbas1c

White collar jail?

That might be the only way to give the issue the attention it deserves.

edm0nd

> I have been met with radio silence.

That's when it's time to inform them that you are dumping the vuln to the public in 90 days due to their silence.

hbn

That's more of a punishment for innocent users than for the business.

nick238

Disclosure is good for the 'innocent users', as they are made aware that their data may have been leaked (who knows whether the company can do auditing and forensics sufficient to detect wholesale scraping), rather than staying oblivious because the company just didn't bother to tell them.

maxverse

Is there any reason to not just privately email the users? "Hey, I'm so and so, a security researcher. I was able to gather your data from <Company>, which has not responded to any inquiries from me. Please be aware that your data is mismanaged and vulnerable, and I encourage you to voice your concern directly to <Company>."

kube-system

> Disclosure is good for the 'innocent users', as they are made aware that their data may have been leaked

Presuming perfect communication, which is never the case for security vulnerabilities in a consumer application.

ericmcer

This is a rare case where the leak is so egregious that he could actually reach out to all the users himself to let them know. Especially the ones with passport info.

kenjackson

True. Maybe let them know you will be directly contacting each user to tell them that this service has exposed their personal information to hackers.

nick238

I'd definitely not do that. POCing a scraper to check is fine, but you shouldn't save any PII from that data. You'd also be saying you're the "hacker", when you don't know whether the data has actually been revealed to others without the forensics that (hopefully) only the business can do.

OutOfHere

There is no vulnerability here. It's just out in the open.

myself248

Imagine if they tried to claim that. "Everything was just out on the front lawn, you can't blame us for not locking the door because we didn't even have a door!"


9283409232

A good way to get yourself sued and have criminal charges brought against you.

Buttons840

Yeah. Security researchers face the threat of lawsuits constantly, while those who build insecure apps face no consequences.

We are literally sacrificing national security for the convenience of wealthy companies.

SoftTalker

Well it's kind of like "I walked around the neighborhood trying everyone's front door, I found one unlocked and I could even enter the house and rummage through their personal effects. Just trying to improve the security of the neighborhood!"

b8

That has never happened before, and if it does, the EFF would presumably back you.

chickenzzzzu

Imagine banking your physical and financial security on the presumption that the EFF can help you XD

9283409232

This is a completely uninformed comment. Security researchers get sued or threatened all the time. Bunnie was threatened by Microsoft for publishing his research on Xbox vulnerabilities; the city of Columbus sued David Ross for his reporting on data exposed during a ransomware attack; Google has threatened action against a few security researchers, if memory serves. And that's just what I can remember off the top of my head.

edm0nd

Most certainly not (at least in the US).

I'm so tired of researchers being ignored when they bring a serious vuln to a company, met with silence and/or resistance, on top of the company never alerting their users about it.

gwbas1c

FYI: This is more common than you think.

I briefly worked with a company where I had to painfully explain to the lead engineer that you can't trust anything that comes from the browser, because a hacker can curl whatever they want.

Our relationship deteriorated from there. Needless to say, I don't list the experience on my resume.

xutopia

It's crazy that they never responded to his repeated requests!

benzible

As someone managing a relatively low-profile SaaS app, I get constant reports from "security researchers" who just ran automated vulnerability scanners and are seeking bounties for minor issues. That said, it's inexcusable; companies absolutely need to take these reports seriously and distinguish between scanner spam and legitimate security research like this.

Update: obviously I just skimmed this, per responses below.

nick238

Pardon sir, I see you have:

* Port 443 exposed to the internets. This can allow attackers to gain access to information you have. $10k fee for discovery.

* Your port 443 responds with "Server: AmazonS3" header. This can allow attackers to identify your hosting company. $10k fee for discovery.

Please remit payment and we will offer instructions for remediation.

sshine

They already met with him and acknowledged the problem, so their lack of follow-up is an attempt to sweep things under the rug. Users deserve to know that their data was compromised. In some parts of the world it is a crime not to report a data leak.

bee_rider

It sounds like they actually met with him, patched the issues, and then didn’t respond afterwards. IMO that is quite rude of them toward him, but they do seem to have taken the issue itself somewhat seriously.

benzible

Ah, sorry, I need to actually read things before I react :)

moonlet

Not really if they don’t have any security or even devsecops yet… if they just have devs and those devs are people who are relatively junior / just out of school, I could unfortunately absolutely see this happening

mytailorisrich

A company has no duty to report back to you just because you kindly notified them of a vulnerability in their software.

> During our conversation, the Cerca team acknowledged the seriousness of these issues, expressed gratitude for the responsible disclosure, and assured me they would promptly address the vulnerabilities and inform affected users.

Well, that was the decent thing to do, and they did it. Beyond that it is their internal problem, especially since they did fix the issue, according to the article.

Engineers can be a little too open and naive. Perhaps his first contact was with the technical team, but then management and the legal team got hold of the issue and shut it down.

kadoban

> > During our conversation, the Cerca team acknowledged the seriousness of these issues, expressed gratitude for the responsible disclosure, and assured me they would promptly address the vulnerabilities and inform affected users.

> Well, that was the decent thing to do, and they did it. Beyond that it is their internal problem, especially since they did fix the issue, according to the article.

They didn't inform anyone, as far as I can tell. The users especially need(ed) to be informed.

It's also at least good practice to let security researchers know when it's safe to inform the public; otherwise future disclosures will be chaotic.

sakjur

Taking Yale as a starting point, they seem to have failed their legal obligation to inform their Connecticut users within 60 days (assuming the author of the post would've received a copy of such a notification).

https://portal.ct.gov/ag/sections/privacy/reporting-a-data-b...

I doubt this is an engineering team's naivete meeting a rational legal team's response. I'd guess it's rather marketing or management naivete, assuming that sticking your head in the sand is the correct way to deal with a potential data-leak story.

mytailorisrich

Companies won't notify anyone of vulnerabilities. They may (and should) inform users if they think their data was breached, which is different.

It's not clear why "the public" should be informed, either.

Ultimately they thanked the researcher and fixed the issue. Job done.

pixl97

>A company has no duty to report back to you just because you kindly notified them of a vulnerability in their software.

Then you have no duty to report the vuln to the company, and should instead feel free to disclose it to the world.

A little politeness goes a long way on both sides.