
Cryptographic Issues in Cloudflare's Circl FourQ Implementation (CVE-2025-8556)

mmsc

> after having received a lukewarm and laconic response from the HackerOne triage team.

A slight digression, but lol, this is my experience with all of the bug bounty platforms. Reports of issues that are actually complicated or require an in-depth understanding of the technology get brickwalled, because reports of difficult problems are written for... people who understand difficult problems and difficult technology. The runarounds are not worth the time for people who try to solve difficult problems, because they have better things to do.

At least Cloudflare has a competent security team that can step in and say "yeah, we can look into this because we actually understand our whole technology". It's sad that to get through to a human on these platforms you effectively have to write two reports: one for the triagers who don't understand the technology at all, and one for the competent people who actually know what they're doing.

poorman

There is definitely a misalignment of incentives with the bug bounty platforms. You get a very large number of useless reports, which creates a lot of noise. Then you have to sift through a ton of noise to once in a while get a serious report. So the platforms up-sell you on using their people to sift through the reports for you. Only these people do not have the domain expertise to understand your software and dig into the vulnerabilities.

If you want the top-tier "hackers" on the platforms to see your bug bounty program, then you have to pay the up-charge for that too, so again a misalignment of incentives.

The best thing you can do is have an extremely clear bug-bounty program detailing what is in scope and out of scope.

Lastly, I know it's difficult to manage, but open source projects should also have a private vulnerability reporting mechanism set up. If you are using GitHub you can set up your repo with: https://docs.github.com/en/code-security/security-advisories...

miohtama

The useless reports are because there are a lot of useless people

wslh

The best thing you can do is to include an exploit when possible, so the report can be validated automatically and cut through the noise.
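As a sketch of what "validated automatically" could look like: a report that ships a reproducer with a binary pass/fail exit code can be triaged mechanically. Everything below is hypothetical scaffolding, not an exploit for this CVE; checkRejectsLowOrderPoint is a placeholder for whatever property the report claims is violated:

    package main

    import (
        "fmt"
        "os"
    )

    // checkRejectsLowOrderPoint stands in for the real check against the
    // target library; here it simply simulates a vulnerable target.
    func checkRejectsLowOrderPoint() bool {
        return false
    }

    func main() {
        if !checkRejectsLowOrderPoint() {
            fmt.Println("VULNERABLE: low-order point accepted")
            os.Exit(1) // nonzero exit: triage tooling can flag this mechanically
        }
        fmt.Println("OK: low-order point rejected")
    }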

tptacek

The backstory here, of course, is that the overwhelming majority of reports on any HackerOne program are garbage, and that garbage definitely includes 1990s sci.crypt style amateur cryptanalyses.

CaptainOfCoit

> 1990s sci.crypt style amateur cryptanalyses

Just for fun, do you happen to have any links to public reports like that? Seems entertaining if nothing else.

CiPHPerCoder

Most people don't make their spam public, but I did when I ran this bounty program:

https://hackerone.com/paragonie/hacktivity?type=team

The policy was immediate full disclosure, until people decided to flood us with racist memes. Those didn't get published.

Some notable stinkers:

https://hackerone.com/reports/149369

https://hackerone.com/reports/244836

https://hackerone.com/reports/115271

https://hackerone.com/reports/180074

cedws

IMO it’s no wonder companies keep getting hacked when doing the right thing is made so painful and the rewards are so meagre. And that’s assuming the company even has a responsible disclosure program at all; without one, you risk putting your ass on the line.

I don’t like bounty programs. We need Good Samaritan laws that legally protect and reward white hats. Rewards that pay the bills and not whatever big tech companies have in their couch cushions.

bongodongobob

Companies get hacked because Bob in finance doesn't have MFA and got a phishing email. In my experience working for MSPs, it's always been phishing and social engineering. I have never seen a company compromised through some obscure bug in software. This may be different for super large organizations that are international targets, but for the average person or business, you're better off spending your time just MFAing everything you can and using common sense.

bri3d

> We need Good Samaritan laws that legally protect and reward white hats.

What does this even mean? How is a government going to do a better job of valuing and scoring exploits than the existing market?

I'm genuinely curious how you suggest we achieve

> Rewards that pay the bills and not whatever big tech companies have in their couch cushions.

So far, the industry has tried bounty programs. High-tier bugs are impossible to value and there is too much low-value noise, so the market converges to mediocrity, and I'm not sure how having a government run such a program (or set reward tiers, or something) would make this any different.

And the industry and governments have tried punitive regulation: "if you didn't comply with XYZ standard, you're liable for getting owned." To some extent this works, as it increases pay for in-house security and makes work for consulting firms. This notion might be worth expanding in some areas, but just like financial regulation, it is a double-edged sword: it also leads to death-by-checkbox audit "security" and predatory nonsense "audit firms."

cedws

For the protections part: it means creating a legal framework in which white hats can ethically test systems even when companies don't have a responsible disclosure program. The problem with responsible disclosure programs is that the companies with the worst security don't give a shit and won't have one. They may even threaten such Good Samaritans for reporting issues in good faith; there have been many such cases.

For the rewards part: again, the companies who don't give a shit won't incentivise white hat pentesting. If a company has a security hole that leads to disclosure of sensitive information, it should be fined, and such fines can be used to fund rewards.

This creates an actual market for penetration testing that includes more than just the handful of big tech companies willing to participate. It also puts companies legally on the hook for issues before a security disaster occurs, not after it's already happened.

jacquesm

Legal protections have absolutely nothing to do with 'the existing market'.

lenerdenator

> IMO it’s no wonder companies keep getting hacked when doing the right thing is made so painful and the rewards are so meagre.

Show me the incentives, and I'll show you the outcomes.

We really need to make security liabilities just that: liabilities. If you are running 20+ year-old code and you get hacked, you need to be fined in a way that will make you reconsider security as a priority.

Also, you need to be liable for all of the disruption that the security breach caused for customers. No, free credit monitoring does not count as recompense.

dpoloncsak

I love this idea, but I feel like it just devolves into arguments over whether a specific exploit is or isn't technically a 0-day, so the company can or can't be held liable.

csmantle

User-supplied EC point validation is one of the most basic yet crucial steps in a sound implementation. I wonder why no one (and no tests) at Cloudflare caught these oversights pre-signoff and pre-release.
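For readers who haven't seen one, the check itself is tiny. Here is a minimal sketch of the on-curve test for a generic short-Weierstrass curve; FourQ is actually a twisted Edwards curve over GF(p^2), so this is not CIRCL's code, just the shape of the validation:

    package main

    import (
        "fmt"
        "math/big"
    )

    // onCurve reports whether (x, y) satisfies y^2 = x^3 + a*x + b (mod p),
    // i.e. whether an attacker-supplied point is even on the curve.
    func onCurve(x, y, a, b, p *big.Int) bool {
        lhs := new(big.Int).Mul(y, y)
        lhs.Mod(lhs, p)

        rhs := new(big.Int).Mul(x, x)
        rhs.Mul(rhs, x)
        rhs.Add(rhs, new(big.Int).Mul(a, x))
        rhs.Add(rhs, b)
        rhs.Mod(rhs, p)

        return lhs.Cmp(rhs) == 0
    }

    func main() {
        // Toy curve y^2 = x^3 + 2x + 3 over GF(97); (3, 6) lies on it.
        p, a, b := big.NewInt(97), big.NewInt(2), big.NewInt(3)
        fmt.Println(onCurve(big.NewInt(3), big.NewInt(6), a, b, p)) // true
        fmt.Println(onCurve(big.NewInt(3), big.NewInt(7), a, b, p)) // false
    }

A full validation would also reject the identity and, on curves with cofactor greater than 1, confirm the point sits in the prime-order subgroup, e.g. by checking that multiplying it by the subgroup order yields the identity.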

bri3d

The article's deep dive into the math does it a disservice IMO, by making this seem like an arcane and complex issue. This is an EC Cryptography 101 level mistake.

Reading the actual CIRCL library source and README on GitHub (https://github.com/cloudflare/circl) makes me see it as just fundamentally unserious, though: there's a big "lol don't use this!" disclaimer, no elaboration on the considerations applied to each implementation to avoid common pitfalls, no mention of third- or first-party audit reports, nor really anything else I'd expect to see from a cryptography library.

tptacek

It's more subtle than that and is not actually that simple (though the attack is). The "modern" curve constructions pioneered by Bernstein are supposed to be misuse-resistant in this regard; Bernstein popularized both Montgomery and Edwards curves. His two major curve implementations are Curve25519 and Ed25519, which are different mathematical representations of the same underlying curve. Curve25519 famously isn't vulnerable to this attack!
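To make that misuse resistance concrete: X25519 reads any 32 bytes as an x-coordinate, twist security keeps even off-curve inputs in a group of safe order, and the scalar clamping from RFC 7748 multiplies small-order components away. A minimal sketch of the clamping (my illustration, not CIRCL's or Bernstein's code):

    package main

    import "fmt"

    // clamp applies X25519 scalar clamping per RFC 7748, section 5.
    // Clearing the low three bits makes the scalar a multiple of the
    // cofactor 8, so any small-order component of an attacker-supplied
    // point cancels out; setting bit 254 pins the scalar's magnitude,
    // which helps constant-time ladder implementations.
    func clamp(k *[32]byte) {
        k[0] &= 248
        k[31] &= 127
        k[31] |= 64
    }

    func main() {
        var k [32]byte
        for i := range k {
            k[i] = 0xff
        }
        clamp(&k)
        fmt.Printf("low byte %08b, high byte %08b\n", k[0], k[31])
    }

RFC 7748 additionally suggests rejecting an all-zero shared secret, which catches the remaining small-order inputs outright; a curve implementation without these defenses needs explicit point validation instead.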

tptacek

Oh, my God, I'm just now remembering why this curve was called FourQ.

Rygian

Here's an idea, from a parallel universe: Cloudflare should have been forced, by law, to engage a third party neutral auditor/pentester, and fix or mitigate each finding, before being authorised to expose the CIRCL lib in public.

After that, any CVE opened by a member of the public, and subsequently confirmed by a third party neutral auditor/pentester, would result in 1) fines for Cloudflare, 2) an award to the CVE opener, and 3) grounds for Cloudflare to sue their initial auditor.

But that's just a thought experiment.

semiquaver

Seems like you want open source software to die.

Rygian

A more charitable interpretation could be "seems like you want large corporations, which have the financial means, to take security seriously and build a respectable process before publishing security solutions whatever the license".

jjk7

The license reads: 'THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"'.

Rygian

If you bought a car and your dealer had you sign an EULA with that sentence in it (pertaining specifically to the security features of your car), would you feel safe driving it at highway speeds?

jonathanstrange

What? We're talking about a free open source library (that I happen to use). Nobody who writes and publishes software for free should be subject to any such regulations. That's why the licenses all contain some "provided as is, no warranty" clause.

Otherwise, nobody would ever write non-commercial cryptographic libraries any longer. Why take the risk? (And good luck with finding bugs in commercial, closed source cryptographic libraries and getting them fixed...)

Rygian

Taking the parallel-universe idea a bit further: for-profit actors must accept financial accountability for the open source software they engage with, whereas not-for-profit actors are exempt or even incentivised.

Build an open-source security solution as an individual? Well done you, and maybe here's a grant to be able to spend more of your free time on it, if you choose to do so.

Use an open-source security solution to sell stuff to the public and make a profit? Make sure you can vouch for the security, otherwise no profit for you.

trklausss

What do you mean, practices from safety-critical industries applied to security? Unpossible! (end /s)

For that you need regulation that enforces it. On a global scale that is pretty difficult, since it's a country-by-country thing... If you target, say, customers in the US, then the US Congress needs to pass legislation on that. The trend, however, is to install backdoors everywhere, so good luck with that.