
U.S. Government Disclosed 39 Zero-Day Vulnerabilities in 2023, First-Ever Report

nimbius

I hope this signals a turning point and lessons learned from the historic practice of hoarding exploits in the hopes they can be weaponized.

when you disclose vulnerabilities and exploits, you effectively take cannons off both sides of the metaphorical battlefield. it actively makes society safer.

toomuchtodo

Governments who want power will hoard the knowledge, which is power. Other governments will share. This is a perpetual tension: we collectively receive utility when good policy is active (rapid dissemination of vuln info), but we need third parties to seek these exploits out when government cannot be relied on. Very similar to the concept of journalism being the Fourth Estate imho.

(vuln mgmt in finance is a component of my day gig)

thomastjeffery

You can't hoard knowledge, just like you can't take it away.

boringg

Huh? You absolutely can hoard knowledge and you can most certainly purge knowledge as well.

Xen9

Only predictive capabilities & AI trained on data can be more valuable than having a perfect profile of the person who turns out to be an enemy of your nation in whatever sense. Taking a person down once you know everyone they have ever talked to, and everything they have ever been interested in or thought, is trivial. This can only be achieved by mass surveillance & hoarding.

Arguably the US as a nation state has the very best LLMs in the world, which is why I personally think they have been running weak AGI for a few years, e.g. for autonomous malware analysis, reverse engineering, and tailored malware generation & testing capability. Because they can actually store the personal data long term, without having to delete it, this may be a gigantic strategic advantage, given that the web is highly "polluted" after 2020-2021.

From this I would guess the US has bet on AI research since the end of WWII, and especially within the last 30 years, noting the remarkable possibility that Surveillance Capitalism is actually part of the nation's security efforts. The warehouses of data they have built are warehouses of gold, or rather, gold mixed in sand, since of course lots of it is also garbage.

tptacek

This is a little bit like talking about why they hoard the guns. The reason governments have caches of exploit chains is not hard to understand.

DoctorOetker

It is substantially different from hoarding guns: not hoarding exploits takes away those exploits from adversaries.

If an important factor is the ratio of exploits A and B hold, then publishing their hidden but common exploits does not leave that ratio unchanged.

The ratio is interesting because the potential exploitation rate is proportional to the number of zero-days held (once used, a "zero-day" is revealed and remediated after a certain time span).
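A toy illustration of that shift (all numbers invented): say A holds 100 zero-days and B holds 50, of which 30 are known to both.

    before the common ones are published:  100 / 50           = 2.0
    after the 30 shared are published:     (100-30) / (50-30) = 3.5

Disclosing only the overlap actually widens the larger party's relative advantage; the ratio moves.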

toomuchtodo

You're the expert, and certainly not wrong. I wrote my comment because I have had to explain this to folks in a professional capacity and thought it might be helpful.

spacephysics

Most likely these vulnerabilities were already known to adversaries, and the government decided to report them to make it more difficult for those adversaries to attack.

I’m sure the really juicy zero days they’ve discovered in-house are kept out of reports like these

thewebguyd

This is the most likely scenario. It's not like the government has decided it no longer needs to hang on to zero-days to use against adversaries.

They've just determined these ones are either no longer useful to them, or adversaries have discovered and begun using them.

tptacek

It literally is the scenario, as really the only outcome of the VEP (for serious, "marketable" vulnerabilities) is "disclose once burned".

kevin_thibedeau

The US depends on exploits being available for the companies it uses to circumvent the 4th amendment.

stephen_g

The problem is when they don't know that their adversaries know about those exploits...

There's a lot of arrogance and hubris with the idea of NOBUS, and they often make things worse assuming only they know...

JumpCrisscross

> when you disclose vulnerabilities and exploits, you effectively take cannons off both sides of the metaphorical battlefield. it actively makes society safer

If I know you always disclose, and I find something you haven't disclosed, I know I have an edge. That incentivises using it because I know you can't retaliate in kind.

The hoarding of vulns is a stability-instability paradox.

Retr0id

How would you ever know that someone always discloses, if you can't know what they don't disclose?

krisoft

You can’t know (in the mathematical certainty sense) that they always disclose. But you can know if some entity has the policy of always disclosing. Those are two different things. A policy is about the intentions and the structure of the organisation. How they think about themselves, how they train their recruits and how they structure their operations.

The first hint would be the agency stating that they have a policy of always disclosing. You would of course not believe that because you are a spy with trust issues. But then you would check and hear from all kind of projects and companies that they are receiving a steady stream of vulnerability reports from the agency. You could detect this by compromising the communications or individuals in the projects receiving the reports, or through simple industrial rumours. That would be the second hint.

Then you would compromise people in the agency for further verification. (Because you are a spy agency. It is your job to have plants everywhere.) You would ask these people “so what do you do when you find a vulnerability?” And if the answer is “oh, we write a report to command and we sometimes never hear about it again” then you know that the stated policy is a lie. If they tell you “we are expected to email the vulnerable vendor as soon as possible, and then work with them to help them fix it, and we are often asked to verify that the fix is good” then you will start to think that the policy is actually genuine.

JumpCrisscross

> How would you ever know that someone always discloses

Same way you know if they don't have nukes. Based on what they say and your best guess.

tptacek

It is not that turning point. These are VEP vulnerabilities. Like every major government, the US will continue to do online SIGINT.

lenerdenator

I'd be surprised if the policy continues.

Or if the people who worked at the agency are still there.

burkaman

They are not: https://techcrunch.com/2025/01/22/trump-administration-fires...

Also, not a joke, this program contains the word "equity" ("the Director of National Intelligence is required to annually report data related to the Vulnerabilities Equities Process") so it will probably be frozen or cancelled.

vaccineai

[dead]

dang

We've banned this account for using HN primarily (exclusively?) for political/ideological/national battle. That's not allowed here, regardless of what you're battling for or against.

https://news.ycombinator.com/newsguidelines.html

axegon_

I doubt it. Historically, most government agencies around the world have had appalling security, and each iteration is just as bad as the previous one, with a few half-assed patches on top to cover the known holes.

meowface

I might be a contrarian, but I think it makes sense for the NSA to hoard 0-days. They should disclose only after they burn them.

tptacek

You're only a contrarian on message boards. The economics of CNE SIGINT are so clear --- you'd be paying integer multiples just in health benefits for the extra staff you'd need if you replaced it --- that vulnerabilities could get 10x, maybe 100x more expensive and the only thing that would change would be how lucrative it was to be a competitive vuln developer.

A lot of things can be understood better through the lens of "what reduces truck rolls".

burkaman

Any 0-day found by an NSA employee can and will be found by someone else, and then sold or used.

throwaway2037

In theory, I agree. In practice, how do you explain how NSO Group kept its WhatsApp remote-execution 0-day secret for its Pegasus product? Or do I misunderstand Pegasus? Maybe it isn't a one-trick pony, but a platform for continuously delivering 0-day exploits.

tptacek

The VEP is literally based on that premise.

timewizard

The NSA's charter should be to secure this country and not attack others.

dadrian

That organization exists, and it is called the FBI.

tptacek

That is literally the opposite of why NSA exists.

thomastjeffery

You can't actually hoard them, though. They aren't objects, they are knowledge.

A 0-day is present in every instance of the software it can exploit.

bluefirebrand

This is a meaningless distinction imo

You hoard knowledge by writing it down somewhere and then hoarding the places it's written down. Whether that's books, microfilm, hard drives, what have you.

rozab

meowface

That's definitely the downside in the trade-off, yeah. If you're going to hoard you better also protect or you just get the worst of all worlds. Still, I am generally hopeful about our intelligence agencies' ability to prevent leaks, even if fuckups have occurred.

tptacek

Yeah, because intelligence is famously a discipline where nothing ever goes wrong.

honzaik

OK, hoarding discovered zero-days might not be the best strategy, BUT if we actually create a backdoor and don't tell anyone about it, then this should be safer right? right? /s

https://www.wired.com/2015/12/researchers-solve-the-juniper-...

https://en.wikipedia.org/wiki/Dual_EC_DRBG

https://en.wikipedia.org/wiki/Juniper_Networks#ScreenOS_Back...


timewizard

The lesson is not in vulnerability management.

The lesson is that our desktop software is garbage and the vendors are not properly held to account.

skirge

Burning 0-days makes your enemies spend more time on finding new ones; costs rise, so they will go bankrupt. Cold War 2.0. It's not enough to just run grep / a memcpy finder on software like 15-20 years ago.

ikmckenz

There is no such thing as a "Nobody But Us" vulnerability. Leaving holes in systems and praying enemies won't discover them, in the hope of attacking them ourselves, is extremely foolish.

tptacek

CNE "zero-day" isn't "NOBUS", so you're arguing with a straw man.

jjk166

I mean there are certainly vulnerabilities that are substantially asymmetric.

joshfraser

I've seen the invite-only marketplaces where these exploits are sold. You can buy an exploit to compromise any piece of software or hardware that you can imagine. Many of them go for millions of dollars.

There are known exploits to get root access to every phone or laptop in the world. But researchers won't disclose these to the manufacturers when they can make millions of dollars selling them to governments. Governments won't disclose them because they want to use them to spy on their citizens and foreign adversaries.

The manufacturers prefer to fix these bugs, but aren't usually willing to pay as much as the nation states that are bidding. All their bids do is drive up the price. Worse, intelligence agencies like the NSA often pressure or incentivize major tech companies to keep zero-days unpatched for exploitation.

It's a really hard problem. There are a bunch of perverse incentives that are putting us all at risk.

JumpCrisscross

> It's a really hard problem

Hard problems are usually collective-action problems. This isn't one. It's a tragedy of the commons [1], the commons being our digital security.

The simplest solution is a public body that buys and releases exploits. For a variety of reasons, this is a bad idea.

The less-simple but, in my opinion, better model is an insurance model. Think: FDIC. Large device and software makers have to buy a policy, whose rate is based on number of devices or users in America multiplied by a fixed risk premium. The body is tasked with (a) paying out damages to cybersecurity victims, up to a cap and (b) buying exploits in a cost-sharing model, where the company for whom the exploit is being bought pays a flat co-pay and the fund pays the rest. Importantly, the companies don't decide which exploits get bought--the fund does.
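A minimal sketch of that rate-and-co-pay arithmetic, with invented names and figures (nothing below comes from any actual proposal):

    # hypothetical sketch; functions, rates, and numbers are illustrative only
    def annual_premium(us_users: int, risk_premium_per_user: float) -> float:
        # policy rate = devices/users in America x fixed risk premium
        return us_users * risk_premium_per_user

    def exploit_purchase_split(price: float, co_pay: float) -> tuple[float, float]:
        # the vendor pays a flat co-pay; the fund covers the remainder
        vendor_share = min(co_pay, price)
        return vendor_share, price - vendor_share

    print(annual_premium(50_000_000, 0.10))            # 5000000.0 -> $5M/year
    print(exploit_purchase_split(2_000_000, 250_000))  # (250000, 1750000)

The key design point survives even in the toy version: the fund, not the vendor, decides which exploits get bought.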

Throw in a border-adjustment tax for foreign devices and software and call it a tariff for MAGA points.

[1] https://en.wikipedia.org/wiki/Tragedy_of_the_commons

impossiblefork

I think what is actually the problem is the software and hardware manufacturers.

Secure use of any device requires a correct specification. These should be available to device buyers and there should be legal requirements for them to be correct and complete.

Furthermore, such specifications should also be required for software: precisely what it does, with legal guarantees that it's correct.

This has never been more feasible. Also, considering that we Europeans are basically at war with the Russians, it seems reasonable to secure our devices.

Veserv

We already have that: ISO 15408, Common Criteria [1]. Certification is already required and done for various classes of products before they can be purchased by the US government.

However, large commercial IT vendors such as Microsoft and Cisco were unable to achieve the minimum security requirements demanded for high criticality deployments, so the US government had to lower the minimum requirements so their bids could be accepted.

At this point, all vendors just specify and certify that their systems have absolutely no security properties and that is deemed adequate for purchase and deployment.

The problem is not lack of specification, it is that people accept and purchase products that certify and specify they have absolutely zero security.

[1] https://en.m.wikipedia.org/wiki/Common_Criteria

JumpCrisscross

> These should be available to device buyers and there should be legal requirements for them to be correct and complete

You're still left with a massive enforcement problem nobody wants to own. Like, "feds sued your kid's favourite toy maker because they didn't file Form 27B/6 correctly" is catnip for a primary challenger.

clippyplz

That's an incredibly tough sell, particularly for software. Who is it that should "require" these specifications, and in what context? Can I still put my scrappy code on Github for anyone to look at? Am I breaking the law by unwittingly leaving in a bug?

joshfraser

Modern software is layers upon layers of open-source packages and libraries written by tens of thousands of unrelated engineers. How do you write a spec for that?

fluoridation

A tragedy of the commons occurs when multiple independent agents exploit a freely available but finite resource until it's completely depleted. Security isn't a resource that's consumed when a given action is performed, and you can never run out of security.

JumpCrisscross

> Security isn't a resource that's consumed when a given action is performed, and you can never run out of security

Security is in general non-excludable (vendors typically patch for everyone, not just the discoverer) and non-rival (me using a patch doesn't prevent you from using the patch): that makes it a public good [1]. Whether it can be depleted is irrelevant. (One can "run out" of security inasmuch as a stack becomes practically useless.)

[1] http://www.econport.org/content/handbook/commonpool/cprtable...

skirge

Security may be considered a "commons", but the accountable parties are individual manufacturers. If my car is malfunctioning, I'm punished by law enforcement. There are inspections and quality standards. Private entities may provide certifications.

Always42

Please no more mandated insurance programs.

skirge

insurers can be quite good at enforcing quality standards

tptacek

The markets here are complicated and the terms on "million dollar" vulnerabilities are complicated and a lot of intuitive things, like the incentives for actors to "hoard" vulnerabilities, are complicated.

We got Mark Dowd to record an episode with us to talk through a lot of this stuff (he had given a talk whose slides you can find floating around, long before) and I'd recommend it for people who are interested in how grey-market exploit chain acquisition actually works.

https://securitycryptographywhatever.com/2024/06/24/mdowd/

Melatonic

Makes me wonder if there are engineers on the inside of some of these manufacturers intentionally hiding 0-days so that they can then go and sell them (or engineers placed there by companies who design 0-days).

tptacek

People have been worrying about this for 15 years now, but there's not much evidence of it actually happening.

One possible reason: knowing about a vulnerability is a relatively small amount of the work in providing customers with a working exploit chain, and an even smaller amount of the economically valuable labor. When you read about the prices "vulnerabilities" get on the grey market, you're really seeing an all-in price that includes value generated over time. Being an insider with source code access might get you a (diminishing, in 2025) edge on initial vulnerability discovery, but it's not helping you that much on actually building a reliable exploit, and it doesn't help you at all in maintaining that exploit.

skirge

A good vulnerability/backdoor should be indistinguishable from a programming mistake. An indirect call. A missing check on some bytes of encrypted material. Add some validation and you will have a good item to sell that no one else can find.
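A hedged Python sketch of the "missing check on some bytes" idea (the key and names are made up; this is the shape of the bug, not a real one):

    import hashlib
    import hmac

    KEY = b"example-key"  # hypothetical shared secret

    def verify_tag(msg: bytes, tag: bytes) -> bool:
        expected = hmac.new(KEY, msg, hashlib.sha256).digest()
        # the "mistake": only the first 4 of 32 tag bytes are compared,
        # cutting forgery effort from ~2^256 to ~2^32 guesses, while reading
        # like an ordinary (if sloppy) MAC truncation
        return hmac.compare_digest(expected[:4], tag[:4])

Add the missing bytes back and the hole closes, which is exactly why such a bug is deniable.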

Edman274

Are we just straight up ignoring the Jia Tan xz exploit that happened 10 months ago, which would've granted ssh access to the majority of servers running OpenSSH? Or does that not count for the purposes of this question, because it was an open-source library rather than a hardware manufacturer?

timewizard

> It's a really hard problem.

Classify them as weapons of mass destruction. That's what they are. That's how they should be managed in a legal framework and how you completely remove any incentives around their sale and use.

kingaillas

How about some penalties for their creation? If NSA is discovering or buying, someone else is creating them (even if unintentionally).

Otherwise corporations will be incentivized (even more than they are now) to pay minimal lip service to security - why bother investing beyond a token amount, enough to make PR claims when security inevitably fails - if there is effectively no penalty and secure programming eats into profits? Just shove all risk onto the legal system and government for investigation and clean up.

JumpCrisscross

> weapons of mass destruction. That's what they are

Seriously HN? Your Netflix password being compromised is equivalent to thermonuclear war?

aczerepinski

Think more along the lines of exploits that allow turning off a power grid, spinning a centrifuge too fast, or releasing a dam.

tptacek

That is never, ever going to happen, and they are nothing at all like NBC weapons.

joshfraser

Yes. Except our government is the largest buyer.

Symbiote

The USA has 5,044 nuclear warheads, so that shouldn't be a problem.

Henchman21

Suddenly I felt like re-reading Ken Thompson’s essay Reflections on Trusting Trust.

We’ve created such a house of cards. I hope when it all comes crashing down that the species survives.

davisr

Instead of hoping, you can do a lot just by ditching your cell phone and using Debian stable.

Henchman21

Ah yes, switching from an iPhone to Debian is sure to… checks notes save the species from extinction.

Apologies for the dismissive snark; perhaps you could provide me some examples of how this would help?

westoque

reminds me of the Anthropic Claude jailbreak challenge, which only pays around $10,000. if you drive the price up, i'm pretty sure you'll get some takers. incentives are not aligned.


pentel-0_5

These are just the disclosed ones. The weaponized ones (as mentioned), found or bought and kept secret by the NSA etc., such as from Zerodium (ex-VUPEN) and similar, obviously aren't counted. ;)

tptacek

It's a "tell" in these discussions when they center exclusively on NSA, since there are dozens of agencies in the USG that traffick in exploit chains.

HypnoDrone

So there were 39 vulnerabilities that affected government systems. The rest didn't, so they had no need to disclose.

bangaladore

Similar, but my thought is that they found out some other gov(s) know about it as well. And that it hurts others more than it hurts the US gov.

maerF0x0

I'd say that's a very cynical take. There's a verification process, and they disclosed 90% of them; a pretty generous gift to the world, if you ask me. They do not have a moral mandate to use their resources to benefit all.

mattmaroon

"What the government didn't reveal is how many zero days it discovered in 2023 that it kept to exploit rather than disclose. Whatever that number, it likely will increase under the Trump administration, which has vowed to ramp up government hacking operations."

This is a bit of a prisoner's dilemma. The world would be better off if everyone disclosed every such exploit for obvious reasons. But if government A discloses everything and government B reserves them to exploit later, then government B has a strong advantage over government A.
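A toy payoff matrix makes the dilemma concrete (values invented; higher is better, first number is A's payoff):

                   B discloses   B hoards
    A discloses    (3, 3)        (1, 4)
    A hoards       (4, 1)        (2, 2)

Hoarding is the better move for each side no matter what the other does, so both end up at (2, 2) even though mutual disclosure at (3, 3) would leave everyone better off.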

The only responses then are war, diplomacy, or we do it too and create yet another mutually assured destruction scenario.

War is not going to happen because the cure would be worse than the disease. The major players are all nuclear powers. Diplomacy would be ideal if there were sufficient trust and buy-in, but it seems unlikely the U.S. and Russia could get there. And with nuclear treaties there's an easy verification method since nuclear weapons are big and hard to do on the sly. It'd be hard to come up with a sufficient verification regime here.

So we're left with mutually assured cyber destruction. I'd prefer we weren't, but I don't see the alternative.

dadrian

If Government A and Government B are not equally "good" for the world, then the world is _not_ better off if everyone disclosed, since the main users of CNE are LE/IC.

mattmaroon

I'm not sure what some of these initialisms are but the whole idea behind disclosing is to take tools away from the bad guys (whoever you think they are) because presumably they'll have found some of them too.

tptacek

CNE: the modern term of art for deploying exploits to accomplish real-world objectives ("computer network exploitation").

LE: law enforcement, a major buyer of CNE tooling.

IC: the intelligence community, the buyer of CNE tooling everyone thinks about first.


Veserv

Disclosing zero-days so the vendor can patch them and declare "mission accomplished" is such a waste.

"Penetrate and Patch" is about as effective for software security as it is for bulletproof vests. If you randomly select 10 bulletproof vests for testing, shoot each 10 times and get 10 holes each, you do not patch those holes and call it good. What you learned from your verification process is that the process that lead to that bulletproof vest is incapable of consistently delivering products that meet the requirements. Only development process changes that result in passing new verification tests give any confidence of adequacy.

Absent actively (or likely actively) exploited vulnerabilities, the government should organize vulnerabilities by "difficulty" and announce the presence of, but not disclose the precise nature of, those vulnerabilities, and demand process improvement until vulnerabilities of that "difficulty" are no longer present, as indicated by fixing all "known, but undisclosed" vulnerabilities of that "difficulty". Only that provides initial supporting evidence that the process has improved enough to categorically prevent vulnerabilities of that "difficulty". Anything less is just papering over defective products on the government's dime.

tptacek

"Penetrate and patch" is a term of art Marcus J. Ranum tried to popularize 15-20 years ago, as part of an effort to vilify independent security research. Ranum was part of an older iteration of software security that was driven by vendor-sponsored cliques. The status quo ante of "penetrate and patch" that he was subtextually supporting is not something that most HN people would be comfortable with.

Veserv

"Penetrate and Patch" as a failed security process is distinct from vilifying independent security research, and it should be obvious from my post as I point out that penetration testing is a integral part of the testing and verification process. It tells you if your process and design have failed, but it is not a good development process itself.

tptacek

Again and tediously: the only reason anyone would use that sequence of words would be to invoke Ranum's (in)famous "Six Dumbest Ideas In Computer Security" post, one of which was, in effect, "the entire modern science of software security".

I recommend, in the future, that if you want to pursue a security policy angle in discussions online with people, you avoid using that term.

JumpCrisscross

> the government should organize vulnerabilities by "difficulty" and announce the presence of, but not disclose the precise nature of, those vulnerabilities, and demand process improvement until vulnerabilities of that "difficulty" are no longer present, as indicated by fixing all "known, but undisclosed" vulnerabilities of that "difficulty"

For this amount of bureaucracy, the government should just hire all coders and write all software.

Veserv

You appear to misunderstand what I am saying (see the sketch after the list):

1) Government already has vulnerabilities.

2) Government identifies vulnerabilities it already owns by "difficulty to discover".

3) Government selects the lowest "difficulty to discover" vulnerabilities it already owns.

4) Government announces that products with known "lowest difficulty to discover" vulnerabilities are vulnerable, but does not disclose them.

5) Government keeps announcing that those products continue to be the most "insecure" until all vulnerabilities it already owns at that level are fixed.

6) Repeat.
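A toy model of that loop, with invented data structures, just to make the mechanics concrete:

    # hypothetical sketch; product names and difficulty scores are invented
    def announce_round(stockpile: dict[str, list[int]]) -> tuple[int, list[str]]:
        # steps 2-3: find the lowest "difficulty to discover" level still held
        easiest = min(d for ds in stockpile.values() for d in ds)
        # steps 4-5: name the products vulnerable at that level, disclosing nothing
        flagged = [p for p, ds in stockpile.items() if easiest in ds]
        return easiest, flagged

    stockpile = {"RouterOS-X": [1, 3], "PhoneOS-Y": [2]}
    print(announce_round(stockpile))  # (1, ['RouterOS-X'])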

JumpCrisscross

What you're suggesting requires creating a massive federal bureaucracy to continuously survey the product landscape. It then requires the private sector to duplicate that work. This is stupid.

staticelf

I think people give the US a lot of unnecessary shit. I don't think my government releases any zero days but I am sure they must have found some. Every government today probably uses zero days but it seems very few release information about them?


dylan604

It's not about being held to a lower standard, it's about being held to a higher standard.

egberts1

Simply because not enough anti-malware vendors are willing to let the US government know that part of its favorite hoard of malware has lost "its edge".

So, either they form a department of viability or they lose it all.

davemp

While I don’t think we should be hoarding vulns, the idea of the government having huge budgets to find and disclose software defects is a bit strange to me. Seems like another instance of socializing bad externalities.