Go Cryptography State of the Union
26 comments
November 20, 2025
Aman_Kalwar
This is a super helpful overview. Love how Go’s crypto ecosystem keeps getting more opinionated and safer.
alphazard
I don't know why the standard library crypto packages insist on passing around `[]byte` for things like a seed value, or why we can't just pass in a seed value to a single unambiguous constructor when generating asymmetric keys. Or how the constructor for a key pair could possibly return an error, when the algorithm is supposed to be deterministic.
It all just seems a bit sloppy. Asking for a seed value like `[32]byte` could at least communicate to me that the level of security is at most 256 bits. And removing all dependencies on rand would make it obvious where the entropy must be coming from (the seed parameter). Cloudflare's CIRCL[0] library does a bit better, but shares some of the same problems.
FiloSottile
> I don't know why the standard library crypto packages insist on passing around `[]byte` for things like a seed value
These are actually very deliberate choices, based on maybe unintuitive experience.
We use []byte instead of e.g. [32]byte because generally you start with a []byte that's coming from somewhere: the network, a file format, a KDF.
Then you have two options to get a [32]byte: cast or copy. They both have bad failure modes. If you do a ([32]byte)(foo) cast, you risk a panic if the file/packet/whatever is not the size you expected (e.g. because it's actually attacker controlled). If you do a copy(seed, foo) it's WAY WORSE, because you risk copying only 5 bytes and leaving the rest to zero and not noticing.
Instead, we decided to move the length check into the library everywhere we take bytes, so at worst you get an error, which presumably you know how to handle.
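For concreteness, a quick sketch of the three paths (`newKey` here is a made-up stand-in, not a real constructor):

```go
package main

import (
	"errors"
	"fmt"
)

// newKey stands in for a library constructor that validates its input
// and returns an error instead of panicking or silently truncating.
func newKey(seed []byte) (*[32]byte, error) {
	if len(seed) != 32 {
		return nil, errors.New("invalid seed length")
	}
	k := [32]byte(seed) // safe: length already checked
	return &k, nil
}

func main() {
	foo := []byte("hello") // attacker-controlled input of the wrong length

	// Cast: panics at runtime when the length is wrong.
	// seed := [32]byte(foo) // panic: cannot convert slice with length 5 to array with length 32

	// Copy: silently fills 5 bytes and leaves the other 27 as zero.
	var seed [32]byte
	copy(seed[:], foo)
	fmt.Printf("silently truncated: %x\n", seed)

	// Length check inside the library: at worst you get an error to handle.
	if _, err := newKey(foo); err != nil {
		fmt.Println("handled:", err)
	}
}
```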
> why we can't just pass in a seed value to a single unambiguous constructor when generating asymmetric keys
I am not sure what you are referring to here. For e.g. ML-KEM, you pass the seed to NewDecapsulationKey768 and you get an opaque *DecapsulationKey768 to pass around. We've been moving everything we can to that.
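Roughly, with Go 1.24's crypto/mlkem, the flow looks like this (a sketch; here the 64-byte seed just comes from crypto/rand):

```go
package main

import (
	"bytes"
	"crypto/mlkem"
	"crypto/rand"
	"fmt"
	"log"
)

func main() {
	// The seed is the whole private key: 64 bytes from crypto/rand
	// (or from wherever your key storage hands you bytes).
	seed := make([]byte, mlkem.SeedSize)
	rand.Read(seed)

	// The constructor checks the length and returns an opaque key.
	dk, err := mlkem.NewDecapsulationKey768(seed)
	if err != nil {
		log.Fatal(err)
	}

	// A peer encapsulates to the public encapsulation key...
	sharedKey, ciphertext := dk.EncapsulationKey().Encapsulate()

	// ...and the holder of the seed recovers the same shared secret.
	got, err := dk.Decapsulate(ciphertext)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(bytes.Equal(sharedKey, got)) // true
}
```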
> Or how the constructor for a key pair could possibly return an error, when the algorithm is supposed to be deterministic.
Depends. If it takes a []byte, we want to return an error to force handling of incorrect lengths. If the key is not a seed (which is only an option for private keys), it can also be invalid, deterministic or not. (This is why I like seeds. https://words.filippo.io/ml-kem-seeds/)
> removing all dependencies on rand would make it obvious where the entropy must be coming from (the seed parameter)
Another place where experience taught us otherwise. Algorithms that take a well-specified seed should indeed just take that (like NewDecapsulationKey768 does!), but where the spec annoyingly takes "randomness from the sky" (https://words.filippo.io/avoid-the-randomness-from-the-sky/) in an unspecified way, taking a io.Reader gave folks the wrong impression that they could use that for deterministic key generation, which then breaks as soon as we change the internals.
There is only one place to get entropy from in a Go program, anyway: crypto/rand. Anything else is a testing need, and it can be handled with test affordances like the upcoming crypto/mlkem/mlkemtest or testing/cryptotest.SetGlobalRandom.
tialaramex
It's been many years since I wrote any Go for a living, but does Go seriously lack a way to say "foo is probably 32 bytes, give me the 32 byte array, and if I'm wrong about how big foo is, let me handle that"?
If the caller was expected to provide a duration and your language has a duration type, you presumably wouldn't take a string, parse that and if it isn't a duration return some not-a-duration error, you'd just make the parameter a duration. It seems like this ought to be a similar situation.
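Something like this is what I'd expect to be able to write, i.e. an explicit check that hands the error back to me (a sketch, `toArray32` is a hypothetical helper, not an existing stdlib function):

```go
package main

import "fmt"

// toArray32 is a hypothetical helper: give me the 32-byte array,
// or let me handle the fact that foo wasn't what I thought it was.
func toArray32(b []byte) ([32]byte, error) {
	if len(b) != 32 {
		return [32]byte{}, fmt.Errorf("expected 32 bytes, got %d", len(b))
	}
	return [32]byte(b), nil // the bare conversion would panic on a bad length
}

func main() {
	foo := []byte("too short")
	if seed, err := toArray32(foo); err != nil {
		fmt.Println("my problem to handle:", err)
	} else {
		_ = seed
	}
}
```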
alphazard
> If you do a ([32]byte)(foo) cast, you risk a panic if the file/packet/whatever is not the size you expected (e.g. because it's actually attacker controlled)
Can you give an example of a situation where that is actually a concern? It doesn't really seem like a realistic threat model to me. Knowledge of the key is pretty much how these algorithms define attackers vs. defenders. If the attacker has the key that's gg.
There are lots of things in Go that can panic. Syntactically, the conversion is very similar to an interface conversion, and those haven't been a problem for me in practice, partly because of good lint rules that force checking the "okay" boolean.
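For example, the comma-ok form that lint rules can enforce (a trivial sketch):

```go
package main

import (
	"fmt"
	"io"
	"strings"
)

func main() {
	var x any = strings.NewReader("hi")

	// Comma-ok assertion: no panic, and linters can require
	// that the boolean actually gets checked.
	r, ok := x.(io.Reader)
	if !ok {
		fmt.Println("not a reader")
		return
	}
	io.Copy(io.Discard, r)

	// Single-value form: panics if the assertion fails.
	// w := x.(io.Writer) // panic: interface conversion
}
```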
FiloSottile
A cloud service that lets users upload their certificates and private keys, to be served by the service's CDN. Here the attacker is attacking the system's availability, not the key.
(But also, it's easy to see how this is a problem for public keys and ciphertexts, and it would be weird to have an inconsistent API for private keys.)
stouset
Those aren’t arguments for having []byte instead of [32]byte like you think they are. They’re arguments for having an unambiguous IV type that can be constructed from a []byte or [32]byte, or responsibly generated on your behalf.
Of course, this isn’t really reasonable given golang’s brain-dead approach to zero values (making it functionally impossible to structurally prevent using zero IVs). But it just serves as yet another reminder that golang’s long history of questionable design choices actively impedes the ability to design safe, hard-to-misuse APIs.
FiloSottile
I'm not sure what you are referring to, but we were talking about keys, not IVs.
Also, "an unambiguous key type that can be constructed from a []byte or responsibly generated on your behalf" is exactly what crypto/mlkem exposes.
kalterdev
What’s wrong with zero values? They free the developer from having to guess about hidden allocations. IMO this benefit outweighs cast riddles by orders of magnitude.
edoceo
I'm curious about how GC languages handle crypto. Is it a risk that decrypted stuff or keys and things may be left in memory (heap?) before the next GC cycle?
FiloSottile
You might find this proposal and the upcoming runtime/secret package interesting.
OhMeadhbh
What we did with Java (J/SAFE) was to add explicit methods to zero out sensitive info. It was a bit of a PITA because Java's never had consistent semantics about when final(ize,ly) methods were called. Later we added code to track which objects were allocated, but no longer needed, which also wasn't much fun.
Back in the Oak days Sun asked us (I was at RSADSI at the time) to review the language spec for security implications. Our big request was to add a "secure" storage specifier for data. The idea being that a variable, const, whatever that was marked "secure" would be guaranteed not to be swapped out to disk (or one of a number of other system-specific behaviors). But it was hard to find a concrete behavior that would work for all platforms they were targeting (mostly smaller systems at the time).
My coworker Bob Baldwin had an existing relationship with Bill Joy and James Gosling (I'm assuming as part of the MIT mafia) so he led the meetings. Joy's response (or maybe Gosling's, can't remember anymore) was "Language extension requests should be made on a kidney. Preferably a human kidney. Preferably yours. That way you'll think long and hard about it and you sure as hell won't submit 2."
alphazard
It can be. Another risk is that a secret value is left on the stack and never gets overwritten or zeroed, because execution never reaches that memory address again.
Go really just needs a few `crypto.Secret` types of various sizes, or maybe a generic type that could wrap arrays. Then the runtime can handle all the best practices, like keeping the value in a single place in memory and aggressively zeroing any copies, etc.
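Very roughly something like this (a hypothetical sketch of the shape; no real `crypto.Secret` exists, and a runtime-backed version would have to do much more):

```go
package main

import (
	"crypto/rand"
	"fmt"
)

// Secret is a hypothetical generic wrapper for fixed-size key material,
// e.g. Secret[[32]byte]. A real runtime-backed version would also keep
// the value in a single place and zero any copies the runtime makes;
// this sketch only provides scoped access plus explicit zeroing.
type Secret[T any] struct {
	value T
}

// With gives the callback scoped access to the underlying value.
func (s *Secret[T]) With(f func(*T)) { f(&s.value) }

// Zero overwrites the value once it is no longer needed.
func (s *Secret[T]) Zero() { s.value = *new(T) }

func main() {
	var key Secret[[32]byte]
	key.With(func(v *[32]byte) { rand.Read(v[:]) })
	defer key.Zero()

	key.With(func(v *[32]byte) {
		fmt.Printf("using a %d-byte secret\n", len(v))
	})
}
```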
FiloSottile
It's not that simple! What about intermediate values during arithmetic computations? What about registers spilling during signal handling?
I honestly thought it could not be done safely, but the runtime/secret proposal discussion proved me wrong.
Thaxll
If you have access to the local machine, no language will save you.
OhMeadhbh
Sure. But there are several gradations of threat between "zero access" and "complete access." On the intarwebs, every request is from a potential attacker. Attackers are known for violating RFC3514, so it is frequently useful to build a trust model and use existing access control mechanisms to deny "sensitive" data (or control functions) to protocol participants who cannot prove their identity and/or access permission.
These models can get complex quickly, but they are nevertheless important when evaluating a system's specified behaviour.
No system is perfect and your mileage may vary.
edoceo
To oversimplify, it's like the same-ish risk level as JS or PHP or Ruby? (assuming the underlying algorithm is good)
OhMeadhbh
I'm more of a C person than a Go person, but I am unbelievably happy that someone in that community is using the word "cryptography" to mean cryptography and not Bitcoin.
jsheard
Wasn't it just the shorthand "crypto" that got co-opted by the Shitcoin Industrial Complex? I think "cryptography" still means what it always meant regardless of who you ask.
OhMeadhbh
That's mostly the case, but I've seen job postings for "cryptography experts" that are, as best I can tell, looking for block chain hucksters. But I'm unlikely to work for Microsoft, so I just ignore them.
jsheard
Well yeah, someone hired to work with the low-level nuts and bolts of blockchains would ideally need to know their way around actual bona-fide cryptography.
OhMeadhbh
Downvoted for mentioning that people confuse cryptography with Bitcoin? Good thing I didn't mention I think we're in an AI bubble. Or that I prefer emacs to vi.
kunley
Let's reverse the self-scrutiny trend and actually enjoy a fellow commenter mentioning his downvote. At least I am enjoying it; your perception is valid.
I agree with the author’s sentiment about FIPS 140. I find NIST to be incredibly slow. I understand there must be some stability, but they are too slow. For example, I think it's horrible that they are still recommending PBKDF2 in 2025.