
Hell is overconfident developers writing encryption code

umvi

What does "don't build your own crypto" even mean any more?

I originally thought it meant "don't implement AES/RSA/etc algorithms yourself"

But now it seems to mean "pay auth0 for your sign in solution or else you'll definitely mess something up"

As an example, we have a signing server. Upload a binary blob, get a signed blob back. Some blobs were huge (multiple GB), so the "upload" step was taking forever for some people. I wrote a new signing server that just requires you to pass a hash digest to the server, and you get the signature block back which you can append to the blob. The end result is identical (i.e. if you signed the same blob with both services the result would be indistinguishable). I used openssl for basically everything. Did I roll my own crypto? What should I have done instead?
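
The digest-based flow described above can be sketched with the pyca/cryptography package (assumed available). The RSA/PKCS#1 v1.5 choice, key size, and names here are illustrative, not necessarily what the poster's service uses:

```python
# Hypothetical sketch: client hashes a huge blob locally, server signs
# only the digest, and anyone can verify the signature against the blob.
import hashlib

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa, utils

server_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

# Client side: hash the (potentially multi-GB) blob incrementally.
blob = b"firmware-image" * 100_000
digest = hashlib.sha256(blob).digest()

# Server side: sign the received digest without ever seeing the blob.
signature = server_key.sign(
    digest, padding.PKCS1v15(), utils.Prehashed(hashes.SHA256())
)

# Verifier side: check the signature against the full blob.
server_key.public_key().verify(
    signature, blob, padding.PKCS1v15(), hashes.SHA256()
)  # raises InvalidSignature on mismatch
```

Because the signature is over the digest, it verifies identically to a signature computed over the whole blob, which is what makes the two services interchangeable.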

pclmulqdq

It used to mean "use AES instead of rolling your own form of symmetric encryption." Then it became "use a library for AES instead of writing the code." It has now reached the obvious conclusion of "don't do anything vaguely security-related at all unless you are an expert."

tptacek

No it hasn't. The subtext of "don't roll your own crypto" is that "AES" doesn't do, by itself, what most developers think it does; there's a whole literature of how to make AES do anything but transform 16 unstructured bytes into 16 different bytes indistinguishable from random noise; anything past that, including getting AES to encrypt a string, is literally outside of AES's scope.

The shorter way of saying this is that you should not use libraries that expose "AES", but instead things like Sodium that expose "boxes" --- and, if you need to do things that Sodium doesn't directly expose, you need an expert.

Contra what other people on this thread have suggested, reasonable security engineers do not in fact believe that ordinary developers aren't qualified to build their own password forms or permissions systems; that's a straw man argument.

stouset

It's complicated, no?

Reasonable developers are qualified to do those things. But to build a full-featured authentication subsystem for their webapp? If it's something that holds any kind of reasonably private info, I'm not so sure.

Sure, a reasonable developer will use some sort of PBKDF to hash passwords. But when users need a password reset over email, will they know not to store unhashed reset tokens directly in the database? Will they know to invalidate existing sessions when the user's password is reset? Will they reset a browser-stored session ID at login, preventing fixation? And on and on and on. The answer to some of these questions will be yes, but most developers will have a few for which the answer is no. Hell, I've probably built more auth systems than most (and have reported/fixed a few vulnerabilities on well-known open-source auth systems to boot) and I'm honestly not sure I'd trust myself to do it 100% correctly for a system that really mattered.

Even outside of "holding the crypto wrong", these things have sharp edges and the more you offload to an existing, well-vetted library the more likely you are to succeed.

pclmulqdq

Reasonable security engineers take a reasonable position on this. Many other developers (usually uninformed ones) believe this now means "don't make an auth system." Like it or not, this has become a sort of adage that people cargo-cult.

tialaramex

There's even bogus "But it's encrypted" crypto in the new software I'm working with at my day job this year, 2025. There are so many other "must fix" user-visible problems that I actually forgot to mention "Also, the cryptography is nonsense; either fix it or remove it" at the one-hour review meeting last week, but I should add that.

I made one of these goofs myself when I was much younger (32-byte tokens, so an attacker can snap them into two 16-byte values and replace either half with the corresponding half from another token, and that "works": the attacker can take "Becky is an Admin" and "Jimmy is a Customer", glue "Jimm" to "y is an Admin", and make "Jimmy is an Admin"). It got caught by someone more experienced before it shipped to end users, but yeah, don't do that.
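
A minimal demonstration of that cut-and-paste property, assuming the tokens were two independently encrypted 16-byte blocks (i.e. ECB-style); the field names are made up, and the pyca/cryptography package is assumed available:

```python
# Each 16-byte block encrypts independently under ECB, so blocks from
# different tokens can be spliced together into a valid forged token.
import os

from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key = os.urandom(16)

def encrypt_token(fields: bytes) -> bytes:
    enc = Cipher(algorithms.AES(key), modes.ECB()).encryptor()
    return enc.update(fields) + enc.finalize()

def decrypt_token(token: bytes) -> bytes:
    dec = Cipher(algorithms.AES(key), modes.ECB()).decryptor()
    return dec.update(token) + dec.finalize()

admin = encrypt_token(b"user=becky".ljust(16) + b"role=admin".ljust(16))
cust = encrypt_token(b"user=jimmy".ljust(16) + b"role=customer".ljust(16))

# Attacker splices Jimmy's first block onto Becky's second block.
forged = cust[:16] + admin[16:]
assert decrypt_token(forged) == b"user=jimmy".ljust(16) + b"role=admin".ljust(16)
```

An authenticated mode (or any MAC over the whole token) would make the forged token fail verification instead of decrypting cleanly.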

ozim

Unfortunately not really.

The number of foot guns in an auth flow is high. Implementing the login/password form is just a small piece.

Making sure there are no account enumeration possibilities is hard. Making sure 2FA flow is correct and cannot be bypassed is hard. Making proper account recovery flow has its own foot guns.

If you can use an off-the-shelf solution where someone already knows about all of those, then "don't roll your own" still stands.

brianstrimp

And the consequence is that people get banged on the head whether they use something existing (because they will be using it wrong) or build something on their own (because that's obviously bad), or they get fed up and don't use anything.

The issue with security researchers, as much as I admire them, is that their main focus is on breaking things and then berating people for having done it wrong. Great, but what should they have done instead? Decided which of the 10 existing solutions is the correct one, with 9 being obvious crap if you ask any security researcher? How should the user know? And typically none of the existing solutions matched the use case exactly. Now what?

It's so easy to criticize people left and right. Often justifiably so. But people need to get their shit done and then move on. Isn't that understandable as well?

soatok

> The issue with security researchers, as much as I admire them, is that their main focus is on breaking things and then berating people for having done it wrong.

This is plain incorrect in my experience.

Recommended reading (addresses the motivations and ethics of security research): https://soatok.blog/2025/01/21/too-many-people-dont-value-th...

> Great, but what should they have done instead? Decided which of the 10 existing solutions is the correct one, with 9 being obvious crap if you ask any security researcher?

There's 10 existing solutions? What is your exact problem space, then?

I've literally blogged about tool recommendations before: https://soatok.blog/2024/11/15/what-to-use-instead-of-pgp/

I'm also working in all of my spare time on designing a solution to one of the hard problems with cryptographic tooling, as I alluded to in the blog post.

https://soatok.blog/2024/06/06/towards-federated-key-transpa...

Is this not enough of an answer for you?

> How should the user know? And typically none of the existing solutions matched the use case exactly. Now what?

First, describe your use case in as much detail as possible. The closer you can get to the platonic ideal of a system architecture doc with a formal threat model, the better, but even a list of user stories helps.

Then, talk to a cryptography expert.

We don't keep the list of experts close to our chest: Any IACR-affiliated conference hosts several of them. We talk to each other! If we're not familiar with your specific technology, there's bound to be someone who is.

This isn't presently a problem you can just ask a search engine or generative AI model and get the correct and secure answer for your exact use case 100% of the time with no human involvement.

Finding a trusted expert in this field is pretty easy, and most cryptography experts are humble enough to admit when something is out of their depth.

And if you're out of better options, this sort of high-level guidance is something I do offer in a timeboxed setting (up to one hour) for a flat rate: https://soatok.com/critiques

harrall

I feel like there is a huge subset of people who just do trial and error programming. The other day, I watched a whole team of 8 people spend a whole work day swapping out random components on Zoom to diagnose a problem because not a single person considered attaching a debugger to see the exact error in 5 minutes.

I feel like you have to tell people to not roll your own whatever because there are so many of these types of people.

skydhash

I was so surprised the first time I saw this. I mostly work alone, and my workflow is usually: study an example (docs or a project), collect information (books and articles), and iteratively build a prototype while learning the domain. Then I saw a colleague literally copying and pasting stuff, hoping that the errors would go away in his project. After he asked for my help, I asked him to describe how he planned to solve the problem, and he couldn't. For him, it was just endless tweaking until he got something working. And he could have understood the (smallish) problem space by watching one or two videos on YouTube.

sgarland

Those same people are utterly incapable of reading logs. I’ve had devs send me error messages that say precisely what the problem is, and yet they’re asking me what to do.

The form of this that bothers me the most is in infra (the space I work in). K8s is challenging when things go sideways, because it’s a lot of abstractions. It’s far more difficult when you don’t understand how the components underpinning it work, or even basic Linux administration. So there are now a ton of bullshit AI products that are just shipping return codes and error logs out to OpenAI, and sending it back rephrased, with emoji. I know this is gatekeeping, and I do not care: if you can’t run a K8s cluster without an AI tool, you are not qualified to run a K8s cluster. I’m not saying don’t try it; quite the opposite: try it on your own, without AI help, and learn by reading docs and making mistakes (ideally not in prod).

smolder

The struggle was real in the early web with getting companies to do proper password storage and authentication, but the fact that seasoned professionals turn to Auth0 or Okta (and have been bitten by this reliance!) nowadays strikes me as a little embarrassing.

switch007

Let's be fair though: Auth0 does a little more than just password authentication

Registration, 2FA, reset, email verification, federation, password rules, brute force protection, RBAC/ABAC etc

(I'm no fan of Auth0 fwiw)

tptacek

It never meant "don't implement AES/RSA yourself". Nobody sane does that; there has never been a need to convince people not to write their own implementations of AES. Ironically, doing your own AES is one of the less-scary freelancing projects you can undertake. Don't do it, but the vulnerabilities you'll introduce are lower-tier than the ones you get using OpenSSL's AES.

It has always meant "don't try to compose new systems using things like AES and RSA as your primitives". The serious vulnerabilities in cryptosystems are, and always have been, in the joinery (you can probably search the word "joinery" on HN to get a bunch of different fun crypto vulnerabilities here I've commented on).

Yes: in the example provided, you rolled your own cryptography. The track record of people building cryptosystems on top of basic signature constructions is awful.

hyperman1

I assume you've never seen AES written in VBScript? Generally, the thought process goes: this thing needs AES, I want to talk to it, I know $language, and Wikipedia has the algorithm. The idea that AES is there for a reason (like security) never enters the thought process.

Someone is building a shed, not a whole building, and stopped listening to real builders with their nitpicky rules long ago. It works great, until the shed has grown into a skyscraper without anybody noticing, and an unexpected and painful lesson about 'load bearing' appears.

tptacek

I ran the Cryptopals challenges. I have been sent AES implemented in 4 different assemblies, Julia before it was launched, pure Excel spreadsheet formulae, and a Postscript file.

loeg

I want to roll my own variant of AES (I know, I know!) CTR mode for appendable (append-only) files that does not require a key change or reencrypting any full AES block. Big caveat, this design doesn't have a MAC, with all the associated problems (it's competing against alternatives like AES-XTS, not authenticated modes).

Partial blocks are represented by stealing 4 bits of counter space to represent the length mod block size. This restricts us to 2^28 blocks or about 4GB, but that's an ok limitation for this use.

So say you initially write a file of 33 bytes: two full blocks A and B, and a partial block C. A and B get counter values 0 (len) || 0 (ctr) and 0 (len) || 1 (ctr). C is encrypted by XORing the plaintext with AES(k, IV || 1 (len) || 2 (ctr)).

You can append a byte to get a length of 34 bytes. Encrypted A/B don't change. C_2 is encrypted by XORing plaintext_2 with AES(k, IV || 2 (len) || 2 (ctr)). Since the output of AES on different inputs is essentially a PRF, this seems... ok?

Finally, if you append enough bytes to fill C, it gets to len=0 mod 16. So the long and short of it is: no partial or full block will ever reuse the same k+iv+len+ctr, even when rewriting it for an append.
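
The counter bookkeeping above can be sketched as runnable code (assumptions: a 96-bit IV, a 4-bit "length mod 16" field, a 28-bit block counter; no MAC, as the commenter notes). This is an illustration of the scheme as described, not a vetted design; it uses the pyca/cryptography package:

```python
import os

from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

BLOCK = 16

def counter_block(iv: bytes, total_len: int, index: int) -> bytes:
    # Only the final (possibly partial) block carries len % 16; interior
    # blocks carry 0, so their counter input never changes on append.
    is_last = index == (total_len - 1) // BLOCK
    len_field = total_len % BLOCK if is_last else 0
    tail = (len_field << 28) | index  # 4-bit len field, 28-bit counter
    return iv + tail.to_bytes(4, "big")

def encrypt(key: bytes, iv: bytes, plaintext: bytes) -> bytes:
    out = bytearray()
    for i in range(0, len(plaintext), BLOCK):
        enc = Cipher(algorithms.AES(key), modes.ECB()).encryptor()
        keystream = enc.update(counter_block(iv, len(plaintext), i // BLOCK))
        out += bytes(p ^ k for p, k in zip(plaintext[i:i + BLOCK], keystream))
    return bytes(out)

key, iv = os.urandom(16), os.urandom(12)
pt33 = os.urandom(33)                  # two full blocks A, B + partial block C
ct33 = encrypt(key, iv, pt33)
ct34 = encrypt(key, iv, pt33 + b"Z")   # append one byte
assert ct33[:32] == ct34[:32]          # A and B are untouched by the append
assert counter_block(iv, 33, 2) != counter_block(iv, 34, 2)  # fresh pad for C
```

As the thread discusses, without a MAC an attacker can still flip plaintext bits at will; the sketch only shows that no keystream block is reused across an append.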

some_furry

> I want to roll my own variant of AES (I know, I know!) CTR mode for appendable (append-only) files that does not require a key change or reencrypting any full AES block.

https://xyproblem.info

Why, exactly, do you want to do that at all?

umvi

So... you've pointed out the problems, what are the solutions?

What am I allowed to use as primitives to compose systems that require cryptographic functionality? If I'm writing medical device software and the hospitals I'm selling to say I can't store files in plaintext on disk, but also some security expert on HN says I shouldn't use AES as a piece of the solution because that's "rolling my own crypto" and too dangerous, what should I do? Mandate that postgres configured with encryption be used (even if the application is simple and doesn't require a full db)? That will almost certainly harm the prospect of the sale because having to get hospital IT involved introduces a lot of friction. Or are you saying "use sodium to encrypt the file, don't try to do it yourself with AES"?

tptacek

There's a subtext here of "what do I do when the high-level libraries like Sodium don't do exactly what I need", and the frank answer to that is: "well, you're in trouble, because consulting expertise for this work is extraordinarily expensive".

We have an SCW episode coming out† next week about cryptographic remote filesystems (think: Dropbox, but E2E) with a research team that reviewed 6 different projects, several of them with substantial resources. The vulnerabilities were pretty surprising, and in some cases pretty dire. Complaining that it's hard to solve these problems is a little like being irritated that brain surgery is expensive. I mean, it's good to want things.

https://securitycryptographywhatever.com/

some_furry

> What am I allowed to use as primitives to compose systems that require cryptographic functionality?

The problem isn't the primitives, it's the act of a custom composition.

The problem isn't whether AES is used. The problem is whether you're writing code that interfaces at the level of 128-bit blocks.

Want a canned solution for generic problems?

https://soatok.blog/2024/11/15/what-to-use-instead-of-pgp/

Want a specific solution for your specific use case? Talk to an expert to guide you through the design and/or implementation of a tool for solving your specific problem. But don't expect everyone writing for a general audience (read: Hacker News comments) to roll out a bespoke solution for your specific use case.

EE84M3i

Now it's possible for someone to ask the server to sign a blob that they only know the hash of. Is that an issue in your threat model? No idea.

magicalhippo

I feel threat modelling is the really difficult part in gluing together known-good crypto parts into a solution.

I've glued together crypto library calls a few times, and I've implemented RFCs when I've done so, like HKDF[1].

But that isn't enough if the solution I've chosen can easily be thwarted by some aspect I didn't even consider. No point in having a secure door lock if there's an open window in the back.

[1]: https://www.rfc-editor.org/rfc/rfc5869
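
For what it's worth, HKDF is one piece that usually does not need to be hand-implemented when gluing library calls together: the pyca/cryptography package (assumed available; the info string below is made up) ships an RFC 5869 implementation.

```python
# Deriving a 256-bit key from input keying material with RFC 5869 HKDF.
import os

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

ikm = os.urandom(32)   # input keying material, e.g. a DH shared secret
salt = os.urandom(16)

enc_key = HKDF(
    algorithm=hashes.SHA256(),
    length=32,
    salt=salt,
    info=b"myapp v1 encryption key",  # domain separation per derived key
).derive(ikm)
assert len(enc_key) == 32
```

Of course, this is exactly the parent's point: a correct HKDF call says nothing about whether the surrounding design is sound.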

hoilmic

> What should I have done instead?

Security has largely to do with trust.

When asked who I trust most in this space, the answer is always libsodium.

I leave as much of the protocol as possible to their implementation.

https://doc.libsodium.org/

unscaled

The sad truth is that even libsodium can be misused (and the article explicitly mentions that twice). Even pretty high-level constructions like libsodium's crypto_secretbox can be misused, and most of the constructs that libsodium exposes are quite low-level: crypto_sign_detached, crypto_kdf_hkdf, crypto_kx, crypto_generichash.

I think we should understand and teach "do not roll your own crypto" in the context of what constructs are used for what purposes.

AES is proven to be secure and "military-grade" if you need to encrypt a block of data which is EXACTLY 128 bits (16 bytes) in size. If you want to encrypt more (or less) data with the same key, or make sure that the data is not tampered with, or even just trust encrypted data that passes through an insecure channel, then all bets are off. AES is not designed to help you with that; it's just a box that does a one-off encryption of 128 bits. If you want to do anything beyond that, you go into the complex realm of chaining (or streaming) modes, padding, MACs, nonces, HKDFs and other complex things. The moment you need to combine more than two things (like AES, CBC, PKCS#7 padding and HMAC-SHA256) in order to achieve a single purpose (encrypting a message which cannot be tampered with), you've rolled your own crypto.
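
That scope limit is easy to see in code. A sketch with the pyca/cryptography package (assumed available); ECB here is just the bare block permutation, not a recommendation:

```python
# A raw AES call maps exactly 16 bytes to 16 bytes: no lengths, no
# integrity, no nonces. Everything beyond that is a mode/protocol choice.
import os

from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key = os.urandom(16)
cipher = Cipher(algorithms.AES(key), modes.ECB())  # the bare block permutation

block = b"exactly 16 bytes"               # one AES block
ct = cipher.encryptor().update(block)
assert len(ct) == 16                      # 16 bytes in, 16 bytes out
assert cipher.decryptor().update(ct) == block
```

Longer messages, tamper detection, and nonce management are all outside this box, which is where "rolling your own crypto" begins.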

Libsodium's crypto_secretbox is safe for encrypting small or medium-size messages that will be decrypted by a trusted party you share a securely-generated and securely-managed key with. It's far more useful than plain AES, but it will not make every use case that involves "encryption" safe. If you've decided to generate a deterministic nonce for AES, or derive a deterministic key (the article links a good example[2]), then libsodium will not protect you. If you attempt to encrypt large files with crypto_secretbox by breaking them into chunks, you will probably introduce some vulnerabilities. If you try to build an entire encrypted communication protocol based on crypto_secretbox, then you are also likely to fail.

The best guideline non-experts can follow is the series of Cryptographic Right Answers guides (the latest one, I believe, is Latacora 2018[1]). If you have a need that is addressed there and you can use the answer given without combining it with something else, you're probably safe.

Notice that the answer for Diffie-Hellman is "Probably nothing", since you're highly unlikely to be able to use key exchange safely in isolation. I did roll key exchange combined with encryption once, and I still regret it (I can't prove that thing is safe).

The "You care about this" part explains the purpose of each class of cryptographic constructs that has a recommendation. For instance, for asymmetric encryption it says "You care about this if: you need to encrypt the same kind of message to many different people, some of them strangers, and they need to be able to accept the message asynchronously, like it was store-and-forward email, and then decrypt it offline. It’s a pretty narrow use case."

So this tells you that you probably should not be using libsodium's crypto_box for sending end-to-end encrypted messages in your messaging app.

[1] https://www.latacora.com/blog/2018/04/03/cryptographic-right...

[2] https://www.cryptofails.com/post/75204435608/write-crypto-co...

Ekaros

With a signing server, the level of interaction is the thing that really matters. I would probably say that if you implemented the algorithm doing the signing operation yourself, it might be questionable. But if you implemented the key storage fully yourself, it is even more questionable. Or designed your own HSM module, or just wrote the keys in plaintext or minimally protected on the disk...

Also, who designed and implemented the check of the appended blob? That's crypto too...

philjohn

There's a load of places where you can shoot yourself in the foot, e.g., not using constant-time comparison functions, which leaves you open to timing attacks.
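
This particular foot gun has a standard-library remedy in Python: compare MACs and tokens with hmac.compare_digest rather than ==, since a naive byte comparison can return early at the first mismatching byte and leak timing information. A small sketch (the message contents are made up):

```python
# Verifying an HMAC tag: the leaky way and the constant-time way.
import hashlib
import hmac
import os

key = os.urandom(32)

def sign(msg: bytes) -> bytes:
    return hmac.new(key, msg, hashlib.sha256).digest()

def verify_leaky(msg: bytes, tag: bytes) -> bool:
    return sign(msg) == tag                      # timing may depend on where tags differ

def verify_safe(msg: bytes, tag: bytes) -> bool:
    return hmac.compare_digest(sign(msg), tag)   # constant-time comparison

tag = sign(b"amount=10;to=alice")
assert verify_safe(b"amount=10;to=alice", tag)
assert not verify_safe(b"amount=9999;to=mallory", tag)
```

Most languages have an equivalent (e.g. a dedicated constant-time comparison in their crypto library); the point is to never reach for plain equality on secret-derived bytes.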

nazgul17

Gluing together secure pieces too often does not result in a secure whole. People spend careers coming up with secure protocols built out of secure pieces.

My take on not rolling your own crypto is to apply as close to zero cleverness as possible when it comes to crypto: take ready-made boxes, use them as instructed, and assume that anything clever I try to build with them is likely not secure, in some way or another.

ozim

Building crypto systems is also hard.

You write about passing a hash digest.

It makes me think about the pass-the-hash vuln in Windows NTLM, where if someone grabs the hash they don't need to know the password anymore, because they can pass the hash.

In the same way, you can still use AES or RSA incorrectly, so that whatever you built on top of them is vulnerable if you don't have experience.

OK, TFA explains all this; I have the same view on the topic as the author.

tptacek

This is a relatively long post that is kind of beating around the bush: these developers believed that OpenSSL was a trustworthy substrate on which to build a custom cryptosystem. It's not; this is why libraries like Tink and Sodium exist. They don't really need much more lecturing than "stop trying to build on OpenSSL's APIs."

arthurcolle

tptacek, I don't want to waste your time, but do you have any good recommendations for material that bridges the gap between modern deployed cryptosystems and the current SOTA quantum computing world, in enough detail to be useful for a practicing engineer preparing for the next 10 years?

tptacek

Nope. It's at times like these I'm glad I've never claimed I was competent to design cryptosystems. I'm a pentester that happens to be able to read ~30% of published crypto attack papers. My advice is: ask Deirdre Connolly.

My standard answer on PQC and the quantum threat is: "rodents of unusual size? I don't think they exist."

pclmulqdq

I have become a bit of a cryptographer (after running a cryptography-related company for a while), and aside from joke thought experiments, I am one of the most conservative cryptographic programmers I know.

I'm personally pretty skeptical that the first round of PQC algorithms have no classically-exploitable holes, and I have seen no evidence as of yet that anyone is close to developing a computer of any kind (quantum or classical) capable of breaking 16k RSA or ECC on P-521. The problem I personally have is that the lattice-based algorithms are a hair too mathematically clever for my taste.

The standard line is around store-now-decrypt-later, though, and I think it's a legitimate one if you have information that will need to be secret in 10-20 years. People rarely have that kind of information, though.

Dibby053

The post links to this GitHub issue [1] where the critic explains his issues with the design and the programmer asks him to elaborate on how those crypto issues apply to his implementation. The critic's reply does not convince me. It doesn't address any points, and refers to some vague idea about "boring cryptography". In what way is AWS secrets manager or Hashicorp Vault more "obviously secure" than the author's 72-line javascript file?

[1] https://github.com/gristlabs/secrets.js/issues/2

amluto

The criticism in that issue is pretty bad, I agree. But the crypto in secrets.js is all kinds of bad:

The use case is someone calling this tool to decrypt data received over an unauthenticated channel [0], and the author doesn’t seem to get that. The private key will be used differently depending on whether the untrusted ciphertext starts with '$'. This isn’t quite JWT’s "alg: none" issue, but still: never let a message tell you how to authenticate or decrypt it. That’s the key’s job.

This whole mess does not authenticate. It should. Depending on the use case, this could be catastrophic. And the padding oracle attack may well be real if an attacker can convince the user to try to decrypt a few different messages.

Also, for Pete’s sake, it’s 2025. Use libsodium. Or at least use a KEM and an AEAD.

Even the blog post doesn’t really explain any of the real issues.

[0] One might credibly expect the public key to be sent with some external authentication. It does not follow that the ciphertext sent back is authenticated.

rendaw

But having bad crypto doesn't mean you have to be aggressive... in fact if the critic's goal is to actually improve the situation (and not just vent or demonstrate their superiority) then being polite and actually answering the questions might go a long way further to remedy it.

borski

You’re right. The problem is that after repeating the same thing hundreds of times to different developers you can develop a bit of an anger toward the situation, as you see the same mistakes play out over and over.

I’m not defending it, but I can understand where it comes from.

1970-01-01

Great question. AWS Secrets Manager and Hashicorp Vault have both been audited by a plethora of agencies (and have passed). GitHub code for someone's pet project very likely isn't going to pass any of those audits. When something goes wrong in prod, are you going to point to your copy of 'some JS code that someone put on the Internet' and still have a job?

https://docs.aws.amazon.com/secretsmanager/latest/userguide/...

https://www.hashicorp.com/trust/compliance/vault

bagels

Yeah, many probably wouldn't get fired for that, but that's small consolation for a breach.

scott_w

The very fact it was audited massively reduces the chances it’ll be breached compared to a random JS file that hasn’t been seriously audited. A “please read and tell me the problems” is NOT a security audit.

SomaticPirate

Wow, the smugness of that reply. Responding by calling someone naive and blowing them off despite there being real questions.

The “insecure crypto” that they clearly link to (despite not wanting to put them on blast) was also a bit overdone. I guess we are all stuck hiring this expert to review our crypto code (under NDA, of course) and tell us we really should use AWS KMS.

BigJono

AWS KMS is great product branding. I've never seen another company so accurately capture how it feels to use their product with just the name before.

tptacek

It's also just a profoundly good product. If you can use KMS, you should.

smitelli

Always be suspicious of any acronym with a ‘K’ in it, just on general principle.

benmmurphy

There are some weird attacks against KMS that I think are possible and that are not obvious. For example, KMS has a mode where it will decrypt without you supplying a key reference (suspicious!). If an attacker can control the ciphertext, then they can share a KMS key from their AWS account to yours and then control the plaintext. I haven’t confirmed this works, so maybe my understanding is incorrect.

Also, with KMS you probably should be using the data key API but then you need some kind of authenticated encryption implemented locally. I think AWS has SDKs for this but if you are not covered by the SDK then you are back to rolling your own crypto.

block_dagger

I agree with his comment and would like to add that the critic came across as rude and superior. Instead of answering the dev’s question in good faith, they linked to their own blog entry that has the same tone. Is it a cryptographic expert thing to act so rude?

hatf0

Those aren’t even the correct answer for the use-case in question, anywho. What they’re looking for would actually be sops (https://github.com/getsops/sops), or age (made by the fantastic Filippo Valsorda: https://github.com/FiloSottile/age), or, hell, just using libsodium sealed boxes. AWS KMS or Vault is perhaps even worse of an answer, actually.

maqp

>It doesn't address any points

Taking some time to point out the vulnerability is already charity work. Assuming that's also a commitment to a free lecture on how the attacks work, and another hour of free consultation to look into the codebase to see if an attack could be mounted, is a bit too much to ask.

Cryptography is a funny field in that cribs often lead to breaks. So even if the attack vector pointed out doesn't lead to complete break immediately, who's to know it won't eventually if code is being YOLOed in.

The fact that the author is making such a novice mistake as unauthenticated CBC shows they have not read a single book on the topic and should not yet be writing cryptographic code for production use.

LPisGood

> Taking some time to point out the vulnerability is already charity work

Sure, but if you’re not going to reason why the vulnerability you’re pointing out is an issue or respond well to questions then it’s almost as bad as doing nothing at all.

A non-expert could leave the same kind of reply to maintainers on many GitHub pages. Developers can’t be expected to blindly believe every reply with a snarky tone and a blog link.

maqp

>Developers can’t be expected to blindly believe every reply with a snarky tone and a blog link?

Developers are adults with responsibility to know the basics of what they're getting into, and you don't have to get too far into cryptography to learn you're dealing with 'nightmare magic math that cares about the color of the pencil you write it with', and that you don't do stuff you've not read about and understood. Another basic principle is that you always use best practices unless you know why you're deviating.

The person who replied to that issue clearly understands some of the basics, or they at least googled around, since they said "Padding oracle attacks -- doesn't this require the ability to repeatedly submit different ciphertext for decryption to someone who knows the key?"

In what college course or book padding oracles are described without mentioning how they're mitigated, I have no idea. Even the Wikipedia article on padding oracle attacks says it clearly: "The CBC-R attack will not work against an encryption scheme that authenticates ciphertext (using a message authentication code or similar) before decrypting."

The way security is proven in cryptography is often to give the attacker more powers than they actually have, and show the system is secure regardless. The best practices include the notion that you do things in a way that categorically eliminates attacks. You don't argue about whether a padding oracle is applicable to the scenario; you use message authentication codes (or preferably an AE scheme like GCM instead of CBC-HMAC) to show you know what you're doing and to show it's not possible.

If it is possible and you leave it like that because the reporter values their time and won't bother, an attacker won't mind writing the exploit code; they already know from the open source that it's going to work.
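
To make the categorical fix concrete: with an AEAD mode like AES-GCM, decryption fails closed on any tampering before a byte of plaintext is released, so there is simply no padding oracle to query. A sketch with the pyca/cryptography AESGCM class (assumed available):

```python
# Encrypt-with-authentication: tampered ciphertext is rejected outright.
import os

from cryptography.exceptions import InvalidTag
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=128)
aead = AESGCM(key)
nonce = os.urandom(12)           # must be unique per (key, message)

ct = aead.encrypt(nonce, b"top secret", b"v1")   # "v1" = authenticated header data
assert aead.decrypt(nonce, ct, b"v1") == b"top secret"

tampered = ct[:-1] + bytes([ct[-1] ^ 0x01])
try:
    aead.decrypt(nonce, tampered, b"v1")
    raise AssertionError("tampering went undetected")
except InvalidTag:
    pass                         # no partial plaintext, no oracle
```

The nonce-uniqueness requirement in the comment is itself one of the sharp edges the thread is about: reuse it with the same key and GCM's guarantees collapse.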

danparsonson

If the snarky comment is "your crypto implementation is bad", then, yes, I would always take that seriously. If I really know what I'm doing then I'll be able to refute the comment; if not, then I probably should be using an audited library anyway.

Mistakes in crypto implementation can be extremely subtle, and the exact nature of a vulnerability difficult to pin down without a lot of work. That's why the usual advice is just "don't do it yourself"; the path to success is narrow and leads through a minefield.

1970-01-01

It probably stems from the fact that cryptographic code needs to be perfect to guarantee total confidentiality with complete integrity. Without the goal of writing perfect code, code that always encrypts and decrypts, but only for the keyholder(s), they're simply begging to get things wrong and waste time.

eviks

How is it as bad, if no signal means you'll likely never fix it, and some signal means you're more likely to fix it? Like, seriously, this is the only 1 (one!) issue in this repo, not hundreds of bot vulnerability submissions. Where does this fear of snarky replies come from?

> Developers can’t be expected to blindly believe every reply with a snarky tone and a blog link?

Sure, they can google around, read the blog, do other steps to educate themselves on crypto - all with their eyes wide open - before realizing they've made a big mistake and fixing vulnerabilities (and thanking the snarky author for the service)!

rendaw

And the critic's only argument is a link to their own blog...

biimugan

Only tangential to this post, but if you need a way to share secrets with your teams (or whoever), Hashicorp Vault is pretty decent. They don't even need login access. Instead of sharing secret values directly, you wrap the secret which generates a one-time unwrapping token with a TTL. You share this unwrapping token over a traditional communication method like Slack or e-mail or whatever, and the recipient uses Vault's unwrap tool to retrieve the secret. Since the unwrapping token is one time use, you can easily detect if someone intercepted the token and unwrapped the secret (by virtue of the unwrapping token no longer working). This hint tells you the secret was compromised and needs to be rotated (you just need to follow-up with the person to confirm they were able to retrieve the secret). And since you can set a TTL, you can place an expectation on the recipient too -- for example, that you expect them to retrieve the secret within 10 minutes or else the unwrapping token expires.

All of this has the added benefit that you're not sharing ciphertext over open channels (which could be intercepted and stored for future decryption by adversaries).
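The one-time-unwrap semantics can be illustrated with a toy in-memory sketch. This is not Vault's implementation, just the idea: a wrapping token resolves to the secret exactly once, within a TTL, so a consumed or expired token is itself the compromise signal.

```python
import secrets
import time

class WrapStore:
    """Toy in-memory stand-in for Vault-style response wrapping."""

    def __init__(self):
        self._pending = {}  # token -> (secret, expiry)

    def wrap(self, secret: str, ttl_seconds: float) -> str:
        token = secrets.token_urlsafe(24)  # one-time unwrapping token
        self._pending[token] = (secret, time.monotonic() + ttl_seconds)
        return token

    def unwrap(self, token: str) -> str:
        entry = self._pending.pop(token, None)  # pop: each token works at most once
        if entry is None:
            raise KeyError("unknown or already-used token: rotate the secret")
        secret, expiry = entry
        if time.monotonic() > expiry:
            raise KeyError("token expired")
        return secret
```

If the legitimate recipient's unwrap fails because the token was already used, you know someone else got there first and the secret needs rotating.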

programmarchy

Glad to hear this. I’m planning to use Vault in a new project that has sensitive security concerns. I liked Hashicorp’s concept of “encryption as a service” as a way of protecting engineering teams from cryptographic footguns.

nejsjsjsbsb

I like this. Can you add yubikey as another factor?

Where I work we never need this though, we have a jwt server that can serve a time limited token for work account that various systems can accept.

quotemstr

I'm not sure the way to get developers to stop writing their own crypto is to turn it into a delicious forbidden fruit edible only by the most virtuous. APIs sometimes have a "Veblen good" character to them: the more difficult, the more use they attract, because people come to see use of the hard API as a demonstration of skill.

The right way to stop people writing their own cryptography isn't to admonish them or talk up the fiendish difficulty of the field. The right way is to make it boring, not worth the effort one would spend reinventing it.

TheCleric

I think one of the reasons developers roll their “own” crypto code is because there doesn’t really seem like a simple “do it this way” way to do it. OpenSSL has hundreds of ways of doing encryption. So it almost has pitfalls by default, because of the complexity of the choices.

maqp

OpenSSL is not misuse resistant, nor is it opinionated.

Opinionated means the ciphers available are the best of the bunch. Misuse resistance means the API validates parameters and reduces the footguns.

If you're building a closed ecosystem, you'll want both. Use something like libsodium, or its higher-level wrappers. For this, the team should have a decent understanding of the primitives of that high-level library, e.g. to know that insecurely generated keys are still accepted, so you'll have to use a CSPRNG.

If you have to interact with other systems that only support e.g. TLS, you'll want the company to hire a professional cryptographer with a focus on applied cryptography to build the protocol from lower-level primitives. Other programmers should refuse to do this work, the way non-structural-engineers refuse to do structural engineering, because they know they'll be held responsible for doing work they're not qualified for.
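On the "insecurely generated keys are still accepted" point: a high-level library cannot tell whether the 32 bytes you hand it came from a CSPRNG. A minimal Python illustration of the wrong and right way to make key material (`random` is a seedable, predictable PRNG and must never be used for keys):

```python
import random
import secrets

def bad_key() -> bytes:
    # WRONG: Mersenne Twister output is predictable; its internal state
    # can be recovered from observed outputs, making these "keys" guessable.
    return bytes(random.randrange(256) for _ in range(32))

def good_key() -> bytes:
    # RIGHT: OS-level CSPRNG via the secrets module.
    return secrets.token_bytes(32)
```

Both return 32 bytes, and any API that takes raw bytes will accept either one, which is exactly why this footgun survives misuse-resistant interfaces.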

tpimenta

In the end the author expressed his frustration at the lack of input from security professionals, but the words he used were perhaps a little arrogant:

He said "to routinely make preventable mistakes because people with my exact skillset haven’t yet delivered easy-to-use, hard-to-misuse tooling".

I would suggest rephrasing this to make it clear what skills he is referring to, something like "people who are senior security experts". Otherwise it might sound like he is implying that he is the only one who should audit everything, because who else would have the exact same experience he has had all his life?

some_furry

> I would suggest rephrasing this as a way to make it clear what skills he is referring to, something like "people who are senior security experts".

Well, it needs to say exactly what it says, not a vague category like "senior security experts".

There are countless developers and security nerds who run circles around me. They can do everything I can do, and more.

A lot of what causes developers to make preventable mistakes is that my betters and I haven't delivered easy-to-use tools that solve their use cases perfectly, and I feel personally responsible for not being able to help more.

Not sure what's arrogant about that.

tpimenta

Well, the word "exact" can be read literally, and the level of frustration suggests maximum emphasis; that can lead the reader to take "exact" as literal and irreducible, implying no one other than you has the literal exact same experience.

nonameiguess

I'd argue there are easy-to-use tools for this use case, frankly. Buy your employees corporate Yubikeys, issue them over certified mail, and just use gpg in some authenticated mode. That may be too complicated for Grandma Marie and Uncle Paul, but is it seriously too complicated for someone trying to run a software business? How do you ever expect to understand the laws you have to comply with to run a business if that's your attitude?

If that's still too complicated, send each other API keys over Proton Mail. Unless you're an enemy of the Swiss government, I can't think of a reason not to trust them that isn't into serious crackpot territory. If you're actually being targeted by Mossad or the NSA, they can intercept your certified mail anyway. OpenAI would probably cooperate with them besides.

RainyDayTmrw

I have empathy for people who end up stuck in the in-between areas, where out-of-the-box building blocks solve part of their problem, but how to glue them together can still get tricky.

For one example, you've got a shared secret. Can you use it as a secret key? Do you have to feed it through a KDF first? Can you use the same key for both encryption and signing? Do you have to derive separate keys?
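For those specific questions, the conventional answers are: yes, feed the shared secret through a KDF, and yes, derive separate keys per purpose. HKDF (RFC 5869) does both, with the `info` parameter providing the key separation. A stdlib-only sketch:

```python
import hashlib
import hmac

def hkdf_sha256(ikm: bytes, salt: bytes, info: bytes, length: int = 32) -> bytes:
    """HKDF (RFC 5869): extract-then-expand with HMAC-SHA-256."""
    # Extract: concentrate the entropy of the input keying material.
    prk = hmac.new(salt, ikm, hashlib.sha256).digest()
    # Expand: stretch into `length` bytes, bound to the `info` context string.
    okm, block, counter = b"", b"", 1
    while len(okm) < length:
        block = hmac.new(prk, block + info + bytes([counter]), hashlib.sha256).digest()
        okm += block
        counter += 1
    return okm[:length]

# One shared secret, two independent keys: never reuse one key for both roles.
shared = b"raw ECDH output or other shared secret"
enc_key = hkdf_sha256(shared, salt=b"session-salt", info=b"encryption")
mac_key = hkdf_sha256(shared, salt=b"session-salt", info=b"authentication")
```

Changing only `info` yields unrelated keys, so the encryption and authentication keys cannot interfere with each other even though they come from the same secret.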

erinaceousjones

I'm in that boat. I'm watching all of Christof Paar's cryptography lecture series on YouTube -- it was recorded in 2010, so I do wonder if it's missing any new state of the art / best practices.

I'm like 18 lectures in, two out of three semesters. And I still feel like I have only the vaguest ideas what the primitives are, how they work, what they're for, and their weaknesses. I'm having to follow all the mathematics as someone not mathematically inclined (Prof Paar did do a good job of making the mathematics fairly accessible though).

All of this so I can have a bit more confidence in proposing E2E for a project at some point in future (before somebody asks us to, too late).

And my use-case makes it difficult to follow the most trodden paths so I can't just plug in a handshake protocol and MACs and elliptic curves or "just use PGP" or whatever.

As a software dev, I have all these boxes I could use, that come with so many caveats "if you do this, but don't do this, no do that, don't do that"... It's very tricky trying to work out how to glue the pieces together without already being in the field of crypto. Feels like I'll always be missing some crucial piece of information I'd get if I pored over hundreds of textbooks and papers but I don't have the resources to do so!

I'd love if someone did like, a plain English recipe book for cryptography! Give the mathematical proof of stuff, but also explain the strengths/weaknesses/possible attacks to laypeople without the prerequisite that you need to understand ring modulus or Galois fields or whatever first. Or, like, flowcharts to follow!

maqp

>so I do wonder if it's missing any new state of the art / best practises.

https://nostarch.com/serious-cryptography-2nd-edition should have the latest info, it's approachable and goes into pitfalls. https://www.manning.com/books/real-world-cryptography is another.

>As a software dev, I have all these boxes I could use, that come with so many caveats "if you do this, but don't do this, no do that, don't do that"... It's very tricky trying to work out how to glue the pieces together without already being in the field of crypto

Until you know more, strongly consider suggesting that the company just hire someone who does. Just because you're available to do it doesn't mean you should, just yet.

erinaceousjones

Thanks, I'd not found these yet! Very helpful :)

> Until you know more, strongly consider suggesting the company just hires someone who knows that. Just because you're available to do it, doesn't mean you should just yet.

This is a fair point. We'd always find it difficult to hire someone who was 100% specialising in software security / crypto etc, but a software eng who has some experience would probably be palatable... But funding for new hires could be a couple of years out. That, or we find a way to turn it into a research proposal we can sic a PhD on.

Still, I think it benefits us to have a strong baseline knowledge of crypto systems as a team, "bus factor" and all that. Maybe one day we have a colleague that can teach us that, but until then we may as well crack on with self-teaching :-)

sureglymop

My question is, why does the library even support AES-CBC? I know that it's OpenSSL here but why can't we have an additional library then that ships only the recommended encryption schemes, safe curves, etc. and deprecates obsolete practices in an understandable process? Something that is aimed at developers and also comes with good documentation on why it supports what it does.

Ekaros

Legacy. You rarely touch only new stuff or secure stuff. Instead you end up with at least one scenario that requires you to do something with the old stuff. And this is not just crypto.

sureglymop

I get that. But then, keeping that legacy stuff running is just as problematic as rolling your own crypto. We can leave OpenSSL as it is but it shouldn't be the popular recommendation for developers.

They should have another library which, like I said, actively deprecates obsolete and insecure practices but in a way that makes the update process digestible for people depending on it.
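The "digestible deprecation" idea can be sketched with Python's standard warnings machinery: the old entry point keeps working for a release cycle but nags loudly and points at the replacement. All names here are hypothetical, purely to illustrate the migration pattern:

```python
import warnings

def encrypt_aead(key: bytes, plaintext: bytes) -> bytes:
    """Stand-in for the recommended AEAD path (hypothetical)."""
    return b"aead:" + plaintext  # placeholder body for the sketch

def encrypt_cbc(key: bytes, plaintext: bytes) -> bytes:
    """Hypothetical legacy entry point, kept one release cycle for migration."""
    warnings.warn(
        "encrypt_cbc is deprecated and will be removed in v3.0; "
        "use encrypt_aead (AES-GCM) instead",
        DeprecationWarning,
        stacklevel=2,
    )
    return encrypt_aead(key, plaintext)  # delegate to the recommended scheme
```

Callers get working code plus an actionable warning, and the removal lands on a announced schedule instead of as a surprise break.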

maqp

>We can leave OpenSSL as it is but it shouldn't be the popular recommendation for developers.

OpenSSL isn't a recommendation for developers. For TLS, LibreSSL and BoringSSL are. For other stuff, libsodium is.

The only reason I've picked OpenSSL is that it has a higher-level library (pyca/cryptography) that gives bindings to X448.

chupasaurus

> I get that. But then, keeping that legacy stuff running is just as problematic as rolling your own crypto.

Both problems stem from a lack of knowledge, but the latter is orders of magnitude harder to fix.

izacus

Because such a library is pretty useless in the real world, where your system needs to interact with other systems using "legacy" encryption schemes.

Usability and practicality are critical for a successful security approach.

markus_zhang

My optimistic opinion is that since our information is being leaked left and right, it doesn't matter whether we roll the algo by ourselves or not. Might as well do it for practice...

Quote from the Honorable MI5 head:

> I mean, with Burgess and Maclean and Philby and Blake and Fuchs and the Krogers... one more didn't really make much more difference.

archi42

Ah, yes, common people and cryptography. I was recently in a meeting with industrial people; the topic was securing industrial communication in production lines and similar settings: "Someone told us zero trust means we can establish secure communications between random devices without any root of trust". I think they plan to eventually standardize something around that. I'm looking forward to either being pleasantly surprised, or doing a side quest to keep them from making fools of themselves.

christophilus

Aside: I appreciated this article and didn't find it overly arrogant as others have suggested. It sounds like a security expert venting a bit after being exasperated. His gripes are legitimate, and his tone wasn't out of line with the frustration I'd feel in the same situation. My 2 cents.