
SecretSpec: Declarative Secrets Management

imglorp

It's nice to present a LastPass method, but really, my suggestion is to stay far away from LastPass, either as a user or an integrator. They've been breached at least seven times since 2011. The net will be better off with fewer integrations to it.

domenkozar

I hope that this will be one of the tools that allows that transition to happen for those who'd like to migrate from LastPass :)

lucideer

LastPass provides a CLI that, as far as I've seen, serves all migration needs, so I haven't seen any need to ever touch the service with a ten-foot pole otherwise.

aranelsurion

Another alternative: https://github.com/tellerops/teller

It's a standalone tool with YAML configuration, simple to use.

Basically the way it works:

- You create the secret in GCP/AWS/etc Secrets Manager service, and put the secret data there.

- Refer to the secret by its name in Teller.

- Whenever you run `$ teller run ...` it fetches the data from the remote service and makes it available to your process (sketch below).
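
For concreteness, a rough sketch of that flow, assuming AWS Secrets Manager and made-up secret/app names (the secret-name-to-env-variable mapping itself lives in teller's YAML config):

    # create the secret once in the cloud provider's secrets manager
    aws secretsmanager create-secret \
      --name prod/my-app/DATABASE_URL \
      --secret-string "postgres://user:pass@db.internal:5432/app"

    # teller reads its config, fetches the value, and injects it as an
    # environment variable into the child process
    teller run -- ./my-app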

athorax

Unfortunately, teller is largely an abandoned project at this point.

tgbugs

For at least the "keep secrets out of version control" part, I implemented a Python library (and a Racket library) that has served me well over the years for general configuration [0].

One key issue is that splitting general config from secrets is extremely difficult in practice, because once the variables are accessible to a running codebase, most languages and codebases don't actually have a way to differentiate between them internally.

I skipped the hard part of trying to integrate transparently with actual encrypted secret stores. The architecture leaves open the ability to write a new backend, but I have found that for most things, even in production, the more important security boundaries (for my use cases) mean that putting plaintext secrets in a file on disk adds minuscule risk compared to the additional complexity of adding encryption and screwing something up in the implementation. The reason is that most of those secrets can be rotated quickly because there will be bigger things to worry about if they leak from a prod or even a dev system.

The challenge with a standard for something like this is that the devil is always in the details, and I sort of trust the code I wrote because I wrote it. Even then I assume I screwed something up, which is part of why I don't share it around (the other reasons are that there are still some missing features and architecture cleanup to do, and I don't want people depending on something I don't fully trust).

There is a reason I put a bunch of warnings at the top of the readme. Other people shouldn't trust it without extensive review.

Glad to see work in the space trying to solve the problem, because a good solution will need lots of community buy-in to build quality and trust.

0. https://github.com/tgbugs/orthauth

JadoJodo

> Don't you feel some anxiety given we've normalized committing encrypted secrets to git repos?

Maybe I haven't worked at enough places, but... when has this ever been allowed/encouraged/normalized?

anbotero

Wait, why are there so many skeptics in this thread?

I have set up AWS + SOPS in several projects now, and the developers do not have access to the secrets themselves nor to the encryption key (which is stored in AWS). Only once did we ever need to roll back a secret, and that happened at the AWS level, not in the code. It also happened within the key rotation period, so it was easy.
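
A minimal sketch of that kind of setup (the KMS ARN and file names are placeholders):

    # encrypt with an AWS KMS key; the key material stays in KMS, so nobody
    # handles it directly
    sops --encrypt --kms arn:aws:kms:eu-west-1:111122223333:key/EXAMPLE \
      secrets.yaml > secrets.enc.yaml

    # secrets.enc.yaml is what gets committed; decryption only works for
    # principals allowed to use the key, e.g. CI at deploy time
    sops --decrypt secrets.enc.yaml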

For us it's easier to track changes (not the value, but when it changes) and easier to associate them with incidents.

apopapo

What's wrong with committing encrypted secrets? That's how I use `sops`.

JeffMcCune

You can’t revoke, rotate, or audit access to them.

nodesocket

I would venture to guess the main concern is accidental commit of decrypted secrets.

thomasingalls

If a key gets compromised, the encrypted secrets are compromised forever, since you can't be sure all the git clones everywhere can be updated with a new encryption key. Not to mention how fiddly it is to edit git history.

JohnMakin

You'd be surprised. In the past I was on a big project at a company with multi-billion-dollar revenue. They got caught with their pants down on an audit once because people would not only commit credentials into internal repositories, the credentials were usually not encrypted at all, among other deeper issues. It sparked a multi-year project of incorporating a secrets management service into the 1000+ repositories and services the company used. Found a loooooot of dead bodies; tons of people got fired during the process. After that experience I imagine this practice is fairly common - people, even smart developers, don't always seem to be able to comprehend the blast radius of some of these things.

One of my favorite incidents during this clean-up effort: the security team + my team had discovered that a lot of DB credentials were just sitting on developers' local machines and basically nowhere else that made any kind of sense, and they'd hand them around as needed via email or message. So, we made tickets everywhere we found instances of this to migrate to the secret management platform. One lead developer with a privileged DB credential wrote a ticket that was basically:

"Migrate secret to secret management platform" and in the info section, wrote the plaintext value of the key, inadvertently giving anyone with Jira read access to a sensitive production database. Even when it was explained to him I could tell he didn't really understand fully why that was silly. Why did he have it in the first place is a natural followup question, but these situations don't happen in a vacuum, there's usually a lot of other dumb stuff happening to even allow such a situation to unfold.

NewJazz

Okay, but that sounds like a very different situation from a small shop where encrypted secrets are committed to one file per repo, and keys and secrets are rotated regularly.

JohnMakin

Okay, in case it was missed, my salient point was that this behavior is very common, and I provided a ridiculous example as my evidence. I'm making no commentary on the practice itself (although I do think committing configs like secrets is really silly and anti-productive).

jasonthorsness

Indeed, the only time I saw this was a decade ago for a temporary POC... not doing this is a good defense-in-depth practice even if the encryption is solid.

dayjah

It's not clear to me how the secrets are referenced in storage. Is the expectation that, given `--provider onepassword`, one of the entries in 1p would be "BUCKET"?

edit: it’s not covered in the post, but it is on the launch and doc site: https://secretspec.dev/providers/onepassword/

dvtkrlbs

I really like this. I am using Infisical, but it does not handle the app side without vendor lock-in to their service. I love the additional secretspec_derive bit for the Rust example.

kevmo314

Isn't this "echo with more steps"? The CI/CD example [1] strikes me as not obviously better than doing

          cat > .env << EOF
          DATABASE_URL=${{ secrets.TEST_DATABASE_URL }}
          STRIPE_API_KEY=${{ secrets.STRIPE_TEST_KEY }}
          EOF
which also addresses the trust and rotation problems. I suppose for dev secrets those are annoying, but even with secretspec you would have to rotate dev secrets when someone is offboarded.

[1] https://devenv.sh/blog/2025/07/21/announcing-secretspec-decl...

domenkozar

The example is more of a way to show how to keep backwards compatibility and how to migrate to secretspec.

We hope that one day GitHub Actions will integrate secretspec more tightly, moving beyond environment variables as a transport.

That's going to be a long journey, one worth striving for.

0xbadcafebee

It's embarrassing to see a place like HN, which is supposed to be cutting-edge, continue to use designs from 2005.

*Configuration Values*

Your laptop is not hosting your website (I presume), so .env is not going to be enough to run your app somewhere other than your laptop.

I get it. You only want to run your app locally, and .env is convenient. But your production server probably isn't going to load your .env file directly, and it will probably need extra or different variables. This disconnect between "the main development environment variables" and "the extra stuff in production" will lead to inconsistencies that you have not tested/developed against. That will lead to production bugs. So keeping track of those differences in a uniform way is pretty useful.

How do you specify configuration for development and production without running into inconsistency bugs? By splitting up your app's configuration into "static" and "dynamic", and version-controlling everything.

1) "Static" configuration is things like environment variables, which do not change from run to run, and are not environment-specific. So for example, an API URL prefix like "/api/routes" is pretty static and probably not going to change. But an IP address definitely will change at some point, so this configuration isn't static. (To think about it another way: on your computer, some environment variables are simply stored in a text file and read into your shell; these are static)

2) "Dynamic" configuration are values that may change, like hostnames, IP addresses, port numbers, usernames, passwords, etc. Secrets are also "dynamic", because they should never be hard-coded into a file or code, and you will want to rotate secrets in the future. All dynamic configuration should be loaded during a deployment process (for example, creating an ECS task definition, or Kubernetes yaml file), or at runtime (an ECS task definition that sources environments from secrets, or a Kubernetes yaml that sources environments from secrets, or a function in your code that calls an API to look up a secret from Hashicorp Vault or similar). In particular for secrets, you want to load those every time your program starts, as close to the application's execution environment as possible. (To think about it another way: some environment variables on your computer require executing a program and getting its output to set the variable - like your $HOSTNAME, $USER, $SHELL, and other variables)

3) Both static and dynamic configuration should be version-controlled, and any change to these should trigger a new deployment. If a value changes, and you don't then immediately make a new deployment, that change could be harboring a lurking bug that you won't find out about until someone makes a deployment much later on, and trying to find the cause will be very difficult.
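
To make the split concrete, a minimal sketch (the file names, secret id, and AWS CLI usage are illustrative assumptions, not a prescription):

    # static.env -- committed to version control, identical in every environment
    API_PREFIX=/api/routes
    LOG_FORMAT=json

    # entrypoint.sh -- resolves dynamic values at startup, as close to the
    # application's execution environment as possible
    set -eu
    set -a; . ./static.env; set +a    # load static config as environment variables
    export DATABASE_PASSWORD="$(aws secretsmanager get-secret-value \
      --secret-id prod/db-password --query SecretString --output text)"
    exec ./my-app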

*Infrastructure Patterns*

Test, stage, and prod servers are like pets. You have individual relationships with them, change them in unique ways, until eventually they have their own individual personalities. They become silos that pick up peculiarities that will not be reflected in other environments, and will be hard to replicate or rebuild later.

Instead, use ephemeral infrastructure (the "cattle" in "pets vs cattle"). There should be a "production" infrastructure, which is built with Infrastructure-as-Code, to create an immutable artifact that can simply be deleted and re-created automatically. That same code that builds production should build any other server, for example for testing or staging. When the testing or staging is done, the ephemeral copy should be shut down. They should all be rebuilt frequently to prevent infrastructure rot from setting in.

This pattern does a lot of things, like making sure you have automation for disaster recovery, using automation to prevent inconsistencies, using automation to detect when your infrastructure-as-code has stopped working, saving money by turning off unneeded resources, and giving you the ability to spin up a unique copy of your infrastructure with unique changes in order to test them in parallel with your other infrastructure/changes. It also makes it trivial to test upgrades, patch security holes, or destroy and recreate compromised infrastructure. And of course it saves you time in the long run, because you only expend effort to set it up once.

*This Is Not About Scaling*

I know the first thing everyone's going to complain about is something like "I'm not Facebook, I don't need all that!" or "It works fine for me!".

There's a lot of things we do today that are better for us than what we did before, even though we don't have to. You brush your teeth and wash your hands, right? Well we didn't used to do those things. And you can still live your life without doing them! So why do them at all?

Because we've learned about the downsides of not doing them, and the benefits outweigh the downsides. Getting into the habit of doing things differently may be annoying or painful at first, but then they will become second nature, and you won't even think about it.

sofixa

I'm not sure I like the concept.

Realistically, why would your different environments have different ways of consuming secrets from different locations? Yes, you wouldn't use AWS Secrets Manager in your local testing, maybe... but giving each developer control and management of their own secrets, in their own locations, is just begging for trouble. How do you handle sharing of common secrets? How do you handle scenarios where some parts are shared (e.g. a shared API key for a dev third-party API) but others aren't (a local instance of a test db)? How do you make sure that the API key everyone uses in dev is actually rotated from time to time, and that nobody has stored it in a cleartext .env because they once had issues with OnePassword's service being down and left it at that? How do you make sure that nobody is using an insecure secrets manager (e.g. LastPass)?

It just adds the risk of creating the impression that there is proper secrets management, while actually having a mess of everyone doing whatever they feel like with secrets, with no control over who has access to what, or which secret is used where, by whom, and why. Which is a good ~70% of the point of secrets management.

Centralised secrets management or bust, IMO. Ideally with a secrets scanner checking that your code doesn't have a secret in clear text left by mistake/laziness. Vault/OpenBao isn't that complicated to set up, but if it really is, your platform probably has something already.
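
Day-to-day usage is little more than this (paths and values are made up; this assumes a Vault or OpenBao server is already running and your CLI is authenticated):

    # write a secret into the KV engine
    vault kv put secret/myapp/db password='s3cr3t'

    # read it back, e.g. from a deploy script
    vault kv get -field=password secret/myapp/db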

Disclaimer: I work at HashiCorp, but opinions are my own. I was part of the team implementing Vault for centralised secrets management at my past job, and I 100% believe it's the way things should be done to minimise the risk of mishandling secrets.

domenkozar

I'm not advocating that having secrets in different locations IS something we want; rather, it IS the sad state of reality.

By having a secrets specification we can start working towards a future that consolidates these providers and allows teams to centralize if needed, by providing simple means of migrating from a mess into a central system.