Launch HN: Tinfoil (YC X25): Verifiable Privacy for Cloud AI
May 15, 2025

Etheryte
How large do you wager your moat to be? Confidential computing is something all major cloud providers either have or are about to have, and from there it's a very small step to offer LLMs under the same umbrella. First mover advantage is of course considerable, but I can't help but feel that this market will very quickly be swallowed by the hyperscalers.
itsafarqueue
Being gobbled by the hyperscalers may well be the plan. Reasonable bet.
3s
Confidential computing as a technology will become (and should be) commoditized, so the value add comes down to security and UX. We don’t want to be a confidential computing company, we want to use the right tool for the job of building private and verifiable AI. If that becomes FHE in a few years, then we will use that. We are starting with easy-to-use inference, but our goal is for any AI application to be provably private.
ATechGuy
This. Big tech providers already offer confidential inference today.
julesdrean
Yes, Azure has! They have very different trust assumptions though. We wrote about this here: https://tinfoil.sh/blog/2025-01-30-how-do-we-compare
mnahkies
Last I checked it was only Azure offering the Nvidia specific confidential compute extensions, I'm likely out of date - a quick Google was inconclusive.
Have GCP and AWS started offering this for GPUs?
coolcase
Tinfoil hat on: say you are compelled to execute a FISA warrant and access the LLM data, is it technically possible? What about an Australian or UK style "please add a backdoor".
I see you have to trust NVIDIA etc., so maybe there are such backdoors.
internetter
> the client fetches a signed document from the enclave which includes a hash of the running code signed
Why couldn't the enclave claim to be running an older hash?
3s
This is enforced by the hardware (that’s where the root of trust goes back to NVIDIA+AMD). The hardware will only send back signed enclave hashes of the code it’s running, and it cannot be coerced by us (or anyone else) into responding with a fake or old measurement.
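In practice, freshness comes from binding each signed report to client-supplied data, so an old measurement can’t be replayed. A rough sketch of the client-side check (the fetch_attestation helper and report fields are illustrative, not our actual API):

```python
import hashlib
import secrets


def verify_measurement(fetch_attestation, expected_measurement: bytes) -> bool:
    """Illustrative check: bind the attestation to a fresh nonce so an old
    signed report can't be replayed, then compare the code measurement."""
    nonce = secrets.token_bytes(32)

    # Hypothetical helper: asks the enclave hardware for a signed report
    # that embeds our nonce in its user-data field. (Verification of the
    # report's signature against the vendor root key is omitted here.)
    report = fetch_attestation(nonce)

    # The report must echo a hash of our nonce, proving it was generated
    # just now rather than pulled from a cache of old reports.
    if report["report_data"] != hashlib.sha256(nonce).digest():
        return False

    # The measurement must match the hash of the code we expect to be running.
    return report["measurement"] == expected_measurement
```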
amanda99
Does this not require one to trust the hardware? I'm not an expert in hardware root of trust, etc., but if Intel (or whatever chip maker) decides to just sign code that doesn't do what they say it does (coerced or otherwise), or someone finds a vuln, would that not defeat the whole purpose?
I'm not entirely sure this is different from "security by contract", except the contracts get bigger and have more technology around them?
rkagerer
I agree, it's lifting trust to the manufacturer (which could still be an improvement over the cloud status quo).
Another (IMO more likely) scenario is someone finds a hardware vulnerability (or leaked signing keys) that lets them achieve a similar outcome.
natesales
We have to trust the hardware manufacturer (Intel/AMD/NVIDIA) designed their chips to execute the instructions we inspect, so we're assuming trust in vendor silicon either way.
The real benefit of confidential computing is to extend that trust to the source code too (the inference server, OS, firmware).
Maybe one day we’ll have truly open hardware ;)
ignoramous
Hi Nate. I routinely use your various networking-related FOSS tools. Surprising to see you now working in the AI infrastructure space, let alone co-founding a YC-funded startup! Tinfoil looks über neat. All the best (:
> Maybe one day we'll have truly open hardware
At least the RoT/SE if nothing else: https://opentitan.org/
julesdrean
Love OpenTitan! RISC-V all the way babe! The team is stacked: several of my labmates now work there
ts6000
NVIDIA shared open-source solutions for confidential AI already in mid-2024 https://developer.nvidia.com/blog/advancing-security-for-lar...
max_
The only way to guarantee privacy in cloud computing is via homomorphic encryption.
This approach relies too much on trust.
If you have data you are seriously sensitive about, it's better to run models locally on air-gapped instances.
If you think this is overkill, just look at what recently happened to Coinbase. [0]
[0]: https://www.cnbc.com/2025/05/15/coinbase-says-hackers-bribed...
FrasiertheLion
Yeah, totally agree with you. We would love to use FHE as soon as it's practical. And if you have the money and infra expertise to deploy air gapped LLMs locally, you should absolutely do that. We're trying to do the best we can with today's technology, in a way that is cheap and accessible to most people.
sigmaisaletter
Looks great. Not sure how big the market is between "need max privacy, need on-prem" and "don't care, just use what is cheap/popular" though.
Can you talk about how this relates to / is different / is differentiated from what Apple claimed to do during their last WWDC? They called it "private cloud compute". (To be clear, after 11 months, this is still "announced", with no implementation anywhere, as far as I can see.)
Here is their blog post on Apple Security, dated June 10: https://security.apple.com/blog/private-cloud-compute/
EDIT: JUST found the tinfoil blog post on exactly this topic. https://tinfoil.sh/blog/2025-01-30-how-do-we-compare
davidczech
Private Cloud Compute has been in use since iOS 18 was released.
DrBenCarson
Private Cloud Compute has been live in production for 8 months
ts6000
Companies like Edgeless Systems have been building open-source confidential computing for cloud and AI for years, and in 2024 they published how they compare to Apple Private Cloud Compute. https://www.edgeless.systems/blog/apple-private-cloud-comput...
SebP
That's impressive, congrats. You've taken the "verifiable security" concept to the next level. I'm working on a similar concept, without the "verifiable" part... trust remains to be built, but adding RAG and fine-tuned models to the use of open-source LLMs deployed in the cloud: https://gptsafe.ai/
blintz
Excited to see someone finally doing this! I can imagine folks with sensitive model weights being especially interested.
Do you run into rate limits or other issues with TLS cert issuance? One problem we had when doing this before is that each spinup of the enclave must generate a fresh public key, so it needs a fresh, publicly trusted TLS cert. Do you have a workaround for that, or do you just have the enclaves run for long enough that it doesn’t matter?
FrasiertheLion
We actually run into the rate limit issue often, particularly while spinning up new enclaves for debugging. We plan on moving to HPKE (https://www.rfc-editor.org/rfc/rfc9180.html) over the next couple of months. This will let us generate keys inside the enclave and encrypt the payload to enclave-specific keys, while letting us terminate TLS in a proxy outside the enclave. All the data is still encrypted to the enclave using HPKE (and still verifiable), which fixes the rate limit issue.
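To make that concrete, here's a minimal sketch of the pattern, with X25519 + HKDF + ChaCha20-Poly1305 standing in for the RFC 9180 ciphersuite (illustrative only, not our production code):

```python
import os

from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric.x25519 import (
    X25519PrivateKey,
    X25519PublicKey,
)
from cryptography.hazmat.primitives.ciphers.aead import ChaCha20Poly1305
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

RAW = serialization.Encoding.Raw
RAW_FMT = serialization.PublicFormat.Raw


def derive_key(shared_secret: bytes) -> bytes:
    # Derive a 32-byte AEAD key from the X25519 shared secret.
    return HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                info=b"hpke-demo").derive(shared_secret)


def client_seal(enclave_pk: X25519PublicKey, plaintext: bytes):
    """Client side: encapsulate to the enclave's (attested) public key."""
    eph_sk = X25519PrivateKey.generate()
    key = derive_key(eph_sk.exchange(enclave_pk))
    nonce = os.urandom(12)
    ciphertext = ChaCha20Poly1305(key).encrypt(nonce, plaintext, None)
    encapsulated = eph_sk.public_key().public_bytes(RAW, RAW_FMT)
    return encapsulated, nonce, ciphertext


def enclave_open(enclave_sk: X25519PrivateKey, encapsulated: bytes,
                 nonce: bytes, ciphertext: bytes) -> bytes:
    """Enclave side: only the enclave holds this private key, so a
    TLS-terminating proxy outside the enclave sees only ciphertext."""
    peer = X25519PublicKey.from_public_bytes(encapsulated)
    key = derive_key(enclave_sk.exchange(peer))
    return ChaCha20Poly1305(key).decrypt(nonce, ciphertext, None)


# Key pair generated inside the enclave at boot; the public key is bound
# to the hardware attestation so clients can verify it before sealing.
enclave_sk = X25519PrivateKey.generate()
enc, nonce, ct = client_seal(enclave_sk.public_key(), b"private prompt")
assert enclave_open(enclave_sk, enc, nonce, ct) == b"private prompt"
```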
gojomo
Is there a frozen client that someone could audit for assurance, then repeatedly use with your TEE-hosted backend?
If instead users must use your web-served client code each time, you could subtly alter that code over time or per-user, in ways unlikely to be detected by casual users – who'd then again be required to trust you (Tinfoil), rather than the goal of only having to trust the design & chip manufacturer.
FrasiertheLion
Yes, we have a customer who is indeed interested in having a frozen client for their app, which we're making possible. We currently have not frozen our client because we're in the early days and want to be able to iterate quickly on functionality. But happy to do so on a case-by-case basis for customers.
ignoramous
> rather than the goal of only having to trust the design & chip manufacturer
If you'd rather self-host, then the HazyResearch Lab at Stanford recently announced a FOSS e2ee implementation ("Minions") for Inference: https://hazyresearch.stanford.edu/blog/2025-05-12-security / https://github.com/HazyResearch/Minions
offmycloud
> https://docs.tinfoil.sh/verification/attestation-architectur...
I tried taking a look at your documentation, but the site search is very slow and laggy in Firefox.
3s
Interesting, we haven't noticed that (on Firefox as well). We'll look into it!
offmycloud
It looks like it might be the blur effect in a VM with no Firefox video acceleration. Also, email to support@tinfoil.sh (from "contact" link) just bounced back to me.
FrasiertheLion
Ah we don't have support@tinfoil.sh set up yet. Can you try contact@tinfoil.sh?
cuuupid
This is a great concept but I think "Enterprise-Ready Security" and your competitive comparison chart are kind of misleading. Yes, zero trust is huge. But, virtually everyone who has a use case for max privacy AI, has that use case because of compliance and IP concerns. Enterprise-Ready Security doesn't mean sigstore or zero trust, it means you have both the security at a technical level as well as certification by an auditor that you do.
You aren't enterprise ready because to address those concerns you need the laundry list of compliance certs: SOC 2 Type 2, ISO 27001/27002 and 9001, HIPAA, GDPR, CMMC, FedRAMP, NIST, etc.
3s
We're going through the audit process for SOC 2 right now, and we're planning on HIPAA soon.
Hello HN! We’re Tanya, Sacha, Jules and Nate from Tinfoil: https://tinfoil.sh. We host models and AI workloads on the cloud while guaranteeing zero data access and retention. This lets us run open-source LLMs like Llama or DeepSeek R1 on cloud GPUs without you having to trust us (or any cloud provider) with private data.
Since AI performs better the more context you give it, we think solving AI privacy will unlock more valuable AI applications, just as TLS on the Internet enabled e-commerce to flourish, knowing that your credit card info wouldn't be stolen by someone sniffing internet packets.
We come from backgrounds in cryptography, security, and infrastructure. Jules did his PhD in trusted hardware and confidential computing at MIT, and worked with NVIDIA and Microsoft Research on the same, Sacha did his PhD in privacy-preserving cryptography at MIT, Nate worked on privacy tech like Tor, and I (Tanya) was on Cloudflare's cryptography team. We were unsatisfied with band-aid techniques like PII redaction (which is actually undesirable in some cases like AI personal assistants) or “pinky promise” security through legal contracts like DPAs. We wanted a real solution that replaced trust with provable security.
Running models locally or on-prem is an option, but can be expensive and inconvenient. Fully Homomorphic Encryption (FHE) is not practical for LLM inference for the foreseeable future. The next best option is using secure enclaves: a secure environment on the chip that no other software running on the host machine can access. This lets us perform LLM inference in the cloud while being able to prove that no one, not even Tinfoil or the cloud provider, can access the data. And because these security mechanisms are implemented in hardware, there is minimal performance overhead.
Even though we (Tinfoil) control the host machine, we do not have any visibility into the data processed inside of the enclave. At a high level, a secure enclave is a set of cores that are reserved, isolated, and locked down to create a sectioned off area. Everything that comes out of the enclave is encrypted: memory and network traffic, but also peripheral (PCIe) traffic to other devices such as the GPU. These encryptions are performed using secret keys that are generated inside the enclave during setup, which never leave its boundaries. Additionally, a “hardware root of trust” baked into the chip lets clients check security claims and verify that all security mechanisms are in place.
Up until recently, secure enclaves were only available on CPUs. But NVIDIA recently added these hardware-based confidential computing capabilities to their latest GPUs, making it possible to run GPU-based workloads in a secure enclave.
Here’s how it works in a nutshell:
1. We publish the code that should run inside the secure enclave to GitHub, as well as a hash of the compiled binary to a transparency log called Sigstore.
2. Before sending data to the enclave, the client fetches a signed document from the enclave which includes a hash of the running code, signed by the CPU manufacturer. It verifies the signature with the hardware manufacturer to prove the hardware is genuine, then fetches the hash of the source code from the transparency log (Sigstore) and checks that it equals the hash reported by the enclave. This gives the client verifiable proof that the enclave is running the exact code we claim (a sketch of this check follows below).
3. With the assurance that the enclave environment is what we expect, the client sends its data to the enclave, which travels encrypted (TLS) and is only decrypted inside the enclave.
4. Processing happens entirely within this protected environment. Even an attacker that controls the host machine can’t access this data.

We believe making end-to-end verifiability a “first class citizen” is key. Secure enclaves have traditionally been used to remove trust from the cloud provider, not necessarily from the application provider. This is evidenced by confidential VM technologies such as Azure Confidential VM allowing SSH access by the host into the confidential VM. Our goal is to provably remove trust both from ourselves, aka the application provider, as well as the cloud provider.
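As a rough illustration of step 2, here's what the client-side check looks like in code (the endpoint path, report fields, and helper functions are placeholders, not our actual client):

```python
import requests


def verify_enclave(enclave_url: str, sigstore_lookup, vendor_verify) -> bool:
    """Illustrative client-side attestation check.

    sigstore_lookup(tag) -> expected code hash published to the transparency log
    vendor_verify(report) -> True if the report signature chains back to the
                             CPU/GPU manufacturer's root certificates
    """
    # 2a. Fetch the hardware-signed attestation report from the enclave.
    # (Endpoint path is a placeholder.)
    report = requests.get(f"{enclave_url}/attestation", timeout=10).json()

    # 2b. Verify the signature chains to the hardware manufacturer,
    # proving the report came from genuine silicon.
    if not vendor_verify(report):
        return False

    # 2c. Compare the measurement in the report with the hash of the
    # open-source build recorded in Sigstore.
    expected = sigstore_lookup(report["release_tag"])
    return report["measurement"] == expected
```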
We encourage you to be skeptical of our privacy claims. Verifiability is our answer. It’s not just us saying it’s private; the hardware and cryptography let you check. Here’s a guide that walks you through the verification process: https://docs.tinfoil.sh/verification/attestation-architectur....
People are using us for analyzing sensitive docs, building copilots for proprietary code, and processing user data in agentic AI applications without the privacy risks that previously blocked cloud AI adoption.
We’re excited to share Tinfoil with HN!
* Try the chat (https://tinfoil.sh/chat): it verifies attestation with an in-browser check. Free with limited messages; $20/month for unlimited messages and additional models.
* Use the API (https://tinfoil.sh/inference): OpenAI API-compatible interface, $2 / 1M tokens. (A minimal usage sketch follows after this list.)
* Take your existing Docker image and make it end-to-end confidential by deploying it on Tinfoil. Here's a demo of using Tinfoil to run a deepfake detection service securely on people's private videos: https://www.youtube.com/watch?v=_8hLmqoutyk. Note: this feature is not currently self-serve.
* Reach out to us at contact@tinfoil.sh if you want to run a different model or want to deploy a custom application, or if you just want to learn more!
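Since the API is OpenAI-compatible, existing SDKs work by pointing them at the Tinfoil endpoint. A minimal sketch (base URL and model name are placeholders; see the inference docs for current values):

```python
from openai import OpenAI

# Placeholders: check https://tinfoil.sh/inference for the current base URL
# and model names. A verifying client (or a check like the one sketched
# above) should confirm the enclave attestation before sending sensitive data.
client = OpenAI(
    base_url="https://inference.tinfoil.sh/v1",  # placeholder
    api_key="YOUR_TINFOIL_API_KEY",
)

resp = client.chat.completions.create(
    model="llama3-3-70b",  # placeholder model name
    messages=[{"role": "user", "content": "Summarize this contract clause..."}],
)
print(resp.choices[0].message.content)
```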
Let us know what you think, we’d love to hear about your experiences and ideas in this space!