
Show HN: Local Privacy Firewall - blocks PII and secrets before ChatGPT sees them


16 comments · December 9, 2025

OP here.

I built this because I recently caught myself almost pasting a block of logs containing AWS keys into Claude.

The Problem: I need the reasoning capabilities of cloud models (GPT/Claude/Gemini), but I can't trust myself not to accidentally leak PII or secrets.

The Solution: A Chrome extension that acts as a local middleware. It intercepts the prompt and runs a local BERT model (via a Python FastAPI backend) to scrub names, emails, and keys before the request leaves the browser.

A few notes up front (to set expectations clearly):

Everything runs 100% locally. Regex detection happens in the extension itself. Advanced detection (NER) uses a small transformer model running on localhost via FastAPI.

No data is ever sent to a server. You can verify this in the code + DevTools network panel.

This is an early prototype. There will be rough edges. I’m looking for feedback on UX, detection quality, and whether the local-agent approach makes sense.
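For a sense of what the first-pass regex layer might look like, here is a minimal sketch in Python (the patterns and placeholder names are illustrative guesses, not the extension's actual rules):

```python
import re

# Illustrative first-pass patterns; the real extension's rules aren't shown here.
PATTERNS = {
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
}

def scrub(text: str) -> str:
    """Replace anything a pattern matches with a typed placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label.upper()}]", text)
    return text

print(scrub("contact ops@example.com, key AKIAIOSFODNN7EXAMPLE"))
# → contact [EMAIL], key [AWS_ACCESS_KEY]
```

Anything the regex layer misses (names, addresses, free-form identifiers) is what the NER model on localhost is there to catch.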

Tech Stack:

- Manifest V3 Chrome extension
- Python FastAPI (localhost)
- HuggingFace dslim/bert-base-NER

Roadmap / Request for Feedback: Right now, the Python backend adds some friction. I received feedback on Reddit yesterday suggesting I port the inference to transformers.js to run entirely in-browser via WASM.

I decided to ship v1 with the Python backend for stability, but I'm actively looking into the ONNX/WASM route for v2 to remove the local server dependency. If anyone has experience running NER models via transformers.js in a Service Worker, I’d love to hear about the performance vs native Python.

Repo is MIT licensed.

Very open to ideas, suggestions, or alternative approaches.

postalcoder

Very neat, but recently I've tried my best to reduce my extension usage across all apps (browsers/ide).

I do something similar locally by manually specifying all the things I want scrubbed/replaced and having Keyboard Maestro run a script whenever I do a paste operation, mapped to `hyperkey + v`. The plus side of this is that the paste is instant. The latency introduced by even the littlest of inference is enough friction to make you want to ditch the process entirely.

Another plus of the non-extension solution is that it's application agnostic.

informal007

Smart idea! Thanks for sharing.

If we move the detection and modification from the paste operation to the copy operation, that would reduce in-use latency.

sailfast

How do you prevent these models from reading secrets in your repos locally?

It’s one thing for the ENVs to be user-pasted, but typically you’re also giving the bots access to your file system to interrogate and understand them, right? Does this also block that access for ENVs by detecting them and applying granular permissions?

willwade

I wonder if this would have been useful: https://github.com/microsoft/presidio - it's heavy but looks really good. There is a lite version.

fmkamchatka

Could this run at the network level (like TripMode)? Then it would catch usage from web-based apps but also the ChatGPT app, Codex CLI, etc.

p_ing

Deploy a TLS interceptor (forward proxy). There are many out there, both free and paid; there are also agent-based endpoint solutions like Netskope that do this so you don't have to route traffic through an internal device.

robertinom

That would be a great way to get some revenue from "enterprise" customers!

cjonas

Curious how much latency this adds (per input token)? Obviously it depends on your computer, but is it ~10s or ~1s?

Also, how does this deal with queries where a piece of PII is important to the task itself? I assume you just have to turn it off?

jedisct1

LLMs don't need your secret tokens (but MCP servers hand them over anyway): https://00f.net/2025/06/16/leaky-mcp-servers/

Encrypting sensitive data can be more useful than blocking entire requests, as LLMs can reason about that data even without seeing it in plain text.

The ipcrypt-pfx and uricrypt prefix-preserving schemes have been designed for that purpose.
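As a toy illustration of the prefix-preserving idea (this is NOT the actual ipcrypt-pfx construction, which is an invertible format-preserving cipher; it only demonstrates the property), each octet can be pseudonymized keyed on the octets before it, so addresses sharing a network prefix share a pseudonymized prefix:

```python
import hmac
import hashlib

def pseudonymize_ip(ip: str, key: bytes) -> str:
    """Toy prefix-preserving pseudonymization of an IPv4 address.
    Each octet is mapped via an HMAC keyed on all preceding octets,
    so two IPs that share a prefix share the pseudonymized prefix.
    Unlike the real scheme, this toy mapping is not invertible."""
    out, prefix = [], b""
    for octet in ip.split("."):
        digest = hmac.new(key, prefix + octet.encode(), hashlib.sha256).digest()
        out.append(str(digest[0]))  # one byte of the MAC as the new octet
        prefix += octet.encode() + b"."
    return ".".join(out)
```

With the same key, `pseudonymize_ip("10.0.0.1", key)` and `pseudonymize_ip("10.0.0.2", key)` agree on the first three octets, so an LLM can still reason about "same subnet" without ever seeing the real addresses.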

itopaloglu83

It wasn’t very clear in the video, does it trigger on paste event or when the page is activated?

There are a lot of websites that scan the clipboard to improve the user experience, but they also pose a great risk to users' privacy.

willwade

Can I have this between my machine and git please? It's twice now I've committed .env* and it totally passed me by (usually because it's to a private repo), then later on we/someone clears down the files and forgets to rewrite git history before pushing live. It should never have got there in the first place. (I wish GitHub did a scan before making a repo public.)

acheong08

GitHub does warn you when you have API keys in your repo. Alternatively, there are CLI tools such as TruffleHog that you can put in pre-commit hooks to run automatically before each commit.

mh-

You can use git hooks. Pre-commit specifically.

https://git-scm.com/docs/githooks
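A minimal sketch of that pre-commit idea: reject commits that stage secret-looking filenames. A real hook would feed it the output of `git diff --cached --name-only`; the deny-list here is illustrative, not exhaustive:

```python
import fnmatch

# Illustrative deny-list of secret-looking filenames (a guess, not exhaustive).
BLOCKED = ["*.env", ".env*", "*.pem", "id_rsa*"]

def blocked_files(staged_paths):
    """Return the staged paths whose basename matches a blocked pattern."""
    bad = []
    for path in staged_paths:
        name = path.rsplit("/", 1)[-1]  # basename, so patterns stay simple
        if any(fnmatch.fnmatch(name, pat) for pat in BLOCKED):
            bad.append(path)
    return bad

# A pre-commit hook would exit non-zero when this list is non-empty,
# which aborts the commit.
print(blocked_files(["src/app.py", "config/.env.local"]))
# → ['config/.env.local']
```

Filename checks like this are cheap but coarse; content scanners like TruffleHog mentioned above catch keys that land in otherwise innocuous files.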

hombre_fatal

At least you can put .env in the global gitignore. I haven’t committed a .DS_Store in 15 years because of it - its secrets will die with me.
