Improving the Trustworthiness of JavaScript on the Web
17 comments
· October 16, 2025
jmull
It would be helpful if they included a problem statement of some sort.
I don't know what problem this solves.
While I could possibly read all this and deduce what it's for, I probably won't... (The stated premise, "It is as true today as it was in 2011 that Javascript cryptography is Considered Harmful.", is not true.)
miloignis
For me, the key problem being solved here is to have reasonably trustworthy web implementations of end-to-end-encrypted (E2EE) messaging.
The classic problem with E2EE messaging on the web is that the whole point of E2EE is that you don't have to trust the server not to read your messages; but if you're using a web client, you have to trust the server to serve you JS that won't simply send the plaintext of your messages to the admin.
The properties of the web really exacerbate this problem, as you can serve every visitor to your site a different version of the app based on their IP, geolocation, tracking cookies, whatever. (Whereas with a mobile app everyone gets the same version you submitted to the app store).
With this proposed system, we could actually have really trustworthy E2EE messaging apps on the web, which would be huge.
(BTW, I do think E2EE web apps still have their place today, if you trust the server not to be malicious (say, you or a trusted friend runs it) and you're protecting against accidental disclosure.)
jmull
It doesn't seem like there's much difference in the trust model between E2EE web apps and App Store apps. Either way the publisher controls the code and you essentially decide whether to trust the publisher or not.
Perhaps there's something here that affects that dynamic, but I don't know what it is. It would help this effort to point out what that is.
fabrice_d
On the web, if your server is compromised it's game over, even if the publisher is not malicious. In app stores, you have some guarantee that the code that ends up on your device is what the publisher intended to ship (basically, signed packages). On the web it's currently impossible to bootstrap integrity verification with SRI alone: SRI can pin a page's subresources, but nothing pins the top-level HTML that loads them.
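To make that gap concrete, here is roughly how an SRI value is produced (a sketch; the file name is illustrative):

    import { createHash } from "node:crypto";
    import { readFileSync } from "node:fs";

    // Hash the script and format it the way the integrity attribute expects.
    const body = readFileSync("app.js");
    const integrity =
      "sha384-" + createHash("sha384").update(body).digest("base64");

    // The page can now pin the script:
    //   <script src="/app.js" integrity="sha384-..." crossorigin="anonymous"></script>
    // But the HTML document carrying that tag has no integrity attribute of
    // its own, so a compromised server can serve whatever HTML it likes.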
This proposal aims to provide the same guarantees for web apps without resorting to signed packages (i.e., not the mechanism that FirefoxOS or ChromeOS apps used). It's competing with the IWA (Isolated Web Apps) proposal from Google, which is a good thing.
CharlesW
> I don't know what problem this solves.
This allows you to validate that "what you sent is what they got", meaning that the code and assets the user's browser executes are exactly what you intended to publish.
So, this gives web apps and PWAs some of the same guarantees of native app stores, making them more trustworthy for security-sensitive use cases.
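A minimal sketch of the check involved, assuming a manifest that maps paths to hashes (the names here are hypothetical, not the actual proposal's API):

    import { createHash } from "node:crypto";

    type Manifest = Record<string, string>; // path -> expected SHA-256 (hex)

    // Before executing a fetched asset, hash it and compare against the
    // publicly logged manifest; on mismatch, refuse to run it.
    function assetMatchesManifest(
      manifest: Manifest,
      path: string,
      body: Buffer,
    ): boolean {
      const actual = createHash("sha256").update(body).digest("hex");
      return manifest[path] === actual;
    }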
everdrive
I improve the trustworthiness of js by blocking it by default.
AndrewStephens
As a site owner, the best thing you can do for your users is to serve all your resources from a server you control. Serving JavaScript (or any resource) from a CDN was never a great idea, and it's pointless these days with browser domain isolation; you might as well just copy any third-party .js in your build process.
I wrote a coincidentally related rant post last week that didn't set the front page of HN on fire, so I won't bother linking to it, but the TL;DR is that a whole range of supply-chain attacks just goes away if you host the files yourself. Every third party you force your users to request from is an attack vector you don't control.
I get what this proposal is trying to achieve, but it seems overly complex. I would hate to have to integrate this into my build process.
doomrobo
You're right that, when your own server is trustworthy, fully self-hosting removes the need for SRI and integrity manifests. But in the case that your server is compromised, you lose all guarantees.
Transparency adds a mechanism to detect when your server has been compromised. Basically you just run a monitor on your own device occasionally (or use a third party service if you like), and you get an email notif whenever the site's manifest changes.
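Schematically, a monitor is not much more than this (the URL and interval are made up, and a real monitor would query the transparency log rather than the site itself):

    import { createHash } from "node:crypto";

    let lastSeen: string | undefined;

    async function checkManifest(url: string): Promise<void> {
      const res = await fetch(url);
      const body = Buffer.from(await res.arrayBuffer());
      const digest = createHash("sha256").update(body).digest("hex");
      if (lastSeen !== undefined && digest !== lastSeen) {
        console.log(`manifest changed: ${lastSeen} -> ${digest}`); // send the email here
      }
      lastSeen = digest;
    }

    // Poll hourly.
    setInterval(() => checkManifest("https://example.com/manifest.json"), 3_600_000);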
I agree it's far more work than just not doing transparency. But the guarantees are real and not something you get from any existing technology afaict.
EGreg
If they want to make a proposal, they should have httpc://sha-256;... URLs, which are essentially constant (content-addressed): the same idea as SRI, but for top-level documents.
Then we could really have security on the Web! Audit companies (even anonymous ones, as long as they have a good reputation) could vet certain hashes as secure, and people and organizations could see a little padlock once M of N of them approved a new version.
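Roughly what resolving such a URL could look like (the layout here, including putting a host after the hash, is my own invention for illustration, not a standard):

    import { createHash } from "node:crypto";

    // e.g. "httpc://sha-256;<base64url-digest>@example.com/app"
    async function fetchConstant(url: string): Promise<Buffer> {
      const m = /^httpc:\/\/sha-256;([^@]+)@(.+)$/.exec(url);
      if (!m) throw new Error("unsupported httpc URL");
      const [, expected, rest] = m;
      const res = await fetch(`https://${rest}`);
      const body = Buffer.from(await res.arrayBuffer());
      const actual = createHash("sha256").update(body).digest("base64url");
      if (actual !== expected) throw new Error("integrity check failed");
      return body; // the content is pinned by its own URL, like SRI for the whole page
    }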
As it is, we need an extension for that, because SRI is only for subresource integrity. It doesn't even work on HTML in iframes, which is a shame!
ameliaquining
The linked proposal is basically a user-friendlier version of that, unless you have some other security property in mind that I've failed to properly understand.
some_furry
This is really cool, and I'm excited to hear that it's making progress.
Binary transparency allows you to reason about the auditability of the JavaScript being delivered to your web browser. This is the first significant step towards a solution to the "JavaScript Cryptography Considered Harmful" blog post.
The remaining missing pieces here are, in my view, code signing and the corresponding notion of public key transparency.
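For the code-signing piece, the verification itself is the easy part (a sketch using Node's built-in Ed25519 support; distributing and auditing the publisher key is what key transparency would have to solve):

    import { createPublicKey, verify } from "node:crypto";

    // Check the publisher's signature over the manifest bytes.
    function manifestIsSigned(
      manifest: Buffer,
      signature: Buffer,
      publisherKeyPem: string,
    ): boolean {
      const key = createPublicKey(publisherKeyPem);
      return verify(null, manifest, key, signature); // algorithm must be null for Ed25519
    }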
zb3
Ok (let's pretend I didn't see the word "blockchain" there), but none of this should interfere with browser extensions that need to modify the application code.
some_furry
EDIT: Disregard this comment. I think there was a technical issue on my computer. Keeping the original comment below.
-----
> let's pretend I didn't see the word "blockchain" there
There's nothing blockchain about this blog post.
I think this might be a rectangles-vs-squares thing. While it's true that all blockchains use chains of hashes (e.g., via Merkle trees), it's not true that every use of an append-only data structure is a cryptocurrency.
See also: Certificate transparency.
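To illustrate the distinction, the entire "chain of hashes" idea fits in a few lines, no coins required (a toy version; CT uses a Merkle tree so membership can be proven efficiently):

    import { createHash } from "node:crypto";

    // Each new head commits to the previous head plus the new entry, so
    // history can't be rewritten without changing every later head.
    function append(prevHead: string, entry: string): string {
      return createHash("sha256").update(prevHead).update(entry).digest("hex");
    }

    let head = createHash("sha256").update("genesis").digest("hex");
    head = append(head, "example.com manifest v1");
    head = append(head, "example.com manifest v2");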
JimDabell
They specifically suggest using a blockchain for Tor:
> A paranoid Tor user may not trust existing transparency services or witnesses, and there might not be any other trusted party with the resources to self-host these functionalities. For this use case, it may be reasonable to put the prefix tree on a blockchain somewhere. This makes the usual domain validation impossible (there’s no validator server to speak of), but this is fine for onion services. Since an onion address is just a public key, a signature is sufficient to prove ownership of the domain.
some_furry
Oh, weird. I didn't see that (and a subsequent Ctrl+F showed 0 results) but now it's showing up for me?