
Jurisdiction Is Nearly Irrelevant to the Security of Encrypted Messaging Apps

tabbott

I think this is a dangerous view. As we've seen with the xz backdoor, skilled developers are very capable of hiding backdoors/vulnerabilities in security software, even when it is open source. So it's very important whether the developers building the software are trustworthy.

Authoritarian jurisdictions with a modus operandi of compelling their businesses and citizens by force are thus much riskier than Western democracies, even flawed ones. I at least expect it's a lot harder to say no to demands to break your promises that come with credible threats of torturing your family.

I'll also say that it's quite hard to make a messaging app without the servers that run the service having a great deal of power in the protocol. Many types of flaws or bugs in a client or protocol go from "theoretical problem" to "major issue" in the presence of a malicious server.

So if end-to-end security is a goal, you must pay attention not only to the protocol/algorithms and client codebase. The software publisher's risks matter (e.g., Zoom carries a lot of risk from its China-centric development team), as do those of the hosting provider (if different from the publisher).

And also less obvious risks, like the mobile OS, mobile keyboard app, or AI assistant that processes your communications even though they're sent between clients with E2EE.

Reflections on Trusting Trust is still a great read for folks curious about these issues.

some_furry

> I think this is a dangerous view.

I think you misinterpreted the most important nuance in this post. The rest of your comment is about jurisdiction in the context of who develops the client software.

The blog post is talking about jurisdiction in the context of where ciphertext is stored, and only calls that mostly irrelevant. The outro even acknowledges that when jurisdiction does matter at all, it's about the security of the software running on the end machine. (The topic at hand is end-to-end encryption, after all!)

So, no, this isn't a dangerous view. I don't think we even disagree at all.

tabbott

I think we agree here that the US/Europe jurisdiction difference is relatively minor compared to questions about the software itself.

What's dangerous is the framing; many E2EE messengers give the server a LOT more power than "just stores the ciphertext". https://news.ycombinator.com/item?id=33259937 is discussion of a relevant example that's gotten a lot of attention, with Matrix giving the server control over "who is in a group", which can be the whole ball game for end-to-end security.
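To make concrete why server-controlled membership "can be the whole ball game", here is a toy sketch (not modeling Matrix or any real protocol; the XOR "encryption" and all names are illustrative only): honest clients dutifully encrypt to whatever roster the server reports, so a malicious server just adds itself.

```python
# Toy model: if the server controls group membership, it can silently add
# its own "member" and receive every message that honest clients
# faithfully encrypt to the whole roster.
import secrets

class Server:
    def __init__(self):
        self.roster = {"alice", "bob"}   # membership lives on the server
    def add_member(self, name):          # nothing stops a malicious server here
        self.roster.add(name)

def send_group_message(server, plaintext, keys):
    # Clients encrypt one copy per roster entry (toy XOR "encryption").
    out = {}
    for member in server.roster:
        k = keys[member]
        out[member] = bytes(p ^ x for p, x in zip(plaintext, k))
    return out

keys = {m: secrets.token_bytes(32) for m in ("alice", "bob", "eve-the-server")}
server = Server()
server.add_member("eve-the-server")      # invisible to honest clients in this toy

msg = b"meet at noon"
ciphertexts = send_group_message(server, msg, keys)

# The server's planted member can decrypt its own copy:
k = keys["eve-the-server"]
recovered = bytes(c ^ x for c, x in zip(ciphertexts["eve-the-server"], k))
assert recovered == msg
```

The E2EE math is intact here; the trust failure is entirely in who decides the recipient list.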

And that's not even getting into the power of side channel information available to the server. Timing and other side channel attacks can be powerful.
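As a small illustration of the timing-attack point (a generic sketch, not tied to any messenger): a naive comparison of a secret value returns early on the first mismatch, so response time leaks how many leading bytes an attacker has guessed correctly. Python's stdlib `hmac.compare_digest` is the standard constant-time alternative.

```python
# Naive comparison leaks timing; hmac.compare_digest does not.
import hmac

def naive_equal(a: bytes, b: bytes) -> bool:
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        if x != y:
            return False   # early exit: running time depends on the secret
    return True

def constant_time_equal(a: bytes, b: bytes) -> bool:
    # Examines every byte regardless of where a mismatch occurs.
    return hmac.compare_digest(a, b)
```

Both functions return the same answers; the difference is only in how long they take, which is exactly the channel a server-side attacker can measure.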

Standard security practice is defense in depth, because real-world systems always have bugs and flaws, and cryptographic algorithms have a history of being eventually broken. Control over the server and access to ciphertext are definitely capabilities that, in practice, can often be combined with vulnerabilities to break secure systems.

If the people who develop the software are different from those who host the server, that's almost certainly software you can self-host. Why not mention self-hosting in the article?

If you're shopping for a third party to host a self-hostable E2EE messenger for you, the framing of the server as just "storing ciphertext" would suggest the trustworthiness of that hosting provider isn't relevant. I can't agree with that claim.

some_furry

> What's dangerous is the framing; many E2EE messengers give the server a LOT more power than "just stores the ciphertext". https://news.ycombinator.com/item?id=33259937 is discussion of a relevant example that's gotten a lot of attention, with Matrix giving the server control over "who is in a group", which can be the whole ball game for end-to-end security.

I'm a vocal critic of Matrix, and I would not consider it a private messenger like Signal.

https://soatok.blog/2024/08/14/security-issues-in-matrixs-ol...

When Matrix pretends to be a Signal alternative, the fact that the server has control over group membership makes their claim patently stupid.

> And that's not even getting into the power of side channel information available to the server. Timing and other side channel attacks can be powerful.

A lot of my blog discusses timing attacks and side-channel cryptanalysis. :)

> If the people who develop the software are different from those who host the server, that's almost certainly software you can self-host. Why not mention self-hosting in the article?

Because all of the self-hosting solutions (i.e., Matrix) have, historically, had worse cryptography than the siloed solutions (i.e., Signal, WhatsApp) to the point that I wholesale discount Matrix, OMEMO, etc. as secure messaging solutions.

> If you're shopping for a third party to host a self-hostable E2EE messenger for you, the framing of the server as just "storing ciphertext" would suggest the trustworthiness of that hosting provider isn't relevant. I can't agree with that claim.

It's more of an architecture question.

Is a self-hosted Matrix server that accepts and stores plaintext, but is hosted in Switzerland, a better way to chat privately than Signal? What if your threat model is "the US government"? My answer is a resounding, "No. You should fucking use Signal."

mananaysiempre

> But What If The Host Country [...] Legally Compels the App Store to Ship Malware?

> This is an endemic risk to smartphones, but binary transparency makes this detectable.

> That said, at minimum, the developer should control their own signing keys.

So, don’t ship on the Play Store unless you’re grandfathered?

> If the developers for an app do not live in a liberal democracy with a robust legal system, they probably cannot tell their government, “No,” if they’re instructed to backdoor the app and cut a release (stealth be damned).

Or when the laws of said democracy make it illegal for them to say “no” (see: Australia, possibly the US per Lavabit, realistically every country in Europe if the government is willing to claim a grave enough threat per the German hoster of Jabber.ru attempting a MITM against them).

mcherm

I think this is missing one important issue: what if your encryption is highly reliable, but the ciphertext is hosted in a jurisdiction that has laws requiring the disclosure of the plaintext (perhaps with a court order or a "National Security Letter") and the ability to compel the system owners to obey?

some_furry

> what if your encryption is highly reliable, but the ciphertext is hosted in a jurisdiction that has laws requiring the disclosure of the plaintext (perhaps with a court order or a "National Security Letter") and the ability to compel the system owners to obey?

This is a contradiction. If you have such a capability, then your encryption isn't sufficiently reliable. If it is sufficiently reliable, then this law cannot take effect.

If, for example, Australia wanted to compel me to backdoor something for their investigative purposes, there's nothing they can do. I live in America.

If I hosted ciphertext in Australia, the most they can hope is to terminate the service in their country. This is an availability concern, but the failure mode isn't "the government sees your nudes".

> (perhaps with a court order or a "National Security Letter")

National Security Letters don't do what you think they do. There are widespread misconceptions about their allowed scope, but they only allow the government to request "subscriber information" from a service provider. That doesn't include "we compel you to backdoor your app, and here's an automatic gag order". If they try to use non-NSL measures to accomplish this compulsion, talk to a lawyer, not a cryptographer.

adrian_b

The previous poster was not referring to a backdoor, but to the fact that in certain places, including the USA, law enforcement can demand your decryption key, and if you do not comply they can jail you indefinitely until you do.

In my opinion, as someone who was born and raised in a country occupied by external invaders, who installed there a fake communist "democracy" and fake justice, the most fundamental human right is the right to refuse to answer a question, regardless of who asks it.

If, in a country, such a refusal is sufficient grounds for severe punishment, without any other proof that the person refusing to answer has done anything wrong, then, whether that refusal is labeled "obstruction of justice", "contempt of court", or anything else, any claim that human rights are respected in that country is false.

It is a shame for the United Nations that this most important human right is not included in their declaration.

In order to be able to oppose an abusive government, the right to refuse to answer a question is much more important than the right to possess weapons (which will always be inferior to those of law enforcement and the military, so they are not a solution).

some_furry

The blog post makes it clear that, if the service operator ever even has access to the secret keys to surrender in the first place, it doesn't qualify as "properly implemented cryptography". See: the Mud Puddle test.

The only way they would be able to acquire the key would be to push a backdoored update to the app. Reproducible builds (which implies open source to be meaningful) and binary transparency make that incompatible with gag orders, by design.
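Binary transparency in miniature (a deliberately simplified sketch, not any real system such as Android's binary transparency or Sigstore; the log here is just a Python list): clients only accept updates whose digest appears in a public, append-only log that independent auditors also watch, so a stealthy, targeted backdoored build can't be delivered without leaving public evidence.

```python
# Minimal sketch of the binary-transparency idea: accept an update only
# if its hash appears in a public, append-only log.
import hashlib

transparency_log = []           # stands in for a public, append-only log

def publish(artifact: bytes) -> str:
    digest = hashlib.sha256(artifact).hexdigest()
    transparency_log.append(digest)
    return digest

def client_accepts(artifact: bytes) -> bool:
    # A client installs only artifacts whose digest is publicly logged;
    # a targeted backdoor would have to be logged for everyone to see.
    return hashlib.sha256(artifact).hexdigest() in transparency_log

good = b"app-v1.2.3"
publish(good)
assert client_accepts(good)
assert not client_accepts(b"app-v1.2.3-backdoored")
```

Reproducible builds complete the picture: auditors can rebuild each logged release from source and check that the digests match, which is why the logged-but-backdoored escape hatch doesn't work quietly.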