
Let's talk about AI and end-to-end encryption

klik99

> You might even convince yourself that these questions are “privacy preserving,” since no human police officer would ever rummage through your papers, and law enforcement would only learn the answer if you were (probably) doing something illegal.

Something I've started to see happen but rarely see mentioned is the effect automated detection has on systems: as detection becomes more automated (previously hand-authored algorithms, now large AI models), less money is available for individual case workers, and management places more trust in the automatic detection. This leads to false positives turning into major frustrations, since it's hard to get in touch with a person to resolve the issue. When dealing with businesses it's frustrating, but as these systems get used more in law enforcement, it could be life-ruining.

For instance - I got flagged for illegal reviews on Amazon years ago and spent months trying to make my case to a human. Every year or so I try to raise the issue again so I can leave reviews, but it gets nowhere. Imagine this happening with a serious criminal issue; given the years-long backlog in some courts, it could ruin someone's life.

More automatic detection can work (and honestly, it's inevitable), but it has to acknowledge that false positives will happen and allocate enough people to resolve those issues. As it stands, detection systems get built and human case workers immediately get laid off. There's this assumption that detection systems REPLACE humans, but they should augment and focus human case workers so you can do more with less - the human aspect needs to be included in the budgeting.

But the incentives aren't there, and the people making the decisions aren't the ones working the actual cases, so they aren't confronted with the problem. For them, the question is: why save $1m when you could save $2m? With large AI models making it easier and more effective to build automated detection, I expect this problem to get significantly worse over the next few years.

drysine

>Imagine this happening with a serious criminal issue; given the years-long backlog in some courts, it could ruin someone's life.

It can be much scarier.

There was a case in Russia where a scientist was accused of a murder that happened 20 years ago, based on a 70% face recognition match and a fake identification as an accomplice by a criminal. [0] He spent 10 months in jail during the "investigation" despite being incredibly lucky to have an alibi -- archival records of the institute where he worked, proving he was on an expedition far away from Moscow at the time. He was eventually freed, but I'm afraid the police investigators who used a very weak face recognition match as a way to improve their performance stats are still working in the police.

[0] https://lenta.ru/articles/2024/04/03/scientist/

rainonmoon

Grave consequences are not a rarity. Automated decision-making in immigration and housing classifies people with zero recourse or transparency, locking them out of a place to live (and, in the case of Australia, locking them up in offshore detention for years).

polygon87

I know it's the wrong way to think, but things like this make me glad to have a digital footprint… statistically, there's a good chance I'm liking a TikTok comment or reading an HN thread at the same time as any given crime.

asddubs

That's not going to get you off the hook. Anything that could be faked via account sharing is going to be discarded (not to mention that TikTok and similar platforms will not collaborate with you to build an alibi by giving you access to this data -- only with the police to build a case).

ithkuil

And there are probably other people in jail, convicted using the same method, who were just unlucky enough not to have a bulletproof alibi?

drysine

I don't know, but it seems quite likely, unfortunately. There were quite a few other cases when fake evidence was planted by police.

It's not the only problem with the technology -- it's claimed that there have been over a hundred cases of false DNA matches not caused by malice or processing errors. [0] In theory, a DNA match must not be treated by courts as 100% accurate, but in practice it is.

On the other hand, there were cases where human rights advocates or journalists claimed that innocent people had been jailed, but that turned out to be false -- like people getting caught on camera committing the same kind of crime again after serving their sentence.

[0] https://www.kommersant.ru/doc/5825384

bflesch

[flagged]

asddubs

https://www.washingtonpost.com/local/crime/fbi-overstated-fo...

The notion that this kind of thing couldn't happen in the west is laughable

fwn

> [...] my conclusion is that you're here to spill russian propaganda. [...]

The case described by the parent is that of someone who was wrongly imprisoned for 10 months on the basis of bogus application of faulty technology, even though they had a solid alibi. Therefore, the comment does not reflect well on Russia, the Russian state or the Russian government, like.. at all.

If there is a propaganda dimension to this (which I doubt), it is certainly not an attempt to say something nice about the Russian justice system.

smallmancontrov

The UK Post Office scandal is bone-chilling.

Update this to a world where every corner of your life is controlled by a platform monopoly that doesn't even provide the most bare-bones customer service and yeah, this is going to get a lot worse before it gets better.

Vampiero

And that's the early game.

Imagine when AI will be monitoring all internet traffic and arresting people for thoughtcrime.

What wasn't feasible to do before is now quite in reach and the consequences are dire.

Though of course it won't happen overnight. First they will let AI encroach on every available space (backed by enthusiastic techbros). THEN, once it's established, boom. Authoritarian police state dystopia times 1000.

And it's not like they need evidence to bin you. They just need inference. People who share your psychological profile will act and speak and behave in a similar way to you, so you can be put in the same category. When enough people in that category are tagged as criminals, you will be too.

All because you couldn't be arsed to write some boilerplate

shakna

It's already arresting the wrong people [0].

[0] https://www.theregister.com/2023/08/08/facial_recognition_de...

HeatrayEnjoyer

We need strong and comprehensive regulations. Some places have enacted partial solutions, but none anywhere near as complete as needed. The EU has the GDPR and some early AI laws; India has the IT Act, which requires companies to provide direct end-user support.

BlueTemplar

That's why there are transparency laws that indirectly forbid the use of black box decision systems like these for anything government-related.

WhyNotHugo

This exact scenario is described in the 1965 short story "Computers Don't Argue".

You can find it in the following link in the third page of the PDF (labelled as page 84): https://nob.cs.ucdavis.edu/classes/ecs153-2021-02/handouts/c...

It's amazing how 60 years ago somebody anticipated these exact scenarios, yet we didn't take their cautionary tale seriously in the slightest.

klik99

Wow, that story is so grim and prescient

divan

There was a good thread on this phenomenon (called "accountability sinks") [1]

[1] https://news.ycombinator.com/item?id=41891694

t0bia_s

It could also be used to eliminate political opponents, minorities, etc. Persecution based on collective guilt derived from digital footprints has never been easier.

CoffeeOnWrite

Also AI for accountability laundering. It gives plausible deniability. It's a sociopathic manager's dream.

rainonmoon

This. They're digital sniffer dogs, a pretext to lend credibility to vibes-based policing.

blueblimp

> Yet this approach is obviously much better than what’s being done at companies like OpenAI, where the data is processed by servers that employees (presumably) can log into and access.

No need for presumption here: OpenAI is quite transparent about the fact that they retain data for 30 days and have employees and third-party contractors look at it.

https://platform.openai.com/docs/models/how-we-use-your-data

> To help identify abuse, API data may be retained for up to 30 days, after which it will be deleted (unless otherwise required by law).

https://openai.com/enterprise-privacy/

> Our access to API business data stored on our systems is limited to (1) authorized employees that require access for engineering support, investigating potential platform abuse, and legal compliance and (2) specialized third-party contractors who are bound by confidentiality and security obligations, solely to review for abuse and misuse.

chefandy

I have to say -- I'm kind of amazed that anyone would expect privacy from chat bot companies and products. You're literally having a "conversation" with the servers of companies that built their entire product line on other people's professional and personal output, whether they approved of it, or even knew about it, or not. It's less a "better to ask forgiveness than permission" sort of thing than a "we'd rather just not ask, be pretty cagey about it if they ask, and then, if they prove it, tell them they tacitly agreed to it by not hiding it from us even though they had no way to know we were looking" sort of thing.

Frankly, I'm astonished that OpenAI, specifically, promises as much as they do in their privacy policy. Based on their alleged bait-and-switch tactics -- quietly swapping out models or reducing compute for paying customers after the initial "gee whiz, look at that" press cycle -- I can't imagine those privacy policies will have much longevity once the company gets onto more stable footing... and whoops, looks like they figured out how to extract the training data from the models! And it's different data since we extracted it from the model, so the old privacy policy doesn't apply! Haha, sorry, that's business, and we're building a techno-utopian society here, so you should feel honored to be included! You think Altman wouldn't sell that in a heartbeat to try to fund some big moonshot product if they get clobbered in the marketplace? Never mind the sketchy girlfriend-in-an-app-class chatbots.

Don’t get me wrong — I absolutely think the privacy SHOULD be there, but I’m just shocked that anyone would assume it was. Maybe I’m being overly cynical? These days when I think I might be, in the end, it seems I wasn’t being cynical enough.

notfed

Cynically, I think most people know this in this kind of situation, but like clockwork media sources will suddenly dramatize things for clicks, money, lawsuits, or politics, and people will nod their heads not because they agree with the accusations, but because they have preconceived bias against the defendant company.

ajb

The real threat here is going to be when AI expands from being applied to accelerate the work of individuals, to being applied to the control of organisations. And it will be tempting to do that. We all know the limitations of managers, management hierarchies, metrics, OKRs etc. It's easy to think of a CEO deciding that all the communications between their employees should just be fed into an AI that they can query. (Ironically that would be easier to enforce if everyone was remote). It's quite possible that it would enable more effective organisations, as the CEO and upper level management can have a better idea of what is really happening. But it will reduce the already tenuous belief of the powerful that their ordinary staff are real human beings. And it will inevitably leak out from private organisations, as the executive class see no reason why they shouldn't have the same tools when running the country as when running a corporation.

Advocates of mass surveillance like to point out that no human now needs to listen to your calls. But the real danger was never the guy in a drab suit transcribing your conversations from reel-to-reel tape. It was always the chief who could call for the dossier on anyone they were finding inconvenient, and have it looked at with an eye to making you never inconvenience them again.

The full consequences of mass surveillance have not played out simply because no one had the means to process that much ad-hoc unstructured data. Now they do.

TeMPOraL

> It's easy to think of a CEO deciding that all the communications between their employees should just be fed into an AI that they can query. (Ironically that would be easier to enforce if everyone was remote).

This is already happening, whether the CEOs want it or not - when there's a legal issue requiring discovery, e-discovery software may be used to pull in all digital communications that can be accessed, and feed it all to AI for, among other things, sentiment analysis. Applications of GenAI for legal work, in general, is a hot topic in legal circles now.

rglover

> We are about to face many hard questions about these systems, including some difficult questions about whether they will actually be working for us at all.

And how. I'd lean towards no. Where we're headed feels like XKEYSCORE on steroids. I'd love to take the positive, optimistic bent on this, but when you look at where we've been, combined with the behavior of the people in charge of these systems (to be clear, not the researchers or engineers, but the c-suite), hope of a neutral, privacy-first future seems limited.

ActorNightly

Given how politics and companies have evolved, I actually trust the people in charge of XKEYSCORE systems more than ever. They may wear suits, but those people usually come from some military background and have a sense of duty towards defending the US from threats both foreign and domestic, and historically they have not really abused their powers no matter what the administration is. XKEYSCORE, for example, wasn't really about hacking people; it was just about collecting mass metadata and building profiles, well within the legal system, and the blame should be on the companies that didn't provide privacy tools, because any big government could have built the same system.

Meanwhile, the supposedly anti-establishment Republican Party that has been crying about big tech since 2016 turned out to be the biggest pro-establishment fans, with Elmo getting an office in the White House and Zucc bending the knee to avoid prosecution.

With these new systems, I'd rather have in charge smart people who work in US defense only out of a sense of duty (considering they could get paid much more in the private sector).

rglover

Unfortunately, there are far too many examples of those very people abusing these tools. They shot the "sense of honor and duty" argument point blank just by allowing these things to exist in the first place.

If what you say is true, there would have been more than one honorable person to step up and say "hey, wait a minute." In the case of XKEYSCORE, there was precisely one, and he's basically been marooned in Russia for over a decade (and funny enough, XKEYSCORE still exists and is likely still utilized in the exact same way [1]).

Never underestimate the effect the threat of character destruction—and by extension, loss of income—will have on even the most honorable person's psyche. In situations involving matters like these, it's always far more likely that the "pressure" will be ratcheted up until the compliance (read: keep your mouth shut) rate is 100%.

[1] https://documents.pclob.gov/prod/Documents/OversightReport/e...

ActorNightly

> far too many examples of those very people abusing these tools.

Name one. And not about some agency collecting data, or targeting a foreign national with suspected ties to terrorists, all of which is within the bounds of the law. I want to hear an example of a fully innocent US citizen who was targeted, for no reason whatsoever, by someone for personal gain.

You can't. Because it doesn't happen. Even in the report that you linked (which I know you didn't read btw), it literally states the multitude of guardrails in place for using XKEYSCORE.

>If what you say is true, there would have been more than one honorable person to step up and say "hey, wait a minute." In the case of XKEYSCORE, there was precisely one, and he's basically been marooned in Russia for over a decade

Here is a pro tip: anytime you hear or read about Bad Big Brother Government, ask yourself why the person reporting it should be given the benefit of the doubt and not the government. People took a lot of what Snowden said as gospel, despite him being technically wrong on a lot of stuff, all because it's "cool" to be anti big brother, no matter what the actual truth is.

schmidtleonard

> well within the legal system

It's not a search if we don't find anything, and it's not a seizure if we charge the money with the crime. These are court approved arguments, so they must be correct interpretations.

Point is: modern bureaucrats have proven that they are absolutely willing to abuse power, even in the best of times when there is no real domestic political strife.

ActorNightly

Given the technical nature of this forum, it's absolutely mind-boggling that people still don't understand what the surveillance programs were about.

If you want to use an analogy, it's more along the lines of people living in houses and driving cars made out of pure glass that are completely see-through, with faces blurred, and the NSA just having cameras around. If you are going to tell me that this is an abuse of power, it's like an argument comparing the US to an absolute utopia.

toss1

Good thoughts but as you point out about Elmo & Zucc, there is no way it stays with just the responsible people. It will also not be limited to protest. Just look at what Florida, Texas, and other states are doing about women's healthcare - any general agent worth its salt and with a bit of data will know about any woman's periods, pregnancies, miscarriages, and travel - which is being criminalized ....

ActorNightly

From personal experience in the government contracting world with a TS/SCI clearance, I have a lot of faith in people in charge not letting bad actors abuse these sort of powers.

Less so than before these days, but I'd still wager on them holding true to their duty to defend the constitution.

saagarjha

> historically have not really abused their powers

How would you know?

rainonmoon

We do know - that they demonstrably have abused their powers. I didn't realise it was possible to know about XKEYSCORE with no context or understanding of the Snowden leaks but GP seems to have missed that the "suits" "in charge of XKEYSCORE", the NSA, have repeatedly illegally wiretapped American citizens, to say nothing of the FISA abuses, Five Eyes, etc. Regardless of how you feel about the three-letter agencies' impacts on the rest of the world, the thought that anyone on Hacker News would consider these programs defensible is shocking.

HeatrayEnjoyer

I absolutely do not trust it, but AFAIK the military doesn't feed much intelligence to law enforcement on US soil. (We'll see if that's still the case in the near future.)

rapjr9

My guess is that the main purpose of agents will be to train the AI on your data. Companies have run out of data on the internet for training AIs, so they'll use agents as an excuse to get access to your personal real-time data. This has always been the business model: you are the product.

ozgune

> Apple even says it will publish its software images (though unfortunately not the source code) so that security researchers can check them over for bugs.

I think Apple recently changed their stance on this. Now, they say that "source code for certain security-critical PCC components are available under a limited-use license." Of course, would have loved it if the whole thing was open source. ;)

https://github.com/apple/security-pcc/

> The goal of this system is to make it hard for both attackers and Apple employees to exfiltrate data from these devices.

I think Apple is claiming more than that. They are saying 1/ they don't keep any user data (data only gets processed during inference), 2/ no privileged runtime access, so their support engineers can't see user data, and 3/ they make binaries and parts of the source code available to security researchers to validate 1/ and 2/.

You can find Apple PCC's five requirements here: https://security.apple.com/documentation/private-cloud-compu...

Note: Not affiliated with Apple. We read through the PCC security guide to see what an equivalent solution would look like in open source. If anyone is interested in this topic, please hit me up at ozgun @ ubicloud . com.

saagarjha

Some of the core elements of the boot process are not source available, unfortunately.

Animats

> Who does your AI agent actually work for?

Yes. I made that point a few weeks ago. The legal concept of principal and agent applies.

Running all content through an AI in the cloud to check for crimethink[1] is becoming a reality. Currently proposed:

- "Child Sexual Abuse Material", which is a growing category that now includes AI-generated images in the US and may soon extend to Japanese animation.

- Threats against important individuals. This may be extended to include what used to be considered political speech in the US.

- Threats against the government. Already illegal in many countries. Bear in mind that Trump likes to accuse people of "treason" for things other than making war against the United States.

- "Grooming" of minors, which is vague enough to cover most interactions.

- Discussing drugs, sex, guns, gay activity, etc. Variously prohibited in some countries.

- Organizing protests or labor unions. Prohibited in China and already searched for.

Note that talking around the issue or using jargon won't evade censorship. LLMs can deal with that. Run some Ebonics or leetspeak through an LLM and ask it to translate it to standard English. The translation will succeed. The LLM has probably seen more of that dialect than most people.

"If you want a vision of the future, imagine a boot stepping on a face, forever" - Orwell

[1] https://www.orwell.org/dictionary/

iugtmkbdfil834

The cynic in me is amused at the thought of some yet-unknown corporation being placed under investigation due to a trigger phrase transcribed incorrectly in one of its meetings.

Your point is worth reiterating.

Terr_

Or poisoned data that sets up a trap, so that a system will later confabulate false innocence or guilt when certain topics or targets come up.

crooked-v

"Grooming" in particular is the angle that Republicans want to use to illegalize any kind of gender-nonconforming behavior, up to and including desired states like "women wearing pants is crossdressing, and doing so around children is a felony".

nashashmi

The most depressing realization in all of this is that the vast treasure trove of data we kept in the cloud, thinking it wasn't scannable even for criminal activity, has now become a vector for thought police to come down on us for simple ideas of dissent.

AlexandrB

A lot of people tried to sound the alarm. It's not "the cloud", it's "other people's computers". And given that other people own these machines, their interests - whether commercial or ideological - will always come first.

Terr_

Plus the machines they don't technically own--like Microsoft's attempts to force online accounts, bloatware, telemetry, etc.

tokioyoyo

To be fair, most people understand that risk. It's just that it is very convenient in a lot of scenarios, and some businesses might not even have started without it. Privacy is not that big of a concern for a big chunk of people. And they're basically voting with their wallets.

like_any_other

"Voting with their wallets", where all the choices are picked by entities hostile to consumer privacy and autonomy, and then they mislead you about those choices.

People who bought LG (and by now most other "smart") TVs did not in any meaningful way "vote" to be spied on and to support DRM - it's simply that all the TVs in a store would spy and show ads, and none of this was disclosed at the time of sale.

vaylian

Trump will take office on Monday. If he chooses to declare some progressive idea anti-American, a lot of people who previously said nothing illegal could face hostility from "patriots".

I hope this doesn't happen. But I wouldn't be surprised if it did. Old data can become toxic waste.

walrus01

It's a good thing that encrypted data at rest on your local device is inaccessible to cloud based "AI" tools. The problem is that your average person will blithely click "yes/accept/proceed/continue/I consent" on pop-up dialogs in a GUI and agree to just about any Terms of Service, including having their data decrypted before it's sent to some "cloud" based service.

I see "AI" tools being used even more in the future to permanently tie people to monthly recurring billing services for things like icloud, microsoft's personal grade of office365, google workspace, etc. You'll pay $15 a month forever, and the amount of your data and dependency on the cloud based provider will mean that you have no viable path to ever stop paying it without significant disruption to your life.

flossposse

Green (the author) makes an important point:

> a technical guarantee is different from a user promise. [...] End-to-end encrypted messaging systems are intended to deliver data securely. They don’t dictate what happens to it next.

Then Green seems to immediately forget the point they just made, and proceeds to talk about PCC as if it were something other than just another technical guarantee. PCC only helps to increase confidence that the software running on the server is the software Apple intended to be there. It doesn't give me any guarantees about where else my data might be transferred from there, or whether Apple will only use it for purposes I'm okay with. PCC makes Apple less vulnerable to hacks, but doesn't make them any more transparent or accountable. In fact, to the extent that some hackers hack for pro-social purposes like exposing corporate abuse, increased security also serves as a better shield against accountability.

Of course, I'm not suggesting that we should do away with security to achieve transparency. I am, however, suggesting that transparency, more so than security, is the major unaddressed problem here. I'd even go so far as to say that the woeful state of security is enabled in no small part by the lack of transparency. If we want AI to serve society, then we must reverse the extreme information imbalance we currently inhabit, wherein every detail of each person's life is exposed to the service provider, but the service provider is a complete black box to the user. You want good corporate actors? Don't let them operate invisibly. You want ethical tech? Don't let it operate invisibly.

(Edit: formatting)

bee_rider

The author helpfully emphasized the interesting question at the end

> This future worries me because it doesn’t really matter what technical choices we make around privacy. It does not matter if your model is running locally, or if it uses trusted cloud hardware — once a sufficiently-powerful general-purpose agent has been deployed on your phone, the only question that remains is who is given access to talk to it. Will it be only you? Or will we prioritize the government’s interest in monitoring its citizens over various fuddy-duddy notions of individual privacy.

I do think there are interesting policy questions there. I mean it could hypothetically be mandated that the government must be given access to the agent (in the sense that we and these companies exist in jurisdictions that can pass arbitrary laws; let’s skip the boring and locale specific discussion of whether you think your local government would pass such a law).

But, on a technical level—it seems like it ought to be possible to run an agent locally, on a system with full disk encryption, and not allow anyone who doesn’t have access to the system to talk with it, right? So on a technical level I don’t see how this is any different from where we were previously. I mean you could also run a bunch of regex’s from the 80’s to find whether or not somebody has, whatever, communist pamphlets on their computers.

There’s always been a question of whether the government should be able to demand access to your computer. I guess it is good to keep in mind that if they are demanding access to an AI agent that ran on your computer, they are basically asking for a lossy record of your entire hard drive.

cryptonector

> The author helpfully emphasized the interesting question at the end

We're already there. AI or not doesn't affect the fact that smartphones gather, store, and transmit a great deal of information about their users and their users' actions and interests.

_boffin_

Unreasonable search?

bee_rider

> (in the sense that we and these companies exist in jurisdictions that can pass arbitrary laws; let’s skip the boring and locale specific discussion of whether you think your local government would pass such a law)

Anyway the idea of what’s a reasonable search in the US has been whittled away to almost nothing, right? “The dog smelled weed on your hard drive.” - A cop, probably.

chgs

Boring locale specific discussion.

fragmede

The article hinges on a bad assertion that

> Apple can’t rely on every device possessing enough power to perform inference locally. This means inference will be outsourced to a remote cloud machine.

If you go look at Apple's site https://www.apple.com/apple-intelligence/ and scroll down, you get:

Apple Intelligence is compatible with these devices: iPhone 16 (A18), iPhone 16 Plus (A18), iPhone 16 Pro Max (A18 Pro), iPhone 16 Pro (A18 Pro), iPhone 15 Pro Max (A17 Pro), iPhone 15 Pro (A17 Pro), iPad Pro (M1 and later), iPad Air (M1 and later), iPad mini (A17 Pro), MacBook Air (M1 and later), MacBook Pro (M1 and later), iMac (M1 and later), Mac mini (M1 and later), Mac Studio (M1 Max and later), Mac Pro (M2 Ultra).

If you don't have one of those devices, Apple did the obvious thing: the features are disabled on hardware that can't run them.

While Apple has this whole private server architecture, they're not sending iMessages off device for summarization; that's happening on device.

EGreg

I heard that homomorphic encryption can actually preserve all the operations in neural networks, since they are differentiable. Is this true? What is the slowdown in practice?

crackalamoo

This is true in principle, yes. In practice, the way this usually works is by converting inputs to bits and bytes, and then computing the result as a digital circuit (AND, OR, XOR).

Doing this encrypted is very slow: without hardware acceleration or special tricks, running the circuit is 1 million times slower than unencrypted, or about 1ms for a single gate. (https://www.jeremykun.com/2024/05/04/fhe-overview/)

When you think about all the individual logic gates involved in just a matrix multiplication, and scale it up to a diffusion model or large transformer, it gets infeasible very quickly.
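
To put rough numbers on that (a back-of-envelope sketch, not a benchmark of any real FHE library -- the per-gate time is the ~1ms figure cited above, and the gate counts for 16-bit arithmetic are ballpark assumptions):

```python
# Order-of-magnitude estimate of evaluating one n x n matrix multiplication
# gate-by-gate under encryption. All constants are illustrative assumptions.
MS_PER_GATE = 1.0       # ~1 ms per encrypted gate (figure cited above)
GATES_PER_MUL = 1_500   # assumed gate count of a 16-bit multiplier
GATES_PER_ADD = 100     # assumed gate count of a 16-bit adder

def encrypted_matmul_days(n: int) -> float:
    """Estimated days to evaluate an n x n matmul as an encrypted circuit."""
    macs = n ** 3                                   # multiply-accumulate ops
    gates = macs * (GATES_PER_MUL + GATES_PER_ADD)  # total gates in the circuit
    return gates * MS_PER_GATE / 1000 / 86_400      # ms -> seconds -> days

for n in (64, 256, 1024):
    print(f"{n}x{n}: ~{encrypted_matmul_days(n):,.0f} days")
# 64x64: ~5 days, 256x256: ~311 days, 1024x1024: ~20,000 days
```

Even if the assumed gate counts are off by an order of magnitude, a single modest matrix multiply lands in the days-to-decades range, and a transformer does many of them per token.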

j2kun

There are FHE schemes that do better than binary gates (cf. CKKS) but they have other problems in that they require polynomial approximations for all the activation functions. Still they are much better than the binary-FHE schemes for stuff like neural networks, and most hardware accelerators in the pipeline right now are targeting CKKS and similar for this reason.
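
To illustrate what "polynomial approximations for all the activation functions" looks like, here is a small sketch in plain NumPy (no FHE library involved; the interval [-5, 5] and degree 4 are arbitrary choices for the example) that fits a CKKS-friendly polynomial stand-in for ReLU:

```python
# CKKS can only evaluate additions and multiplications, so a non-polynomial
# activation like ReLU has to be replaced by a polynomial fit over the range
# of inputs the layer is expected to see. The fit itself happens in the clear.
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

xs = np.linspace(-5.0, 5.0, 1001)          # assumed input range
coeffs = np.polyfit(xs, relu(xs), deg=4)   # least-squares degree-4 fit
poly_relu = np.poly1d(coeffs)              # polynomial usable with only +/*

max_err = np.max(np.abs(poly_relu(xs) - relu(xs)))
print(f"max approximation error on [-5, 5]: {max_err:.3f}")
# Under encryption, poly_relu would be evaluated homomorphically; each extra
# degree adds multiplicative depth, which is why approximations stay low-degree
# and why architectures are often retrained around them.
```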

For some numbers, a ResNet-20 inference can be done in CKKS in like 5 minutes on CPU. With custom changes to the architecture you can get less than one minute, and in my view HW acceleration will improve that by another factor of 10-100 at least, so I'd expect 1s inference of these (still small) networks within the next year or two.

LLMs, however, are still going to be unreasonably slow for a long time.