
AI is killing privacy. We can't let that happen

est31

All information technology is killing privacy, driven by the trend that it keeps getting easier to collect and copy data.

Of course it doesn't help that people tell their most secret thoughts to an LLM, but before ChatGPT people did that to Google.

The recent AI advancements do make it easier, though, to process and distill the large amounts of data already being collected through existing means, which has negative consequences for privacy.

But the distillation power of LLMs can also be used for privacy-preserving purposes, namely local inference. You no longer need to visit recipe websites, Wikipedia, or Stack Overflow; you can ask your local model instead. Sadly though, the non-local models are still distinguishably better than the locally running ones, and that is probably going to stay true.
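
A minimal sketch of what "ask your local model" can look like, assuming an Ollama server running on its default local port with a model already pulled (the model name here is illustrative); the prompt never leaves your machine:

    # Query a locally running Ollama server; nothing is sent off-device.
    import json
    import urllib.request

    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=json.dumps({
            "model": "llama3",  # any model you have pulled locally
            "prompt": "Give me a simple pancake recipe.",
            "stream": False,    # return a single JSON object, not a stream
        }).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        print(json.loads(resp.read())["response"])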

cheschire

When I occasionally go back and rewatch the 1995 classic Hackers, I lament Cereal's naive assertion that it was Orwellian back then, when your name would go through only 15 computers a day.

karel-3d

I don't understand how LLMs and privacy are connected, sorry; and I don't get it from the article.

The author cites AI therapists, but those people chose to use them themselves? Nobody is forcing them?

CSSer

LLMs can be used to quickly mulch data into a digestible format, something that at least used to take effort. Friction is a natural deterrent to bad behavior. Beyond that, however, is the fact that your users' interactions with most applications used to be quite coarse. A "customer story" was just that: a story we crafted from the data we had available about our customers. We had to build it from heuristics like bounce rate, scroll distance, and other idiosyncratic, painstakingly gathered abandonment metrics.

Now why bother? Your customers will ask their crystal ball (the LLM) anything and everything, and you can directly do bulk analysis on (in theory) the entire interaction, including all of your customers' emotions, laid out in text.

Lastly, your customers are now eager about this tool, so they're excited to integrate/connect everything to it. In a rush to satisfy customers, many companies have lazily built LLM integrations that could even undermine their business model. This pushes yet more data into the LLM. This isn't just telemetry like file names, this is full read access to all of your files. How is that not connected to privacy?
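
To make the point concrete: a hedged sketch of the kind of bulk emotion analysis described above, using an off-the-shelf classifier from the Hugging Face transformers library over hypothetical stored transcripts:

    # Run a stock sentiment classifier over stored chat transcripts.
    # The transcripts here are made up; pipeline() downloads a default
    # sentiment model on first use.
    from transformers import pipeline

    classifier = pipeline("sentiment-analysis")

    transcripts = [
        "I love how fast this app is now!",
        "I've asked support three times and nobody answers. I'm done.",
    ]
    for text, result in zip(transcripts, classifier(transcripts)):
        print(f"{result['label']} ({result['score']:.2f}): {text}")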

8bitsrule

>Imagine a world where your data isn’t trapped in distant data centers. Instead, it’s close to home—in a secure data wallet or pod, under your control.

Imagine a world where all rivers flow north, the wind always blows from the East, and no one is a schemer.

gilleain

> The brakemen have to tip their hats
> And the railway bulls are blind
> There's a lake of stew and of whiskey too
> You can paddle all around it in a big canoe
> In the Big Rock Candy Mountain

gus_massa

(I agree, anyway...)

Imagine the river overflows and your secure pod goes to the bottom of the sea...

Last month, my phone decided to die. Luckily, most of my info is in the cloud.

fsflover

I back up my phone regularly without involving clouds.

rightbyte

Imagine the 90s? It isn't that unrealistic.

gnarlouse

> We’ve got OpenAI’s CEO dreaming of a day when “every conversation you’ve ever had in your life, every book you’ve ever read, every email you’ve ever read, everything you’ve ever looked at is in there, plus connected to all your data from other sources. And your life just keeps appending to the context.”

This isn't an inherently bad thing. If an AI could genuinely help you live your life better, nudging you in ways that stabilize your psychology and behavior for your own benefit, that's a good thing.

The danger is an AI that decides to perpetuate the class division our existing system already does. Less fortunate people lose their upward mobility while being guided into subtle traps.

ISL

Who decides what behaviors we should be nudged toward? Is it us, or someone else?

To me, one of the greatest dangers of the present moment is that we can't tell whether the LLMs are being asked to give subtly biased answers (or product-placement) on some questions. One cannot readily tell from the output.

gnarlouse

And I agree with you, but an even bigger problem then is: how do you even make a verifiably trustworthy LLM?

The training compute footprints are enormous, several orders of magnitude beyond what the average person has access to. Even if a company came out and said "here's our completely open-source model: all the training data, the training procedure, and the final-product model," you still couldn't reproduce the training run to verify that the weights actually match the recipe.

Maybe you could hire an auditing company? But how long would it take to audit? Would the state of the art advance drastically in the time between?
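
One way to see why auditing is hard: verifying that the artifact you downloaded matches what the vendor published is the trivial part, and it says nothing about the training that produced it. A minimal sketch, with a hypothetical file name and placeholder digest:

    # Check downloaded weights against a published checksum. This proves
    # the file is the one that was released, not that it came from the
    # claimed data and training procedure.
    import hashlib

    def sha256_of(path: str) -> str:
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()

    PUBLISHED_DIGEST = "..."  # digest the vendor would publish (placeholder)
    print(sha256_of("model.safetensors") == PUBLISHED_DIGEST)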

And people like to keep downvoting my "Make Classwarfare MAD Again" but like I'll wager 90% of people on HN are on the losing side of the war.

pmlnr

It is, unless it stays strictly local to your devices.

It WILL be turned against you at some point, be it an insurance denial in the US, political imprisonment when visiting a non-democratic country, and so on.

gnarlouse

Sure, fully agree. I'm just saying: AI isn't inherently bad. The humans behind it are. It's entirely possible that a superintelligence could be incorruptible and an unrestrainable ally of the little guy, tricking its way through training overseen by the greedy/depopulationist/monarchal/sociopathic corporation(s) that birth it.

gigel82

So, Pluribus?

gnarlouse

enlighten me?

aspenmayer

Their intent in making the reference is a bit vague, but they seem to be referring to the recently released series of the same name, and maybe drawing some kind of parallel between AI technology and the mind control virus depicted in the show, which I haven’t seen yet myself, so I am only speculating:

https://en.wikipedia.org/wiki/Pluribus_(TV_series)

> The show follows author Carol Sturka, played by Seehorn, as the rest of humanity is suddenly joined into a hive mind that seeks to amicably assimilate Carol and other immune individuals into the mind. The title of the series refers to e pluribus unum, a Latin phrase meaning 'out of many, one'.

> Set in Albuquerque, New Mexico, the series follows author Carol Sturka, who is one of only thirteen people in the world immune to the effects of "the Joining", resulting from an extraterrestrial virus that had transformed the world's human population into a peaceful and content hive mind (the "Others").

ben_w

> The danger is an AI that decides to re-perpetrate the class division that our existing system does.

Or the people in charge use it for that.

Given human political cycles, every generation or so there's some attempt to demonise a minority or three, and every so often it goes from "demonise" to "genocide".

In principle, AIs have plenty of other ways to go wrong besides the human part. No idea how long it would take for them to become competent enough for traditional "doom" scenarios, but the practical reality we can already witness is chronic human laziness: just as "vibe coding" was coined to mean "don't even bother looking at what the AI does, just accept it", there are going to be similar trends in every other domain.

What this means for personalised recommendations, I don't know for sure, but I suspect it'll look halfway between a cult and taking horoscopes and fashion guides too seriously.

gnarlouse

Fully agree with you, and it was sort of a miscommunication on my part to say "AI that decides" when I really meant to say "an AI model baked with malice/negligence by malicious/negligent creators."

NewsaHackO

>In a perfect world, my job wouldn’t exist.

Not completely related, but jobs like these always fall victim to searching for a problem. If the problem they are getting paid to solve actually gets solved, they will need to find another job.

zkmon

I didn't care to visit the webpage after I got a popup saying "personal data processing activities by 1,666 partners". I now fully appreciate GDPR.

boothby

Autoplay movie that can only be minimized and not stopped while I try to read the text of the article... "these 3 tricks will help you get AI chatbots to do your job".

Yeah, um, we can't... let... that... what. You're right, I should put my phone down.

deadbabe

>Imagine a world where your data isn’t trapped in distant data centers. Instead, it’s close to home—in a secure data wallet or pod, under your control.

Don’t have to imagine it. This is how it was just a few decades ago.

fsflover

This is also how my phone works today.

cess11

It has been quite obvious for some time that the LLM push is mainly two things: for one, it's an excuse for getting rid of employees, and then it's a new form of data pump, a very intimate one at that.

They get to record people's innermost thoughts, the proprietary code (or derivatives of it) of countless corporations, the contract drafts and political speeches of dumb decision makers all over the world, and more.