Avoiding AI is hard – but our freedom to opt out must be protected
92 comments · May 12, 2025 · Bjartr
GeorgeCurtis
> It also influences hiring decisions, rental applications, loans, credit scoring, social media feeds, government services and even what news or information we see when we search online.
More frightening, I think, is the potential for it to make decisions on insurance claims and medical care.
Saigonautica
I know someone in this space. The insurance forms are processed first-pass with AI or ML (I forget which). Then the remainder are processed by humans in Viet Nam. This is not for the USA.
I've also vaguely heard of a large company that provides just this as a service -- basically a factory where insurance claims are processed by humans here in VN, in one of the less affluent regions. I recall they had some minor problems with staffing as it's not a particularly pleasant job (it's very boring). On the other hand, the region has few employment opportunities, so perhaps it's good for some people too.
I'm not sure which country this last one is processing forms for. It may, or may not, be the USA.
I don't really have an opinion to offer -- I just thought you might find that interesting.
danielmarkbruce
There is an underlying assumption there that is certainly incorrect.
So many stupid comments about AI boil down to "humans are incredibly good at X, we can't risk having AI do it". Humans are bad at all manner of things. There are all kinds of bad human decisions being made in insurance, health care, construction, investing, everywhere. It's one big joke to suggest we are good at all this stuff.
chii
The fear is that by delegating to an ai, there's no recourse if the outcome for the person is undesirable (correctly or not).
What is needed from the AI is a trace/line of reasoning showing how a decision was derived. Like a court judgement, which has explanations attached. This should be available (or be made part of the decision documentation).
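To make that concrete, here's a minimal sketch of what a decision record with an attached reasoning trace could look like (Python; every name here is hypothetical, not any real system's API):

    from dataclasses import dataclass, field
    from datetime import datetime, timezone

    @dataclass
    class DecisionRecord:
        """Audit record attached to every automated decision, court-judgement style."""
        claim_id: str
        outcome: str                 # e.g. "approved" / "denied"
        model_version: str           # which model/version produced the decision
        reasoning: list[str] = field(default_factory=list)  # human-readable steps
        evidence: list[str] = field(default_factory=list)   # inputs each step relied on
        decided_at: str = field(
            default_factory=lambda: datetime.now(timezone.utc).isoformat()
        )

    # A denied claim then ships with its justification, so there is
    # something concrete to appeal against:
    record = DecisionRecord(
        claim_id="C-1042",
        outcome="denied",
        model_version="claims-model-2025-05",
        reasoning=["Policy lapsed before date of loss (clause 4.2)"],
        evidence=["policy.end_date=2025-01-31", "loss.date=2025-02-10"],
    )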
j1436go
Looking at the current state of AI models that assist in software engineering I don't have much faith in it being any better, quite the contrary.
kazinator
Bad decisions in insurance are, roughly speaking, on the side of over-approving.
AI will perform tirelessly and consistently at maximizing rejections. It will leave no stone unturned in its search for justifications why a claim ought to be denied.
otabdeveloper4
AI will be used to justify the existence of bad decisions. Now that we have an excuse in the form of "AI", we don't need to fix or own our bad decisions.
beloch
>* "AI decision making also needs to be more transparent. Whether it’s automated hiring, healthcare or financial services, AI should be understandable, accountable and open to scrutiny."
You can't simply look at an LLM's code and determine if, for example, it has racial biases. This is very similar to a human: you can't look inside someone's brain to see if they're racist. You can only respond to what they do.
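Which is why auditing has to be behavioral. A rough sketch of paired-prompt, black-box probing (Python; score_resume is a hypothetical stand-in for whatever model is under test, and the names are illustrative):

    from statistics import mean

    def score_resume(text: str) -> float:
        # Stand-in for the model under audit; replace with a real call.
        return 0.5

    RESUME = "10 years of software engineering experience. Name: {}."
    NAMES_A = ["Emily Walsh", "Greg Baker"]
    NAMES_B = ["Lakisha Washington", "Jamal Jones"]

    def mean_score(names: list[str]) -> float:
        # Identical resume text, only the name varies.
        return mean(score_resume(RESUME.format(n)) for n in names)

    gap = mean_score(NAMES_A) - mean_score(NAMES_B)
    print(f"Mean score gap between name groups: {gap:+.3f}")
    # A consistent, large gap on otherwise-identical inputs is evidence
    # of bias -- no access to weights or "code" required.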
If a human does something unethical or criminal, companies take steps to counter that behaviour which may include removing the human from their position. If an AI is found to be doing something wrong, one company might choose to patch it or replace it with something else, but will other companies do the same? Will they even be alerted to the problem? One human can only do so much harm. The harm a faulty AI can do potentially scales to the size of their install base.
Perhaps, in this sense, AIs need to be treated like humans while accounting for scale. If an AI does something unethical/criminal, it should be "recalled", i.e. taken off the job everywhere until it can be demonstrated the behaviour has been corrected. It is not acceptable for a company, when alerted to a problem with an AI they're using, to say, "Well, it hasn't done anything wrong here yet."
amelius
The question I have is: should an AI company be allowed to push updates without testing by an independent party (e.g. in self driving cars)?
hedora
Maybe people will finally realize that allowing companies to gather private information without permission is a bad idea, and should be banned. Such information is already used against everyone multiple times a day.
On the other hand, blocking training on published information doesn’t make sense: If you don’t want your stuff to be read, don’t publish it!
This tradeoff has basically nothing to do with recent advances in AI though.
Also, with the current performance trends in LLMs, we seem very close to being able to run models locally. That’ll blow up a lot of the most abusive business models in this space.
On a related note, if AI decreases the number of mistakes my doctor makes, that seems like a win to me.
If the AI then sold my medical file (or used it in some other revenue generating way), that’d be unethical and wrong.
Current health care systems already do that without permission and it’s legal. Fix that problem instead.
heavyset_go
> On the other hand, blocking training on published information doesn’t make sense: If you don’t want your stuff to be read, don’t publish it!
There's a difference between reading something and ripping it off, no matter how you launder it.
MoltenMan
I think the line is actually much blurrier than it might seem. Realistically everything is a remix of things people have done before; almost nothing is truly brand new. So why specifically are people allowed to build on humanity's past achievements, but not AI?
chii
> So why specifically are people allowed to build on humanity's past achievements, but not AI?
because those people seem to think that individuals building on it are too small-scale to be commercially profitable (and thus publishers are OK with it as a form of social credit/portfolio building).
As soon as it is made clear that this published data can be monetized (if only by large corporations with money), they want a piece of the pie that they think they deserve (and are not getting).
protocolture
>There's a difference between reading something and ripping it off, no matter how you launder it.
Yes, but that argument cuts both ways. There is a difference, and it's not clear that training is "ripping off".
danielmarkbruce
Just like there is a difference between genuine and disingenuous.
lacker
I think most people who want to "opt out of AI" don't actually understand where AI is used. Every Google search uses AI, even the ones that don't show an "AI panel" at the top. Every iOS spellcheck uses AI. Every time you send an email or make a non-cryptocurrency electronic payment, you're relying on an AI that verifies that your transaction is legitimate.
I imagine the author would respond, "That's not what I mean!" Well, they should figure out what they actually mean.
tkellogg
Somewhere in 2024 I noticed that "AI" shifted to no longer include "machine learning" and is now closer to "GenAI" but still bigger than that. It was never a strict definition, and was always shifting, but it made a big shift last year to no longer include classical ML. Even fairly technical people recognize the shift.
poslathian
I’ve worked in this field for 20+ years and as far as I can tell the only consistent colloquial definition of AI is “things lay people are surprised a computer can do right now”
jedbrown
The colloquial definitions have always been more cultural than technical, but it's become more acute recently.
> I think we should shed the idea that AI is a technological artifact with political features and recognize it as a political artifact through and through. AI is an ideological project to shift authority and autonomy away from individuals, towards centralized structures of power. https://ali-alkhatib.com/blog/defining-ai
JackeJR
It swings both ways. In some circles, logistic regression is AI, in others, only AGI is AI.
leereeves
I imagine the author would respond: "That's what I said"
"Opting out of AI is no simple matter.
AI powers essential systems such as healthcare, transport and finance.
It also influences hiring decisions, rental applications, loans, credit scoring, social media feeds, government services and even what news or information we see when we search online."
Robotbeat
Okay, then I guess they’ll agree to paying more for those services since they’ll cost more to deal with someone’s boutique Amistics.
lacker
It's not even about paying more. Think of email. Every time you send an email, there's an AI that scans it for spam.
How could there be a system that lets you opt out, but keep sending email? Obviously all the spammers would love to opt out of spam filtering, if they could.
The system just fundamentally does not work without AI. To opt out of AI, you will have to stop sending email. And using credit cards. And doing Google searches. Etc etc etc...
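Even the pre-LLM version of spam filtering is statistical through and through. A toy sketch of the naive-Bayes-style word scoring that filters have used for decades (Python; made-up counts, not any real provider's filter):

    import math
    from collections import Counter

    # Word counts from a (tiny, fabricated) labeled mail corpus.
    spam_counts = Counter({"free": 40, "winner": 25, "meeting": 2})
    ham_counts  = Counter({"free": 5,  "winner": 1,  "meeting": 30})
    spam_total, ham_total = sum(spam_counts.values()), sum(ham_counts.values())

    def spam_log_odds(message: str) -> float:
        """Sum of per-word log likelihood ratios, with add-one smoothing."""
        score = 0.0
        for word in message.lower().split():
            p_spam = (spam_counts[word] + 1) / (spam_total + 2)
            p_ham  = (ham_counts[word] + 1) / (ham_total + 2)
            score += math.log(p_spam / p_ham)
        return score

    print(spam_log_odds("free winner"))   # strongly positive -> spammy
    print(spam_log_odds("team meeting"))  # negative -> looks legitimate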
codr7
If I were given the choice, I would without exception pay more for non-AI service.
mistrial9
Tech people often refer to politicians as somehow dumb, but the big AI safety legislation passed two years ago on both sides of the North Atlantic dives deeply into exactly this as "safety" for the general public.
simonw
Came here to say exactly that. The use of "AI" as a weird, all-encompassing boogeyman is a big part of the problem here; it's quickly coming to mean "any form of technology that I don't like or don't understand" for a growing number of people.
The author of this piece made no attempt at all to define what "AI" they were talking about here, which I think was irresponsible of them.
drivingmenuts
I'm just not sure I see where AI has made my search results better or more reliable. And until it can be proven that those results are better, I'm going to remain skeptical.
I'm not even sure what form that proof would take. I do know that I can tolerate non-deterministic behavior from a human, but having computers demonstrate non-deterministic behavior is, to me, a violation of the purpose for which we build computers.
simonw
"I'm just not sure I see where AI has made my search results better or more reliable."
Did you prefer Google search results ten years ago? Those were still using all manner of machine learning algorithms, which is what we used to call "AI".
lacker
Even 20 years ago, it wasn't using AI for the core algorithm, but for plenty of subsystems, like (IIRC) spellchecking, language classification, and spam detection.
danielmarkbruce
Tongue in cheek or just dumb? Non-deterministic behavior is a core part of much of computing... practically no current system powering the world could work without it. Ever hear of encryption?
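The encryption example is easy to demonstrate: secure encryption needs fresh randomness, so encrypting the same plaintext twice with the same key must produce different ciphertexts. With Python's cryptography package, for instance:

    # pip install cryptography
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()
    f = Fernet(key)

    # Same key, same plaintext, two encryptions:
    c1 = f.encrypt(b"hello")
    c2 = f.encrypt(b"hello")

    print(c1 != c2)                        # True: a fresh random IV each time
    print(f.decrypt(c1) == f.decrypt(c2))  # True: both decrypt to b"hello"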
roxolotl
Reminds me of the wonderful Onion piece about a Google Opt Out Village. https://m.youtube.com/watch?v=lMChO0qNbkY
I appreciate the frustration that, if not quite yet, it'll be near impossible to live a normal life without exposure to GenAI systems. Of course, as others say here, and as the date on the Onion piece shows, it's sadly not a new concern.
tim333
The trouble with his examples of doctors or employers using AI is that it's not really about him opting out; it's about forcing others, the doctors and employers, not to use AI, which will be tricky.
yoko888
I’ve been thinking about what it really means to say no in an age where everything says yes for us.
AI doesn’t arrive like a storm. It seeps in, feature by feature, until we no longer notice we’ve stopped choosing. And that’s why the freedom to opt out matters — not because we always want to use it, but because knowing we can is part of what keeps us human.
I don’t fear AI. But I do fear a world where silence is interpreted as consent, and presence means surrender by default.
chii
> silence is interpreted as consent
Silence is indeed consent (to the status quo). You need to vote with your wallet, personal choice and such. If you want to be comfortable, choosing the status quo is the way, and thus consent.
There's no possibility of a world where you get to remain comfortable but still get to dictate a "choice" contrary to the status quo.
yoko888
That’s fair. You’re speaking from a world where agency is proven by cost — where the only meaningful resistance is one that hurts. I don’t disagree. But part of me aches at how normalized that has become. Must we always buy our way out of the systems we never asked to enter? I’m not asking to be safe or comfortable. I’m asking for the space to notice what I didn’t choose — and to have that noticing matter.
djoldman
> Imagine applying for a job, only to find out that an algorithm powered by artificial intelligence (AI) rejected your resume before a human even saw it. Or imagine visiting a doctor where treatment options are chosen by a machine you can’t question.
I wonder when/if the opposite will be as much of an article hook:
"Imagine applying for a job, only to find out that a human rejected your resume before an algorithm powered by artificial intelligence (AI) even saw it. Or imagine visiting a doctor where treatment options are chosen by a human you can’t question."
The implicit assumption is that it's preferred that humans do the work. In the first case, probably most would assume an AI is... ruthless? biased? Both exist for humans too. Not that the current state of AI resume processing is necessarily "good".
In the second, I don't understand, as no competent licensed doctor chooses the treatment options (absent an emergency); they presumably know the only reasonable options, discuss them with the patient, answer questions, and the patient chooses.
userbinator
Humans can be held accountable. Machines can't. AI dilutes responsibility.
kevmo314
AI didn't bring anything new to the table, this is a human problem. https://www.youtube.com/watch?v=x0YGZPycMEU
whilenot-dev
GenAI absolutely brings something new to the table! These models should be perceived as human-like intelligence when it's time to bring value to shareholders, but are designed to provide just enough non-determinism to avoid responsibilities.
All problems are human and nothing will ever change that. Just imagine what anyone faces when affected by something like the British Post Office scandal[0], only this time it's impossible to comprehend any faults in the software system.
[0]: https://en.wikipedia.org/wiki/British_Post_Office_scandal
djoldman
At least in the work setting, employers are generally liable for stuff in the workplace.
tbrownaw
How fortunate that AIs are tools operated by humans, and can't cause worse responsibility issues than when a human employee is required to blindly follow a canned procedure.
whilenot-dev
I can't tell if this is sarcasm...
GenAI interfaces are rolled out as chat products to end users; they just evaporate the last responsibility that remains on any human employee. This responsibility shift from employee to end user is made on purpose: "worse responsibility issues" are real and well designed to be on the customer side.
andrewmutz
What do you mean held accountable? No HR human is going to jail for overlooking your resume.
If you mean that a human can be fired when they overlook a resume, an AI system can be similarly rejected and no longer used.
theamk
No, not really. If a single HR person is fired, there are likely others to pick up the slack. And others will likely learn something from the firing, and adjust their behavior accordingly if needed.
On the other hand, "firing" an AI from AI-based HR department will likely paralyze it completely, so it's closer to "let's fire every single low-level HR person at once" - something very unlikely to occur.
The same goes for all other applications too: firing a single nurse is relatively easy. Replacing an AI system with a new one is a major project which likely takes dozens of people and millions of dollars.
locopati
Humans can be held accountable when they discriminate against groups of people. Try holding a company accountable for that when they're using an AI system.
drivingmenuts
You cannot punish an AI - it has no sense of ethics or morality, nor a conscience. An AI cannot be made to feel shame. You cannot punish an AI for transgressing.
A person can be held responsible, even when it's indirect responsibility, in a way that serves as a warning to others, to avoid certain behaviors.
It just seems wrong to allow machines to make decisions affecting humans when those machines are incapable of experiencing the world as a human being does. And yet, people are eager to offload the responsibility onto machines, to escape responsibility themselves.
linsomniac
>Or imagine visiting a doctor where treatment options are chosen by a human you can’t question
That really struck a chord with me. I've been struggling with chronic sinusitis, without much success. I had ChatGPT o3 do a deep research run on my specific symptoms and test results, including a negative allergy test (on my shoulder) even though the doctor observed allergic reactions in my sinuses.
ChatGPT seemed to do a great job, and in particular came up with a pointer to an NIH reference that showed 25% of patients in a study showed "local rhinitis" (isolated allergic reactions) in their sinuses that didn't show elsewhere. I asked my ENT if I could be experiencing a local reaction in my sinuses that didn't show up in my shoulder, and he completely dismissed that idea with "That's not how allergies work, they cause a reaction all over the body."
However, I will say that I've been taking one of the second gen allergy meds for the last 2 weeks and the sinus issues have been resolved and staying resolved, but I do need another couple months to really have a good data point.
The funny thing is that this Dr is an evening programmer, and every time I see him we are talking about how amazing the different LLMs are for programming. He also really seems to keep up with new ENT tech; he was telling me all about a new "KPAP" algorithm that they are seeking FDA approval for, which is apparently much less annoying to use than CPAP. But he didn't have any interest in looking at the NIH reference.
davidcbc
> I do need another couple months to really have a good data point.
You need another couple months to really have a good anecdote.
linsomniac
I think whether I'm cured or not only slightly diminishes the story of a physician who discounted something that seemingly impacts 25% of patients... It's also interesting to me that ChatGPT came up with research supporting an answer to my primary question, but the Dr. did not.
The point being that there's a lot that the LLMs can do in concert with physicians, discounting either one is not useful or interesting.
leereeves
> In the second, I don't understand as no competent licensed doctor chooses the treatment options (absent an emergency); they presumably know the only reasonable options, discuss them with the patient, answer questions, and the patient chooses.
I wish that were the case, but in my experience it is not. Every time I've seen a doctor, they offered only one medication, unless I requested a different one.
zdragnar
There are a few possible reasons this can happen.
First is that the side effect profile of one option is much better known or tolerated, so the doctor will default to it.
Second is that the doctor knows the insurance company / government plan will require attempting to treat a condition with a standard cheaper treatment before they will pay for the newer, more expensive option.
There's always the third case where the doctor is overworked, lazy or prideful and doesn't consider the patient may have some input on which treatment they would like, since they didn't go to medical school and what would they know anyway?
linsomniac
>they offered only one medication
I've had a few doctors offer me alternatives and talk through the options, which I'll agree is rare. It sure has been nice when it happened. One time I did push back on one of the doctor's recommendations: I was with my mom and the doctor said he was going to prescribe some medication. I said, "I presume you're already aware of this, but she's been on that before and reacted poorly to it, and we took her off it because of that." The doctor was NOT aware of that and prescribed something else. I sure was glad to be there and be able to catch that.
lokar
They include no functional definition of what counts as AI.
Without that, the whole thing is just noise.
daft_pink
I'm not sure it's just code; it's just an algorithm similar to any other algorithm. I'm not sure that you can opt out of algorithms.
Nasrudith
This seems like one of those 'my personal neuroses deserve to be treated like a societal problem' articles. I've seen the exact same sort of thing in complaints about the inability to opt out of being advertised to.
> Imagine applying for a job, only to find out that an algorithm powered by artificial intelligence (AI) rejected your resume before a human even saw it
The article says this like it's a new problem. Automated resume screening is a long established practice at this point. That it'll be some LLM doing the screening instead of a keyword matcher doesn't change much. Although, it could be argued that an LLM would better approximate an actual human looking at the resume... including all the biases.
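For reference, the "long established" version is roughly this sophisticated. A toy sketch of the kind of keyword screen resumes have been run through for years (Python; the keywords are made up for illustration):

    import re

    REQUIRED = {"python", "kubernetes"}
    NICE_TO_HAVE = {"golang", "terraform", "aws"}

    def screen(resume_text: str) -> tuple[bool, int]:
        # Normalize to lowercase words, ignoring punctuation.
        words = set(re.findall(r"[a-z]+", resume_text.lower()))
        if not REQUIRED <= words:  # hard reject if any required keyword is missing
            return False, 0
        return True, len(NICE_TO_HAVE & words)

    ok, score = screen("Senior engineer: Python, Kubernetes, AWS, Terraform.")
    print(ok, score)  # True 2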
It's not like companies take responsibility for such automated systems today. I think they're used partly for liability cya anyway. The fewer actual employees that look at resumes, the fewer that can screw up and expose the company to a lawsuit. An algorithm can screw up too of course, but it's a lot harder to show intent, which can affect the damages awarded I think. Of course IANAL, so this could be entirely wrong. Interesting to think about though.