AI Is Dehumanization Technology
171 comments
· June 26, 2025
perching_aix
> Rather than enhancing our human qualities, these systems degrade our social relations, and undermine our capacity for empathy and care.
I don't genuinely expect the author of a blogpost who titles their writing "AI is Dehumanization Technology" to be particularly receptive of a counterargument, but hear me out.
I can think of few things as intellectually miserable as the #help channels of the many open source projects on Discord, for example. Should I wager a guess, if these projects all integrated LLMs into their chatbots for just a few bucks and let them take on the brunt of the interactions, all participants on either side would be left with much more capacity to maintain and express empathy and care, or to nurture social connections. This extends beyond these noncommercial contexts of course, to stuff like customer service for example.
reval
The counter-counter-argument is that the messy part of human interaction is necessary for social cohesion. I’ve already witnessed this erosion prior to LLMs in the rise of SMS over phone calls (personal) and automated menu systems for customer service (institutional).
It is sad to me that the skills required to navigate everyday life are being delegated to technology. Pretty soon it won't matter what you think or feel about your neighbors, because you will only ever know their tech-mediated facade.
tobr
> It is sad to me that the skills required to navigate everyday life are being delegated to technology.
Isn’t this basically what technology does? I suppose there is also technology to do things that weren’t possible at all before, but the application is often automation of something in someone’s everyday life that is considered burdensome.
perching_aix
I'm not sure I'd agree with characterizing heavily asymmetric social interactions, such as customer service folks assisting tens or hundreds of people on the same issues every week, as a "necessarily messy part of human interaction for social cohesion".
svieira
It is well noted that it is very hard to get in contact with a human at Google when you have a problem. And then we wonder why Google never seems to understand its user base.
jofla_net
It is, in fact, all insulation. The technology, that is. It cuts out face-to-face, vid-to-vid, voice-to-voice, and even direct text as in SMS or email, to the point that agents will be advocating for users instead of people even typing back to one another. Until and unless it affects the reproduction cycle, and I think it already has, people will fail to socialize, since there is also zero customary expectation to do so (that was the surprisingly good thing about old-world customs), so only the overtly gregarious will end up doing it. Kind of a long-tailed hyperbolic endgame, but, well, there it is.
Edit: one point I forgot to make is that it has already become absurd how different someone's online persona or confidence level is from when they are AFK; it's as if they've been reduced to an infantile state.
tines
> I can think of few things as intellectually miserable as the #help channels of the many open source projects on Discord, for example. Should I wager a guess, if these projects all integrated LLMs into their chatbots for just a few bucks and let them take on the brunt of the interactions, all participants on either side would be left with much more capacity to maintain and express empathy and care, or to nurture social connections. This extends beyond these noncommercial contexts of course, to stuff like customer service for example.
This would be a good counter if this were all that this technology is being used for.
perching_aix
I don't think it's necessary for me to counter everything they're saying. They're making a unilateral judgement - as long as I can demonstrate a good counter for one part of it, the unilateral judgement will fail to hold.
They can of course still argue that it's majority-bad for whatever list of reasons, but that's not what bugs me. What bugs me is the absolute tunnel vision of "for principled reasons I must find this thing completely and utterly bad, no matter what". Because this is what the title, and the tone, and just about everything else in this article comes across as to me, and I find it equal parts terrifying and disagreeable.
asciimov
> What bugs me is the absolute tunnel vision of "for principled reasons I must find this thing completely and utterly bad, no matter what".
AI companies also only sell the public on the upside of these technologies. Behind closed doors they are investing heavily in this in the hope of reducing or eliminating their labor costs, with no regard for any damage to society.
Lerc
>This would be a good counter if this were all that this technology is being used for.
Do you hold the position that a thing is bad because it is possible to do harm, or that it is predominantly causing harm?
Most criticisms cite examples demonstrating the existence of harm because proving existence requires a single example. Calculating the sum of an effect is much harder.
Even if the current impact of a field is predominantly harmful, it does not follow that the problem is with what is being attempted. Consider healthcare: a few hundred years ago much of healthcare did more harm than good, and charlatans and frauds were commonplace. Was that because healthcare itself was a bad thing? Was it a mistake to even go down that path?
adamc
I don't think the path was fixed by healthcare, per se. It was fixed by adopting scientific investigation.
So I think your argument is kind of misleading.
ToucanLoucan
> Do you hold the position that a thing is bad because it is possible to do harm, or that it is predominantly causing harm?
Mere moments later...
> Even if the current impact of a field is predominantly harmful
So let's just skip the first part then, you conceded it's predominantly harmful. On this we agree.
> it does not follow that the problem is with what is being attempted.
Well, it's not a logical 1-to-1, no. But I would say if the current impact of a field is predominantly harmful, then revisiting what is being attempted isn't the worst idea.
> Consider healthcare, a few hundred years ago much of healthcare did more harm than good, charlatans and frauds were commonplace. Was that because healthcare itself was a bad thing? Was it a mistake to even go down that path?
If OpenAI and company were still pure research projects, this would hold some amount of water, even if I would still disagree with it. However, that ignores the context that OpenAI is actively (and under threat of financial ruin) turning itself into a for-profit business, and is actively selling its products, as are its competitors, to firms in the market with the explicit notion of reducing headcount for the same productivity. This doesn't need a citation; look at any AI product marketing and you see a consistent theme of removing human labor and/or interaction.
m4rtink
I am not sure about your experience, but these types of channels seem to mostly have the issue of people being too busy to reply; when they do, there is often an interesting interaction, and this is how users often become contributors to the project over time.
Sure, if you want to make sure you don't get any more contributors, you can try to replace that with a chatbot that will always reply immediately but might just be wrong like 40% of the time, is not actually working on the project, and will certainly not help in building social interactions between the project and its users.
perching_aix
I have participated in such channels for multiple years on the assisting side, and have been keeping in touch with some of the folks I knew from there still doing it. Also note that the projects I helped around with were more end-user focused.
Most interactions start with users being vague. This can already result in some helpers getting triggered, and starting to be vaguely snarky, but usually this is resolved by using prepared bot commands... which these users sometimes just won't read.
Then the misunderstandings start. Or the misplaced expectations. Or the lies. Or maybe the given helper has been having a bad day, but due to their long time presence in the project, they won't be moderated out properly. And so on. It's just not a good experience.
Ever since I left, I've been sent screencaps of various kinds of conversations. In some cases, the user was being objectively insufferable - I don't think it's fair to expect a human to put up with that. Other times, the helper was being unnecessarily mean - they did not appreciate my feedback on that. Neither happens with LLMs. People don't grow resentful of the never-ending horde of what feels like increasingly clueless users, and innocent folk don't get randomly chewed out for not living up to the optimality expectations of those who tend to 1000s of cases similar to theirs every week.
whatevertrevor
I think the solution is neither AI nor human in this case.
While direct human support is invaluable in many cases, I find it hard to believe that our industry has completely forgotten the value of public support forums. Here are some pure advantages over Discord/Slack/<insert private chat platform of your liking>:
- Much much better search functionality out of the box, because you can leverage existing search engines.
- From the above it follows that high value contributors do not need to spend their valuable time repeatedly answering the same basic questions over and over.
- Your high value contributors don't have to be employees of the company, as many enthusiastic power users often participate and contribute in such places.
- Conversations are _much_ easier to follow without having to resort to hidden threads and forum posts on Discord that no one will ever read or search.
- Over time you build a living library of supporting documentation instead of useful information being strewn in many tiny conversations over months.
- No user expectation to be helped immediately. A forum sets the expectation that this is an async method of communication, so you're less likely to see entitled aggravating behavior (though you won't see many users giving you good questions with relevant information attached even on forums).
Vegenoid
Do we have examples of LLMs being used successfully in these scenarios? I’m skeptical that the insufferable users will actually be satisfied and able to be helped by an LLM, unless the LLM is actually presented as a human, which seems unethical. It also hinges on an LLM being able to get the user to provide the required information accurately, without lying or simply getting frustrated, angry, and unwilling to cooperate.
I’m not sure there is a solution to help people who don’t come to the table willing to put in the effort required to get help. This seems like a deep problem present in all kinds of ways in society, and I don’t think smarter chatbots are the solution. I’d love to be wrong.
butundstand
In their bio:
“I’m not a tech worker…” — they like to tinker with code and local Linux servers.
They have not seen how robotic the job has become, nor felt how much pressure there is to act like a copy-paste/git-pull assembly line.
passwordoops
Counter-counter: there is nothing as intellectually miserable and antisocial as large corporations and institutions replacing helpdesks with draconian automated systems.
Also, try to come up with a less esoteric example than Discord Help channels. In fact, this is the issue with most defenses of LLMs. The benefits are so niche, or minor, that the example itself shows why they are not worth the money being poured in
perching_aix
> there is nothing as intellectually miserable and antisocial as large corporations and institutions replacing Helpdesks with (...) automated systems
Should be fairly obvious, but I disagree. Also I think you mean asocial, not antisocial. What's uniquely draconian about automated systems though? They're even susceptible to the same social engineering attacks humans are (it's just referred to as jailbreaking instead).
> Also, try to come up with a less esoteric example than Discord Help channels.
No.
> The benefits are so niche, or minor, that the example itself shows why they are not worth the money being poured in
Great. This is already significantly more intellectually honest than the entire blogpost.
eikenberry
> [..] if these projects all integrated LLMs into their chatbots for just a few bucks [..]
No matter your position on AI helpfulness, asking volunteers to not only spend time supporting a free software project but to also pony up money is just doubling down on the burden free software maintainers face, as was highlighted in the recent libxml2 discussion.
perching_aix
A lot of the projects that maintain Discord servers will, in my experience, receive plenty enough in donations to make up for the ~$5 it'd take to have an LLM serve the traffic that hits their Discord for help. Yes, I did run the numbers. It's so (intentionally) cheap that this is a non-issue.
But then one could also just argue that this is something the individual projects can decide for themselves. Not really for either of us to make this call. You can consider what I said as just an example you disagree with in that case.
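For illustration, a back-of-the-envelope version of that kind of arithmetic. Every number below is an assumption made for the sketch, not a figure from perching_aix:

    # Hypothetical cost of serving a #help channel's traffic with an LLM API.
    # All inputs are assumptions, not measurements.
    questions_per_month = 1_000        # assumed help-channel volume
    tokens_per_exchange = 2_000        # assumed prompt + response tokens
    usd_per_million_tokens = 2.00      # assumed blended API price

    monthly_cost = (questions_per_month * tokens_per_exchange / 1_000_000
                    * usd_per_million_tokens)
    print(f"~${monthly_cost:.2f}/month")  # ~$4.00 under these assumptions

Even an order-of-magnitude error in any one input keeps the total in hobby-budget territory, which is presumably the point being made.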
MarcelOlsz
You forgot one key thing: I don't want to talk to an AI.
trod1234
This assumes the conclusion that AI would solve the lack-of-resources interaction issue. Unfortunately, the data has been in on this for quite a long time (longer than LLM chatbots have been in existence); if you've worked in IT at a large carrier provider or for large call centers, you know about this.
The simple fact of the matter is, there is a sharp gap between what an AI can do and what a human does in any role involving communications, especially customer service.
Worse, there are psychological responses that naturally occur when you do any number of a few specific things that escalate conflict if you leave this to an AI. A qualified CSR is taught how to de-escalate, defuse, and calm the person who has been wound up to the point of irrationality. They are the front-line punching bags.
AI can't differentiate between what's acceptable and what's not, because the tokens it uses to identify these contexts carry two contradictory states in the same underlying tokens. This goes to core classical computer science problems like halting, among other aspects.
The companies that were ahead of the curve on this invested a lot into it almost a decade and a half ago, and they found that in most cases these types of systems compounded the issues: once people did finally get to a person, they took it out on that person irrationally, because they were the representative of the company that had put them through what amounts to torture.
Some examples of behavior that cause these types of responses: when you are being manipulated in a way that you know is manipulation, it causes stress through perceptual blindspots, producing an inconsistent internal mental state and resulting in confusion. When that happens, it causes a psychological reversal, often into irrational anger. An infinite or byzantine loop designed to run people in circular hamster wheels is one such structure.
If you've ever been in a social interaction where you offer an olive branch and they seem to accept it, but at the last minute throw it back in your face, you've experienced this. The smart individual doesn't ever do this, because they know they will make an enemy for life who will always remember.
This is also how, through communication, you can impose coercive cost on people, and companies have done this for years wherever antitrust and the FTC weren't being enforced. These triggers are inherent, to a lesser or greater degree, in all of us, every person alive.
The imposition of personal cost through this and other psychological blindspots is how torturous and vexatious processes are created.
Empathy and care are a two way street. It requires both entities to be acting in good faith through reflective appraisal. When this is distorted, it drives people crazy, and there is a critical saturation point where assumptions change because the environment has changed. If people show the indicators that they are acting in bad faith, others will treat them automatically as acting in bad faith. Eventually, the environment dictates that those people must prove they are acting in good faith (somehow) but proving this is quite hard. The environment switches from innocent benefit of the doubt to, guilty until proven innocent.
These erosions of the social contract while subtle, dictate social behavior. Can you imagine a world where something bad happens to you, and everyone just turns their backs, or prevents you from helping yourself?
It's the slippery slope of society falling back to violence. Few commenting on things like this today have actually read the material published by the greats on the social contract, and don't know how society arose from the chaos of violence.
rolha-capoeira
This presupposes that human value only exists in the things current AI tech can replace—pattern recognition/creation. I'd wager the same argument was made when hand-crafted things were being replaced with industrialized products.
I'm not saying those things aren't valuable, or that humans can't express social and spiritual value in those ways, but that human value doesn't only exist there. And so, to give AI the power of complete dehumanization is to reduce humans to just pattern followers. I don't believe that is the case.
munificent
> I'm not saying those things aren't valuable, or that humans can't express social and spiritual value in those ways, but that human value doesn't only exist there.
This sounds sort of like a "God of the gaps" argument.
Yes, we could say that humanity is left to express itself in the margins between the things machines have automated away. As automation increases its capabilities, we just wander around looking for some untouched back-alley or dark corner the robots haven't swept through yet and do our dancing and poetry slams there until the machines arrive forcing us to again scurry away.
But at that point, who is the master, us or the machines?
mattgreenrocks
If this came to pass, the population would be stripped of dignity pretty much en masse. We need to feel competent, useful, and connected to people. If people feel they have nothing left, then their response will be extremely ugly.
rolha-capoeira
What we still get paid to do is different than what we're still able to do. I'm still able to knit a sweater if I find it enjoyable. Some folks can even do it for a living (but maybe not a living wage)
danielbln
It kind of makes sense if following a particular pattern is your purpose and life, and maybe your identity.
malux85
We should actively encourage fluidity in purpose; too much rigidity or militant clinging to ideas is insecurity, or an attempt at absolving personal responsibility.
Resilience and strength in our civilisation come from confidence in our competence, not from sanctifying patterns so we don't have to think.
We need to encourage and support fluidity. Domain knowledge is commoditised; the future is fluid composition.
asciimov
Great. Tell someone who spent years honing their skills that it's too bad the rug was pulled out from beneath them, and that it's time to start over from the bottom again.
Maybe there would be merit to this notion if society provided the necessary safety net for this person to start over.
haswell
> We should actively encourage fluidity in purpose
I don't think we should assume most people are capable of what you describe. Assigning "should" to this assumes what you're describing is psychologically tenable across a large population.
> too much rigidity or militant clinging to ideas is insecurity or attempts at absolving personal responsibility.
Or maybe some people have a singular focus in life and that's ok. And maybe we should be talking about the responsibility of the companies exploiting everyone's content to create these models, or the responsibility of government to provide relief and transition planning for people impacted, etc.
To frame this as a personal responsibility issue seems fairly disconnected from the reality that most people face. For most people, AI is something that is happening to them, not something they are responsible for.
And to whatever extent we each do have personal responsibility for our careers, this does not negate the incoming harms currently unfolding.
adamc
People come with all sorts of preferences. Telling people who love mastery that they have to be "fluid" isn't going to lead to happy outcomes.
danielbln
Absolutely, I agree with that.
MichaelZuo
How would this matter?
People can self-assign any value whatsoever… that doesn't change.
If they expect external validation then that’s obviously dependent on multiple other parties.
pojzon
Due to how AI works, it's only a matter of time till it's better at pretty much everything humans do besides "living".
People tend to talk about any AI-related topic by comparing it to some industrial shift that happened in the past.
But it's much, Much, MUCH bigger this time. Mostly because AI can make itself better; it will be better, and it is better with every passing month.
It's a matter of years until it can completely replace humans in any form of intellectual work.
And those are not my words but those of the smartest people in the world, like the "godfather of AI".
We humans think we are special. That there won't be something better than us. But we are in the middle of the process of creating something better.
It will be better. Smarter. Not tired. Won't be sick. Won't ever complain.
And it IS ALREADY and WILL replace a lot of jobs, and it will not create new ones, purely due to efficiency gains and the lack of brainpower in the majority of people who will be laid off.
Not everyone is a Nobel prize winner. And soon we will need only such people to advance AI.
serbuvlad
> because AI can make itself better
Can it? I'm pretty sure current AI (not just LLMs, but neural nets more generally) requires human feedback to prevent overfitting, which fundamentally undercuts any fear or hope of the singularity as predicted.
AI cannot make itself better because it cannot meaningfully define what better means.
pojzon
AlphaEvolve reviewed how it's trained and found a way to improve the process.
It's only the beginning. AI agents are able to simulate tasks, get better at them, and make themselves better.
At this point it's silly to say otherwise.
sonofhans
> Its a matter of years until it can completely replace humans in any form of intellectual work.
This is sensationalism. There’s no evidence in favor of it. LLMs are useful in small, specific contexts with many guardrails and heavy supervision. Without human-generated prior art for that context they’re effectively useless. There’s no reason to believe that the current technical path will lead to much better than this.
z0r
Call me when 'AI' cooks meals in our kitchens, repairs the plumbing in our homes, and removes the trash from the curb.
Automation has costs, and imagining what LLMs do now as the start of the self-improving, human-replacing machine intelligence is pure fantasy.
NitpickLawyer
To say that this is pure fantasy, when there are more and more demos of humanoid robots doing menial tasks and the costs of those robots are coming down, is... well, something. Anger, denial (you are here)...
unsui
Not entirely.
The risk raised in the article is that AI is being promoted beyond its scope (pattern recognition/creation) into legal/moral choice determination.
The techno-optimists will claim that legal/moral choices may be nothing more than the sum of various pattern-recognition mechanisms...
My take on the article is that this is missing a deep point: AI cannot have a human-centered morality/legality because it can never be human. It can only ever amplify the existing biases in its training environments.
By decoupling the gears of moral choice from human interaction, whether by choice or by inertia, humanity is being removed from the mechanisms that amplify moral and legal action (or, in some perverse cases, amplify the biases intentionally)
giraffe_lady
> And so, to give AI the power of complete dehumanization is to reduce humans to just pattern followers.
It would but I don't think that's what they're saying. The agent of dehumanization isn't the technology, but the selection of what the technology is applied to. Or like the quip "we made an AI that creates, freeing up more time for you to work."
Wherever human value, however you define that, exists or is created by people, what does it look like to apply this technology such that human value increases? Does that look like how we're applying it? The article seems to me to be much more focused on how this is actually being used right now rather than how it could be.
kelseyfrog
Whether we like it or not, AI sits at the intersection of both Moravec's paradox and Jevons paradox. Just as more efficient engines led to increased gas usage, as AI gets increasingly better at problems difficult for humans, we see even greater proliferation within that domain.
The reductio on this is the hollowing out of the hard-for-humans problem domain, leaving us to fight for the scraps of the easy-for-humans domain. At first glance this sounds like a win. Who wouldn't want something else to solve the hard problems? The big issue is that easy-for-human problems are often dull, devoid of meaning, and low-wage. Paradoxically, the hardest problems have always been the ones that make work meaningful.
We stand at a crossroads where one path leads to an existence with a poverty of meaning, where although humans create and play by their own rules, we feel powerless to change them. What the hell are we doing?
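As a toy illustration of the Jevons dynamic: with constant-elasticity demand, a 2x efficiency gain can more than double total consumption. The numbers below are made up for the sketch, not a model of AI markets:

    # Toy Jevons rebound: units demanded = k * price^(-e).
    # All parameters are illustrative assumptions.
    e = 1.5                      # assumed price elasticity of demand (> 1)
    base_price, base_units = 1.0, 100.0
    k = base_units * base_price ** e

    new_price = 0.5              # a 2x efficiency gain halves the effective price
    new_units = k * new_price ** (-e)

    print(f"units consumed: {base_units:.0f} -> {new_units:.0f}")  # 100 -> 283
    print(f"total spend:    {base_units * base_price:.0f} -> "
          f"{new_units * new_price:.0f}")                          # 100 -> 141

With elasticity above 1, the efficiency gain raises both consumption and total spend, which is the rebound effect the comment leans on.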
ololobus
Interesting point of view; I didn't know about Jevons paradox before. To me, the outcome still depends on whether AI can get superhuman [1] (and beyond) at some point. If it can, then, well, we will likely indeed see the suitable-for-human areas of intellectual labor shrinking. If it cannot, then it becomes an even more philosophical question, similar to agnostic beliefs. Is the universe completely knowable? Because if it's not, then we might as well have infinitely more hard problems, and AI just raises the bar for what we can achieve by pairing a human with AI compared to a human alone.
[1] I know it's a bit hard to define, but I'd vaguely say that it's significantly better in the majority of intelligence areas than the vast majority of the population. Also it should be scalable. If we can make it slightly better than human by burning the entire Earth's energy, then it doesn't make much sense.
serbuvlad
Prioritize goals over the process and what AIs can do doesn't matter.
Want to make a movie? The goal should be getting the movie out, seen by people and reviewed.
Whether it's people captured by film, animations in Blender or AI slop, what matters is the outcome. Is it good? Do people like it?
I do the infrastructure at a department of my Uni as sort of a side-gig. I would never have had the time to learn Ansible, borg, FreeIPA, WireGuard, and everything else I have configured now, and would probably have resorted to a bunch of messy shell scripts that don't work half the time, like the people before me.
But everything I was able to set up I was able to set up in days, because of AI.
Sure, it's really satisfying because I also have a deep understanding of the fundamentals, and I can debug problems when AI fails, and then I ask it "how does this work" as a faster Google/wiki.
I've tried Windsurf but gave up, because when the AI does something that doesn't work, I can give it the prompts to find a solution (+ think for myself) much faster than it can figure it out itself (and probably at the cost of far fewer tokens).
But the fact that I enjoy the process doesn't matter. And the moment I can click a button and make a webapp, I have so many ideas in my drawer for how I could improve the network at Uni.
I think the problem people have is that they work corporate jobs where they have no freedom to choose their own outcomes so they are basically just doing homework all their life. And AI can do homework better than them.
Vegenoid
Take this too far and you run into a major existential crisis. What is the goal of life? Most people would say something along the lines of bringing joy to others, experiencing joy yourself, accomplishing things that you are proud of, and continuing the existence of life by having children, so that they can experience joy. The joy of life is in doing things, joy comes from process. Goals are useful in that they enable the doing of some process that you want to be doing, or in the joy of achieving the goal (in which case the joy is usually derived from the challenge in the process of achieving the goal).
> Want to make a movie? The goal should be getting the movie out, seen by people and reviewed.
This especially falls apart when it comes to art, which is one of the most “end-goal” processes. People make movies because they enjoy making movies, they want movies to be enjoyed by others because they want to share their art, and they want it to be commercially successful so that they can keep making movies. For the “enjoying a movie” process, do you truly believe that you’d be happy watching only AI-generated movies (and music, podcasts, games, etc.) created on demand with little to no human input for the rest of your life? The human element is truly meaningless to you, it is only about the pixels on the screen? If it is, that’s not wrong - I just think that few people actually feel this way.
This isn’t an “AI bad” take. I just think that some people are losing sight of the role of technology. We can use AI to enable more people than ever before to spend time doing the things they want to do, or we can use it to optimize away the fun parts of life and turn people even further into replaceable meat-bots in a great machine run by and for the elites at the top.
kelseyfrog
When all we care about is the final product, we miss the entire internal arc: the struggle, the bruised ego, the chance of failure, and the reward of feeling "holy shit, I did it!" that comprises the essence of being human.
Reducing the human experience to a means to an end is the core idea of dehumanization. Kant addressed this in the "humanity formula" of the categorical imperative:
"Act in such a way that you treat humanity, whether in your own person or in the person of any other, always at the same time as an end, never merely as a means."
I'm curious how you feel about the phrase "the real treasure was the friends we made along the way." What does it mean to you?
skuxxlife
But the process _does_ matter. That is the whole point of life. Why else are we even here, if not to enjoy the process of making? It's why people get into woodworking or knitting as hobbies. If it were just about the end result, they could just go to a store and buy something that would be way cheaper and easier. But that's not the point - it's something that _you_ made with your own hands, as imperfect as they are, and the experience of making something.
rafram
> For example, to create an LLM such as ChatGPT, you'd start with an enormous quantity of text, then do a lot of computationally-intense statistical analysis to map out which words and phrases are most likely to appear near to one another. Crunch the numbers long enough, and you end up with something similar to the next-word prediction tool in your phone's text messaging app, except that this tool can generate whole paragraphs of mostly plausible-sounding word salad.
This explanation might've been passable four years ago, but it's woefully out of date now. "Mostly plausible-sounding word salad"?
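For reference, the mechanism the quoted passage describes, stripped to its bones, is next-word prediction. A toy bigram sampler along these lines (illustrative only; modern LLMs are transformers trained on vastly more data, which is part of the objection above):

    import random
    from collections import defaultdict

    # Toy corpus; real models train on trillions of tokens.
    corpus = "the cat sat on the mat the dog sat on the rug".split()

    # Tally which words follow which (a bigram table).
    following = defaultdict(list)
    for prev, nxt in zip(corpus, corpus[1:]):
        following[prev].append(nxt)

    # Generate text by repeatedly sampling a plausible next word.
    word, output = "the", ["the"]
    for _ in range(8):
        nexts = following.get(word)
        if not nexts:
            break
        word = random.choice(nexts)
        output.append(word)
    print(" ".join(output))  # e.g. "the cat sat on the mat the dog sat"

The gap between this sketch and a modern model is exactly what the comments below argue about.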
mv4
What would be a better explanation today?
diggan
I think "mostly plausible-sounding" is, albeit simplified, OK for an analogy I guess. But the "word salad" part gives the impression it doesn't even look like real human text, which it kind of does at the surface. I think it's mostly "word salad" that makes it sound far off from the truth.
djhn
Over the past year the chat bots have improved in many ways, but their written output has regressed to the mean: the average RLHF-driven preference.
It is word salad, unless you’re a young, underpaid contractor from a country previously colonised by the British or the United States.
relaxing
Word salad refers to human writing with poor diction and/or syntax.
mrcwinn
Yes, before AI, society was doing fantastically well on "social relations, empathy, and care." XD
I remain an optimist. I believe AI can actually give us more time to care for people, because the computers will be able to do more themselves and between each other. Unproven thesis, but so is the case laid out in this article.
ACCount36
Anyone who thinks that AI is bad for "empathy and care" should be forced to work for a year in a tech support call center, first line.
There are some jobs that humans really shouldn't be doing. And now, we're at the point where we can start offloading that to machines.
tim333
>The push to adopt AI is, at its core, a political project of dehumanization
I can't really say I've seen that. The article seems to be about adoption of AI in the Canadian public sector, not something I'm really familiar with as a Brit. The government here hopes to boost the economy with it, and Hassabis at DeepMind hopes to advance science and cure diseases.
I think AI may well make the world more humane by dealing with a variety of our problems.
old_man_cato
Dehumanization might be the wrong word. It's certainly antisocial technology, though, and that's bad enough.
munificent
I believe that our socializing is the absolute most fundamentally human aspect of us as a species.
If you cut off a bird's wings, it can't bird in any real meaningful sense. If you cut off humans from others, I don't think we can really be human either.
ACCount36
There are a lot of incredibly offended kiwi birds out there now.
old_man_cato
And I think a lot of people would agree with you.
PolyBaker
I think there is a fundamental non-understanding of power present in the post. By that I mean that the author doesn't appreciate that technology (or any tool for that matter) gives power and control to the user. This is used to further our understanding of the world with the intent of creating more technology (recursive process). The normies just support those at the forefront that actually change society. The argument in the post is fundamentally anti-technology. Follow this argument and you end up at a place where we live in caves rather than buildings.
Also, the anti-technology stance is good for humanity, since it fundamentally introduces opposition to progress and questions the norm, ultimately killing off the weak/inefficient parts of progress.
hayst4ck
To call AI a dehumanization technology is like calling guns a murder technology.
There is obviously truth to that, but guns are also used for self defense and protecting your dignity. Guns are a technology, and technology can be used for good or evil. Guns have been used to colonially enslave people, but also been used to gain independence.
I disagree with the assessment that AI is intrinsically dehumanizing. AI is a tool, a very powerful tool, and because the very rich in America don't see the people they rule as humans of equal dignity, the technology itself betrays their feelings.
Attacking the technology is wrong; the problem is not the technology but that every company has a tyrant king at its helm who answers to no one, because they have purchased the regulators that might have bound their behavior, meaning that there are no consequences for the misdeeds of a company's CEO/king. So every company's king ends up using their company/fiefdom to further their own personal ambitions of power, and nobody is there to stop them. If the technology is powerful, then failure to invest in it, while other, even more oppressive regimes do invest in it, potentially gives them the ability to dominate you. Imagine you argue nuclear weapons are a bad technology while your neighbor is busy developing them. Are you better off if your neighbor has nuclear weapons and you don't?
The argument that AI is a dehumanization technology is ultimately an anarchist argument. Anarchy's core belief is that no one should have power to dominate anyone else, which inevitably means that no one is able to provide consequences for anyone who ambitiously betrays that belief system. Reality does not work that way. The only way to provide consequences to a corrupt institution is an even more powerful institution based on collective bargaining (founded by the threat of consequences for failing to reach a compromise, such as striking). There is no way around realpolitik, you must confront pragmatic power relationships to have a cogent philosophy.
The author is mistaking AI for wealth disparity. Wealth is power and power is wealth, and when it is so concentrated, it puts bad actors above consequences and turns tools that could be used for the public good into tools of oppression.
We do not primarily have an AI problem but a wealth concentration problem, and this is one of its many manifestations.
k__
To be fair, people only need guns to protect themselves because guns literally are murder tech.
hayst4ck
That is a truth but not the truth. By framing guns as a murder technology, you ignore that they are also a self defense technology, equalizing technology, or any other set of valid frames.
My point was that guns can be used for murder, in the same way that AI can be used to influence or surveil, but guns are also what you use to arrest people, fight oppressors and tyrants, and protect your property. Fists, knives, bows and arrows, poison, bombs, tanks, fighter jets, and drones are all forms of weapons. The march of technology is inevitable, and it's important not to be on the losing side of it.
What the technology is capable of is less interesting than who has access to it and the power disparity that it creates.
The author's argument is that AI is (1) a high-leverage technology (2) in the hands of oligarchs.
My argument is that the fact that it is a high-leverage technology is not as interesting, meaningful, or important as the existence of oligarchs who do not answer to any regulatory body, because they have bought and paid for it.
The author is arguing that a particular weapon is bad, but failing to argue that we are in a class war that we are losing badly. The author is focusing on one weapon being used to wage our class war, instead of arguing about the cost of losing the class war.
It is not AI de-humanizing us, but wealth disparity that is de-humanizing us, because there is nothing forcing the extremely wealthy to treat others with dignity. AI is not robbing people of dignity, ultra wealthy people are robbing people of dignity using AI. AI is not dehumanizing people. Ultra wealthy people are using AI to dehumanize people. Those are different arguments with different implications and prescriptions on how to act or what to do.
AI is bad is a different argument than oligarchs are bad.
dwaltrip
The meat of the post does not depend on the characterization of AI as "mere statistical correlations" that produce "plausible-sounding word salad".
I encourage people to not get too hung up on that and look at the arguments about the effects on society and how we function as humans.
I have very mixed feelings about AI, and this blog hits some key notes for me. If I have time later I will try to highlight those.
adamc
I think it's an interesting piece, and calls us to consider how the technology will actually be used.
A lot of things that are possible enable evil purposes as or more readily than noble ones. (Palantir comes to mind.) I think we have an ethical obligation to be aware of that and try to steer to the light.
gchamonlive
Everything is dehumanization technology when society is organized to foster competition and narcissism and not cooperation and care.
Technology is always an extension of the ethos. It doesn't stand on its own, it needs and reflects the mindset of humans.
serbuvlad
The fundamental advantage of our society as designed is that it weaponizes narcissism, and makes narcissists do useful stuff for society.
Don't care about competition? Find a place where rent prices are reasonable and you'll find it's actually surprisingly easy to earn a living.
Oh, but you want the fancy stuff, don't you?
munificent
I suspect that if you find a place where rent prices are reasonable, you'll find it's actually surprisingly hard to find a job there that pays a good wage, healthcare that keeps you healthy, decent schools to educate your children, and a community that shares your values and interests.
People don't move to high cost of living areas because they want nice TVs. Fancy stuff is the same price everywhere.
serbuvlad
I live in Romania, so I have different problems. I understand that Americans have problems with rent and healthcare. We have problems with other stuff, like food prices.
But at the end of the day, it's extremely unhealthy to let these problems force us into feeling like we have to make a lot of money. You can find cheap solutions for almost everything almost everywhere if you compromise.
gchamonlive
I'm talking about narcissism as in The Burnout Society.
Give the book a go if you haven't. It lays out many of the fundamental problems of current social organization way better than I can.
> Oh, but you want the fancy stuff, don't you?
Just some food for thought, though: is weaponizing hyperpositivity the only way to produce fancy stuff? Think about it and you'll see for yourself that this is a false dichotomy, embedded in a realism that prevents us from improving society.
dandanua
Technology that ultimately breaks the power balance in a society is asking for fascism to come. Without strong, working checks and balances, the doom scenario is inevitable. Yet we are witnessing the destruction of our previous, weaker checks and balances. This will only accelerate us toward a dead end.