The AI Replaces Services Myth

50 comments · July 17, 2025

kelseyfrog

> You have to start from how the reality works and then derive your work.

Every philosopher eventually came to the same realization: we don't have access to the world as it is. We have access to a model of the world, one that predicts and is predicted by our senses. Insofar as there is a correlation between the two, at whatever fidelity we can muster, the only thing we directly access is a simulacrum.

For the most part the model and the senses agree, but there is a serious flaw - the model inevitably influences our interpretation of our senses. This sometimes gets us into trouble when aspects of the model become self-reinforcing, framing sense input in ways that amplify the very part of the model that supplies the frame. For example, you live in a very different world if you search for, and find, confirmation of cynicism.

Arguing over metaphysical ontology is like kids fighting about which food (their favorite) is the best: it confuses subjectivity with objectivity. It might appear radical, but all frames are subjective, even ones shared by the majority of others.

Sure, Schopenhauer's philosophy is the mirror of his own nature, but there is no escape hatch. There is no externality - no objective perch to rest on, even one shared by others. That's not to say that all subjectivities are equally useful for navigating the world. Some models work better than others for prediction, control, and survival. But we should be clear that useful does not equate to true: all models are wrong, some are useful.

JC, I read the rest. The author doesn’t seem to grasp how profit actually works. Price and value are not welded together: you can sell something for more or less than the value it generates. Using his own example, if the AI and the human salesperson do the same work, their value is identical, independent of what each costs or commands in the market.

He seems wedded to a kind of market value realism, and from this shaky premise, he arrives at some bizarre conclusions.

card_zero

Urgh. I feel the stodge of relativism weighing down on me.

OK, yes, all models (and people) are wrong. I'll also allow that usefulness is not the same as verisimilitude (truthiness). But there is externality, even though nobody can, as you say, "perch" on it: it's important that there is an objective reality to approach closer to, however uncertainly.

kelseyfrog

I'm willing to grant non-symbolic externality. Though, I don't know how useful that is.

We will never access the signified, only the signifier. When we believe that signifiers exist externally, we are engaging in a suspension of epistemic honesty, and I get why we do it - it makes talking about and engaging with the world infinitely easier. But we shouldn't ever believe our own trick. That's reverting to a pre-operational version of cognition.

harwoodjp

Your dualism between model and world is nearly Cartesian. The model itself isn't separate from the world but produced materially (by ideology, sociality, nature, etc.).

kelseyfrog

> The model itself isn't separate from the world but produced materially

To me this is like drawing a circuit diagram on a piece of paper and trying to convince someone that, "Really there is electricity flowing through it."

Models are relations between signifiers. There exists a transformation between the relations among the signified and the relations among the signifiers, but they are, in fact, two separate categories, and the transformation isn't bijective, i.e. it doesn't form an isomorphism.

nine_k

A map drawn on a flat piece of land is still not the whole land it depicts, even though it literally consists of that land. Any representation is a simplification; as far as we can judge, there is no adequately lossless compression transform for large enough swaths of reality.
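
There is a standard counting argument behind the compression point; a minimal sketch, stated for bit strings rather than "reality", so it only illustrates the comment's claim rather than proving it:

```latex
% No lossless (injective) encoder can shorten every input:
% there are 2^n bit strings of length n, but strictly fewer
% strings of length < n, so some length-n string must map to
% an output at least as long as itself.
\[
  \bigl|\{0,1\}^{n}\bigr| \;=\; 2^{n}
  \;>\; 2^{n}-1 \;=\; \sum_{k=0}^{n-1} 2^{k}
  \;=\; \bigl|\{\, s \in \{0,1\}^{*} : |s| < n \,\}\bigr|
\]
```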

neuroelectron

I have yet to see LLMs solve any new problems. I think it's pretty clear a lot of the bouncing-ball programming demos are specifically trained for so they can be shown off in marketing and advertising. Ask AI the most basic factual question about a random video game, like what element synergizes with the ice spike shield in Dragon Cave Masters, and it will make up some nonsense, despite that being something you can look up on gamefaqs.org. I know it knows the game I'm talking about, but in the latent space it's just another set of dimensions that flavor likely next-token patterns.

Sure, if you train an LLM enough on gamefaqs.org, it will be able to answer my question as accurately as an SQL query, and there are a lot of jobs that are just looking up answers that already exist, but these systems are never going to replace engineering teams. Now, I definitely have seen some novel ideas come out of LLMs, especially in earlier models like GPT-3, where hallucinations were more common and prompts weren't normalized into templates, but now we have "mixtures" of "experts" that really keep LLMs from being general intelligences.

outworlder

I don't disagree, but your comment is puzzling. You start talking about a game (which probably lacks a lot of training data) and then extrapolate that to mean AI won't replace engineering teams. What?

We do not need AGI to cause massive damage to software engineering jobs. A lot of existing work is glue code, which AI can do pretty well. You don't need 'novel' solutions to problems to have useful AI. They don't need to prove P = NP.

sublinear

Can you give an example of a non-trivial project that is pure glue code?

arevno

Parent never said pure glue code, they said "a lot", which is roughly correct.

Any nontrivial business application will be on the order of ~60% glue, API, interface/model definition, and CRUD UI code, which LLMs are already quite good at.

They're also good at writing tests, with the caveat that a human reviews them.

They're pretty decent at emitting documentation from pure code, too.

The only way these models don't result in mass unemployment in this industry is if the amount of work required expands to fill the gap. Which is certainly possible! The Jevons Paradox of software development.

XenophileJKO

I don't know, I've had O3 create some surprisingly effective Magic the Gathering decks based on newly released cards it has never seen. It just has to look up what cards are available.

Quarrelsome

Do execs really dream of entirely removing their engineering departments? If this happens, then I would expect some seriously large companies to fail in the future. For every good idea an exec has, they have X bad ideas that would cause problems, and their engineers save them from those. Conversely, an entirely AI engineering team will say "yes sir, right on it" to every request.

pjmlp

Yes, that is exactly how offshoring and enterprise consulting take place.

eikenberry

.. and why they fail.

pjmlp

Apparently not, given that it is the bread and butter of Fortune 500 consulting.

crinkly

Yes. Execs love AI because it’s the sycophant they need to massage their narcissism.

I’d really love to be replaced by AI. At that point I can take a few months of paid gardening leave before they are forced to rehire me.

Quarrelsome

Idk, I feel like execs would run out of makeup before they accept that their idea is a pig. I worry this stuff is gonna work "just enough" to let them fool themselves for long enough to sink their orgs.

I'm envisioning a blog post on linkedin in the future:

> "How Claude Code ruined my million dollar business"

crinkly

Working out how to capitalise on their failures is the only winning proposition. My brother did pretty well out of selling Aerons.

AkshatM

> Need an example? Good. Coding.

> You must be paying your software engineers around $100,000 yearly.

> Now that vibecoding is out there, when was the last time you committed to pay $100,000 to Lovable or Replit or Claude?

I think the author is attacking a bit of a strawman. Yes, people won't pay human prices for AI services.

But the opportunity is in democratization - becoming the dominant platform - and bundling - taking over more and more of the lifecycle.

Your customers individually spend less, but you get more customers, and each customer spends a little extra for better results.

To respond to the analogy: not everyone had $100,000 to build their SaaS before. Now everyone who has a $100 budget can buy Lovable, Replit and Claude subscriptions. You only need 1,000 customers to match what you made before.
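
A quick back-of-the-envelope check of that arithmetic, using only the figures in the comment and treating the $100 budget as per-customer revenue (an assumption):

```python
# Back-of-the-envelope: how many $100 customers replace one $100,000 contract?
old_contract = 100_000  # what one customer used to pay for a bespoke build ($)
new_spend = 100         # assumed per-customer spend on vibe-coding subscriptions ($)

customers_needed = old_contract // new_spend
print(f"Customers needed to match the old revenue: {customers_needed}")  # -> 1000
```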

Sol-

How much demand for software is there, though? I don't buy the argument that the pie will grow faster than jobs are devalued. On the bright side, prices might collapse accordingly and we'll end up in some post-scarcity world. No money in software, but also no cost, maybe.

jsnk

""" Not because AI can't do the work. It can.

But because the economics don't translate the way VCs claim. When you replace a $50,000 employee with AI, you don't capture $50,000 in software revenue. You capture $5,000 if you're lucky. """

So you are saying, AI does replace labour.

graphememes

Realistically, AI makes the easiest part of the job easier, not all the other parts.

deepfriedbits

For now

DanHulton

Citation needed.

warthog

Maybe I should change the title, indeed. The intention was to point out that, from the perspective of a startup, even if you replace the work fully, you are not capturing 100x the previous market.

tuatoru

The title is slightly misleading.

What the article is really about is the idea that all of the money now paid in wages will somehow be paid to AI companies as AI replaces humans, and that this idea is muddle-headed.

It points out that businesses think of AI as software, and will pay software-level money for AI, not wage-level money. It finishes with the rhetorical question, are you paying $100k/year to an AI company for each coder you no longer need?

satyrnein

It's almost more of a warning to founders and VCs, that an AI developer that replaces a $100k/year developer might only get them $10k/year in revenue.

But that means that AI just generated a $90k consumer surplus, which on a societal level, is huge!

tines

Not sure I quite get the point of the article. Sure, you won't capture $100k/year/dev. But if you capture $2k/year/dev, and you replace every dev in the world... that's the goal, right?
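
A rough sketch of why that math can still excite investors despite the lower per-seat price; the worldwide developer count here is an assumed round number for illustration, not a figure from the thread or the article:

```python
# Hypothetical market size at $2k/year per replaced developer.
revenue_per_dev = 2_000            # $/year captured per developer (from the comment)
developers_worldwide = 25_000_000  # assumed illustrative headcount, not a sourced figure

total_revenue = revenue_per_dev * developers_worldwide
print(f"Hypothetical annual revenue: ${total_revenue / 1e9:.0f}B")  # -> $50B
```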

aerostable_slug

They're saying expectations that AI revenues will equal HR expenditures, like you can take the funds from one column to the other, are wrong-headed. That makes sense to me.

tines

I agree, but that doesn't have to be true for investors to be salivating, is my point.

gh0stcat

I don't think the value stacks like that. Hiring 10 low-level workers, each at 1/10th the salary, to replace one higher-level worker doesn't work.

RedOrZed

Sure it does! Let me just hire 9 women for 1 month...

blibble

that $2k won't last long as you will never maintain a margin on a service like that

employee salaries are high because your competitors can't spawn 50,000 of them into existence by pushing a button

competition in the industry will destroy its own margins, and then its own customer base very quickly

soon after followed by the economies of the countries they're present in

the whole thing is a capitalism self destruct button, for entire economies

Revisional_Sin

> What the article is really about is the idea that all of the money that is now paid in wages will somehow be paid to AI companies as AI replaces humans.

Is anyone actually claiming this?

lelandbatey

Not directly, but indirectly. It's what's leading to the FOMO among investors. See the image in the parent blog post, where VCs directly compare the amount of money currently spent on AI (tiny) against the amount of money spent on headcount in various industries, with the implication being that "AI could be making all the money that was being spent on headcount, what an opportunity!"
