
The Leverage Paradox in AI

14 comments

·August 25, 2025

pcfwik

> This is the leverage paradox. New technologies give us greater leverage to do more tasks better. But because this leverage is usually introduced into competitive environments, the result is that we end up having to work just as hard as before (if not harder) to remain competitive and keep up with the joneses.

Off-topic, but in biology circles I've heard this type of situation (where "it takes all the running you can do, to keep in the same place" because your competitors are constantly improving as well) called a "Red Queen's race" and really like the picture that analogy paints.

https://en.wikipedia.org/wiki/Red_Queen%27s_race

EGreg

Also known as induced demand, and why adding a lane on the highway doesn’t help for long

https://en.wikipedia.org/wiki/Induced_demand

cortesoft

Do people really try to one-shot their AI tasks? I have just started using AI to code, and I found the process very similar to regular coding… you give a detailed task, then you iterate by finding specific issues and giving the AI detailed instructions on how to fix the issues.

It works great, but I can’t imagine skipping the refinement process.

sdesol

> Do people really try to one-shot their AI tasks?

Yes. I almost always end with "Do not generate any code unless it can help in our discussions, as this is the design stage." I would say 95% of my code for https://github.com/gitsense/chat in the last 6 months was AI generated, and about 80% of that was one-shot.

It is important to note that I can easily get into 30+ messages of back and forth before any code is generated. For complex tasks, I will literally spend an hour or two (which can span days) chatting and thinking about a problem with the LLM, and I do expect the LLM to one-shot them.

ssharp

Every tool I've tinkered with that hints at one-shotting (or one-shot and then refine) ends up with a messy app that might be 60-70% of what you're looking for. But since the foundation is not solid, you're never going to get the remaining 30-40% of your initial prompt, let alone the multiples of work needed to bolt on future functionality.

Compare that to the approach you're using (which is what I'm also doing), and you're able to have AI stay much closer to what you're looking for, be less prone to damaging hallucinations, and also guide it toward a foundation that's stable. The downside is that it's a lot more work. You might multiply your productivity by some single digit.

To me, that second approach is much more reasonable than trying to 100x your productivity but actually ending up getting less done, because you end up stuck in a rabbit hole you don't know you're in and will never refine your way out of.

antithesizer

I love it when non-marxists stumble upon something marxists have known for decades or more. Look who's still in the race! Don't give up, guys!

https://www.oshanjarow.com/essays/treadmill-tendency

https://press.uchicago.edu/ucp/books/book/distributed/T/bo24...

https://commons.nmu.edu/cgi/viewcontent.cgi?article=1075&con...

kazinator

If we give runners motorcycles, they reach finish lines faster. But the motor sport is still competitive and takes effort; everyone else has a bike, too. And since the bike parameters are tightly controlled (basically everyone is on the same bike), the competition is intense.

lawlessone

I've been thinking something similar about any company that has AI do all its software dev.

Where's your moat? If you can create the software with prompts so can your competitors.

Attackers who know which model(s) you use could also run similar prompts and inspect the output code to speculate about what kinds of exploits your software might have.

A lawyer knowing what model his opposition uses could speculate on their likely strategies.

hamdingers

The set of commercially successful software that could not be reimplemented by a determined team of caffeinated undergrads was already very small before LLM assistance.

Turns out being able to write the software is not the only, or even the most important factor in success.

davidhunter

I’d suggest reading about competitive moats and where they come from. The ability to replicate another’s software does not destroy their moat.

personjerry

This seems like an unsubstantial article, ironically it might have been written by AI. Here's the entire summary:

AI makes slop

Therefore, spend more time to make the slop "better" or "different"

[No, they do not define what counts as "better" or "different"]

satisfice

This article says that the stairs have been turned into an escalator. But I think it’s an escalator to slop.

Therefore, it doesn’t affect my work at all. The only thing that affects my prospects is the hype about AI.

Be a purple cow, the guy says. Seems to me that not using AI makes me a purple cow.

sdesol

> Therefore, it doesn’t affect my work at all.

But that isn't what the author is talking about. The issue is, your good code can be treated as equal to slop that works. What the author says needs to happen is that you find a better way to stand out. I suspect for many businesses where software superiority is not a core requirement, slop that works will be treated the same as non-slop code.

bonoboTP

TL;DR relative status is zero sum