
Patience too cheap to meter


12 comments · May 17, 2025

bee_rider

Huh, I expected going in that this would actually be about LLMs waiting on customer service lines or whatever. That actually seems like it would be a rare social good produced by these things; plenty of organizations seem to shirk their responsibility to provide prompt customer service by hoping people will give up…

I’m less convinced of the good of an AI therapist. Seems too healthcare-y for these current buggy messes. But if somebody is aided by having a digital shoulder to cry on… eh, ok, why not?

ggm

"I'm sorry, you have exceeded my budget for today and must either ask again later or pay for a higher level of service" is not infinite patience. More to the point, it's not "too cheap to meter" either, because it's patently both metered and not too cheap.

And yes, despite what they might say, people were not seeking intelligence, which is under-defined and highly misunderstood. They were seeking answers.

BrenBarn

When a person can't do something because it exhausts their patience, we usually describe it not by saying the task is difficult but that it is tedious, repetitive, boring, etc. So this article reinforces my view that the main impact of LLMs is their abilities at the low end, not the high end: they make it very easy to do a bad-but-maybe-adequate job at something that you're too impatient to do yourself.

perrygeo

I agree with this more daily.

Converting a dictionary into a list of records when you know that's what you want ... easy, mechanical, boring af, and something we should almost obviously outsource to machines. LLMs are great at this.
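The kind of mechanical transformation being described might look like this in Python (the variable names and sample data are illustrative, not from the thread):

```python
# A column-oriented dict: each key maps to a column of values.
data = {"id": [1, 2, 3], "name": ["a", "b", "c"]}

# Flatten it into a list of row records, one dict per row.
# Easy, mechanical, and boring -- exactly the kind of task being discussed.
num_rows = len(next(iter(data.values())))
records = [
    {key: values[i] for key, values in data.items()}
    for i in range(num_rows)
]
# records == [{"id": 1, "name": "a"}, {"id": 2, "name": "b"}, {"id": 3, "name": "c"}]
```

The interesting question from the comment isn't this code; it's whether `data` should have been a dict or a stream of records in the first place.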

Deciding whether to use a dictionary or a stream of records as part of your API? You need to internalize the impacts of that decision. LLMs are generally not going to worry about those details unless you ask. And you absolutely need to ask.

skydhash

> easy, mechanical, boring af, and something we should almost obviously outsource to machines

That’s when you learn vim or emacs. Instead of editing character wise, you move to bigger structures. Every editing task becomes a short list of commands and with the power of macros, repeatable. Then if you do it often, you add (easily) a custom command for it.

andyferris

Speaking of tedious and exhausting my patience… learning to use vim and emacs properly. I do like vim but I barely know how to use it and I’ve had well over a decade of opportunity to do so!

Pressing TAB with copilot to cover use cases you’ve never needed to discover a command or write a macro for is actually kinda cool, IMO.

Animats

The near future: receiving a huge OpenAI bill because your kid asked "Why" over and over and got answers.

Centigonal

>However, there doesn’t seem to be a huge consumer pressure towards smarter models. Claude Sonnet had a serious edge over ChatGPT for over a year, but only the most early-adopter of software engineers moved over to it. Most users are happy to just go to ChatGPT and talk to whatever’s available.

I want to challenge this assumption. I think ChatGPT is good enough for the use cases of most of its users. However, for specialist/power user work (e.g. coding, enterprise AI, foundation models for AI tools) there is strong pressure to identify the models with the best performance/cost/latency/procurement characteristics.

I think most "vibe coding" enthusiasts are keenly aware of the difference between Claude 3.7/Gemini Pro 2.5 and GPT-4.1. Likewise, people developing AI chatbots quickly become aware of the latency difference between, e.g., OpenAI's batch API and Claude's (via Bedrock).

This is similar to how most non-professionals can get away with Paint.NET, while professional photo/graphic design people struggle to jump from Photoshop to anything else.

kepano

I had a similar thought a while ago[1]:

> the most salient quality of language models is their ability to be infinitely patient

> humans have low tolerance when dealing with someone who changes their mind often, or needs something explained twenty different ways

Something I enjoy when working with language models is the ability to completely abandon days of work because I realize I am on the wrong track. This is difficult to do with humans because of the social element of sunk cost fallacy — it can be hard to let go of the work we invested our own time into.

[1] https://x.com/kepano/status/1842274557559816194

ChrisMarshallNY

It takes practice, skill, and self-actualization, to become a really good listener. I know I’m not there, yet, and I’ve been at it, a long time. I suspect most folks aren’t so good at it.

It’s entirely possible that LLMs could make it so that people expect superhuman patience from other people.

I think there was a post, here, a few days ago, about people being “lost” to LLMs.

th0ma5

That's the ultimate goal of these models, though: to exhaust you of any sass. I'd imagine they will eventually approach full hallucination given a long enough context.

timewizard

> Most users are happy to just go to ChatGPT and talk to whatever’s available. Why is that?

Perhaps their use case is so unremarkable and unsophisticated that the quality of output is immaterial to it.

> Most good personal advice does not require substantial intelligence.

Is that what therapy is to this author? "Good advice given unintelligently?"

> They’re platitudes because they’re true!

And the appeal is you can get an LLM to repeat them to you? How exactly is that "appealing?"

> However, they are fundamentally a good fit for doing it because they are

...bad technology that can only solve a limited number of boring problems unreliably. Which is what saying platitudes to a person in trouble is and not at all what therapy is meant to be.