Stop over-thinking AI subscriptions
61 comments
·June 3, 2025
prepend
roxolotl
Yea I’d love to know what these look like. I’ve been using Claude and Claude Code a bit for things I’m too lazy to do myself but know it’ll be solid at, for example a parser for ISO dates in Janet. I set a $10 budget in February and have used $4 of it. I genuinely don’t know how I’d use these tools enough to spend $20 a month, but at the same time they’ve been very helpful.
I similarly get confused about normal blocking. I genuinely don’t understand it. I get procrastination. I go for runs during work. But rarely is something genuinely so blocking that it eats hours of programming time.
Edit: of course some tasks take multiple days. But visible progress is almost always achieved on the scale of hours not days.
dgan
Tried Claude yesterday to help me extract rows from a financial statement PDF. Let's automate the boring stuff! After multiple failures, I did it myself.
terhechte
I sat down at a piano yesterday and tried to play a beautiful song even though I never really used a piano before. Sounded horrible, must be the piano's fault.
One reason to use LLMs is to understand when and how to use them properly. They're like an instrument you have to learn. Once you do that, they're like a bicycle for your mind.
dgan
Except the piano's seller advertised that I don't need to know how to play: "just ask it, it will play!"
Here ya go, fixed your analogy
benwad
If you buy a machine to play the piano for you, you won't learn how to play the piano. You'll just become really good at using that machine.
112233
Which model have you picked to learn and keep using? Because each new model means learning a new instrument.
xela79
ah yes, all those LLM models have such unique and different system prompts, impossible to re-use your knowledge from model A for model B...
or wait... https://github.com/elder-plinius/CL4R1T4S nah, they are very much the same when it comes to prompting
lapcat
LLMs can't disappoint. They can only be disappointed.
kaptainscarlet
LLMs are like a child with undiagnosed ADD: it's a constant roller coaster, with peaks of amazement and equally deep dips of disappointment.
jrs235
Currently my biggest gripe is asking AI to make one modification to a file, and it then decides to make 4 other "extra" changes I didn't ask for. AI, please stay focused!
Etheryte
As with any other tool, half the art of using LLMs productively is the ability to recognize when they're a good fit and when they're not. So long as AGI doesn't come around, there will always be cases where they fall on their face and as far as I'm concerned, we're pretty far from AGI.
robertlagrant
Think how much energy would be saved if you could embed a sqlite database in a pdf with the data in it.
graemep
There are formats that are structured - XBRL.
If you want it to be human readable you can use iXBRL which is embedded in HTML.
submeta
Try Mistral OCR. They trained their model to extract data from PDFs. I used it to convert PDF forms to a JSON schema. Worked excellently.
dgan
I will give it a try tonight & report back!
keepsweet
Most people don't realize that LLMs by design were not made for document processing, data extraction etc. For that, you would have to use a dedicated tool like Klippa DocHorizon, which built its own AI OCR from scratch. It also provides an API that you can use to send your documents and receive formatted data. It's less popular than, say, Textract or Tesseract, but it's far more accurate, especially if you're dealing with sensitive data that you wouldn't want an LLM to hallucinate.
benterix
Claude is not good for this; try Mistral OCR. I'm using it on a daily basis on English and non-English texts and it works very well.
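(For anyone who wants to try the Mistral OCR route recommended above, here is a minimal sketch. It assumes the `mistralai` Python SDK and its OCR endpoint; the method names, model tag, and response fields are from memory and should be treated as assumptions to check against the current docs.)

    # Sketch only: assumes the mistralai SDK exposes OCR roughly like this
    # (model name, method, and response fields should be verified against the docs).
    import os
    from mistralai import Mistral

    client = Mistral(api_key=os.environ["MISTRAL_API_KEY"])

    # OCR a PDF reachable by URL; local files would need to be uploaded first.
    ocr = client.ocr.process(
        model="mistral-ocr-latest",
        document={"type": "document_url", "document_url": "https://example.com/statement.pdf"},
    )

    # The response is page-oriented; each page carries the extracted text as markdown,
    # which you can post-process into whatever JSON structure you need.
    for page in ocr.pages:
        print(page.markdown)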
Flemlo
Did you try Claude to extract it directly, or to write a Python script to do so?
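(If the script route is what's meant, this is roughly the kind of thing you could ask Claude to write, or write yourself. A minimal sketch, assuming the statement is a text-based PDF rather than a scan; pdfplumber is my choice here, not something named in the thread.)

    # Pull table rows out of a text-based (non-scanned) financial statement PDF.
    # Requires: pip install pdfplumber. Scanned PDFs would need OCR instead.
    import csv
    import pdfplumber

    rows = []
    with pdfplumber.open("statement.pdf") as pdf:
        for page in pdf.pages:
            for table in page.extract_tables():
                rows.extend(row for row in table if any(row))  # skip empty rows

    with open("statement.csv", "w", newline="") as f:
        csv.writer(f).writerows(rows)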
jusomg
> Let’s be conservative and say $800/day (though I’d assume many of you charge more). The AI subscription math is a no-brainer. One afternoon saved per month = $200 in billable time. Claude Max pays for itself in 5 saved hours. Cursor pays for itself in 45 minutes.
Is the argument that by using these AI subscriptions you have free time that you didn't have before, and now you work fewer hours? Or that the extra productivity you get from the AI subscription allows you to charge more per hour? Or maybe that you can do more projects simultaneously, and therefore get more $ per day?
Otherwise I don't get how the AI subscription "pays for itself".
JimDabell
If a task that normally takes X hours without AI now takes Y hours with AI and you charge $Z/hr, then the value of AI is (X - Y) * Z for this single task. If the value across all tasks is greater than the amount you are paying for AI, then it has “paid for itself”.
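(A worked version of that break-even arithmetic, with illustrative numbers rather than anything from the article:)

    # Break-even check: (hours saved) * (hourly rate) vs. the subscription price.
    rate = 100               # $/hr billed (illustrative)
    hours_without_ai = 5     # X
    hours_with_ai = 1        # Y
    subscription = 200       # $/month, e.g. a top-tier plan

    value_per_task = (hours_without_ai - hours_with_ai) * rate     # (X - Y) * Z = $400
    tasks_per_month_to_break_even = subscription / value_per_task  # 0.5 tasks/month
    print(value_per_task, tasks_per_month_to_break_even)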
jusomg
I've never done contractor or $/hr work and I have no idea how these things work in reality, but:
If a task takes you five hours to do without AI and 1 hour with AI, charged at $100/hr, then in the without-AI case you're making $500, and in the with-AI case you're making $100 minus the price of AI, right?
Otherwise your example assumes you're charging someone 5 hours of work when in reality it took you 1 hour and then you spent an additional 4 hours watching TV.
In any case this thinking exercise made me realize that maybe it's more about staying competitive against other peers than about "AI paying for itself". If you're really charging for hours of work, then it is really a competitive advantage against people not using AI.
Assuming an AI-enhanced contractor can do the same amount of work as a non-AI-enhanced contractor in fewer hours, I'd assume they would get more contracts, because the overall project is cheaper for whoever is hiring them. Does that really lead to you making more money, though? No idea honestly. Probably not? I just can't see how using AI "pays for itself". At best you're now making less money than before, because you're required to pay for the AI subscription if you want to stay competitive.
JimDabell
It depends on the type of contracting really. Some people charge by time, some people charge by value. If you charge by value, a good way to look at it is that your hourly rate is your “costs” and anything on top is your profit margin.
prepend
It sort of seems like the author is billing for the time saved and having a coffee or something. Seems a bit off.
Paying programmers by the hour always seemed weird to me because of the productivity spikes. I prefer paying by the job, or just buying a weekly block that assumes someone works full time on my project.
I would be annoyed with a consultant who billed me for 4 hours when it was really 30 minutes of work plus an estimated savings from using AI.
lapcat
Eventually it's going to become a race to the bottom.
kissgyorgy
I suggest trying Claude Code out and seeing for yourself. Solving problems now costs a fraction of the time. What took hours or days before might now take MINUTES; it's insane.
That's why anyone suggesting AI won't take your jobs is delusional. We will need significantly less developer time.
dotancohen
Or, as has typically been the case in IT, the pie will get bigger.
Maybe I'll program my oven to cook the pizza at a slightly lower temperature, but crisp the living carbon out of it for the final 90 seconds. Just like I'm doing manually now, but less prone to errors because I ignored the pizza while browsing HN and can smell it bur.......
My pizza!
prepend
I think developers will just do more. Already, in a team of 10 programmers there are a few who don’t do very much. I think they will be gone. And a 1x programmer becomes a 10x programmer using AI.
I think there will be more work for developers, but it will be harder to slack off.
Flemlo
It's a no-brainer, I agree.
What I hate is that I will not pay more than $20-50 per month for side projects, and that gap annoys me.
hengheng
Or get a 3090 24 GB off eBay for 700€ and run Mistral, Devstral, Qwen3 and others indefinitely.
I'm a big fan of two-tier systems where the 90% case is handled by something cheap you own, and you rent the premium tier for the rest. Works for cars, server hosting, dining ...
scuderiaseb
Maybe cheaper in the long run, but heat, noise, electricity cost, and the fact that the models are not as good as Sonnet 4 and Opus 4 are some things to consider.
justlikereddit
For testing always-on concepts and long-running agent experiments, a local model will be superior.
For investigating concepts ("can AI do this at all?"), subscriptions will be a better approach.
Local models are underutilized compared to the promise they hold. Imagine a browser-agnostic, universal ad blocker, and that's just the surface.
dist-epoch
I did the math, and it just doesn't check out, especially once you account for generation time and electricity.
Even assuming you generate 100 tok/second, 24 hours a day, that's only about 8.6 million tokens: roughly $5 in Gemini 2.5 Flash tokens or $50 for 2.5 Pro. But you will not be able to run it 24/7 locally anyway: sleep, thinking between prompts, etc.
Even at continuous usage (24 hr/day), it's still cheaper to just pay Google for half a year of Gemini 2.5 Flash tokens than to buy the card.
And this is NOT taking into account that mistral/devstral/qwen3 are vastly inferior models.
How can Google be so cheap? TPUs, batching multiple requests, economy of scale, loss-leader, ...
Running stuff locally makes sense for fun, as backup for loss of connectivity, if you want to try uncensored models, but it doesn't make sense economically.
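(A back-of-the-envelope version of that comparison, using the ballpark prices quoted above rather than official list prices:)

    # Rough local-vs-API cost check, using the numbers from the comment above.
    tok_per_sec = 100
    tokens_per_day = tok_per_sec * 60 * 60 * 24   # ~8.6M tokens if generating 24/7
    flash_price_per_m = 0.60                      # ~$5 per ~8.6M tokens (ballpark)
    gpu_cost = 700                                # used 3090; treating EUR ~ USD

    api_cost_per_day = tokens_per_day / 1e6 * flash_price_per_m  # ~$5.2/day
    days_to_match_gpu_price = gpu_cost / api_cost_per_day        # ~135 days of 24/7 use
    print(tokens_per_day, api_cost_per_day, days_to_match_gpu_price)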
scosman
Slower tokens, maintenance time (installing new models, updating tools), lower SWE-bench scores. For me it’s not worth it; I optimize for speed.
If I worked on a codebase I couldn’t send to the hosted tools I’d go this route in a heartbeat.
maccard
How do you connect that 3090 to a MacBook? How does that work when I’m not tethered to my desk?
I think if you’re comparing it to Claude Max it’s OK, but the payback period against GitHub Copilot, for example, is almost 7 years.
benterix
> How do you connect that 3090 to a MacBook?
This is probably the easiest problem to solve, isn't it? Most tools offer an API endpoint setup that you can use locally and also expose on a local network easily.
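(A minimal sketch of what that can look like in practice, assuming the desktop runs an OpenAI-compatible server such as Ollama or llama.cpp's server; the LAN address, port, and model tag below are illustrative assumptions:)

    # On the MacBook: talk to the model running on the 3090 box over the LAN.
    # Assumes the desktop exposes an OpenAI-compatible endpoint, e.g. Ollama on its
    # default port 11434, started with OLLAMA_HOST=0.0.0.0 so it listens beyond localhost.
    from openai import OpenAI

    client = OpenAI(
        base_url="http://192.168.1.50:11434/v1",  # illustrative LAN address of the desktop
        api_key="unused",                         # local servers generally ignore the key
    )

    resp = client.chat.completions.create(
        model="qwen3:32b",  # whatever model tag is actually pulled on the desktop
        messages=[{"role": "user", "content": "Explain this stack trace: ..."}],
    )
    print(resp.choices[0].message.content)

Away from the desk, the usual answer is a VPN (e.g. Tailscale) or an SSH tunnel back to the desktop, rather than exposing the port to the internet.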
JKCalhoun
I haven't paid for the nicer models — now wondering if you could use less capable models to get some code up and running, then pay for better models just to "polish up" the code, get it to "final".
iLoveOncall
I tried running those locally via Ollama on my 3080 (latency doesn't matter) and the performance is abysmal. They're almost random token generators, literally unusable. My hallucination rate was near 100% for basic tasks like data extraction from markdown.
dist-epoch
Have you tried Qwen-32b-8bit? It's pretty solid. Anything lower, yeah, it's kind of garbage.
Joeboy
I don't know the author, but it seems like they're writing for a quite rarefied audience.
james-bcn
Coders that are consultants? I think that's fairly common, isn't it?
prepend
Coders who can’t code without using AI.
Joeboy
It was more the "conservatively billing $800 a day" part that seems like a different universe to me (I am a contract software developer).
msgodel
I think my employer is charging close to that for me. I don't think it's that hard to get to that point but most people are bad at selling themselves.
tarasglek
These articles seem to justify spending money without considering alternatives. It's like saying "cold meds let me get back to work, so I'm well justified paying $100/day for NyQuil".
I would appreciate less "just take my money" and more "here are the features various tools offer at a particular price; I chose x over y because z". It would sound more informed.
I would also like to see a reason for not using open source tools, and for locking yourself out of various further AI-integration opportunities because the $200/mo service doesn't support them.
samuel
Regarding the o3 "trick", if I understand it correctly, I'm trying to do something similar with the "MCP Super Assistant" paired with the DesktopCommander MCP.
I still haven't managed to make the MCP proxy server reliable enough, but it looks promising. If it works, the model would have pretty direct access to the codebase, although any tool call requires a new chat box.
I guess aider in copy-paste mode would be another solution for cheapskates like myself (not a dev and I barely do any programming, but I like to tinker).
dakiol
I don’t want to pay to use LLM tools, just like I don’t pay for Linux, Git, or Postgres. Sadly, open-source LLMs are way behind proprietary LLMs (whereas Linux/Git/Postgres are top-notch software, better than their proprietary counterparts).
terhechte
No they're not. I prefer closed-source LLMs because I'm not interested in running a small hardware park, but DeepSeek R1 0528 or Qwen3 235B are really good models. Not as good as Claude 4, but absolutely good enough for a lot of development use cases. You just have to spend money on the right hardware.
its-summertime
Having money must be nice
msgodel
I can run qwen2.5/qwen3 on my CPU and not think about it at all. No budget, no token limits, just completions whenever I want them.
dedicate
Honestly, part of me just wants to pay a subscription so I don't have to think about any of that backend stuff. Like, isn't the 'it just works and I get the latest updates without lifting a finger' a huge unspoken benefit for most people? Or am I just being lazy here?
iLoveOncall
But that's the whole issue: with those AI subscriptions you have to pay a subscription AND think about all that "backend stuff" (i.e. not exhausting your message limit, which is really low).
brettermeier
Advertisements?
> I spent around $400 on o3 last month because I was banging my head against a wall with some really tricky code. When I hit truly difficult problems, throwing o3 at them for a few hours beats getting stuck in debugging rabbit holes for days.
I wish I could get some examples of this. I remember asking seniors for help on problems and they could solve in minutes what would have taken me hours. And, likewise, I’ve had people ask me for a minute of help to stop them being blocked for days.
It surprised me that people would be blocked for days, and I wonder what that looks like. Do they just stare at the screen, walk around, play Candy Crush?
I’m trying to figure out whether the author is a junior, average, or senior programmer, and whether what they get stuck on is something a good programmer learns how to debug around. That way you could finally put an ROI on being a good programmer, by comparing the time saved to what the AI would cost.