
Claude's Max Plan


140 comments

·April 9, 2025

thomassmith65

Turned out to be four months, not one

  Leadership at this crop of tech companies is more like followership. Whether it's 'no politics', or sudden layoffs, or 'founder mode', or 'work from home'... one CEO has an idea and three dozen other CEOs unthinkingly adopt it.
  Several comments in this thread have used Anthropic's lower pricing as a criticism, but it's probably moot: a month from now Anthropic will release its own $200 model.
https://news.ycombinator.com/item?id=42333969

zamadatix

This is just pricing the same model at higher limits with a volume discount (which makes the per-usage pricing lower, not higher). To actually "release a $200 model" they'd have to make something expensive to run, like GPT-4.5, and then make the only plan that can use it worth a damn cost that much. Given how badly 4.5 flopped in terms of performance, I doubt many will go that route; it'd have to be different kinds of services not in the current plan instead.

dimitri-vs

o1-pro (and high-limit Deep Research) is technically the $200/mo offering. 4.5 is kind of a dud IMO, even in creative uses. That said, gemini-2.5-pro is nearly as good if not better than o1-pro, and now that Gemini has a Deep Research equivalent I'm finding it hard to justify the $200/mo sub.

ozten

I want different capabilities at $200.

I paid for that to get access to Deep Research from OpenAI and I feel I got more than $200 of value back out.

These companies have a hard time communicating value. Capabilities make that easier for me to understand. Rate-limiting and outages don't.

tibbon

My current dream is a model that's good at coding with a ~10M token context window. I understand Llama 4 has a window approximately that size, but I'm hearing mixed results on its coding ability.

If it had deep research and this, with a large number of API requests, I'd consider $200/month.

imiric

Has anyone found the output at these large context windows usable at all?

IME the quality of all models degrades considerably after just a few thousand tokens. Hallucinating, mixing up prompts, forgetting previous prompts, etc., all become much more likely as context size increases. I can't imagine a context of 1M tokens, let alone 10M, being usable at all. Not to mention that any query will slow to a crawl just moving that amount of data around (which still annoyingly happens on every query...).

So usually at around 10K tokens I ask it to summarize what was discussed, or I manually trim down the current state, and start a new fresh chat from there. I've found this to work much better than wasting my time fighting bad output. This is also cheaper if you're on a metered plan (OpenRouter, etc.).
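The summarize-and-restart loop described above can be sketched in a few lines. This is a minimal sketch, assuming a crude characters-per-token heuristic and a placeholder `summarize` (in practice you'd ask the model itself for the recap); the 10K budget is the rough threshold from the comment, not anything model-specific:

```python
TOKEN_BUDGET = 10_000  # rough point where output quality starts to degrade

def estimate_tokens(messages):
    # Crude heuristic: ~4 characters per token.
    return sum(len(m["content"]) for m in messages) // 4

def summarize(messages):
    # Placeholder: in practice, send the history back to the model with a
    # "summarize what was discussed" prompt and use its reply instead.
    text = " ".join(m["content"] for m in messages)
    return text[:500]

def add_message(history, role, content):
    history.append({"role": role, "content": content})
    if estimate_tokens(history) > TOKEN_BUDGET:
        # Over budget: start a fresh chat seeded with a summary of the old one.
        history = [{"role": "user", "content": "Context so far: " + summarize(history)}]
    return history

# Simulate a long conversation; the history stays bounded instead of growing.
history = []
for i in range(1200):
    history = add_message(history, "user", f"message {i}: " + "x" * 40)
```

The point is only the control flow: the history never drifts past the budget, so each request stays in the range where output quality holds up.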

vessenes

The results are not mixed, Llama 4 is terrible at coding. I agree on longer context window being the dream.

ZeroCool2u

I mean, with Gemini 2.5 Pro you get a 2 million token context window and by far my favorite coding model.

sva_

I just subscribed to the free trial yesterday, and I've been hooked tbh. I haven't subscribed to any of the other LLM companies so far. I hope something else comes out within a month because I really don't want to spend 22 Euro per month for it.

The 1M context window (2M?) really sets it apart.


siva7

Has someone tried the 2m context window for a code base and can report how it compares over claude or o1?

matwood

I waited until Deep Research came to the normal paid plans, but it's been very useful the times I have thought to use it.

mvdtnz

I have been a heavy user of Claude but cancelled my Pro subscription yesterday. The usage limits have been quietly tightened up like crazy recently and 3.7 is certainly feeling dumber lately.

But the main reason I quit is the constant downtime. Their status page[0] is like a Christmas tree but even that only tells half the story - the number of times I have input a query only to have Claude sit, think for a while then stop and return nothing as if I had never submitted at all is getting ridiculous. I refuse to pay for this kind of reliability.

[0] https://status.anthropic.com/

bayarearefugee

Also cancelled Claude Pro due to a combination of unreasonable amounts of downtime and Gemini 2.5 just doing a better job for me.

I'd estimate there's probably like another 6 months to a year where I'll jump around to whatever 'cloud' LLM is currently winning on quality of the model (in terms of usefulness as a coding assistant) and just basic UI/availability and then I'll build a locally hosted system and just use that.

I certainly don't have anything like brand loyalty to any of them, so I'm down for a race to the bottom.

bakugo

Can confirm. As an API user, "overloaded" errors have been happening pretty often lately.

tesch1

At 99.38%+ uptime, it seems like the reds in the xmas tree are OR'd into the bars rather than averaged in, making them look worse than they actually are, which is refreshingly honest to see in an uptime monitor.
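For concreteness, a figure like 99.38% works out to only about nine minutes of downtime in a day, yet OR-style rendering paints the whole bar red. The numbers below are illustrative, not taken from Anthropic's actual status data:

```python
# Two ways a status page can render a day's bar: averaging uptime across the
# day vs. OR-ing incidents (red if any component was down at any point).
# Illustrative numbers, not Anthropic's actual data.

day_minutes = 24 * 60
downtime_minutes = 9  # one short incident

uptime_pct = 100 * (day_minutes - downtime_minutes) / day_minutes
bar_is_red = downtime_minutes > 0  # OR semantics: any incident flips the bar

print(f"{uptime_pct:.2f}% uptime, but the bar still shows red: {bar_is_red}")
```

Averaging would show a bar that's 99%+ green; OR-ing shows it fully red, which is why the tree looks worse than the uptime number suggests.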

mohsen1

I cancelled my OpenAI plan. Gemini 2.5 Pro is extremely good compared to OpenAI and Anthropic models. Unless things change, I don't see why I'd keep paying those subscription fees.

nanfinitum

Yeah, I'm not really sure what the long play is here. $200 is what I spend on groceries for an entire month.

paulddraper

I'm impressed.

A pound of chicken breast, a pound of apples, and a third of a loaf of bread cost at least $7. And that's only 1,500 kcal.

throwup238

A ten pound bag of russet potatoes costs $2-3 here (high-CoL SoCal), and that's >3,000 kcal. A four pound bag of pinto beans is $4, and that's >5,000 kcal. That's four days of 2,000 kcal per day for $7. Likewise, 32,000 kcal of rice at Costco is $24, so it gets even cheaper when you buy those 20-40 pound bags. The same goes for quinoa, lentils, and all kinds of other staples. Base caloric requirements are really cheap to cover with the basics and should cost $50-60/mo. The rest can be spent on fresh meat, veggies, and fruit.

Under $200/mo is relatively easy to achieve as long as you know how to cook or can tolerate a repetitive diet. Stretching it to $250-300/mo takes it up a notch and makes it a very balanced and varied diet with whatever fruit and vegetables you want. I only run it up to $300/mo when I buy higher quality meats at Costco and eat an avocado a day.
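The staples arithmetic above checks out; a quick back-of-the-envelope using the comment's own rough prices and calorie counts (these are the commenter's figures, not authoritative nutrition data):

```python
# Rough prices and calorie counts quoted above (not authoritative figures).
staples = {
    "10 lb russet potatoes": (3, 3_000),  # (price in USD, kcal)
    "4 lb pinto beans": (4, 5_000),
}

total_price = sum(price for price, _ in staples.values())
total_kcal = sum(kcal for _, kcal in staples.values())
days_covered = total_kcal / 2_000              # 2,000 kcal/day baseline
monthly_cost = 30 / days_covered * total_price  # scale to a 30-day month

print(f"${total_price} covers {days_covered:.0f} days; "
      f"~${monthly_cost:.0f}/mo for base calories")
```

That lands squarely in the $50-60/mo range claimed for covering base calories with staples alone.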

npteljes

Definitely highly location dependent. In Hungary we spend ~3 times that for 2 people. And I definitely don't buy the cheapest. So to me, $200 looks realistic.

fragmede

COL isn't the same everywhere. That $8 chicken at the downtown San Francisco Whole Foods (the one that closed) is $4 elsewhere, and those differences add up.

yzydserd

Probably spends another $600 on takeout.


AaronAPU

Interesting, I’ve been using o1-pro and Gemini 2.5 Pro with identical profiles and prompts and o1-pro has won every single time without exception.

Where "win" means the problem I set out to solve was solved and passed tests, with both models aware of the tests.

boringg

How is Gemini 2.5 Pro v Deep Research? I've found that function on OpenAI quite impressive.

simonw

They upgraded Gemini Deep Research to use the 2.5 model a few days ago and the quality shot up - I've seen a bunch of people comparing the new version favorably to OpenAI's. I agree that it's as good and maybe even better now.

joshstrange

I really like Claude and have the money to spend on something like this if I wanted to but it's not compelling at all.

No new models, no new capabilities, just higher limits. Which I know some people are asking/begging for but that's not been an issue for me. If I needed more I'd probably use the API.

I only continue to pay because some of the features in Claude Web are better than what I've seen elsewhere, but their latest web redesign is making me seriously reconsider that stance. It's /bad/: it breaks scrolling while it's generating a response, breaks copy/paste/selection, etc. It's incredibly hostile, and I have to assume anyone seriously using Claude internally is using a different client.

gwd

> If I needed more I'd probably use the API.

The API gets overloaded and you get blocked out as well; yesterday I had to go look up a bunch of bash runes on stackoverflow because Claude's API was busy in the one day a year* I happened to be writing bash scripts. (Maybe I should have tried out Gemini for a change.)

Part of the promise here isn't just usage, but priority: you pay your $200/mo, and (I presume) you never have to worry about being locked out.

* This is an exaggeration but not much of one
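Until priority access is a real thing, a retry-with-backoff wrapper is the usual workaround for transient overload errors. This is a generic sketch: `OverloadedError` is a stand-in for whatever "overloaded" error your client actually raises, and real SDKs often ship their own retry logic, so treat it as illustrative:

```python
import random
import time

class OverloadedError(Exception):
    """Stand-in for whatever 'overloaded'-style error your API client raises."""

def with_backoff(fn, max_attempts=5, base_delay=1.0):
    """Call fn(), retrying on OverloadedError with exponential backoff and jitter."""
    for attempt in range(max_attempts):
        try:
            return fn()
        except OverloadedError:
            if attempt == max_attempts - 1:
                raise  # out of attempts, surface the error to the caller
            # Sleep base_delay * (1, 2, 4, ...) plus a little jitter so
            # retries from many clients don't all land at the same instant.
            time.sleep(base_delay * (2 ** attempt + random.random()))
```

It won't help during a sustained outage, but it smooths over the brief capacity blips that show up as one-off "overloaded" responses.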

attentive

You can use Sonnet 3.7 on AWS or Google Cloud and you shouldn't hit any reasonable limits.

sharkjacobs

I use the free model and API, when I look at the pricing page I see (for the extant $20 Pro plan)

    - More usage
    - Access to Projects to organize chats and documents
    - Ability to use more Claude models
    - Extended thinking for complex work
What does "more usage" mean? It doesn't say anywhere what the free tier usage limits are. What are the "more" models? It also doesn't make clear which models are available with each tier (except for "extended thinking", which is a separate bullet point).

The only rationale I can see is that they want to keep this vague so they don't have to update their marketing copy each time they update their offerings, but that's absurd.

jsheard

> It doesn't say anywhere what the free tier usage limits are.

Ah, I see they've been to the Cloudflare school of free tier bait-and-switching.

pdyc

Can you elaborate? Do you mean CF Workers or the pro plans?

jsheard

I mean how CF advertises "unlimited bandwidth" and "unmetered DDoS protection" on their free and flat-rate paid plans, which abruptly turns into not-so-unlimited pay-per-gigabyte if they decide you're using them too much - but you're not allowed to know where that line is, so you can't plan around it. It's a fun surprise where they suddenly ask you to pay 10x more out of nowhere.

mathgeek

mvdtnz

5x what though? Even if we assume this is the true figure, everyone who uses Claude regularly knows that usage limits fluctuate over the course of days and weeks.

fragmede

Not just limits, but response times. I swear there's a cron job or something that kicks in at 2300 Pacific, because Claude backs up right around then.

section_me

Surge pricing? I'm not sure if my remark is sarcasm or a prediction on the future.

miki123211

"more" usage likely means that they have a limited number of GPUs, and what models you get access to depends on how much you've used them recently, but also on how busy the GPUs are at this moment.

This is also how batching works for API users. If you don't need the results immediately, you can give them a batch with an attached 24-hour deadline, and they'll slot you in whenever they expect low usage, in exchange for better prices.
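Modeled on Anthropic's Message Batches API, the request shape looks roughly like the sketch below. The field names are paraphrased from memory of the docs, so treat them as assumptions and check the current API reference; the builder is separated out so the shape can be inspected without network access:

```python
# Request shape modeled on Anthropic's Message Batches API (field names are
# assumptions from memory of the docs; verify against the current reference).

def build_batch_requests(prompts, model="claude-3-7-sonnet-latest"):
    return [
        {
            "custom_id": f"req-{i}",  # used to match results back to requests
            "params": {
                "model": model,
                "max_tokens": 1024,
                "messages": [{"role": "user", "content": p}],
            },
        }
        for i, p in enumerate(prompts)
    ]

requests = build_batch_requests(["Summarize this repo", "Review this diff"])

# With the SDK installed and an API key set, submission would look roughly like:
#   client = anthropic.Anthropic()
#   batch = client.messages.batches.create(requests=requests)
# Results come back within the 24-hour window at a discounted rate.
```

The trade is exactly what the comment describes: you give up immediacy, and the provider slots your work into whatever capacity trough suits them.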

prompt_overflow

F

I can't believe we're moving to high-priced subscriptions. This makes me think we need open-source models like Qwen and DeepSeek more than ever.

Pikamander2

On the other hand, the free options are better than ever. I've only used the free versions of ChatGPT, Claude, and ImageFX and have been impressed by how much you get without spending a dime. The only real limitation I regularly feel is not being able to upload something like a 1GB CSV file for analysis, but I guess that's a fair restriction for a free web tool.

johnisgood

I am impressed by the free version of ChatGPT, not so much with Claude's (claude.ai). I love Claude Pro, more so than the paid version of GPT, but damn, the free version of Claude is awful, you reach the limit extremely quickly. The free version of ChatGPT works quite well.

yahoozoo

It’s because these companies are burning billions of dollars.

int_19h

The problem is that comparable-quality open models will still require lots of hardware to run at reasonable speeds, so even then it's not cheap.

jtwaleson

A bit off-topic, but I wonder when Cursor is going to see a massive price increase. I've tried Aider (granted, this was when GPT-4 prices were still much higher) and easily spent $10 in one hour. Now I use Cursor a LOT (100h of focused programming time per month) and mostly stay within my $20 monthly fee. I think they must be losing money on customers like me.

aitchnyu

Are they an LLM wrapper or do they have their own models? I feel Sonnet 3.7 should cost $30 per month for regular coding, and Refact.ai's pricing ($10, $20, $30 - most popular - $40, $50 and above for higher prepaid limits) reflects that.

danialtz

It's already super expensive if you want to get actual work done using their max model. On a midsize project, it cost ca. €200 for my vibe coding tests to get something reasonable done (each call, each tool use costs 0.05c). Their normal Claude context window is super short and almost unusable for serious work. Stack was Python and Nuxt.

SkyPuncher

> to get an actual work done using their max model

I get plenty of work done without the max model.

The key is breaking out a planning step before implementation.

ZeroCool2u

Have you tried Roo Code or Cline instead? I never felt the need to use Cursor after trying them and they're just extensions, so it's easy to install and use your own API key.

jtwaleson

How bad is the bill?

seunosewa

With Gemini 2.5 the bill is zero for now.

aranw

I currently pay for Claude and I still find it best for coding, but it's definitely a bit unreliable at times: I find it rate-limited, response-limited, or simply unresponsive. I wouldn't upgrade to the $100/$200 packages unless reliability improved on the $20 offering first.

Maxious

As per https://www.anthropic.com/pricing, the reliability of the $20 offering will deteriorate further to prioritize $100+ package traffic: "Priority access during high traffic periods".

creddit

I had not had almost any issues with reliability until recently and suddenly I'm getting transient UI and server issues on a daily basis. Not happy with it.

Even worse, I bought a yearly subscription from a deal they offered. Now that reliability will be going down, I'm feeling a bit scammed!

mrcwinn

I would rather pay for o1-pro and also have access to the fantastic 4o image generation. o1-pro feels, to me, far ahead for complex coding tasks.

I do really prefer Claude’s unified model interface. Hopefully OpenAI improves that soon. Their product UX is a mess.

petercooper

I'd rather just be able to paste an API key into the desktop client and PAYG it.

ChadMoran

I basically do this using self-hosted LibreChat. Can even use MCP with it.

fathermarz

My experience with Claude 3.7 with thinking has been incredible for coding tasks. I did not find the same level of success with Gemini even though the context window is nice.

Before they rolled this out, I rarely hit usage limits. Now it seems the usage limits have been lowered for Pro to add more value to Max. That is a less than ideal experience for users.

I agree with what most comments here are saying, that there should be more than just usage limits, and I hope this changes (as it likely will, since competition remains strong).