Claude Code Unleashed
7 comments
July 15, 2025

cheema33
This article reads like an ad for the author’s product.
futuraperdita
This article is an ad for the author's product.
chadcmulligan
How are people using Claude heavily enough to burn through this much API? I use it to write the occasional bit of code, and the free tier seems enough - I've only once been told to come back tomorrow.
serf
Write any project with a lot of math and memory shuffling and it will generally start eating lots of tokens.
Write any project with a lot of interactive 'dialogues', or exacting and detailed comments, and it eats a lot of tokens too.
My record for tapping out the Claude Max API quickly was sprint-coding a poker solver and an accompanying web front end with Opus. The backend had a lot of GPGPU stuff going on, and the front end was extremely verbose, with a wordy UI/UX.
0_gravitas
How much does it seem like this will be affected by the recent headline saying that Max rate-limits are getting shadow-tightened?
ipnon
It's pretty clear to me that every member of Big Token is converging on practically identical models, and they're now just competing on compute efficiency based on their particular capital situations. So in the short term, products like Terragon might be nerfed. But in a year or two (2027!) it's hard to imagine OpenAI, Google, Anthropic, Mistral, and so on not all having terminal modes with automatic background-agent creation and separate Git branching. At that point it's a race to the bottom on consumer prices.
I have been using Claude Code full-time for about 6 weeks* with the $20/month subscription, trying out building different products from ideas I have already had. It frees up a lot of my time to talk about my founder journey.
I have not needed multiple agents or running CC over an SSH terminal overnight. The main reason is that LLMs are often wrong, so I still need time to test: run the whole app, check what broke in CI (GitHub Actions), and so on. I do not go through code line by line anymore, and I organize work with tickets (sometimes created with CC too).
Both https://github.com/pixlie/Pixlie and https://github.com/pixlie/SmartCrawler are vibe coded (barely any code that I wrote). With LLMs you can generate code 10x faster than writing it manually, which means you can also get 10x the errors, so the manual checks take some time.
Our existing engineering practices are very helpful when generating code with LLMs, and I do not have the mental bandwidth to review a mountain of code. I am not sure that scaling out LLMs will help in building production-quality software. I already see CC make really poor guesses sometimes; imagine many such guesses in parallel, daily.
edit: typo - months/weeks