How Anthropic teams use Claude Code
214 comments
July 25, 2025
minimaxir
A repeated trend is that Claude Code only gets 70-80% of the way, which is fine and something I wish was emphasized more by people pushing agents.
This bullet point is funny:
> Treat it like a slot machine
> Save your state before letting Claude work, let it run for 30 minutes, then either accept the result or start fresh rather than trying to wrestle with corrections. Starting over often has a higher success rate than trying to fix Claude's mistakes.
That's easy to say when the employee is not personally paying the massive amount of compute running Claude Code for a half-hour.
throwmeaway222
Thanks for the tip - we employees should run and re-run the code generation hundreds of times even if the changes are pretty good. That way, the brass will see a huge bill without many actual commits.
Sorry boss, it looks like we need to hire more software engineers since the AI route still isn't mathing.
mdaniel
> we employees should run and re-run the code generation hundreds of times
Well, Anthropic sure thinks that you should. Number go up!
drewvlaz
One really has to wonder what their actual margins are though, considering the Claude Code plans vs API pricing
wahnfrieden
It is accurate though. I even run multiple attempts in parallel. Which is a strategy that can work with human teams too.
godelski
Unironically this can actually be a good idea. Instead of "rerunning," run in parallel. Then pick the best solution.
Pros:
- Saved Time!
- Scalable!
- Big Bill?
Cons:
- Big Bill
- AI written code
a_bonobo
This repo has a pattern where the parallel jobs have different personalities: https://github.com/tokenbender/agent-guides/blob/main/claude...
yodsanklai
Usually, when you re-run, you change your prompt based on the initial results. You can't just run several tasks in parallel hoping for one of them to complete.
DANmode
Have you seen human-written code?
gmueckl
Data centers are CapEx, employees are OpEx. Building more data centers is cheap. Employees can always supervise more agents...
zer00eyz
> Data centers are CapEx
Except the power and cooling demands of the current crop of GPUs mean you are not fitting full density in a rack. There is a real material increase in fiber use because your equipment is now more distributed (and 800 Gbps interconnects are NOT cheap).
You can't capitalize power costs: this is now a non-trivial cost to account for. And the more power you use for compute, the more power you have to use for cooling... (Power density is now so high that cooling with something other than air is looking not just attractive but like it is going to be a requirement.)
Meanwhile the cost of lending right now is high compared to recent decades...
The accounting side of things isn't as pretty as one would like it to be.
Graziano_M
Don’t forget to smash the power looms as well.
aprilthird2021
Is it OK to be a Luddite?
https://archive.nytimes.com/www.nytimes.com/books/97/05/18/r...?
preommr
> A repeated trend is that Claude Code only gets 70-80% of the way, which is fine and something I wish was emphasized more by people pushing agents.
I have been pretty successful at using LLMs for code generation.
I have a simple rule that something is either 90%>ai or none at all (excluding inline completions and very obvious text editing).
The model has an inherent understanding of some problems due to its training data (e.g. setting up a web server with little to no deps in golang), which it can do with almost 100% certainty. It's really easy to blaze through those in a few minutes, and then I can set up the architecture for some very flat code flows. This can genuinely improve my output by 30%-50%.
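For illustration, the kind of near-zero-dependency Go server I mean is roughly this (just a sketch; the route and handler names are made up):

    package main

    import (
        "encoding/json"
        "log"
        "net/http"
    )

    // healthHandler is an illustrative endpoint; the point is that the
    // whole server uses only the standard library.
    func healthHandler(w http.ResponseWriter, r *http.Request) {
        w.Header().Set("Content-Type", "application/json")
        json.NewEncoder(w).Encode(map[string]string{"status": "ok"})
    }

    func main() {
        mux := http.NewServeMux()
        mux.HandleFunc("/healthz", healthHandler)

        log.Println("listening on :8080")
        if err := http.ListenAndServe(":8080", mux); err != nil {
            log.Fatal(err)
        }
    }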
MPSimmons
Agree with your experiences. I've also found that if I build a lightweight skeleton of the structure of the program, it does a much better job. Also, ensuring that it does a full-fledged planning/non-executing step before starting to change things leads to good results.
I have been using Cline in VSCode, and I've been enjoying it a lot.
randmeerkat
> I have a simple rule that something is either 90%>ai or none at all…
10% is the time it works 100% of the time.
maerch
> A repeated trend is that Claude Code only gets 70-80% of the way, which is fine and something I wish was emphasized more by people pushing agents.
Recently, I realized that this applies not only to the first 70-80% of a project but sometimes also to the final 70-80%.
I couldn’t make progress with Claude on a major refactoring from scratch, so I started implementing it myself. Once I had shaped the idea clearly enough but in a very early state, I handed it back to Claude to finish and it worked flawlessly, down to the last CHANGELOG entry, without any further input from me.
I saw this as a form of extensive guardrails or prompting-by-example.
bavell
I need to try this - started using Claude Code a few days ago and have been struggling to get good implementations with some high-complexity refactors. It keeps over-engineering and creating more problems than it solves. It's getting close though, and I think your approach would work very well for this scenario!
LeafItAlone
The best way I've found to interact with it is to treat it like an overly eager junior developer who just read books like Gang of Four and feels the need to prove their worth as a senior. Explain that simplicity matters, or that you have an existing pattern to follow, or something even more specific.
Having worked with a number of people like the one described above, the way I've learned to work with them has helped me get better results from LLMs for coding. The difference is that you can help a junior grow over time; an LLM forgets once the context is gone (CLAUDE.md helps, but it's not perfect).
theshrike79
Claude has a tendency to reinvent the wheel heavily.
It'll create a massive bespoke class to do something that is already in the stdlib.
But if there's a pattern of already using stdlib functions, it can copy that easily.
golergka
That’s why I like using it and get more fulfilment from coding than before: I do the fun parts. AI does the mundane.
benreesman
The slot machine thing has a pretty compelling corollary: crank the formal systems rigor up as high as you can.
Vibe coding in Python is seductive but ultimately you end up in a bad place with a big bill to show for it.
Vibe coding in Haskell is a "how much money am I willing to pour in per unit clean, correct, maintainable code" exercise. With GHC cranked up to `-Wall -Werror` and some nasty property tests? Watching Claude Code try to weasel out with a mock goes from infuriating to amusing: bam, unused parameter! Now why would the test suite be demanding that a property holds on an unused parameter...
And Haskell is just an example; TypeScript is in some ways even more powerful in its type system, so lots of projects have scope to dabble with what I'm calling "hyper modern vibe coding": just start putting a bunch of really nasty fast-check and generic bounds on stuff and watch Claude Code try to cheat. Your move, Claude Code, I know you want to check off that line on the TODO list like I want to breathe, so what's it gonna be?
I find it usually gives up and does the work you paid for.
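The same pressure works in plainer setups too. Here's a rough sketch of the idea using Go's stdlib testing/quick (a toy example of mine, not my actual Haskell/TypeScript setup): a property that a lazy stub can't satisfy.

    package reverse_test

    import (
        "testing"
        "testing/quick"
    )

    // Reverse is the function under test; in a real project it would
    // live in the package being developed.
    func Reverse(s []int) []int {
        out := make([]int, len(s))
        for i, v := range s {
            out[len(s)-1-i] = v
        }
        return out
    }

    // Property: Reverse(a ++ b) == Reverse(b) ++ Reverse(a).
    // A lazy "return the input unchanged" implementation fails this
    // almost immediately on random inputs.
    func TestReverseConcat(t *testing.T) {
        prop := func(a, b []int) bool {
            got := Reverse(append(append([]int{}, a...), b...))
            want := append(Reverse(b), Reverse(a)...)
            if len(got) != len(want) {
                return false
            }
            for i := range got {
                if got[i] != want[i] {
                    return false
                }
            }
            return true
        }
        if err := quick.Check(prop, nil); err != nil {
            t.Error(err)
        }
    }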
kevinventullo
Interesting, I wonder if there is a way to quantify the value of this technique. Like give Claude the same task in Haskell vs. Python and see which one converges correctly first.
AzzyHN
Not to mention, if an employee could usually write pretty good code but maybe 30% of the time they wrote something so non-functional it had to be entirely scrapped, they'd be fired.
melagonster
But what if he only wants to pay $20/month?
FeepingCreature
Yeah my most common aider command sequence is
> /undo
> /clear
> ↑ ↑ ↑ ⏎
threatofrain
This is an easy calculation for everyone. Think about whether Claude is giving you a sufficient boost in performance, and if not... then it's too expensive. No doubt some people are in some combination of domain, legacy, complexity of codebase, etc., where Claude just doesn't cut it.
bdangubic
> That's easy to say when the employee is not personally paying the massive amount of compute running Claude Code for a half-hour.
you can do the same for $200/month
tough
it has limits too, it only lasted like 1-2 weeks (for me personally at least)
artvandelai
The limits are in 5 hour windows. You'd have to heavily work on 2+ projects in that window to hit the limit using ~500k tokens/min for around 4.5 hours, and even then it'll reset on the next window.
bdangubic
with all due respect you really need to learn the tools you are using, which includes any usage limits (which are temporal). I run CC in 4 to 8 terminals my entire workday, every workday…
tomlockwood
Yeah sweet what's the burn rate?
forgotmypw17
I’ve implemented and maintained an entire web app with CC, and also used many other tools (and took classes and taught workshops on using AI coding tools).
The most effective way I’ve found to use CC so far is this workflow:
Have a detailed and also compressed spec in an md file. It can be called anything, because you’re going to reference it explicitly in every prompt. (CC usually forgets about CLAUDE.md ime)
Start with the user story, and ask it to write a high-level staged implementation plan with atomic steps. Review this plan and have CC rewrite as necessary. (Another md file results.)
Then, based on this file, ask it to write a detailed implementation plan, also with atomic stages. Then review it together and ask if it’s ready to implement.
Then tell Claude to go ahead and implement it on a branch.
Remember the automated tests and functional testing.
Then merge.
stillsut
Great advice, matches up to my experience. Personally I go a little cheaper and dirtier on the first prompt, then revise as needed. By the way what classes / workshops did you teach?
I've written a little about some of my findings and workflow in detail here: https://github.com/sutt/agro/blob/master/docs/case-studies/a...
forgotmypw17
Thank you for sharing. I taught some workshops on AI-assisted development using Cursor and Windsurf for MIT students (we built an application and wrote a book) and TAed another similar for-credit course. I've also been teaching high schoolers how to code, and we use ChatGPT to help us understand and solve leetcode problems by breaking them down into smaller exercises. There's also now a Harvard CS course on developing with GenAI which I followed along with. The field is exploding.
stillsut
> ...development using Cursor and Windsurf for MIT students...
Richard Stallman is rolling in his grave with this.
But in all seriousness, nice work, I think this _is_ where the industry is going, hopefully we don't have to rely on using proprietary models forever though.
beambot
Can you provide any source materials - course notes, book, etc?
amedviediev
This matches my experience as well. But what I also found is that I hate this workflow so much that I would almost always rather write the code by hand. Writing specs and user stories was always my least favorite task.
AstroBen
Is working this way actually faster, or any improvement than just writing the code yourself?
forgotmypw17
Much, much faster, and I’d say the code is more formal, and I’ve never had such a complete test suite.
The downside is I don’t have as much of a grasp on what’s actually happening in my project, while with hand-written projects I’d know every detail.
AstroBen
How do you know the test suite is comprehensive if you don't have a grasp on what's happening?
Not a gotcha, I'm just extremely skeptical that AI is at the point where it can carry the level of responsibility you're describing and have it turn into good code long term.
eagerpace
Do you have an example of this you could share?
stillsut
I can share my own ai-generated codebase:
- there's a devlog showing all the prompts and accepted outputs: https://github.com/sutt/agro/blob/master/docs/dev-summary-v1...
- and you can look at the ai-generated tests (as is being discussed above) and see they aren't very well thought out for the behavior, but are syntactically impressive: https://github.com/sutt/agro/tree/master/tests
- check out the case-studies in the docs if you're interested in more ideas.
Nimitz14
We're all gonna be PMs.
forgotmypw17
That’s basically what it amounts to, being a PM to a team of extremely cracked and also highly distractable coders.
jasonthorsness
Claude Code works well for lots of things; for example yesterday I asked it to switch weather APIs backing a weather site and it came very close to one-shotting the whole thing even though the APIs were quite different.
I use it at home via the $20/m subscription and am piloting it at work via AWS Bedrock. When used with Bedrock APIs, at the end of every session it shows you the dollar amount spent which is a bit disconcerting. I hope the fine-grained metering of inference is a temporary situation otherwise I think it will have a chilling/discouraging effect on software developers, leading to less experimentation and fewer rewrites, overall lower quality.
I imagine Anthropic gets to consume it unmetered internally, so they probably avoid this problem completely.
spike021
a couple weekends ago i handed it the basic MLB api and asked it to create some widgets for MacOS to show me stuff like league/division/wildcard standings along with basic settings to pick which should be shown. it cranked out a working widget in like a half hour with minimal input.
i know some swift so i checked on what it was doing. for a quick hack project it did all the work and easily updated things i saw issues with.
for a one-off like that, not bad at all. not too dissimilar from your example.
corytheboyd
> it shows you the dollar amount spent which is a bit disconcerting
I can assure you that I don’t at all care about the MAYBE $10 charge my monster Claude Code session billed the company. They also clearly said “don’t worry about cost, just go figure out how to work with it”
lumost
once upon a time - engineers often had to concern themselves with datacenter bills, cloud bills, and eventually SaaS bills. We'll probably have 5-10 years of being concerned about AI bills before the AI expense is trivial compared to the human time.
achierius
"once upon a time"? Engineers concern themselves with cloud bills right now, today! It's not a niche thing either, probably the majority of AWS consumers have to think about this, regularly.
whatever1
This guy has not been hit with a 100k/mo cloudwatch bill
lumost
I’ve managed a few 8 figure infrastructure bills. The exponential decrease in hardware costs combined with the exponential growth in software engineer salaries has meant that these bills become inconsequential in the long run. I was at one unicorn which had to spend 10% of Cost of Goods Sold on cloud. Today their biggest costs could run on a modest Postgres cluster in the cloud thanks to progressively better hardware.
100k/mo on cloud watch corresponds to a moderately large software business assuming basic best practices are followed. Optimization projects can often run into major cost overruns where the people time exceeds the discounted future free cash flow savings from the optimization.
That being said, a team of 5 on a small revenue/infra spend racking up 100k/mo is excessive. Pedantically, cloud watch/datadog are SaaS vendors - 100k/mo on Prometheus would correspond to a 20 node SSD cluster in the cloud which could easily handle several 10s of millions of metrics per second from 10s of thousands of metric producers. If you went to raw colocation facility costs - you’d have over a hundred dual Xeon machines with multi-TB direct attached SSD. Supporting hundreds of thousands of servers producing hundreds of millions of data points per second.
Human time is really the main trade-off.
fragmede
Datadog has entered the chat.
philomath_mn
AI bills are already trivial compared to human time. I pay for claude max, all I need to do is save an hour a month and I will be breaking even.
byzantinegene
on the other hand, it could also mean you are overpaid
oblio
$200/h * 8 * 5 * 4 * 12 = $384,000 per year.
You're like in the top 0.05% of earners in the software field.
Of course, if you save 10 hours per month, the math starts making more sense for others.
And this is assuming LLM prices are stable, which I very much doubt they are, since everyone is price dumping to get market share.
nextworddev
You will start seriously worrying about coding AI bills within 6 months
alwillis
> You will start seriously worrying about coding AI bills within 6 months
Nope.
More open models ship everyday and are 80% cheaper for similar and sometimes better performance, depending on the task.
You can use Qwen-3 Coder (a 480 billion parameter model with 35 billion active per forward pass, i.e. 8 out of 160 experts) for $0.302/M input tokens and $0.302/M output tokens via OpenRouter.
Claude 4 Sonnet is $3/M input tokens and $15/M output tokens.
Several utilities will let you use Claude Code with these models at will.
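Back-of-the-envelope, purely illustrative numbers: a session that burns 2M input tokens and 200k output tokens works out to about 2 × $0.302 + 0.2 × $0.302 ≈ $0.66 on Qwen-3 Coder, versus 2 × $3 + 0.2 × $15 = $9.00 on Claude 4 Sonnet at the prices above, roughly a 13x gap.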
philomath_mn
Why is that?
duped
Meanwhile I ask it to write what I think are trivial functions and it gets them subtly wrong, but obvious in testing. I would be more suspicious if I were you.
theshrike79
Ask it to write tests first and then implement based on the tests, don't let it change the tests.
ActionHank
On more than one occasion I've asked for a change, indicated that the surrounding code was under test, told it not to change the tests, and off it goes making something that is slightly wrong and rewriting the tests to compensate.
lovich
> I use it at home via the $20/m subscription and am piloting it at work via AWS Bedrock. When used with Bedrock APIs, at the end of every session it shows you the dollar amount spent which is a bit disconcerting. I hope the fine-grained metering of inference is a temporary situation otherwise I think it will have a chilling/discouraging effect on software developers, leading to less experimentation and fewer rewrites, overall lower quality.
I’m legitimately surprised at your feeling on this. I might not want the granular cost put in my face constantly but I do like the ability to see how much my queries cost when I am experimenting with prompt setup for agents. Occasionally I find wording things one way or the other has a significantly cheaper cost.
Why do you think it will lead to a chilling effect instead of the normal effect of engineers ruthlessly innovating costs down now that there is a measurable target?
mwigdahl
I’ve seen it firsthand at work, where my developers are shy about spending even a single digit number of dollars on Claude Code, even when it saves them 10 times that much in opportunity cost. It’s got to be some kind of psychological loss aversion effect.
jasonthorsness
I think it’s easy to spend _time_ when the reward is intangible or unlikely, like an evening writing toy applications to learn something new or prototyping some off-the-wall change in a service that might have an interesting performance impact. If development becomes metered in both time and to-the-penny dollars, I at least will have to fight the attitude that the rewards also need to be more concrete and probable.
ants_everywhere
I've been trying Claude Code for a few weeks after using Gemini Cli.
There's something a little better about the tool-use loop, which is nice.
But Claude seems a little dumber and is aggressive about "getting things done", often ignoring common sense or explicit instructions or design information.
If I tell it to make a test pass, it will sometimes change my database structure to avoid having to debug the test. At least twice it deleted protobufs from my project and replaced them with JSON because it struggled to immediately debug a proto issue.
adregan
I’ve seen Claude code get halfway through a small sized refactor (function parameters changed shape or something like that), say something that looks like frustration at the amount of time it’s taking, revert all of the good changes, and start writing a bash script to automate the whole process.
In that case, you have to put a stop to it and point out that it would already be done if it hadn't decided to blow it all up in an effort to write a one-time-use codemod. Of course it agrees with that point, as it agrees with everything. It's the epitome of strong opinions loosely held.
animex
I just had the same thing happen. Some comprehensive tests were failing, and it decided to write a simpler test rather than investigate why the more complicated tests were failing. I wonder if the team is trying to save compute by urging it to complete tasks more quickly! Claude seems to be under a compute crunch, as I often get API timeouts/errors.
jonstewart
The hilarious part I’ve found is that when it runs into the least bit of trouble with a step on one of its plans, it will say it has been “Deferred” and then make up an excuse for why that’s acceptable.
It is sometimes acceptable for humans to use judgment and defer work; the machine doesn’t have judgment so it is not acceptable for it to do so.
physix
Talking about hilarious, we had a Close Encounter of the Hallucinating Kind today. We were having mysterious simultaneous gRPC socket-closed exceptions on the client and server side running in Kubernetes talking to each other through an nginx ingress.
We captured debug logs, described the detailed issue to Gemini 2.5 Flash giving it the nginx logs for the one second before and after an example incident, about 10k log entries.
It came back with a clear verdict, saying
"The smoking gun is here: 2025/07/24 21:39:51 [debug] 32#32: *5902095 rport:443 upstream timed out (110: Connection timed out) while reading response header from upstream, client: 10.233.100.128, server: grpc-ai-test.not-relevant.org, request: POST /org.not-relevant.cloud.api.grpc.CloudEventsService/startStreaming HTTP/2.0, upstream: grpc://10.233.75.54:50051, host: grpc-ai-test.not-relevant.org"
and gave me a detailed action plan.
I was thinking this is cool, don't need to use my head on this, until I realized that the log entry simply did not exist. It was entirely made up.
(And yes I admit, I should know better than to do lousy prompting on a cheap foundation model)
quintu5
My favorite is when you ask Claude to implement two requirements and it implements the first, gets confused by the second, removes the implementation for the first to “focus” on the second, and then finishes having implemented nothing.
theshrike79
This is why you ask it to do one thing at a time.
Then clear the context and move on to the next task. Context pollution is real and can hurt you.
fragmede
After the first time that happened, why would you continue to ask it to do two things at once?
aaronbrethorst
The implementation is now enterprise grade with robust security, :rocketship_emoji:
ants_everywhere
Oh yeah totally. It feels a bit deceptive sometimes.
Like just now it says "great the tests are consistently passing!" So I ran the same test command and 4 of the 7 tests are so broken they don't even build.
enobrev
I've noticed in the "task complete" summaries, I'll see something like "250/285 tests passing, but the broken tests are out of scope for this change".
My immediate and obvious response is "you broke them!" (at least to myself), but I do appreciate that it's trying to keep focused in some strange way. A simple "commit, fix failing tests" prompt will generally take care of it.
I've been working on my "/implement" command to do a better job of checking that the full test suite is all green before asking if I want to clear the task and merge the feature branch
stkdump
Well, I would say that the machine should not override the human input. But if the machine makes up the plans in the first place, then why should it not be allowed to change them? I think the hilarious part of modifying tests to make them pass without understanding why they fail is that it probably happens due to training from humans.
mattigames
"This task seems more appropriate for lesser beings e.g. humans"
Fade_Dance
I even heard that it will aggressively delete your codebase and then lie about it. To your face.
victorbjorklund
You are using version control so what is the issue?
Fade_Dance
(exactly. It's a sarcastic reference to this story that was going around: https://news.ycombinator.com/item?id=44632575)
chubot
I use Claude and like it, but this post has kind of a clunky and stilted style
So I guess the blog team also uses Claude
maxnevermind
Also started to suspect that, but I have a bigger problem with the content than styling:
> "Instead of remembering complex Kubernetes commands, they ask Claude for the correct syntax, like "how to get all pods or deployment status," and receive the exact commands needed for their infrastructure work."
Duh, you can ask an LLM tech questions and stuff. What is the point of putting something like that on the tech blog of a company that is supposed to be working on bleeding-edge tech?
LeafItAlone
To get more people using it, and more. I’ve encountered people who don’t use it because they think that it isn’t something that will help them, even in tech. Showing how different groups find value in it might get people in those same positions using it.
Even with people who do use it, they might be thinking about it narrowly. They use it for code generation, but might not think to use it for simplified man pages.
Of course there are people who are the exact opposite and use it for every last thing they do. And maybe from this they learn how to better approach their prompts.
politelemon
I think this is meant to serve as a bit of an advert/marketing and a bit of a signal to investors that look, we're doing things.
kylestanfield
The MCP documentation site has the same problem. It’s basically just a list of bullet points without any details
AIPedant
I don't think the problem is using Claude - in fact some of the writing is quite clumsy and amateurish, suggesting an actual human wrote it. The overall post reads like a collection of survey responses, with no overarching organization, and no filtering of repetitive or empty responses. Nobody was in charge.
jonstewart
You’re absolutely right!
mepiethree
Yeah this is kind of a stunning amount of information to provide but also basically like polished bullet points
vlovich123
Must feel that way because that’s probably exactly what it is
mixdup
The first example was helping debug k8s issues, which was diagnosed as IP pool exhaustion, and Claude helped them fix it without needing a network expert
But if they'd had an expert in networking build it in the first place, would they not have avoided the error entirely up front?
mfrye0
My optimization hack is that I'm using speech recognition now with Claude Code.
I can just talk to it like a person and explain the full context / history of things. Way faster than typing it all out.
apwell23
any options for ubuntu ?
foob
I've been pretty happy with the python package hns for this [1]. You can run it from the terminal with uvx hns and it will listen until you press enter and then copy the transcription to the clipboard. It's a simple tool that does one thing well and integrates smoothly with a CLI-based workflow.
mfrye0
I'll check that one out.
The copy aspect was the main value prop for the app I chose: Voice Type. You can do ctrl-v to start recording, again to stop, and it pastes it in the active text box anywhere on your computer.
onprema
The SuperWhisper app is great, if you use a Mac.
mfrye0
I checked that one out. The one that Reddit recommended was Voice Type. It's completely offline and a one-time charge:
https://apps.apple.com/us/app/voice-type-local-dictation/id6...
The developer is pretty cool too. I found a few bugs here and there and reported them. He responds pretty much immediately.
jwr
There is also MacWhisper which works very well. I've been using it for several months now.
I highly recommend getting a good microphone, I use a Rode smartlav. It makes a huge difference.
sipjca
Open source and cross platform one: https://handy.computer
theshrike79
So you sit in a room, talking to it? Doesn't it feel weird?
I type a lot faster than I speak :D
mfrye0
Haha yeah, it does feel a bit weird.
I often work on large, complicated projects that span the whole codebase and multiple micro services. So it's often a blend of engineering, architectural, and product priorities. I can end up talking for paragraphs or multiple pages to fully explain the context. Then Claude typically has follow-up questions, things that aren't clear, or issues that I didn't catch.
Honestly, I just get sick of typing out "dissertations" every time. It's easier just to have a conversation, save it to a file, and then use that as context to start a new thread and do the work.
jon-wood
Not only do I type faster than I speak I'm also able to edit as I go along, correcting any mistakes or things I've stumbled over and can make clearer. Half my experience of using even basic voice assistants is starting to ask for something and then going "ugh, no cancel" because I stumbled over part of a sentence and I know I'll end up with some utter nonsense in my todo list.
mfrye0
I know what you mean. This new generation of speech recognition is different though. It's able to understand fairly technical terms, specialized product names and other stuff that previously would have been garbled text.
Even if it gets slightly garbled, I often will add a note in my context that I'm using speech recognition. Then Claude will handle the potentially garbled or unclear sections perfectly or ask follow-up questions if it's unclear.
amacneil
It’s a funny post because our team wanted to use Claude Code, but the team plan of Claude doesn’t include Claude Code (unlike the pro plan at a similar price point). Found this out after we purchased it :( We’re not going to ask every engineer to purchase it separately.
Maybe before boasting about how your internal teams use your product, add an option for external companies to pay for it!
Industry leading AI models but basic things like subscription management are unsolved…
baby
Why not ask them to purchase it separately?
theshrike79
It's a billing nightmare pretty much.
The only reason why "team" plans exist is to have centralised billing and licensing.
zuInnp
I prefer to use Claude Code like a smart rubber ducky while I write most of the code in the end. I added rules to make Claude first explain what it wants to do in chat (and code changes should only be implemented if I ask for it). I like it to talk about ideas and discuss solutions and approaches. But I am in control of it - in the end, I am the person responsible for the code.
Since I don't like it automatically changing files, I copy & paste the code from the terminal into the IDE. That seems slow at first, but it allows me to correct the bigger and smaller issues on the fly faster than prompting Claude toward my preferred solution. In my opinion, this makes more sense since I have more control and it is easier to spot problematic code. When fixing such issues, I point Claude to the changes afterwards to add them to its context.
For me Claude is like a very (over)confident junior developer. You have to keep an eye on them, and if it is faster to do it yourself, then just do it and explain to them why you did it. (That might be a bad approach for juniors, but for Claude it works for me.)
Btw, can we talk about the fact that this blog post is written by the company that is trying to sell the tool? We should take it with a huge grain of salt. Like most of what these AI companies tell us, it should probably be ignored 90% of the time. They either want to raise money or to get bought by some other company in the end...
theshrike79
Just keep it in plan mode and it won't change anything?
Unlike Gemini CLI which will just rush into implementation without hesitation :D
onprema
> When Kubernetes clusters went down and weren't scheduling new pods, the team used Claude Code to diagnose the issue. They fed screenshots of dashboards into Claude Code, which guided them through Google Cloud's UI menu by menu until they found a warning indicating pod IP address exhaustion. Claude Code then provided the exact commands to create a new IP pool and add it to the cluster, bypassing the need to involve networking specialists.
This seems rather inefficient, and also surprising that Claude Code was even needed for this.
ktzar
They're subsidizing a world where we need AI instead of understanding things or, at the very least, knowing who can help us. Eventually we'll be so dumb that we are the AI's slaves.
moomoo11
Not really.
Is it really value add to my life that I know some detail on page A or have some API memorized?
I’d rather we be putting smart people in charge of using AI to build out great products.
It should make things 10000x more competitive. I for one am excited AF for what the future holds.
If people want to be purists and pat themselves on the back, sure. I mean, people have hobbies, like the arts.
AstroBen
> Is it really value add to my life that I know some detail on page A or have some API memorized?
yes, actually. Maybe not intimate details but knowing what's available in the API heavily helps with problem solving
otabdeveloper4
> for what the future holds
AI mostly provides negative efficiency gains.
This will not change in the future.
whycombagator
Yes. This is what I'd expect from an intern or very junior engineer (which could be the case here).
mhrmsn
It'd be more interesting if they shared actual examples of complete prompts, CLAUDE.md files, settings and MCP servers to achieve certain things.
The documentation is good, but is kept relatively general and I have a feeling that the quality of Claude Code's output really depends on the specific setup and prompts you use.
bgwalter
So this style of articles is the future? Pages of unconnected bullet points that mention the word "Claude" at least 100 times. No real information and nothing to remember.
ipnon
The future is that articles will generally be read by machine intelligences more often than human ones. You have to optimize for your audience.
joe_the_user
Yeah, I don't think all the information in the article is useful, and it seems like a disorganized dump - or a melding together of several articles - boilerplate "we use Claude for all good things" and specific useful tips all run together.
It definitely has a "we use AI enough that we've lost the ability to communicate coherently" vibe to it.
kylestlb
I'm assuming there was a directive from comms or someone else requiring each cost center to write up examples/demos of what they use Claude Code for. These were then dogfooded and turned into what you see here.