
Living Dangerously with Claude


63 comments

October 22, 2025

ZeroConcerns

So, yeah, only tangentially related, but if anyone at Anthropic sees fit to let Claude loose on their DNS, maybe they can create an MX record for 'email.claude.com'?

That would mean that their, undoubtedly extremely interesting, emails actually get met with more than a "450 4.1.8 Unable to find valid MX record for sender domain" rejection.

I'm sure this is just an oversight being caused by obsolete carbon lifeforms still being in charge of parts of their infrastructure, but still...

almosthere

Anyone from the Cursor world already YOLOs it by default.

A massive productivity boost I get is using it to do server maintenance.

"Using gcloud compute ssh, log into all GH runners and run docker system prune, in parallel for speed, and give me a summary report of the disk usage after."

This is an undocumented and underused feature of basic agentic abilities. It doesn't have to JUST write code.
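The pattern the agent generates for that prompt is roughly a fan-out loop. A dry-run sketch, where the host names are invented and `echo` stands in for the real `gcloud compute ssh` call:

```shell
# Dry-run sketch of the parallel-prune pattern. Host names are made up;
# swap `echo` for the real gcloud invocation once you trust it.
: > /tmp/prune-summary.txt                       # start with an empty summary
for h in gh-runner-1 gh-runner-2 gh-runner-3; do
  # Real version: gcloud compute ssh "$h" --command='docker system prune -af && df -h /'
  echo "pruned $h" >> /tmp/prune-summary.txt &   # background job = parallel
done
wait                                             # block until every host finishes
cat /tmp/prune-summary.txt                       # the "summary report" step
```

The `&`/`wait` pair is what gives the "in parallel for speed" behaviour: each ssh runs as a background job, and `wait` blocks until all of them have returned.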

wrs

Yesterday I was trying to move a backend system to a new AWS account and it wasn’t working. I asked Claude Code to figure it out. About 15 minutes and 40 aws CLI commands later, it did! Turned out the API Gateway’s VPCLink needed a security group added, because the old account’s VPC had a default egress rule and the new one’s didn’t.

I barely understand what I just said, and I’m sure it would have taken me a whole day to track this down myself.

Obviously I did NOT turn on auto-approve for the aws command during this process! But now I’m making a restricted role for CC to use in this situation, because I feel like I’ll certainly be doing something like this again. It’s like the AWS Q button, except it actually works.

normie3000

Is this what ansible does? Or some other classic ops tool?

simonw

Does Cursor have a good sandboxing story?

tuhgdetzhh

I run multiple instances of cursor cli yolo in a 4 x 3 tmux grid each in an isolated docker container. That is a pretty effective setup.

mandevil

There are a million different tools that are designed to do this, e.g. this task (log into a bunch of machines and execute a specific command without any additional tools running on each node) is literally the design use case for Ansible. It would be a simple playbook, why are you bringing AI into this at all?
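For reference, the Ansible version really is only a few lines. This sketch assumes an inventory group named `gh_runners`; the group name and task details are illustrative:

```yaml
# Hypothetical playbook; the inventory group name is an assumption.
- hosts: gh_runners
  become: true
  tasks:
    - name: Prune unused Docker data
      ansible.builtin.command: docker system prune -af
    - name: Capture disk usage
      ansible.builtin.command: df -h /
      register: disk
    - name: Report disk usage
      ansible.builtin.debug:
        var: disk.stdout_lines
```

Ansible already runs tasks across hosts concurrently (the `forks` setting, 5 by default), so the parallelism comes for free.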

giobox

Agreed, this is truly bizarre to me. Is OP not going to have to do this work all over again in x days time once the nodes fill with stale docker assets again?

AI can still be helpful here if you're new to scheduling a simple shell command, but I'd be asking the AI how to automate the task away, not manually asking the AI to do the thing every time. Better yet, I'd use my runners in a fashion where I don't have to concern myself with scheduled prune calls at all.

almosthere

No, we have a team dedicated to fixing this long term, but this allowed 20 engineers to get working right away. Long term fix is now in.

bdangubic

> but I'd be asking the AI how do I automate the task away

AI said “I got this” :)

ericmcer

Yeah that sounds like a CI/CD task or scheduled job. I would not want the AI to "rewrite" the scripts before running them. I can't really think of why I would want it to?

almosthere

Because I didn't have to do anything other than write that English statement and it worked. Saved me a lot of time.

matthewdgreen

So let me get this straight. You’re writing tens of thousands of lines of code that will presumably go into a public GitHub repository and/or be served from some location. Even if it only runs locally on your own machine, at some point you’ll presumably give that code network access. And that code is being developed (without much review) by an agent that, in our threat model, has been fully subverted by prompt injection?

Sandboxing the agent hardly seems like a sufficient defense here.

daxfohl

That's kind of tangential though. The article is more about using sandboxes to allow `--dangerously-skip-permissions` mode. If you're not looking at the generated code, you're correct, sandboxing doesn't help, but neither does permissioning, so it's not directly relevant to the main point.

tptacek

Where did "without much review" come from? I don't see that in the deck.

enraged_camel

Yeah. Personally I use a workflow that relies heavily on detailed design specs and red/green TDD, followed by code review. And that's fine because that's how I did my work before AI anyway, both at the individual level and at the team level. So really, this is no different than reviewing someone else's PR, aside from the much faster turnaround and (greatly increased) volume.


tyre

I’ve found it helpful to have a model write a detailed architecture and implementation proposal, which I then review and iterate on.

From there it splits out each phase into three parts: implementation, code review, and iteration.

After each part, I do a code review and iteration.

I ask for the proposal to be broken down into small, logical chunks, so each code review is pretty quick. It can only stray so far off track.

I treat it like a strong mid-level engineer who is learning to ship iteratively.

simonw

What is your worst case scenario from this?


noitpmeder

Bank accounts drained, ransomware installed, ...

deadbabe

Silently setup a child pornographer exchange server and run it on your machine for years without you ever noticing until you are caught and imprisoned.

mike_hearn

sandbox-exec isn't really deprecated. It's just a tiny wrapper around some semi-private undocumented APIs, it says that because it's not intended for public use. If it were actually deprecated Apple would have deleted it at some point, or using it would trigger a GUI warning, or it'd require a restricted entitlement.

The reason they don't do that is because some popular and necessary apps use it. Like Chrome.

However, I tried this approach too and it's the wrong way to go IMHO, quite beyond the use of undocumented APIs. What you actually want to do is virtualize, not sandbox.

krackers

Fun fact: the sandboxing rules are defined using Scheme!
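Those rules live in `.sb` profile files written in a Scheme dialect (SBPL). An illustrative fragment, not a complete working policy; the paths are examples:

```scheme
;; Illustrative fragment of a macOS sandbox profile (SBPL, a Scheme dialect).
;; The allowed paths below are examples, not a usable policy.
(version 1)
(deny default)                                ; start from "deny everything"
(allow file-read* (subpath "/usr/lib"))       ; read-only system libraries
(allow file-write* (subpath "/tmp/agent"))    ; a single writable scratch dir
(deny network*)                               ; no network access at all
```

A profile like this is applied with `sandbox-exec -f profile.sb some-command`.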

stuaxo

I've been thinking about this a bit.

I reckon something like Qubes could work fairly well.

Create a new Qube and have control over network connectivity, and do everything there, at the end copy the work out and destroy it.

zxilly

I'd like to know how much this would cost. Even Claude's largest subscription appears insufficient for such token requirements.

simonw

I ran a cost estimate on the project I describe in https://simonwillison.net/2025/Oct/23/claude-code-for-web-vi... - which was covered by my Claude Max account, but I dug through the JSONL log files for that session to try and estimate the cost if I had been using the API.

The cost estimate came out to 63 cents - details here: https://gistpreview.github.io/?27215c3c02f414db0e415d3dbf978...

jampa

I don't understand why people advocate so strongly for `--dangerously-skip-permissions`.

Setting up "permissions.allow" in `.claude/settings.local.json` takes minimal time. Claude even lets you configure this while approving commands, and you can use wildcards like "Bash(timeout:*)". This is far safer than risking disasters like dropping a staging database or deleting all unstaged code, which Claude would have done to me last week if I had been running it in YOLO mode.
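For anyone who hasn't set this up, a sketch of what that settings file can look like; the specific patterns are illustrative, not a recommended policy:

```json
{
  "permissions": {
    "allow": [
      "Bash(git diff:*)",
      "Bash(timeout:*)",
      "Bash(npm run test:*)"
    ],
    "deny": [
      "Bash(rm -rf:*)"
    ]
  }
}
```

Deny rules win over allow rules, so a broad allow pattern can still be fenced off from its most destructive variants.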

The worst part is seeing READMEs in popular GitHub repos telling people to run YOLO mode without explaining the tradeoffs. They just say, "Run with these parameters, and you're all good, bruh," without any warning about the risks.

I wish they'd change the parameter name to signal how scary it can be, just like React did with React.__SECRET_INTERNALS_DO_NOT_USE_OR_YOU_WILL_BE_FIRED (https://github.com/reactjs/react.dev/issues/3896)

bdangubic

changing the parameter name to something scary will only increase its usage

dist-epoch

I tried this path. The issue is that agents are very creative in coming up with new variations: "uv run pytest", "python3 -m pytest", "bash -c pytest".

It's a never ending game of whitelisting.

lacker

The sandbox idea seems nice, it's just a question of how annoying it is in practice. For example the "Claude Code on the web" sandbox appears to prevent you from loading `https://api.github.com/repos/.../releases/latest`. Presumably that's to prevent you from doing dangerous GitHub API operations with escalated privileges, which is good, but it's currently breaking some of my setup scripts....

simonw

Is that with their default environment?

I have been running a bunch of stuff in there with a custom environment that allows "*"

lacker

I whitelisted github.com, api.github.com, *.github.com, and it still doesn't seem to work. I suspect they did something specifically for github to prevent the agent from doing dangerous things with your credentials? But I could be wrong.

boredtofears

I like the best-of-both-worlds approach of asking Claude to refine a spec with me (specifically instructing it to ask me questions) and then summarize an implementation or design plan (this might be a two-step process if the feature is big enough).

When I’m satisfied with the spec, I turn on “allow all edits” mode and just come back later to review the diff at the end.

I find this works a lot better than hoping I can one shot my original prompt or having to babysit the implementation the whole way.

wahnfrieden

I recommend trying a more capable model that will read much more context too when creating specs. You can load a lot of full files into GPT 5 Pro and have it produce a great spec and give more surgical direction to CC or Codex (which don’t read full files and often skip over important info in their haste). If you have it provide the relevant context for the agent, the agent doesn’t waste tokens gathering it itself and will proceed to its work.

boredtofears

Is there an easy way to get a whole codebase into GPT 5 Pro? It's nice with claude to be able to say "examine the current project in the working directory" although maybe that's actually doing less than I think it is.

simonw

I wrote a tool for that: https://github.com/simonw/files-to-prompt - and there are other similar tools like repomix.

These days I often use https://gitingest.com - it can grab any full repo on GitHub as something you can copy and paste, e.g. https://gitingest.com/simonw/llm
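The core trick those tools perform can be approximated in a few lines of shell: concatenate every file with a per-file header so the model can tell them apart. This sketch builds a throwaway demo repo so it's self-contained; the paths and header format are arbitrary:

```shell
# Toy stand-in for files-to-prompt/repomix. Demo paths only.
mkdir -p /tmp/demo-repo
printf 'print("hi")\n' > /tmp/demo-repo/app.py
find /tmp/demo-repo -name '*.py' | sort | while read -r f; do
  printf '=== %s ===\n' "$f"        # header line marking each file
  cat "$f"
done > /tmp/demo-prompt.txt
cat /tmp/demo-prompt.txt            # this is what you'd paste into the model
```

The real tools add niceties like .gitignore handling, token counts, and XML-style delimiters, but the shape of the output is the same.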

igor47

My approach is to ask Claude to plan anything beyond a trivial change and I review the plan, then let it run unsupervised to execute the plan. But I guess this does still leave me vulnerable to prompt injection if part of the plan is accessing external content

abathologist

What guarantees do you have it will actually follow the stated plan instead of doing something else entirely?

ares623

Just don’t think about it too much. You’ll be fine.

danielbln

js2

It's discussed in the linked post.

BoredPositron

[flagged]

simonw

This particular post was a talk I gave in person on Tuesday. I have a policy of always writing up my talks; it's a little inconvenient that this one happened to coincide with a busy week for other content.

What do you think of this one? I'm trying a new format: https://simonwillison.net/2025/Oct/23/claude-code-for-web-vi...