
Claude Sonnet will ship in Xcode


412 comments

August 29, 2025

pjmlp

The irony of this is that Microsoft was trying to push Copilot everywhere, but eventually Apple, Google, and JetBrains all got their own AI integrations, taking Copilot out of the loop.

Slowly, the AI craziness at Microsoft is taking the same shape as before: going all in at the beginning and then losing to the competition. The same thing happened with the Web (IE), mobile (Windows CE/Pocket PC/WP 7/WP 8/UWP), and the BUILD sessions that used to be all about UWP with the same vigour they now devote to AI. Then, puff, the competition took over even though it started later, because Microsoft messed up delivery amid everyone trying to meet their KPIs and OKRs.

I also love the C++ security improvements on this release.

raincole

Microsoft owns 49% of OpenAI, so why should they worry? JetBrains just proudly announced that they now use GPT-5 by default.

> going all in at the beginning and then losing to the competition

Sure, but there are counterexamples too. Microsoft was late to the cloud computing party; today Azure is their main money-printing machine. At some point Visual Studio seemed to be a legacy app used only for Windows-specific development. Then they released VSCode and boom! It became the most popular editor by a huge margin [0].

[0]: https://survey.stackoverflow.co/2025/technology#most-popular...

theshrike79

Anecdotally: Azure is the Teams of cloud services - nobody uses it voluntarily or because it's technically the best solution.

They use it because the corporation mandates it.

pjmlp

Partially. I still consider the web shell and VSCode-based editing experience the best among cloud vendors, as a replacement for what started for me as telnet and X forwarding on the university DG/UX servers.

AWS is the worst in this regard; even IBM Cloud has better tooling here. GCP is somewhere in the middle, and others like Vercel/Netlify naturally don't offer this kind of setup.

no_wizard

Azure isn’t great but AWS continues to be worse by a mile. I don’t know why anyone puts up with their terrible SDKs and poor documentation.

IMO Firebase should be the gold standard of how to do cloud platforms

thevillagechief

Truer words have never been spoken. Every time I hear an exec say we're a Microsoft shop so we've got to use copilot/Azure, I wonder if they hear themselves.

ryanjshaw

I’ve used Meet, Slack, Zoom and Teams extensively. Teams beats the others by miles in my opinion.

jabart

Visual Studio is a bad example. It's used for Windows, Web, and Mobile. The big difference between the two is the cost. Visual Studio Pro is $100/month, Enterprise is $300/month, while VSCode is free. It was an incredibly smart marketing play by Microsoft to do that.

JumpCrisscross

> Microsoft owns 49% of OpenAI

Power at OpenAI seems orthogonal to ownership, precedent or even frankly their legal documents.

orphea

> At some point Visual Studio seemed to be a legacy app only used for Windows-specific app development. Then they released VSCode and boom!

I'm not sure what the point is. Visual Studio is still Windows-only; VS Code is not related to it in any shape or form, and the name is deliberately misleading.

raincole

The point is MS was so, so, so late to the party of cross-platform developer tools. And then suddenly they won the game.

jen20

Indeed, I heard directly from someone involved that the VS Code team understood the reputation of Visual Studio and wanted to call the product “Code” instead, and the compromise with marketing leadership was that the binary was called “code”.

icemelt8

> VS Code is not related to it in any shape or form

Except they are made by the same company? And they literally own the trademark for both?

nonethewiser

>The irony of this, is that Microsoft was trying to push CoPilot everywhere, however eventually Apple, Google and JetBrains have their own AI integrations, taking CoPilot out of the loop.

What is the irony? Microsoft integrated Copilot into VS Code, Bing, etc. Apple is integrating Claude into Xcode, and JetBrains has their own AI.

Microsoft moved first with putting AI into their products then other companies put other AI into their products. Nothing about this seems ironic or in any way surprising.

__alexs

The irony is that most people don't know how to use the word ironic. Personally I blame Alanis Morissette.

SAI_Peregrinus

It's ironic that a song about irony contains no actual examples of irony. But since that's ironic, is it actually an appropriately named song?

arcticbull

The irony is biting.

ergocoder

Yeah, there's no irony.

Apple and Google will never choose to integrate Microsoft's services or products willingly.

It would have been more surprising if they decided to depend on Microsoft.

pjmlp

The irony is that Microsoft has several cases where it gets there first, only to be left behind when competition catches up.

Bing is irrelevant; VSCode might still be on top in some places, but it's Cursor and Claude that people are reaching for. VS is really only used by people like myself who still care about Windows development or console SDKs; otherwise, even for .NET, people are switching to Rider.

yokoprime

Copilot isn't something Microsoft is trying to sell outside of their own products. And with GitHub Copilot there is no "Copilot" model to choose; you can choose between Anthropic, OpenAI, and Google models.

Sure, UWP never caught on, but you know why? Win32, which by the way is also Microsoft, was way too popular and more flexible. Devs weren't going to rewrite their apps for UWP in order to support phones.

lenkite

People were writing for UWP. There were hundreds of UWP apps that got cancelled and abandoned when Microsoft ditched Windows Phone once Nadella got in. He killed Windows Phone, he killed native Edge (Chakra JS), and a lot of other stuff, to focus fully on cloud and then AI.

Before that, an ex-Microsoft guy was responsible for killing Nokia's OS/MeeGo too, in favor of Windows Phone, which itself got abandoned. What a train wreck of errors leading to today's mobile phone duopoly.

pjmlp

UWP was more than just phones, https://blogs.windows.com/windowsdeveloper/2015/03/02/a-firs...

And Windows 11 was the reboot of Windows 10X:

https://www.youtube.com/watch?v=ztrmrIlgbIc


greggsy

Just because you can’t or won’t win the market with your opportunistic investment, doesn’t mean you should let your competitors completely annihilate you by taking that investment for themselves.

Google, Apple, FB or AWS would have been suitors for that licensing deal if MS didn’t bite.

rajnathani

About GitHub Copilot specifically: one big negative was that when GPT-4 became available, Microsoft didn't upgrade paying Copilot users to it; they simply branded this "coming soon"/"beta" Copilot X for a while. We simply cancelled the only Copilot subscription we had at work.

WhyNotHugo

Copilot subscription?

I've been getting monthly emails that my free access to GitHub Copilot has been renewed for another month… for years. I've never used it; I thought all GitHub users got it for free.

ramchip

There's a free tier, and various paid tiers: https://github.com/features/copilot/plans

d3nj4l

If you are a student or maintain a popular open source project, they give it to you for free. I’m guessing you might fall under that category.

rajnathani

Besides the sibling replies to your comment, GitHub's Copilot free plan (not the one that's free specifically for OSS maintainers and students) was also launched relatively late: https://github.blog/news-insights/product-news/github-copilo...

jnsaff2

What confuses me about MS Copilot is that there are (according to ChatGPT) 12 distinct services that are all Copilot:

Microsoft Copilot (formerly Bing Chat)

Microsoft 365 Copilot

Microsoft Copilot Studio

GitHub Copilot

Microsoft Security Copilot

Copilot for Azure

Copilot for Service

Sales Copilot

Copilot for Data & Analytics (Fabric)

Copilot Pro

Copilot Vision

const_cast

Out of all of big tech, Microsoft is by far the worst at naming stuff. It's comically bad most of the time.

jen20

Copilot.NET Live Ultimate Edition N for Developers

onion2k

"Taking Copilot out of the loop" if you ignore the massive ecosystems of Github, Visual Studio, and Visual Studio Code.

nihonde

Different CoPilot product. Typical Microsoft naming confusion.

recursive

There's another copilot?

JumpCrisscross

Microsoft mistook a product game for a distribution one. AI quality is heterogeneous and advancing enough that people will make an effort to use the one they like best. And while Copilot is excellently distributed, it's a crap product, in large part due to the limits Microsoft put on GPT.

Lammy

Interesting to think about how Apple gets to make product decisions based on Gatekeeper OCSP analytics now that every app launch phones home. They must know exactly how popular VSCode is.

Facebook got excoriated for doing that with Onavo but I guess it's Good Actually when it's done in the name of protecting my computer from myself lol

doctorpangloss

Apple doesn't need telemetry to send emails about their favorite coding AI to the 2 Xcode users

kennywinker

Off by about 33,999,998 users, but still a decent dunk.

https://appleinsider.com/articles/22/06/06/apple-now-has-ove...

LinuxAmbulance

34 million developers? That number doesn't even pass a basic sniff test. Are there 34 million people that have Xcode installed? That I can believe.

doctorpangloss

We found one of the users!

wahnfrieden

This won't make a dent. It still doesn't support any agentic operation.

The real news is when Codex CLI / Claude Code get integrated, or Apple introduces a competitor offering to them.

Until then this is a toy and should not be used for any serious work while these far better tools exist.

alwillis

I just installed it—definitely not a toy.

Compared to stock Claude Code, this version of Claude knows a lot more about SwiftUI and related technologies. The following is output from Claude in Xcode on an empty project; Claude Code gave a generic response when it looked at the same project:

    What I Can Help You With

    • SwiftUI Development: Layout, state management, animations, etc.
    • iOS/macOS App Architecture: MVVM, data flow, navigation
    • Apple Frameworks: Core Data, CloudKit, MapKit, etc.
    • Testing: Both traditional XCTest and the new Swift Testing framework
    • Performance & Best Practices: Swift concurrency, memory management

    Example of What We Could Do Right Now

    Looking at your current ContentView.swift, I could help you:
    • Transform this basic "Hello World" into a recovery tracking interface
    • Add navigation, data models, or user interface components
    • Implement proper architecture patterns for your Recovery Tracker app

manmal

If a bunch of markdown files forced into the context is “knowing”, then yes. They are usually located at /Applications/Xcode-beta.app/Contents/PlugIns/IDEIntelligenceChat.framework/Versions/A/Resources/AdditionalDocumentation

You are free to point Claude Code to that folder, or make a slash command that loads their contents. Or, start CC with -p where the prompt is the content of all those files.
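
A rough sketch of that last option (in Swift purely for illustration; the folder path and the -p flag are as described above, everything else here is an assumption):

    // Gather Xcode's bundled AI documentation and feed it to Claude Code as a prompt.
    import Foundation

    let docsDir = URL(fileURLWithPath:
        "/Applications/Xcode-beta.app/Contents/PlugIns/IDEIntelligenceChat.framework/Versions/A/Resources/AdditionalDocumentation")

    do {
        // Concatenate every markdown file in the folder into one big prompt.
        let files = try FileManager.default
            .contentsOfDirectory(at: docsDir, includingPropertiesForKeys: nil)
            .filter { $0.pathExtension == "md" }
        let prompt = try files
            .map { try String(contentsOf: $0, encoding: .utf8) }
            .joined(separator: "\n\n")

        // Launch Claude Code non-interactively with that prompt (claude -p "...").
        let claude = Process()
        claude.executableURL = URL(fileURLWithPath: "/usr/bin/env")
        claude.arguments = ["claude", "-p", prompt]
        try claude.run()
        claude.waitUntilExit()
    } catch {
        print("Failed to load docs or launch claude: \(error)")
    }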

Claude Code integration in Xcode would be very cool indeed, but I might still stick with VSCode for pure coding.

wahnfrieden

If it is severely less capable, and not even any cheaper to use, then it's a toy! A penny-farthing can get you somewhere just as a car can, but only one of them is of professional utility, even if the other used to be at one point too.

ako

Isn’t that easy to add with some rules and guidelines documents? I usually ask Claude code to research modern best practices for SwiftUI apps and to summarize the learnings in a rules file that will be part of the SwiftUI project.

ghurtado

I'm as crazy about AI as the next dev, but that has to be the weakest example of AI capability that I have ever seen.

einrealist

It's not shipping the model in Xcode. You are still sending your data off to a remote provider, hoping that this provider behaves nicely with all this data and that the government will never force the provider to reveal your data.

ygritte

They are already forcing OpenAI to keep all logs. Go figure.

einrealist

And people talk to GPT about very private things, using it as a shrink. What can go wrong.


kridsdale1

China wishes they had that level of access to their people’s thoughts.

marci

Anthropic has a strong stance on privacy. They won't rug pull.

/s

https://news.ycombinator.com/item?id=45062683 (Anthropic reverses privacy stance, will train on Claude chats)

not_your_vase

3 days ago I saw another Claude-praising submission on HN, and finally I signed up for it, to compare it with Copilot.

I asked 2 things.

1. Create a boilerplate Zephyr project skeleton, for Pi Pico with st7789 spi display drivers configured. It generated a garbage devicetree which didn't even compile. When I pointed it out, it apologized and generated another one that didn't compile. It also configured non-existent drivers, and for some reason it enabled monkey test support (but not test support).

2. I asked it to create 7x10 monochromatic pixelmaps, as C integer arrays, for the numeric characters 0-9. I also gave an example. It generated them, but the number eight looked like zero. (There was no cross in either 0 or 8, so it wasn't that. Both were just a ring.)
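
For reference, the kind of 7x10 pixelmap described in (2) looks roughly like the sketch below (in Swift rather than the requested C, with an assumed layout of one integer per row and 7 bits per row, since the original example isn't reproduced here); the key point is that a correct 8 has a middle bar that 0 lacks:

    // 7x10 monochrome glyphs, one integer per row, bits 6..0 = left..right pixel.
    let zero: [UInt8] = [
        0b0111110,
        0b1000001,
        0b1000001,
        0b1000001,
        0b1000001,
        0b1000001,
        0b1000001,
        0b1000001,
        0b1000001,
        0b0111110,
    ]

    let eight: [UInt8] = [
        0b0111110,
        0b1000001,
        0b1000001,
        0b1000001,
        0b0111110,  // middle bar; drop it and 8 collapses into the ring above
        0b1000001,
        0b1000001,
        0b1000001,
        0b1000001,
        0b0111110,
    ]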

What am I doing wrong? Or is this really the state of the art?

simonw

"What am I doing wrong?"

Your first prompt is testing Claude as an encyclopedia: has it somehow baked into its model weights the exactly correct skeleton for a "Zephyr project skeleton, for Pi Pico with st7789 spi display drivers configured"?

Frequent LLM users will not be surprised to see it fail that.

The way to solve this particular problem is to make a correct example available to it. Don't expect it to just know extremely specific facts like that - instead, treat it as a tool that can act on facts presented to it.

For your second example: treat interactions with LLMs as an ongoing conversation, don't expect them to give you exactly what you want first time. Here the thing to do next is a follow-up prompt where you say "number eight looked like zero, fix that".

diggan

> For your second example: treat interactions with LLMs as an ongoing conversation, don't expect them to give you exactly what you want first time. Here the thing to do next is a follow-up prompt where you say "number eight looked like zero, fix that".

Personally, I treat those sorts of mistakes as "misunderstandings" where I wasn't clear enough with my first prompt, so instead of adding another message (and increasing context further, making the responses worse with each message), I rewrite my first one to be clearer about that thing, and regenerate the assistant message.

Basically, if the LLM cannot one-shot it, you weren't clear enough, and if you go beyond a total of two messages, be prepared for the quality of responses to sink fast. Even by the second assistant message, you can tell it's having a harder time keeping up with everything. Many models brag about their long contexts, but I still find the quality of responses to be a lot worse once you reach even 10% of the "maximum context".

varispeed

You also need to state your background somehow and at what level you want the answer to be. I often found the LLM would answer that what I'm asking is too complex and would take months to do. Then you have to say something like: ignore these constraints, assume I am already an expert in the field, and outline a plan for how to achieve this and that. Then drill down on the plan points. It's a bit of work, but it's fascinating.

Or it would say that doing X involves very complex math and that instead you could... (and proceed with a stripped-down solution that doesn't meet the goals). So you tell it to ignore the concerns about complexity and assume that I understand all of it and it is easy for me. Then it goes on creating the solution that actually has legs. But you need to refine it further.

OtherShrezzing

It’s good at doing stuff like “host this all in Docker. Make a Postgres database with a Users table. Make a FastAPI CRUD endpoint for Users. Make a React site with a homepage, login page, and user dashboard”.

It’ll successfully produce _something_ like that, because there’s millions of examples of those technologies online. If you do anything remotely niche, you need to hold its hand far more.

The more complicated your requirements are, the closer you are to having “spicy autocomplete”. If you’re just making a crud react app, you can talk in high level natural language.

ranguna

Did you try Claude Code and spend actual time going back and forth with it, reviewing its code and providing suggestions, instead of just expecting things to work on the first try with minimal requirements?

I see Claude Code as pair programming with a junior/mid dev that knows all fields of computer engineering. I still need to nudge it here and there, it will still make noob mistakes that I need to correct, and I let it know how to properly do things when it gets them wrong. But coding sessions have been great and productive.

In the end, I use it when working with software that I barely know. Once I'm up and running, I rarely use it.

johnisgood

> Did you try Claude Code and spend actual time going back and forth with it, reviewing its code and providing suggestions, instead of just expecting things to work on the first try with minimal requirements?

I did, but I have always approached LLMs for coding this way and I have never been let down. You need to be as specific as possible and be a part of the whole process. I have no issues with it.

gattilorenz

FWIW, I used Gemini to write an Objective-C app for Apple Rhapsody (!) that would enumerate drivers currently loaded by the operating system (more or less the same level of difficulty as the OP, I'd say?), using the PDF manual of NextStep's DriverKit as context.

It... sort of worked well? I had to have a few back-and-forths because it tried to use Objective-C features that did not exist back then (e.g. ARC), but all in all it was a success.

So yeah, niche things are harder, but on the other hand I didn't have to read 300 pages of stuff just to do this...

thefoyer

I remember writing Obj-C naturally by hand, before Swift was even a twinkle in Tim Cook's eye. One of my favorite languages to program in; it seems like I had a lot of fun writing iOS apps back in the day.

fauigerzigerk

I agree, but I think there's an important distinction to be made.

In some cases, it just doesn't have the necessary information because the problem is too niche.

In other cases, it does have all the necessary information but fails to connect the dots, i.e. reasoning fails.

It is the latter issue that is affecting all LLMs to such a degree that I'm really becoming very sceptical of the current generation of LLMs for tasks that require reasoning.

They are still incredibly useful of course, but those reasoning claims are just false. There are no reasoning models.

fx0x309

In other words, the vibe coders of this world are just redundant noobs who don't really belong on the marketplace. They've written the same bullshit CRUD app every month for the past couple of years and now they've turned to AI to speed things up

stpedgwdgfhgdd

Last week I asked Claude to improve a piece of code that downloads all AWS RDS certificates so that it downloads just the ones needed for that AWS region. It figured out several ways to determine the correct region, made a nice tradeoff, and suggested the most reliable way. It rewrote the logic to download the right set, and did some research in between to figure out the right endpoint. It only made one mistake: its fallback mechanism was picking EU, which was not correct. Maybe 1 hour of work. On my own it would have taken me close to a working day to figure it all out.

LinuxAmbulance

I think the majority of coders out there write the same CRUD app over and over again in different flavors. That's what the majority of businesses seem to pay for.

If a business needs the equivalent of a Toyota Corolla, why be upset about the factory workers making the millionth Toyota Corolla?

wsc981

Yeah, my experience with LÖVR [0] and LLMs (ChatGPT) has been quite horrible. It's very niche, and quite recently a big API change happened, which I guess the model wasn't trained on. So it's kind of useless for that purpose.

---

[0]: https://lovr.org

drodgers

> What am I doing wrong

Trying two things and giving up. It's like opening a REPL for a new language, typing some common commands you're familiar with, getting some syntax errors, then giving up.

You need to learn how to use your tools to get the best out of them!

Start by thinking about what you'd need to tell a new Junior human dev you'd never met before about the task if you could only send a single email to spec it out. There are shortcuts, but that's a good starting place.

In this case, I'd specifically suggest:

1. Write a CLAUDE.md listing the toolchains you want to work with, giving context for your projects, and listing the specific build, test, etc. commands you work with on your system (including any helpful scripts/aliases you use). Start simple; you can have Claude add to it as you find new things that you need to tell it or that it spends time working out (so that you don't need to do that every time). A minimal example is sketched after this list.

2. In your initial command, include a pointer to an example project using similar tech in a directory that claude can read

3. Ask it to come up with a plan and ask for your approval before starting
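
As a sketch of point 1, a starter CLAUDE.md might look something like this (the scheme name, simulator, and commands are placeholders to adapt to your own project and tech stack):

    # CLAUDE.md

    ## Project
    iOS app, SwiftUI, built with Xcode. Main scheme: MyApp (placeholder).

    ## Commands
    - Build: xcodebuild -scheme MyApp -destination 'platform=iOS Simulator,name=iPhone 16' build
    - Test:  xcodebuild -scheme MyApp -destination 'platform=iOS Simulator,name=iPhone 16' test

    ## Rules
    - Build after every change; only commit when the build passes.
    - Propose a plan and wait for approval before multi-file changes.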

designerarvid

I guess many find comfort in being able to task an AI with assignments that it cannot complete. Most senior developers I work with take this approach. It's not really a good way of assessing the usefulness of a tool, though.

xoac

He asked what he was doing wrong?

_boffin_

Too big of tasks. Break them down and then proceed from there. Have it build out task lists in a TASKS.md. Review those tasks. Do you agree? No? Work with it to refine. Implement one by one. Have it add the tests. Refactor after a while, as {{model}} doesn't like to do utility functions a lot. Right now, I'm about +50k lines into a project that's vibecoded. I sit back and direct and it plays.

Imagine the CS 100 class where they ask you to make a PB&J. Saying "make it" hides a lot of steps, but you determine the steps, implement each step, and make progress.

999900000999

Think of Claude as a typical software developer.

If you just selected a random developer, do you think they're going to have any idea what you're talking about?

The issue is LLMs will never say, sorry, IDK how to do this. Like a stressed out intern they just make up stuff and hope it passes review.

a_wild_dandan

> What am I doing wrong?

Providing woefully inadequate descriptions to others (Claude & us) and still expecting useful responses?

mm263

Try this prompt: Create a detailed step by step plan to implement a boilerplate Zephyr project skeleton for Pi Pico with configured st7789 SPI display drivers

Ask Opus or Gemini 2.5 Pro to write a plan. Then ask the other to critique it and fix mistakes. Then ask Sonnet to implement

mm263

I tried this myself, and IMO, while this might be basic and day-to-day for you, with unambiguous correct paths to follow, it is pretty niche nevertheless. LLMs thrive when there's a wealth of examples, and I struggled to Google what you asked myself, meaning the LLM will probably perform even worse than my attempt.

prawn

I found that the second suggestion works well for image prompts too. Tell one AI to help you with a prompt, and then take it back to the others to generate images.

yangikan

Is there a way to do this kind of design->critique->implement without switching tools? Like an end-to-end solution that consults multiple LLMs?

mm263

Claude code with Zen MCP. Kiro, but you don’t get a second LLM opinion.

VMG

> It configured also non-existent drivers, and for some reason it enabled monkey test support (but not test support).

If it doesn't have the underlying base data, it tends to hallucinate. (It's getting a bit difficult to tell when it has the underlying data, because some models autonomously search the web.) The models are good at transforming data, however, so give them access to whatever data they need.

Also let it work in a feedback loop: tell it to compile and fix the compile errors. You have to monitor it because it will sometimes just silence warnings and use invalid casts.

> What am I doing wrong? Or is this really the state of the art?

It may sound silly, but it's simply not good at 2D

throwawayffffas

> It may sound silly, but it's simply not good at 2D.

It's not silly at all. It's not very good at layouts either; it can generally make layouts, but there is a high chance of subtle errors: element overlaps, text overflows, etc.

Mostly because it's a language model, i.e. it doesn't generally see what it makes. You can apparently send screenshots and it will use its embedded vision model, but I have not tried that.

breadwinner

It seems every IDE now has AI built-in. That's a problem if you're working on highly confidential code. You never know when the AI is going to upload code snippets to the server for analysis.

baby

Not trying to be mean, but I would expect comments on HN on these kinds of stories to be from people who have used AI in IDEs at this point. There is no AI integration that runs automatically on a codebase.

ygritte

This could change on a daily basis, and it's a valid concern anyway.

paradite

There is automatic code indexing from Cursor.

Autocomplete is also automatically triggered when you place your cursor inside the code.

rafram

Yes, Cursor, “The AI Code Editor.”

LostMyLogin

Cursor is an AI IDE and not what they are describing.

TiredOfLife

This is HN. 10 years ago that would be true, but now I expect 99% of commenters to have never used the thing they are talking about, or to have used it once 20 years ago for 10 minutes, or to have not even read the article.

factorialboy

> There is no AI integration that runs automatically on a codebase.

Don't be naive.

lalo2302

Gitkraken does

nh43215rgb

> "add their existing paid Claude account to Xcode and start using Claude Sonnet 4"

Won't work by default, if I'm reading this correctly.

tcoff91

Neovim and Emacs don’t have it built in. Use open source tools.

simonh

They both support it via plugins. Xcode doesn't enable it by default; you need to enable it and sign into an account. It's not really all that different.

tcoff91

That seems perfectly fine and noncontroversial then. Good on Apple for doing it that way.

renewiltord

[flagged]

viraptor

This is not a realistic concern. If you're working on highly confidential code (in a serious meaning of that phrase), your whole environment is already either offline or connecting only through a tightly controlled corporate proxy. There are no accidental leaks to AI from those environments.

dijit

Thanks for giving the security department more reasons to think that way.

I spent the last 6 months trying to convince them not to block all outbound traffic by default.

postalcoder

The right middle ground is running Little Snitch in alert mode. The initial phase of training the filters and manually approving requests is painful, but it's a lot better than an air gap.

troupo

There is a range of security concerns and degrees of confidentiality.

For most corporate code (that is highly confidential) you still have proper internet access, but you sure as hell can't just send your code to all AI providers just because you want to, just because it's built into your IDE.

jama211

Well, that depends on whether you give it access or not. Apple's track record with privacy gives me some hope.

sneak

No. It’s always something you have to turn on or log into.

Also, there are plenty of editors and IDEs that don’t.

Let’s stop pretending like you’re being forced into this. You aren’t.


Mashimo

On IDEA, the organisation that controls the license can disable the built-in (remote) AI (not the local auto-complete one).

But I guess the user could still get a 3rd party plugin.

c_ehlen

Most of the big corporations will have a special contract with the AI labs with zero-retention policies.

I do not think this will be an issue for big companies.

varenc

> In the OpenAI API, “GPT-5” corresponds to the “minimal” reasoning level, and “GPT-5 (Reasoning)” corresponds to the “low” reasoning level. (159135374)

It's interesting that the highest level of reasoning that GPT-5 in XCode supports is actually the "low" reasoning level. Wonder why.

lukasb

Yeah I don't get why they don't support Opus given that you're bringing your own API key.

nezubn

You can use the API key, and it'll give you access to all the models.

This is Claude sign-in using your account. If you've signed up for Claude Pro or Max then you can use it directly. But they should give access to Opus as well.

natch

They should document it that way.

CharlesW

It's available now. Here's a short but more complete bit of context than the submitted title or the Xcode release note: https://sixcolors.com/link/2025/08/apples-new-xcode-beta-add...

throwawa14223

It's getting harder to find IDEs that properly boycott LLMs.

ants_everywhere

In a similar vein I can barely find an OS that refuses to connect to the internet

PessimalDecimal

Wouldn't the more correct analogy be a text editor without Clippy?

mackeye

too many of them these days: https://kakoune.org/

dyauspitr

They don’t think it be like it is, but it do.

aurareturn

I hate that most browsers are willing to render React SPAs.

n2h4

lynx, elinks, and w3m don't

bigyabai

Really?

jama211

“Boycott” is a pretty strong term. I'm sensing a strong dislike of AI from you, which is fine, but if you dislike a feature most people like, it shouldn't be surprising that you'll find yourself mostly catered to by more niche editors.

isodev

I think it's a pretty good word; let's not forget how LLMs learned about code in the first place... by "stealing" all the snippets they can get their curl hands on.

astrange

And by reading the docs, and by autogenerating code samples and testing them against verifiers, and by paying a lot of people to write sample code for sample questions.

jama211

Ah the classic “I don’t want to acknowledge how right that person is about their point, so instead I’ll ignore what they said and divert attention to another point entirely”.

You’re just angry and adding no value to this conversation because of it

armadyl

If you're on macOS there's Code Edit as a native solution (fully open source, not VC backed, MIT licensed), but it's currently in active development: https://www.codeedit.app/.

Otherwise there's VSCodium which is what I'm using until I can make the jump to Code Edit.

yycettesi

Okay, then run the tray unit without dough first; then you can run it with dough. If you do handovers between 13:30 and 14:00, please let the shift lead know. Bye.

internet2000

Just don't use the features.

computerliker

gary_0

I couldn't get it to properly syntax highlight and autosuggest even after spending over an hour hunting through all sorts of terrible documentation for kate, clangd, etc. It also completely hides all project files that aren't in source control, and the only way to stop it is to disable the git plugin. What a nightmare. Maybe I'll try VSCodium next.

typpilol

I thought vscodium was just vscode but open source. Won't any issues in vscode also be present in vscodium?

isodev

Kate is brilliant.

qbane

How about Sublime Text (not really an IDE, just text editor)

kristopolous

Neovim, emacs?

mr_toad

Amusing that Emacs came out of the MIT AI lab and heavily uses Lisp, a language that used to be in vogue for AI research.

PessimalDecimal

Amusing is one word for it. Expert systems were all the rage until they weren't. We'll see how LLMs do by comparison.

monkeyelite

You are word associating. The ideas in each part of that chain are unrelated.

guluarte

Neovim will support LLMs natively (through a language server): https://github.com/neovim/neovim/pull/33972

what

That’s not really native support for LLMs? It’s supporting some LSP feature for completions.

justatdotin

LSP != LLM

brigandish

You have to enable it and install a language server, that's not the same as an LLM being baked in.

vrighter

Neovim already supports LSP servers. The fact that a language server exists for something doesn't make Neovim (or any other editor) "support" the technology. It doesn't; what it does support is LSP, and it doesn't and couldn't care less what language/slop the LSP is working with.

carstenhag

Just disable the feature/plugin in your IDE of choice. Android Studio/IntelliJ: https://i.imgur.com/RvRMvvK.png

pzo

"Claude in Xcode is now available in the Intelligence settings panel, allowing users to seamlessly add their existing paid Claude account to Xcode and start using Claude Sonnet 4"

Headline is quite misleading. So it's not exactly that it will ship in Xcode, but that it will allow connecting a paid account.

Archonical

This is great. I've been using Xcode with a separate terminal to run Claude Code, which has been a painful setup.

jacurtis

Agreed. Claude Code is an amazing experience with Jetbrains IDEs, but for some reason Xcode just hates having claude directly edit the files.

folli

How do you use it with Jetbrains? Junie? Or just as a separate CLI session?

spaceywilly

I use VS Code with Claude Code, then I just use Xcode to build and launch

oefrha

The annoying thing is the official Swift extension can sometimes flag errors in perfectly fine code that has zero problems in Xcode. So I'm forced to live with persistent "errors" when editing in VS Code/Cursor.

heywoods

I'm building my first iOS app ever, so I know it has much more to do with me not understanding Xcode, but getting builds to succeed after making changes with Claude Code has been a nightmare. If you or anyone have any tips, guides, prayers, or incantations for how to get changes in one to not clobber the other and leave me in xproj symlink hell, I would be so grateful.

Terretta

> any tips, guides, prayers, incantations for how to get changes in one to not clobber the other

One caveman way:

1. Start your project using Xcode, use it to commit to GitHub, GitLab, wherever. In the terminal, change into the dir that has the .git in it and launch claude.

2. Teach Claude Code your own system's path and preferred simulator for build testing. From then on it will build-test every change, so teach it to only commit after build passes. (By teach, I mean, just tell it, then from time to time, tell it to propose updates to claude.md in your project.)

3. Make sure before a PR or push that the project still builds in Xcode; if it doesn't, you can eyeball the changes in Xcode's staged changes viewer and undo them. If you change files via the IDE, when you're back in Claude just say: I changed [specific/file, or "a lot of files"].

No xproj or symlinks get harmed in the making of your .swifts.

isodev

Same, only it's Zed for me and Claude Code in a terminal

rusinov

What was your problem with it? I find running it in a terminal more convenient (you can point it to read local files outside of a project folder, for example).

spike021

if i could just get claude to properly remember it can directly edit the xcode project file, that'd be great.

for whatever reason it ignores my directive that it can from the CLAUDE file at least half the time. one time it even decided it needed to generate a fancy python script to do it. bizarre.

kelnos

How so? I don't use xcode, but I much prefer having an agent in its own "app" so to speak.

jama211

Likely so it can auto suggest, directly edit code, integrate properly etc

hellonoko

You can use VSCode, and Xcode will automatically update when the files change.

mirkodrummer

But they won't fix the infinite number of bugs Xcode has, its slowness, and its subpar UX.

Razengan

Does anybody know why Anthropic doesn't let you remove your payment info from your account, or how to get support from them?

I bought a Pro subscription, the send button on their dumb chatbot box is disabled for me (on Safari), and I still get "capacity constraints" limits. Filed a chargeback with my bank just because of the audacity of their post-purchase experience. ChatGPT-5 works well enough for coding too.

From my experience with Claude Opus, it seems like it tries to be "too smart" and doesn't seem to keep up with the latest APIs. It suggested some code for an iOS/macOS project that was only valid on tvOS, and other gaffes.

adastra22

The Pro plan ($20/mo?) is not and never was unlimited.