Zedless: Zed fork focused on privacy and being local-first
188 comments
August 20, 2025 · pnathan
dilDDoS
I'm happy to finally see this take. I've been feeling pretty left out with everyone singing the praises of AI-assisted editors while I struggle to understand the hype. I've tried a few and it's never felt like an improvement to my workflow. At least for my team, the actual writing of code has never been the problem or bottleneck. Getting code reviewed by someone else in a timely manner has been a problem though, so we're considering AI code reviews to at least take some burden out of the process.
Aurornis
AI code reviews are the worst place to introduce AI, in my experience. They can find a few things quickly, but they can also send people down unnecessary paths or be easily persuaded by comments or even the slightest pushback from someone. They're fast to cave in and agree with any input.
It can also encourage laziness: If the AI reviewer didn't spot anything, it's easier to justify skimming the commit. Everyone says they won't do it, but it happens.
For anything AI related, having manual human review as the final step is key.
aozgaa
Agreed.
LLMs are fundamentally text generators, not verifiers.
They might spot some typos and stylistic discrepancies based on their corpus, but they do not reason. It’s just not what the basic building blocks of the architecture do.
In my experience you need to do a lot of coaxing and setting up guardrails to keep them even roughly on track. (And maybe the LLM companies will build this into the products they sell, but it’s demonstrably not there today)
pnathan
That's a fantastic counterpoint. I've found AI reviewers to be useful on a first pass, at a small-pieces level. But I hear your opinion!
chuckadams
I find the summary that Copilot generates is more useful than the review comments most of the time. That said, I have seen it make some good catches. It’s a matter of expectations: the AI is not going to have hurt feelings if you reject all its suggestions, so I feel even more free to reject its feedback with the briefest of dismissals.
kstrauser
IMO, the AI bits are the least interesting parts of Zed. I hardly use them. For me, Zed is a blazing fast, lightweight editor with a large community supporting plugins and themes and all that. It's not exactly Sublime Text, but to me it's the nearest spiritual successor while being fully GPL'ed Free Software.
I don't mind the AI stuff. It's been nice when I used it, but I have a different workflow for those things right now. But all the stuff besides AI? It's freaking great.
dns_snek
> while being fully GPL'ed Free Software
I wouldn't sing their praises for being FOSS. All contributions are signed away under their CLA, which will allow them to pull the plug when their VCs come knocking and the FOSS angle is no longer convenient.
tkz1312
why not just use sublime text?
sli
I found the OP comment amusing because Emacs with a Jetbrains IDE when I need it is exactly my setup. The only thing I've found AI to be consistently good for is spitting out boring boilerplate so I can do the fun parts myself.
skrtskrt
AI is solid for kicking off learning a language or framework you've never touched before.
But in my day to day I'm just writing pure Go, highly concurrent and performance-sensitive distributed systems, and AI is just so wrong on everything that actually matters that I have stopped using it.
skydhash
But so is a good book, and it costs way less. Even though searching may be quicker, a good digest of a feature is worth the half hour I can spend browsing a chapter. It’s like directly picking an expert’s brain. Then you take notes, compare what you found online with the updated documentation, and soon you develop a real understanding of the language/tool abstraction.
mirkodrummer
AI has stale knowledge, so I won't use it for learning, especially because it's biased towards the low-quality JS repos it has been trained on.
jama211
Highlighting code and having cursor show the recommended changes and make them for me with one click is just a time saver over me copying and pasting back and forth to an external chat window. I don’t find the autocomplete particularly useful, but the inbuilt chat is a useful feature honestly.
stouset
I'm the opposite. I held out this view for a long, long time. About two months ago, I gave Zed's agentic sidebar a try.
I'm blown away.
I'm a very senior engineer. I have extremely high standards. I know a lot of technologies top to bottom. And I have immediately found it insanely helpful.
There are a few hugely valuable use-cases for me. The first is writing tests. Agentic AI right now is shockingly good at figuring out what your code should be doing and writing tests that test the behavior, all the verbose and annoying edge cases, and even find bugs in your implementation. It's goddamn near magic. That's not to say they're perfect, sometimes they do get confused and assume your implementation is correct when the test doesn't pass. Sometimes they do misunderstand. But the overall improvement for me has been enormous. They also generally write good tests. Refactoring never breaks the tests they've written unless an actually-visible behavior change has happened.
Second is trying to figure out the answer to really thorny problems. I'm extremely good at doing this, but agentic AI has made me faster. It can prototype approaches that I want to try faster than I can and we can see if the approach works extremely quickly. I might not use the code it wrote, but the ability to rapidly give four or five alternatives a go versus the one or two I would personally have time for is massively helpful. I've even had them find approaches I never would have considered that ended up being my clear favorite. They're not always better than me at choosing which one to go with (I often ask for their summarized recommendations), but the sheer speed in which they get them done is a godsend.
Finding the source of tricky bugs is one more case that they excel in. I can do this work too, but again, they're faster. They'll write multiple tests with debugging output that leads to the answer in barely more time than it takes to just run the tests. A bug that might take me an hour to track down can take them five minutes. Even for a really hard one, I can set them on the task while I go make coffee or take the dog for a walk. They'll figure it out while I'm gone.
Lastly, when I have some spare time, I love asking them what areas of a code base could use some love and what are the biggest reward-to-effort ratio wins. They are great at finding those places and helping me constantly make things just a little bit better, one place at a time.
Overall, it's like having an extremely eager and prolific junior assistant with an encyclopedic brain. You have to give them guidance, you have to take some of their work with a grain of salt, but used correctly they're insanely productive. And as a bonus, unlike a real human, you don't ever have to feel guilty about throwing away their work if it doesn't make the grade.
skydhash
> Agentic AI right now is shockingly good at figuring out what your code should be doing and writing tests that test the behavior, all the verbose and annoying edge cases,
That's a red flag for me. Having a lot of tests usually means your domain is fully known, so you can specify it fully with tests. But in a lot of settings, the domain is a bunch of business rules that product decides on the fly. So you need to be pragmatic and only write tests against valuable workflows, or you'll find yourself changing a line and having 100+ tests break.
mirkodrummer
Good marketing bro
aDyslecticCrow
Zed was just a fast and simple replacement for Atom (R.I.P.) or VSCode. Then they put AI on top when that showed up. I don't care for it, and I appreciate a project like this returning the program to its core.
mootoday
You can opt out of AI features in Zed [0].
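For reference, the opt-out lives in Zed's settings.json (Zed accepts JSONC-style comments there). This is a hedged sketch; the exact key names are worth verifying against Zed's current docs:

```json
{
  // Assumed key for the global AI kill switch added in 2025
  "disable_ai": true,
  // Telemetry can be switched off separately
  "telemetry": {
    "diagnostics": false,
    "metrics": false
  }
}
```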
coneonthefloor
Well said. Zed could be great if they just stopped with the AI stuff and focused on text editing.
senko
Can't you just not use / disable AI and telemetry? It's not shoved in your face.
I would prefer an off-by-default telemetry, but if there's a simple opt-out, that's fine?
insane_dreamer
didn't Zed recently add a config option to disable all AI features?
asadm
I think you and I are having very different experiences with these copilot/agents. So I have questions for you, how do you:
- generate new modules/classes in your projects
- integrate module A into module B, or entire codebase A into codebase B?
- get someones github project up and running on your machine, do you manually fiddle with cmakes and npms?
- convert an idea or plan.md or a paper into working code?
- Fix flakes, fix test<->code discrepancies or increase coverage etc
If you do all this manually, why?
skydhash
> generate new modules/classes in your projects
If it's formulaic enough, I will use the editor templates/snippets generator. Or write a code generator (if it involves a bunch of files). If it's not, I probably have another class I can copy and strip out (especially in UI and CRUD).
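The "write a code generator" route can be as small as a template function. This is a hedged sketch with hypothetical names (`UserRepository`, `db.fetch_one`), not any particular framework's API:

```python
from string import Template

# Stamp out formulaic CRUD classes from a template instead of
# hand-writing (or LLM-generating) each one. All names are hypothetical.
CLASS_TEMPLATE = Template('''\
class ${name}Repository:
    """CRUD access for ${name} records."""

    def __init__(self, db):
        self.db = db

    def get(self, record_id):
        return self.db.fetch_one("${table}", record_id)
''')

def generate_repository(name: str, table: str) -> str:
    """Render a repository class body for the given entity and table."""
    return CLASS_TEMPLATE.substitute(name=name, table=table)

print(generate_repository("User", "users"))
```

The payoff over an LLM is determinism: the generator produces the same output every run, so it can live in the build and be reviewed once.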
> integrate module A into module B
If it cannot be done easily, that's a sign of a less-than-optimal API.
> entire codebase A into codebase B
Is that a real need?
> get someones github project up and running on your machine, do you manually fiddle with cmakes and npms
If the person can't be bothered to provide proper documentation, why should I run the project? That said, I will look for an AUR package (Arch Linux) or a Homebrew formula if someone has already done the first job of figuring out dependency versions. If there's a Dockerfile, I will use that instead.
> convert an idea or plan.md or a paper into working code?
Iteratively. First get a hello world or something working, then mow down the task list.
> Fix flakes, fix test<->code discrepancies or increase coverage etc
Either the test is wrong or the code is wrong. Figure out which and rework it. The figuring part always takes longer, as you will need to ask around.
> If you do all this manually, why?
Because when something happens in prod, you really don't want that feeling of being the last one to have interacted with that part, with no idea of what has changed.
frakt0x90
To me, using AI to convert an idea or paper into working code is outsourcing the only enjoyable part of programming to a machine. Do we not appreciate problem solving anymore? Wild times.
mirkodrummer
*Outsourcing to a parrot on steroids, which will make mistakes, produce stale ugly UI with 100px border radius, 50px padding, and rainbow hipster shadows, write code biased towards low-quality training data, and so on. It's the perfect recipe for disaster.
mackeye
i'm an undergrad, so when i need to implement a paper, the idea is that i'm supposed to learn something from implementing it. i feel fortunate in that ai is not yet effective enough to let me be lazy and skip that process, lol
vehemenz
Drawing blueprints is more enjoyable than putting up drywall.
asadm
depends. if i am converting it to then use it in my project, i don't care who writes it, as long as it works.
pnathan
I'm pretty fast coding and know what I'm doing. My ideas are too complex for claude to just crap out. If I'm really tired I'll use claude to write tests. Mostly they aren't really good though.
AI doesn't really help me code vs me doing it myself.
AI is better doing other things...
asadm
> AI is better doing other things...
I agree. For me the other things are non-business logic, build details, duplicate/bootstrap code that isn't exciting.
mackeye
> how do you convert a paper into working code?
this is something i've found LLMs almost useless at. consider https://arxiv.org/abs/2506.11908 --- the paper explains its proposed methodology pretty well, so i figured this would be a good LLM use case. i tried to get a prototype to run with gemini 2.5 pro, but got nowhere even after a couple of hours, so i wrote it by hand; and i write a fair bit of code with LLMs, but it's primarily questions about best practices or simple errors, and i copy/paste from the web interface, which i guess is no longer in vogue. that being said, would cursor excel here at a one-shot (or even a few hours of back-and-forth), elegant prototype?
asadm
I have found that whenever it fails for me, it's likely that I was trying to one-shot the solution and I retry by breaking the problem into smaller chunks or doing a planning work with gemini cli first.
chamomeal
For stuff like generating and integrating new modules, the helpfulness of AI varies wildly.
If you’re using nest.js, which is great but also comically bloated with boilerplate, AI is fantastic. When my code is like 1 line of business logic per 6 lines of boilerplate, yes please AI do it all for me.
Projects with less cruft benefit less. I’m working on a form generator mini library, and I struggle to think of any piece I would actually let AI write for me.
Similar situation with tests. If your tests are mostly “mock x, y, and z, and make sure that this spied function is called with this mocked payload”, AI is great. It’ll write all that garbage out in no time.
If your tests exercise larger chunks of business logic, like running against a database, or if you’re doing some kind of generative property-based testing, LLMs are probably more trouble than they’re worth.
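To make the contrast concrete, here is a hedged sketch of the mock-heavy style described above. `send_welcome` and the mailer interface are hypothetical names, but the shape (mock the collaborator, call the function, assert the payload) is exactly the boilerplate an LLM can stamp out in bulk:

```python
from unittest.mock import MagicMock

# Hypothetical unit under test: pure wiring, almost no business logic.
def send_welcome(mailer, user_email: str) -> None:
    mailer.send(to=user_email, subject="Welcome!", body="Thanks for signing up.")

# The kind of test that is cheap to generate: spy on the call,
# assert it happened once with the expected arguments.
def test_send_welcome_calls_mailer():
    mailer = MagicMock()
    send_welcome(mailer, "a@example.com")
    mailer.send.assert_called_once_with(
        to="a@example.com", subject="Welcome!", body="Thanks for signing up."
    )

test_send_welcome_calls_mailer()
```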
craftkiller
> generate new modules/classes in your projects
I type:
    class Foo:

or:

    pub(crate) struct Foo {}

> integrate module A into module B

What do you mean by this? If you just mean moving things around, then code refactoring tools to move functions/classes/modules have existed in IDEs for millennia before LLMs came around.
> get someones github project up and running on your machine
docker
> convert an idea or plan.md or a paper into working code
I sit in front of a keyboard and start typing.
> Fix flakes, fix test<->code discrepancies or increase coverage etc
I sit in front of a keyboard, read, think, and then start typing.
> If you do all this manually, why?
Because I care about the quality of my code. If these activities don't interest you, why are you in this field?
asadm
> If these activities don't interest you, why are you in this field?
I am in this field to deliver shareholder value. Writing individual lines of code, unless absolutely required, is below me?
stevenbedrick
To do those things, I do the same thing I've been doing for the thirty years that I've been programming professionally: I spend the (typically modest) time it takes to learn to understand the code that I am integrating into my project well enough to know how to use it, and I use my brain to convert my ideas into code. Sometimes this requires me to learn new things (a new tool, a new library, etc.). There is usually typing involved, and sometimes a whiteboard or notebook.
Usually it's not all that much effort to glance over some other project's documentation to figure out how to integrate it, and as to creating working code from an idea or plan... isn't that a big part of what "programming" is all about? I'm confused by the idea that suddenly we need machines to do that for us: at a practical level, that is literally what we do. And at a conceptual level, the process of trying to reify an idea into an actual working program is usually very valuable for iterating on one's plans, and identifying problems with one's mental model of whatever you're trying to write a program about (c.f. Naur's notions about theory building).
As to why one should do this manually (as opposed to letting the magic surprise box take a stab at it for you), a few answers come to mind:
1. I'm professionally and personally accountable for the code I write and what it does, and so I want to make sure I actually understand what it's doing. I would hate to have to tell a colleague or customer "no, I don't know why it did $HORRIBLE_THING, and it's because I didn't actually write the program that I gave you, the AI did!"
2. At a practical level, #1 means that I need to be able to be confident that I know what's going on in my code and that I can fix it when it breaks. Fiddling with cmakes and npms is part of how I become confident that I understand what I'm building well enough to deal with the inevitable problems that will occur down the road.
3. Along similar lines, I need to be able to say that what I'm producing isn't violating somebody's IP, and to know where everything came from.
4. I'd rather spend my time making things work right the first time, than endlessly mess around trying to find the right incantation to explain to the magic box what I want it to do in sufficient detail. That seems like more work than just writing it myself.
Now, I will certainly agree that there is a role for LLMs in coding: fancier auto-complete and refactoring tools are great, and I have also found Zed's inline LLM assistant mode helpful for very limited things (basically as a souped-up find and replace feature, though I should note that I've also seen it introduce spectacular and complicated-to-fix errors). But those are all about making me more efficient at interacting with code I've already written, not doing the main body of the work for me.
So that's my $0.02!
agosta
"Happy to see this"? The folks over at Zed did all the hard work of making the thing and are trying to make some money, and then someone forks it to strip out the very things they need to make it worth their time developing. I understand if you don't want to pay for Zed, but to celebrate someone making it harder for Zed to make money when you weren't paying them to begin with ("happy to PLAN to pay for Zed") is beyond the pale.
jemiluv8
I always have mixed feelings about forks, especially hard forks. Zed recently rolled out a feature that lets you disable all AI features, and I know telemetry can be opted out of, so I don't see the need for this fork. Especially given the list of features stated, this feels like something that could be upstreamed. I hope that happens.
I remember the Redis fork and how it fragmented that ecosystem to a large extent.
barnabee
I'd see less need for this fork if Zed's creators weren't already doing nefarious things like refusing to allow the Zed account / sign-in features to be disabled.
I don't see a reason to be afraid of "fragmented ecosystems", rather, let's embrace a long tail of tools and the freedom from lock-in and groupthink they bring.
giancarlostoro
Well, there are features within Zed that are part of the account / sign-in process, so it might take a bit more effort to just "simply comment out login" for an editor as fast and smooth as Zed. I don't care that it's there as long as they don't force it on me, which they don't.
max-privatevoid
Even opt-in telemetry makes me feel uncomfortable. I am always aware that the software is capable of reporting the size of my underwear and what I had for breakfast this morning at any moment, held back only by a single checkbox. As for the other features, opt-out stuff just feels like a nuisance, having to say "No, I don't want this" over and over again. In some cases it's a matter of balance, but generally I want to lean towards minimalism.
giancarlostoro
Not to mention Zed is already open source. I guess the best thing Zed can do is make it all opt-in by default, then this fork is rendered useless.
mixmastamyk
It's nice to have additional assurance that the software won't upload behind your back on first startup. Though I also run opensnitch, belt and suspenders style.
RestartKernel
Bit premature to post this, especially without some manifesto explaining the particular reason for this fork. The "no rugpulls" implies something happened with Zed, but you can't really expect every HN reader to be in the loop with the open source controversy of the week.
eikenberry
Contributor Agreements exist specifically to enable license rug-pulls: because the company owns all the copyrights, it can change the license in the future. So the fact that they have a CA means they are prepping for a rug-pull, hence this bullet point.
latexr
I can’t speak for Zed’s specific case, but several years ago I was part of a project which used a permissive license. I wanted to make it even more permissive, by changing it to one of those essentially-public-domain licenses. The person with the ultimate decision power had no objections and was fine with it, but said we couldn’t do that because we never had Contributor License Agreements. So it cuts both ways.
ItsHarper
It's reasonable for a contributor to reject making their code available more permissively
jen20
Are you suggesting the FSF has a copyright assignment for the purposes of “rug pulls”?
ilc
Yes.
The FSF requires assignment so they can re-license the code to whatever new license THEY deem best.
Not the contributors.
A CLA should always be a warning.
eikenberry
It was. Some see the GPL2->GPL3 transition as a rug-pull... but it doesn't matter today, as the FSF stopped requiring copyright assignments back in 2021.
zahlman
CLAs represent an important legal protection, and I would never accept a PR from a stranger, for something being developed in public, without one. They're the simplest way to prove that the contributor consented to licensing the code under the terms of the project license, and a CYA in case the contributed code is e.g. plagiarized from another party.
(I see that I have received two downvotes for this in mere minutes, but no replies. I genuinely don't understand the basis for objecting to what I have to say here, and could not possibly understand it without a counterargument. What I'm saying seems straightforward and obvious to me; I wouldn't say it otherwise.)
NoboruWataya
Zed is quite well known to be heavily cloud- and AI-focused, it seems clear that's what's motivating this fork. It's not some new controversy, it's just the clearly signposted direction of the project that many don't like.
decentrality
Seems like it might be reacting to or fanned to flame by: https://github.com/zed-industries/zed/discussions/36604
FergusArgyll
That's not a rug pull, that's a few overly sensitive young 'uns complaining
MeetingsBrowser
overly sensitive to what?
201984
No, this fork is at least 6 months old. The first PR is dated February 13th.
decentrality
This is correct. The fork and the pitchforks are not causally related
Squarex
[flagged]
barbazoo
> Are they really boycotting jews now?
Just because they're boycotting someone who happens to be Jewish doesn't necessarily mean they're boycotting them because of it.
> Zed just announced that they are taking money from Sequoia Capital, which has a partner, Shaun Maguire, who has recently been publicly and unapologetically Islamophobic. It seems hard to believe that the team didn't know about this, as it was covered in the New York Times. In addition, Maguire has been actively pro-occupation and genocide in Palestine for nearly 2 years.
> How can anyone feel like the Code of Conduct means anything at all, when Sequoia is an investor? I'm shocked and surprised at the Zed team for this - I expected much better.
Reads like it has more to do with what he has said and done in the past, which seems reasonable.
marcosdumay
They got a VC investment.
But a fork focused on privacy and local-first only needs the absence of those to justify itself. It will have to cut some features that Zed is really proud of, so it's hard to even say this is a rugpull.
dang
Related ongoing threads:
Zed for Windows: What's Taking So Long? - https://news.ycombinator.com/item?id=44964366
Sequoia backs Zed - https://news.ycombinator.com/item?id=44961172
_benj
I’m curious how this will turn out. Reminds me of the node.js fork IO.js and how that shifted the way node was being developed.
If there’s a group of people painfully aware of telemetry and AI being pushed everywhere, it’s devs…
leshenka
Shouldn’t this just be a pull request to Zed itself that hides AI features behind compile flags? That way the ‘fork’ would just be a build command with a different set of flags, with no changes to the actual code.
dkersten
What I really want from Zed is multi window support. Currently, I can’t pop out the agent panel or any other panels to use them on another monitor.
Local-first is nice, but I do use the AI tools, so I’m unlikely to use this fork in the near term. I do like the idea behind this, especially no telemetry and no contributor agreements. I wish them the best of luck.
I did happily use Zed for about a year before using any of its AI features, so who knows, maybe I’ll get fed up with AI and switch to this eventually.
201984
Comment from the author: https://lobste.rs/c/wmqvug
> Since someone mentioned forking, I suppose I’ll use this opportunity to advertise my fork of Zed: https://github.com/zedless-editor/zed
> I’m gradually removing all the features I deem undesirable: telemetry, auto-updates, proprietary cloud-only AI integrations, reliance on node.js, auto-downloading of language servers, upsells, the sign-in button, etc. I’m also aiming to make some of the cloud-only features self-hostable where it makes sense, e.g. running Zeta edit predictions off of your own llama.cpp or vLLM instance. It’s currently good enough to be my main editor, though I tend to be a bit behind on updates since there is a lot of code churn and my way of modifying the codebase isn’t exactly ideal for avoiding merge conflicts. To that end I’m experimenting with using tree-sitter to automatically apply AST-level edits, which might end up becoming a tool that can build customizable “unshittified” versions of Zed.
haneefmubarak
> relying on node.js
When did people start hating node and what do they have against it?
leblancfg
> When did people start hating node
You're kidding, right?
WestCoader
Maybe they've just never seen a dependency they didn't like.
woodson
I guess some node.js-based tools included in Zed (or its language extensions), such as ‘prettier’, don’t behave well in some environments (e.g., they constantly try to write files to /home/$USER even if that’s not your home directory). Things like that create some backlash.
max-privatevoid
It shouldn't be as tightly integrated into the editor as it is. Zed uses it for a lot of things, including to install various language servers and other things via NPM, which is just nasty.
muppetman
You might not be old enough to remember how much everyone hated JavaScript initially - just as an in-browser language. Then suddenly it's a standalone programming language too? WTH??
I assume that's where a lot of the hate comes from. Note that's not my opinion; I'm just wondering if that might be why.
skydhash
JavaScript is actually fine, as the warts have been documented. The main issue these days is the billions of tiny packages: so many people/orgs to trust for every project that uses npm.
Sephr
For me, upon its inception. We desperately needed unity in API design and node.js hasn't been adequate for many of us.
WinterTC has only recently been chartered in order to make strides towards specifying a unified standard library for the JS ecosystem.
aDyslecticCrow
Slow and RAM-heavy. Zed feels refreshingly snappy compared to VSCode, even before adding plugins. And why does a desktop application need to use an interpreted programming language?
Quitschquat
I think this guy has to be trolling in the testimonials page:
“Yes! Now I can have shortcuts to run and debug tests. Ever since snippets were added, Zed has all of the features I could ask for in an editor.”
adastra22
Thank you.
That's all I have to say right now, but I feel it needs to be said. Thank you for doing this.
cultofmetatron
I've been using AI extensively the last few weeks, but not as a coding agent. I really don't trust it for that. It's really helpful for generating example code for a library I might not be familiar with. A month ago, I was interested in using RabbitMQ but the docs were limited. ChatGPT was able to give me a fairly good amount of starter code to see how these things are wired together. I used some of it and added to it by hand to finally come up with what is running in production. It certainly has value in that regard.
Letting it write and modify code directly? I'm not ready for that. Another thing it's useful for is finding the source of an error when the error message isn't so great. I'll usually copy/paste code that I know is causing the error along with the error message, and it'll point out the issues in a way that I can immediately address. My method is cheaper too; I can get by just fine on the $20/month ChatGPT sub doing that.
withinrafael
The CLA does not change the copyright owner of the contributed content (https://zed.dev/cla), so I'm confused by the project's comments on copyright reassignment.
Huppie
Maybe not technically correct, but it's still the gist of this line, no?
> Subject to the terms and conditions of this Agreement, You hereby grant to Company, and to recipients of software distributed by Company related hereto, a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare derivative works of, publicly display, publicly perform, sublicense, and distribute, Your Contributions and such derivative works (the “Contributor License Grant”).
They are allowed to use your contribution in a derivative work under another license and/or sublicense your contribution.
It's technically not copyright reassignment though.
withinrafael
Yes, you grant the entity you've submitted a contribution to the right to use (not own) your contribution in whatever it ends up in. That was the whole point of the developer's contribution, right?
pie_flavor
The CLA has you granting them a non-open-source license. It permits them to change the Zed license to a proprietary one while still incorporating your contributions. It doesn't assign copyright ownership, but your retaining the ability to release your contribution under a different license later has little practical value.
max-privatevoid
I'm concerned about relicensing. See HashiCorp.
ItsHarper
It may not technically reassign copyright, but it grants them permission to do whatever they want with your contributions, which seems pretty equivalent in terms of outcome.
nicce
Without a CLA, they can't, for example, sell the code under a different license, or exempt themselves from the current GPL license requirements. But yeah, there might be some confusion with terms.
Relevant part:
> 2. Grant of Copyright License. Subject to the terms and conditions of this Agreement, You hereby grant to Company, and to recipients of software distributed by Company related hereto, a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare derivative works of, publicly display, publicly perform, sublicense, and distribute, Your Contributions and such derivative works (the “Contributor License Grant”). Further, to the extent that You participate in any livestream or other collaborative feedback generating session offered by Company, you hereby consent to use of any content shared by you in connection therewith in accordance with the foregoing Contributor License Grant.
I'm glad to see this. I'm happy to plan to pay for Zed (it's not there yet, but it's well on its way), but I don't want essentially _any_ of the AI and telemetry features.
The fact of the matter is, I am not even using AI features much in my editor anymore. I've tried Copilot and friends over and over, and it's just not _there_. It needs to be in a different place in the software development pipeline (probably code reviews and RAG'ing up documentation).
- I can kick out some money for a settings sync service.
- I can kick out some money to essentially "subscribe" for maintenance.
I don't personally think that an editor is going to return the kinds of ROI VCs look for. So.... yeah. I might be back to Emacs in a year with IntelliJ for powerful IDE needs....