It is worth it to buy the fast CPU

265 comments

·August 24, 2025

avidiax

Employers, even the rich FANG types, are quite penny-wise and pound-foolish when it comes to developer hardware.

Limiting the number and size of monitors. Putting speedbumps (like assessments or doctor's notes) on ergo accessories. Requiring special approval for powerful hardware. Requiring special approval for travel, and setting hotel and airfare caps that haven't been adjusted for inflation.

To be fair, I know plenty of people that would order the highest spec MacBook just to do web development and open 500 chrome tabs. There is abuse. But that abuse is really capped out at a few thousand in laptops, monitors and workstations, even with high-end specs, which is just a small fraction of one year's salary for a developer.

Aurornis

Every well funded startup I’ve worked for went through a period where employees could get nearly anything they asked for: New computers, more monitors, special chairs, standing desks, SaaS software, DoorDash when working late. If engineers said they needed it, they got it.

Then, some time later, they start looking at spending in detail and can’t believe how much is being spent by the 25% or so who abuse the policy. Then the controls come.

> There is abuse. But that abuse is really capped out at a few thousand in laptops, monitors and workstations, even with high-end specs,

You would think, but in the age of $6,000 fully specced MacBook Pros, $2,000 monitors, $3,000 standing desks, $1,500 iPads with $100 Apple Pencils and $300 keyboard cases, $1,000 chairs, SaaS licenses that add up, and (if allowed) food delivery for “special circumstances” that turns into a regular occurrence, it was common to see individuals incurring expenses in the tens of thousands of dollars. It’s hard to believe if you’re a person who moderates their own expenditures.

Some people see a company policy as something meant to be exploited until a hidden limit is reached.

There also starts to be some soft fraud at scales higher than you’d imagine: When someone could get a new laptop without questions, old ones started “getting stolen” at a much higher rate. When we offered food delivery for staying late, a lot of people started staying just late enough for the food delivery to arrive while scrolling on their phones and then walking out the door with their meal.

Aeolun

Don’t you think the problem there is that you hired the wrong people?

SteveJS

Was trying to remember a counter example on good hires and wasted money.

Alex St. John, at Microsoft in the Windows 95 era: created DirectX, and also built an alien spaceship.

I dimly recall a friend in the games division telling me about someone getting a 5 and a 1 review score in close succession.

Facts I could find (yes, I asked an LLM):

5.0 review: Moderately supported. St. John himself hosted a copy of his Jan 10, 1996 Microsoft performance review on his blog (the file listing still exists in archives). It reportedly shows a 5.0 rating, which in that era was the rare top-box mark.

Fired a year later: Factual. In an open letter (published via GameSpot) he states he was escorted out of Microsoft on June 24, 1997, about 18 months after the 5.0 review.

Judgment Day II alien spaceship party: Well documented as a plan. St. John’s own account (quoted in Neowin, Gizmodo, and others) describes an H.R. Giger–designed alien-ship interior in an Alameda air hangar, complete with X-Files cast involvement and a Gates “head reveal” gag.

Sunk cost before cancellation: Supported. St. John says the shutdown came “a couple of weeks” before the 1996 event date, after ~$4.3M had already been spent/committed (≈$1.2M MS budget + ≈$1.1M sponsors + additional sunk costs). Independent summaries repeat this figure (“in excess of $4 million”).

So: 5.0 review — moderate evidence. Fired 1997 — factual. Alien spaceship build planned — factual. ≈$4M sunk costs — supported by St. John’s own retrospective and secondary reporting.

michaelt

Well partly, yes.

But also, when I tell one of my reports to spec and order himself a PC, there should be several controls in place.

First, I should give clear enough instructions that they know whether they should be spending around $600, $1,500, or $6,000.

Second, although my reports can freely spend ~$100 no questions asked, expenses in the $1,000+ region should require my approval.

Third, there is monitoring of where money is going; spending where the paperwork isn't in order gets flagged and checked. If someone with access to the company Amazon account gets an above-ground pool shipped to their home, you can bet there will be questions to answer.

spyckie2

Basic statistics. You can find 10 people that will probably not abuse the system but definitely not 100.

It’s like your friend group taking forever to choose a place to eat: it’s not your friends, it’s the law of averages.

mort96

As a company grows, it will undoubtedly hire some "wrong people" along the way.

jayd16

Maybe so, but that's not really something you can control. You can control the policy, so that's what gets done.

necovek

If $20k is misspent by 1 in 100 employees, that's still $200 per employee per year: peanuts, really.

Just like with "policing", I'd only focus on uncovering and dealing with abusers after the fact, not on everyone — giving most people "benefits" that instead makes them feel valued.

appreciatorBus

So then just set a limit of $200 per head instead of allowing a few bad apples to spend $20k all on themselves.

dcrazy

Is it “soft fraud” when a manager at an investment bank regularly demands unreasonable productivity from their junior analysts, causing them to work late and effectively reduce their compensation rate? Only if the word “abuse” isn’t ambiguous and loaded enough for you!

AtlanticThird

Lying about a laptop being stolen is black and white. I'm not sure how you are trying to say that is ambiguous.

I don't know what the hell you mean by the term unreasonable. Are you under the impression that investment banking analysts do not think they will have to work late before they take the role?

gregshap

The pay and working hours are extremely well known to incoming jr investment bankers

wmf

Working late is official company policy in investment banking.

dingnuts

Is this meant to be a gotcha question? Yes, unpaid overtime is fraud, and employers probably commit that kind of fraud just as regularly as employees do the things mentioned upthread.

none of it is good lol

lukan

"$1,000 chairs"

Not an expert here, but from what I heard, that would be a bargain for a good office chair. And having a good chair or not - you literally feel the difference.

master_crab

> There also starts to be some soft fraud at scales higher than you’d imagine: When someone could get a new laptop without questions, old ones started “getting stolen” at a much higher rate. When we offered food delivery for staying late, a lot of people started staying just late enough for the food delivery to arrive while scrolling on their phones and then walking out the door with their meal.

Ehh. Neither of these are soft fraud. The former is outright law-breaking, the latter…is fine. They stayed till they were supposed to.

Aurornis

> the latter…is fine. They stayed till they were supposed to.

This is the soft fraud mentality: If a company offers meal delivery for people who are working late and need to eat at the office, and people instead start staying late (without working) and taking the food home to eat, that’s not consistent with the policy.

It was supposed to be a consolation if someone had to (or wanted to, as occurred with a lot of our people who liked to sleep in) stay late to work. It was getting used instead for people to avoid paying out of pocket for their own dinners even though they weren’t doing any more work.

Which is why we can’t have nice things: People see these policies as an opportunity to exploit them rather than use them as intended.

baq

> individuals incurring expenses in the tens of thousands range

peanuts compared to their 500k TC

Aurornis

Very few companies pay $500K. Even at FAANG a lot of people are compensated less than that.

I do think a lot of this comment section is assuming $500K TC employees at employers with infinite cash to spend, though.

pengaru

500k is not the average, and anyone at that level+ can get fancy hardware if they want it.

groby_b

One, not everybody gets 500K TC.

Two, several tens of thousands of dollars is 5%-10% of that. Hardly "peanuts". But I suppose you’ll be happy to hear "no raise for you, that’s just peanuts compared to your TC", right?

incone123

$3,000 standing desks?? It's some wood, metal and motors. I got one from IKEA in about 2018 for £500 and it's still my desk today. You can get Chinese ones now for about £150.

Aurornis

The people demanding new top spec MacBook Pros every year aren’t the same people requesting the cheapest Chinese standing desk they can find.

wslh

Breaking news: "Trump tariffs live updates: Trump says US to tariff furniture imports following investigation"<https://finance.yahoo.com/news/live/trump-tariffs-live-updat...>

kev009

Netflix, at least in the Open Connect org, was still open-ended for anything beyond what NTech provided (your issued laptop and remote-working gear). It was very easy to get "exotic" hardware. I really don't think anyone abused it. This is an existence proof for the parent comments: it's not a startup, and I don't see engineers screwing the wheels off the bus anywhere I've ever worked.

geor9e

I know a FAANG company whose IT department, for the last few years, has been "out of stock" for SSDs over 250GB. They claim it's a global market issue (it's not). There's constant complaining in the chats from folks who compile locally. The engineers make $300k+ so they just buy a second SSD from Amazon on their credit cards and self-install it without mentioning it to the IT dept. I've never heard a rational explanation for the "shortage" other than chronic incompetence from the team supplying engineers with laptops/desktops. Meanwhile, spinning up a 100TB cloud VM there has no friction whatsoever. It's a cushy place to work though, so folks just accept the comically dumb aspects everyone knows about.

loeg

I think you're maybe underestimating the aggregate cost of totally unconstrained hardware/travel spending across tens or hundreds of thousands of employees, and overestimating the benefits. There need to be some limits or speedbumps to spending, or a handful of careless employees will spend the moon.

adverbly

It's the opposite.

You're underestimating the time lost to a few percent drop in productivity per employee across hundreds of thousands of employees.

You want speed limits not speed bumps. And they should be pretty high limits...

loeg

I don't believe anyone is losing >1% productivity from these measures (at FANG employers).

BlandDuck

Scaling cuts both ways. You may also be underestimating the aggregate benefits of slight improvements added up across hundreds or thousands of employees.

For a single person, slight improvements added up over regular, e.g., daily or weekly, intervals compound to enormous benefits over time.

XKCD: https://xkcd.com/1205/

Retric

The break-even rate on developer hardware is based on the value a company extracts, not on salary. Someone making $X/year directly also carries a great deal of overhead in office space, managers, etc., and on top of that the company only employs them because it gains even more value in return.

Saving 1 second/employee/day can quickly be worth $10+/employee/year (or even several times that). But you rarely see companies optimizing their internal processes based on that kind of perceived benefit.
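A rough back-of-the-envelope check of that figure, as a small Python sketch; the workday count and loaded hourly cost are illustrative assumptions, not numbers from the comment:

    # Back-of-the-envelope sketch: value of saving 1 second per employee per day.
    # Workdays per year and loaded hourly cost are assumptions for illustration.
    seconds_saved_per_day = 1
    workdays_per_year = 230
    loaded_cost_per_hour = 150.0  # salary plus overhead, assumed

    hours_saved_per_year = seconds_saved_per_day * workdays_per_year / 3600
    value_per_employee_per_year = hours_saved_per_year * loaded_cost_per_hour
    print(f"~${value_per_employee_per_year:.2f} per employee per year")  # ~$9.58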

Water cooler placement in a cube farm comes to mind as a surprisingly valuable optimization problem.

corimaith

The cost of a good office chair is comparable to a top tier gaming pc, if not higher.

kec

Not for an enterprise buying (or renting) furniture in bulk it isn’t. The chair will also easily last a decade and be turned over to the next employee if this one leaves… unlike computer hardware which is unlikely to be reused and will historically need to be replaced every 24-36 months even if your dev sticks around anyway.

loeg

Are there any FANG employers unwilling to provide good office chairs? I think even cheap employers offer these.

jjmarr

It's not abuse to open 500 Chrome tabs if they're work-related and increase my productivity.

I am 100x more expensive than the laptop. Anything the laptop can do instead of me is something the laptop should be doing instead of me.

llbbdd

Always amuses me when I see someone use web development as an example like this. Web dev is very easily in the realm of game dev as far as required specs for your machine; otherwise you're probably not doing much actual web dev. If anything, engineers doing nothing but running little Java or Python servers don't need anything more than a Pi and a two-color external display to do their job.

forgotusername6

Just to do web development? I regularly go into swap running everything I need on my laptop. Ideally I'd have VScode, webpack, and jest running continuously. I'd also occasionally need playwright. That's all before I open a chrome tab.

SoftTalker

This explains a lot about why the modern web is the way it is.

thfuran

I do think a lot of software would be much better if all devs were working on hardware that was midrange five years ago and over a flaky WiFi connection.

jacobolus

> highest spec MacBook just to do web development and open 500 chrome tabs. There is abuse.

Why is that abuse? Having many open browser tabs is perfectly legitimate.

Arguably they should switch from Chrome to Safari / lobby Google to care about client-side resource use, but getting as much RAM as possible also seems fine.

benlivengood

It's straightforward to measure this: start a stopwatch every time your flow gets interrupted by waiting for compilation or by your laptop swapping to keep the IDE and browser running, and stop it once you reach flow state again.

We managed to just estimate the lost time and management (in a small startup) was happy to give the most affected developers (about 1/3) 48GB or 64GB MacBooks instead of the default 16GB.

At $100/hr minimum (assuming lost work doesn't block anyone else) it doesn't take long for the upgrades to pay off. The most affected devs were waiting an hour a day sometimes.

This applies to CI/CD pipelines too; it's almost always worth increasing worker CPU/RAM while the reduction in time is scaling anywhere close to linearly, especially because most workers are charged by the minute anyway.
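A minimal sketch of the payback math from the paragraphs above, in Python; the upgrade cost, hourly rate, and measured wait time below are illustrative assumptions, not the startup's actual figures:

    # Minimal sketch: break-even time for a hardware upgrade given measured wait time.
    # All inputs are illustrative assumptions.
    upgrade_cost = 800.0        # e.g. stepping up to a 64GB configuration
    hourly_rate = 100.0         # loaded cost of developer time, lower bound
    wait_hours_per_day = 0.5    # measured time lost to swapping / slow builds
    workdays_per_month = 21

    daily_loss = wait_hours_per_day * hourly_rate
    breakeven_days = upgrade_cost / daily_loss
    print(f"${daily_loss:.0f}/day lost; pays for itself in ~{breakeven_days:.0f} "
          f"working days (~{breakeven_days / workdays_per_month:.1f} months)")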

beezle

Whenever I've built a new desktop I've always gone near the top on performance, with some consideration given to cache and power consumption (remember when peeps cared about that? lol).

From dual Pentium Pros to my current desktop - a Xeon E3-1245 v3 @ 3.40GHz built with 32 GB of top-end RAM in late 2012 - which has only recently started to feel a little pokey, I think largely due to CPU security mitigations added to Windows over the years.

So that extra few hundred up front gets me many years extra on the backend.

ocdtrekkie

I think people overestimate the value of a little bump in performance. I recently built a gaming PC with a 9700X. The 9800X3D is drastically more popular, for an 18% performance bump on benchmarks but double the power draw. I rarely peg my CPU, but I am always drawing power.

Higher power draw means it runs hotter, and it stresses the power supply and cooling systems more. I'd rather go a little more modest for a system that's likely to wear out much, much slower.

rafaelmn

Is it really 2x, or is it 2x at max load? Since, as you say, you're not pegging the CPU - it would be interesting to compare power usage on a per-task basis and the duration. Could be that the 3D cache is really adding that much overhead even to an idle CPU.

Anyway I've never regretted buying a faster CPU (GPU is a different story, burned some money there on short time window gains that were marginally relevant), but I did regret saving on it (going with M4 air vs M4 pro)

userbinator

I wish developers, and I'm saying this as one myself, were forced to work on a much slower machine, to flush out those who can't write efficient code. Software bloat has already gotten worse by at least an order of magnitude in the past decade.

diggan

> were forced to work on a much slower machine

I feel like that's the wrong approach. It's like telling a music producer to always work with horrible (think car or phone) speakers. True, you'll get a better mix and master if you test it on speakers you expect others to hear it through, but no one sane recommends defaulting to those for day-to-day work.

Same goes for programming, I'd lose my mind if everything was dog-slow, and I was forced to experience this just because someone thinks I'll make things faster for them if I'm forced to have a slower computer. Instead I'd just stop using my computer if the frustration ended up larger than the benefits and joy I get.

fluoridation

That's actually a good analogy. Bad speakers aren't just slow good speakers. If you try to mix through a tinny phone speaker you'll have no idea what the track will sound like even through halfway acceptable speakers, because you can't hear half of the spectrum properly. Reference monitors are used to have a standard to aim for that will sound good on all but the shittiest sound systems.

Likewise, if you're developing an application where performance is important, setting a hardware target and doing performance testing on that hardware (even if it's different from the machines the developers are using) demonstrably produces good results. For one, it eliminates the "it runs well on my machine" line.

SoftTalker

Although, any good producer is going to listen to mixes in the car (and today, on a phone) to be sure they sound at least decent, since this is how many consumers listen to their music.

diggan

Yes, this is exactly my point :) Just like any good software developer who doesn't know exactly where their software will run: they test on the type of device their users are likely to run it on, or at least one with similar characteristics.

ofcrpls

The car test has been considered a standard by mixing engineers for the past 4 decades

avidiax

Yeah, I recognize this all too well. There is an implicit assumption that all hardware is top-tier, all phones are flagships, all mobile internet is 5G, everyone has regular access to free WiFi, etc.

Engineers and designers should compile on the latest hardware, but the execution environment should be capped at the 10th percentile compute and connectivity at least one rotating day per week.

Employees should be nudged to rotate between Android and iOS on a monthly or so basis. Gate all the corporate software and ideally some perks (e.g. discounted rides as a ride-share employee) so that you have to experience both platforms.

jacobgorm

If they get the latest hardware to build on, the build itself will become slow too.

geocar

> can't write efficient code. Software bloat has already gotten worse by at least an order of magnitude in the past decade.

Efficiency is a good product goal: benchmarks and targets for improvement are easy to establish and measure, they make users happy, and thinking about how to make things faster is a good way to encourage people to read the code that's there, instead of focusing only on new features (aka code that's not there yet).

However, it doesn't sell very well: your next customer is probably not going to be impressed that your latest version is 20% faster than the last version they also didn't buy. This means that unless you have enough happy customers, you are going to have a hard time convincing yourself that I'm right, and you're going to keep looking for backhanded ways of making things better.

But reading code, and re-reading code is the only way you can really get it in your brain; it's the only way you can see better solutions than the compiler, and it's the only way you remember you have this useful library function you could reuse instead of writing more and more code; It's the only guaranteed way to stop software bloat, and giving your team the task of "making it better" is a great way to make sure they read it.

When you know what's there, your next feature will be smaller too. You might even get bonus features by making the change in the right place, instead of as close to the user as possible.

Management should be able to buy into that if you explain it to them, and if they can't, maybe you should look elsewhere...

> a much slower machine

Giving everyone laptops is also one of those things: They're slow even when they're expensive, and so developers are going to have to work hard to make things fast enough there, which means it'll probably be fine when they put it on the production servers.

I like having a big desktop[1] so my workstation can have lots of versions of my application running, which makes it a lot easier to determine which of my next ideas actually makes things better.

[1]: https://news.ycombinator.com/item?id=44501119

Using the best/fastest tools I can is what makes me faster, but my production hardware (i.e. the tin that runs my business) is low-spec because that's cheaper, and higher-spec doesn't have a measurable impact on revenue. But I measure this, and I make sure I'm always moving forward.

jayd16

The beatings will continue until the code improves.

I get the sentiment but taken literally it's counter productive. If the business cares about perf, put it in the sprint planning. But they don't. You'll just be writing more features with more personal pain.

For what it's worth, console gamedev has solved this. You test your game on the weakest console you're targeting. This usually shakes out as a stable perf floor for PC.

Lerc

Perhaps the better solution would be to have the fast machine but have a pseudo VM for just the software you are developing that uses up all of those extra resources with live analysis. The software runs like it is on a slower machine, but you could potentially gather plenty of info that would enable you to speed up the program for everyone.

guerrilla

Why complicated? Incentivize the shit out of it at the cultural level so they pressure their peers. This has gotten completely out of control.

djmips

They shouldn't work on a slower machine - however they should test on a slower machine. Always.

zh3

Develop on a fast machine, test and optimise on a slow one?

baq

it's absolutely the wrong approach.

Software should be performance tested, but you don't want a situation where the time of a single iteration is dominated by the duration of functional tests and build time. The faster software builds and tests, the quicker solutions get delivered. If giving your developers 64GB of RAM instead of 32GB halves test and build time, you should happily spend that money.

toast0

Assuming you build desktop software; you can build it on a beastly machine, but run it on a reasonable machine. Maybe local builds for special occasions, but it's special, you can wait.

Sure, occasionally run the software on the build machine to make sure it works on beastly machines; but let the developers experience the product on normal machines as the usual.

the__alchemist

Tangent: IMO top tier CPU is a no brainer if you play games, run performance-sensitive software (molecular dynamics or w/e), or compile code.

Look at GPU purchasing. It's full of price games, stock problems, scalpers, 3rd party boards with varying levels of factory overclock, and unreasonable prices. CPU is a comparative cake walk: go to Amazon or w/e, and buy the one with the highest numbers in its name.

AnotherGoodName

For games it's generally not worthwhile, since performance is almost entirely GPU-bound these days.

Almost all build guides will say ‘get midrange CPU X over high-end chip Y and put the savings toward a better GPU’.

Consoles in particular are just a decent GPU with a fairly low-end CPU these days. The Xbox One, with a 1.75GHz 8-core AMD CPU from a couple of generations ago, is still playing all the latest games.

the__alchemist

Anecdote: I got a massive performance (FPS) improvement in games after upgrading CPU recently, with no GPU change.

I think currently, that build guide doesn't apply based on what's going on with GPUs. Was valid in the past, and will be valid in the future, I hope!

Hikikomori

Depending on the game there can be a large difference. Ryzens with larger cache have a large benefit in singleplayer games with many units, like Civilization, and in most multiplayer games. It's not so much GHz speed as being able to keep most of the hot-path code and data you need in cache.

enraged_camel

>> For games its generally not worthwhile since the performance is almost entirely based on gpu these days.

It completely depends on the game. The Civilization series, for example, is mostly CPU bound, which is why turns take longer and longer as the game progresses.

AnotherGoodName

Factorio and Stellaris are others I'm aware of.

In Factorio it's an issue when you go way past the end game into 1000+ hour megabases.

Stellaris is just poorly coded, with lots of n^2 algorithms, and can run slowly on anything once population and fleets grow a bit.

For Civilization the AI does take turns faster with a higher-end CPU, but IMHO it's also no big deal since you spend most of your time scrolling the map and taking actions (GPU-bound perf).

I think it’s reasonable to state that the exceptions here are very exceptional.

jandrese

It’s not quite that simple. Often the most expensive chips trade off raw clock speed for more cores, which can be counterproductive if your game only uses 4 threads.

saati

The 8 core X3D chips beat the 16 core ones on almost all games, so that's not that simple.

sgarland

Tangential: TIL you can compile the Linux kernel in < 1 minute (on top-spec hardware). Seems it’s been a while since I’ve done that, because I remember it being more like an hour or more.

bornfreddy

I remember I was blown away by some machine that compiled it in ~45 minutes. Pentium Pro baby! Those were the days.

sgarland

My memory must be faulty, then, because I was mostly building it on an Athlon XP 2000+, which is definitely a few generations newer than a Pentium Pro.

I’m probably thinking of various other packages, since at the time I was all-in on Gentoo. I distinctly remember trying to get distcc running to have the other computers (a Celeron 333 MHz and Pentium III 550 MHz) helping out for overnight builds.

Can’t say that I miss that, because I spent more time configuring, troubleshooting, and building than using, but it did teach me a fair amount about Linux in general, and that’s definitely been worth it.

JonChesterfield

I'd like to know why making debian packages containing the kernel now takes substantially longer than a clean build of the kernel. That seems deeply wrong and rather reduces the joy at finding the kernel builds so quickly.

dahart

Spinning hard drives were soooo slow! Maybe very roughly an order of magnitude from SSDs and an order of magnitude from multi-core?

2shortplanks

This article skips a few important steps - how a faster CPU will have a demonstrable improvement on developer performance.

I would agree with the idea that faster compile times can have a significant improvement in performance. 30s is long enough for a developer to get distracted and go off and check their email, look at social media, etc. Basically turning 30s into 3s can keep a developer in flow.

The critical thing we’re missing here is how increasing the CPU speed will decrease the compile time. What if the compiler is IO bound? Or memory bound? Removing one bottleneck gets you to the next bottleneck; it doesn't necessarily get you all the performance gains you want.
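One crude way to get a hint of where the bottleneck sits is to sample system utilization while a build runs. A minimal Python sketch (assuming psutil is installed, and with `make -j` standing in for whatever your actual build command is):

    # Crude sketch: sample CPU and memory while a build runs to hint at the bottleneck.
    # Sustained ~100% CPU suggests CPU-bound; much lower suggests IO or memory stalls.
    import subprocess
    import psutil  # third-party; pip install psutil

    build = subprocess.Popen(["make", "-j"])   # placeholder for your build command
    samples = []
    while build.poll() is None:
        cpu = psutil.cpu_percent(interval=1.0)   # % of all cores busy over 1 second
        mem = psutil.virtual_memory().percent    # % of RAM in use
        samples.append((cpu, mem))

    if samples:
        avg_cpu = sum(c for c, _ in samples) / len(samples)
        peak_mem = max(m for _, m in samples)
        print(f"avg CPU {avg_cpu:.0f}%, peak RAM {peak_mem:.0f}% over {len(samples)}s")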

mordae

IO bound compiler would be weird. Memory, perhaps, but newer CPUs also tend to be able to communicate with RAM faster, so...

I think just having LSP give you answers 2x faster would be great for staying in flow.

crinkly

The compiler is usually IO bound on Windows due to NTFS: the small-files-in-the-MFT and lock contention problems. If you put everything on a ReFS volume it goes a lot faster.

Applies to git operations as well.

delusional

I wish I was compiler bound. Nowadays, with everything being in the cloud or whatever I'm more likely to be waiting for Microsoft's MFA (forcing me to pick up my phone, the portal to distractions) or getting some time limited permission from PIM.

The days when 30-second pauses for the compiler were the slowest part are long over.

1over137

You must be a web developer. Doing desktop development, nothing is in the cloud for me. I’m always waiting for my compiler.

necovek

More likely in an enterprise company using MS tooling (AD/Entra/Outlook/Teams/Office...) with "stringent" security settings.

It gets ridiculous quickly, really.

yoz-y

I don’t think that we live in an era where a hardware update can bring you down to 3s from 30s, unless the employer really cheaped out on the initial buy.

Now, in TFA they compare a laptop to a desktop, so I guess the title should be “you should buy two computers”.

jhanschoo

Important caveat that the author neglects to mention since they are discussing laptop CPUs in the same breath:

The limiting factor on high-end laptops is their thermal envelope. Get the better CPU as long as it is more power efficient. Then get brands that design proper thermal solutions.

krisroadruck

You simply cannot cram enough cooling and power into a laptop to have it equal a high-end desktop CPU of the same generation. There is physically not enough room. Just about the only way to even approach that would be to have liquid cooling loop ports out the back that you had to plug into an under-desk cooling loop, and I don't think anyone is doing that because at that point just get a frickin desktop computer + all the other conveniences that come with it (discrete peripherals, multiple monitors, et cetera). I honestly do not understand why so many devs seem to insist on doing work on a laptop. My best guess is this is mostly the Apple crowd, because Apple "desktops" are for the most part just the same hardware in a larger box instead of being actually a different class of machine. A little better on the thermals, but not the drastic jump you see between laptops and desktops from AMD and Intel.

necovek

If you have to do any travel for work, a lightweight but fast portable machine that is easy to lug around beats any productivity gains from two machines (one much faster) due to the challenge of keeping two devices in sync.

apt-apt-apt-apt

* Me shamefully hiding my no-fan MBA used for development... *

diminish

Multi-core operations like compiling C/C++ could benefit.

Single thread performance of 16-core AMD Ryzen 9 9950X is only 1.8x of my poor and old laptop's 4-core i5 performance. https://www.cpubenchmark.net/compare/6211vs3830vs3947/AMD-Ry...

I'm waiting for >1024-core ARM desktops with >1TB of unified GPU memory, to be able to run some large LLMs.

Ping me when someone builds this :)

zh3

Yes, just went from an i7-3770 (12 years old!) to a 9900X, as I tend to wait for a doubling of single-core performance before upgrading (got through a lot of PCs in the 386/486 era!). It's actually only 50% faster according to cpubenchmark [0] but is twice as fast in local usage (multithread is reported about 3 times faster).

Also got a Mac Mini M4 recently, and that thing feels slow in comparison to both these systems - likely more of a UI/software thing (I only use the M4 for Xcode) than being down to raw CPU performance.

[0] https://www.cpubenchmark.net/compare/Intel-i9-9900K-vs-Intel...

fmajid

M4 is amazing hardware held back by a sub-par OS. One of the biggest bottlenecks when compiling software on a Mac is notarization, where every executable you compile causes an HTTP call to Apple. In addition to being a privacy nightmare, this makes the configure step in autoconf-based packages excruciatingly slow.

gentooflux

They added always-connected DRM to software development, neat

glitchc

Does this mean that compilation fails without an internet connection? If so, that's horrifying.

torginus

I jumped ahead about 5 generations of Intel when I got my new laptop, and while the performance wasn't much better, the fact that I went from a 10-pound workstation beast that sounded like a vacuum cleaner to a svelte 13-inch laptop that runs off a tiny USB-C brick and barely spins its fans while being just as fast made it worthwhile for me.

miiiiiike

Depends on the workload.

I spent a few grand building a new machine with a 24-core CPU. And, while my gcc Docker builds are MUCH faster, the core Angular app still builds a few seconds slower than on my years old MacBook Pro. Even with all of my libraries split into atoms, built with Turbo, and other optimizations.

6-10 seconds to see a CSS change make its way from the editor to the browser is excruciating after a few hours, days, weeks, months, and years.

renewiltord

Web development is crazy. Went from a Java/C codebase to a webdev company using TS. The latter would take minutes to build. The former would build in seconds and you could run a simulated backtest before the web app would be ready.

It blew my mind. Truly this is more complicated than trading software.

thunderfork

A lot of this seems to have gotten a lot better with esbuild for me, at least, and maybe tsgo will be another big speed-up once it's done...

kaspar030

> Top end CPUs are about 3x faster than the comparable top end models 3 years ago

I wish that were true, but the current Ryzen 9950 is maybe 50% faster than the two generations older 5950, at compilation workloads.

szatkus

The author used kernel compilation as a benchmark. Which is weird, because for most projects a build process isn't as scalable as that (especially in the node.js ecosystem), even less after a full build.

tgma

Not even. Probably closer to 30%, and that's if you are doing actual many-core compile workloads on your critical path.

ben-schaaf

Phoronix has actual benchmarks: https://www.phoronix.com/review/amd-ryzen-9950x-9900x/2

It's not 3x, but it's most certainly above 1.3x. Average for compilation seems to be around 1.7-1.8x.

nchmy

What's with the post title being completely incongruent with the article title? Moreover, I'm pretty sure this was not the case when it was first posted...

defanor

This compares a new desktop CPU to older laptop ones. There are much more complete benchmarks on more specialized websites [0, 1].

> If you can justify an AI coding subscription, you can justify buying the best tool for the job.

I personally can justify neither, but I'm not seeing how one translates into the other: is a faster CPU supposed to replace such a subscription? I thought those are more about large and closed models, and that GPUs would be more cost-effective as such a replacement anyway. And if it is not, it is quite a stretch to assume that all those who sufficiently benefit from a subscription would benefit at least as much from a faster CPU.

Besides, usually it is not simply "a faster CPU": sockets and chipsets keep changing, so that would also be a new motherboard, new CPU cooler, likely new memory, which is basically a new computer.

[0] https://www.cpubenchmark.net/

[1] https://www.tomshardware.com/pc-components/cpus