The world could run on older hardware if software optimization was a priority
841 comments
· May 13, 2025 · caseyy
dahart
The dumbest and most obvious of realizations finally dawned on me after trying to build a software startup that was based on quality differentiation. We were sure that a better product would win people over and lead to viral success. It didn’t. Things grew, but so slowly that we ran out of money after a few years before reaching break even.
What I realized is that lower costs, and therefore lower quality, are a competitive advantage in a competitive market. Duh. I’m sure I knew and said that in college and for years before my own startup attempt, but this time I really felt it in my bones. It suddenly made me realize exactly why everything in the market is mediocre, and why high quality things always get worse when they get more popular. Pressure to reduce costs grows with the scale of a product. Duh. People want cheap, so if you sell something people want, someone will make it for less by cutting “costs” (quality). Duh. What companies do is pay the minimum they need in order to stay alive & profitable. I don’t mean it never happens, sometimes people get excited and spend for short bursts, young companies often try to make high quality stuff, but eventually there will be an inevitable slide toward minimal spending.
There’s probably another name for this, it’s not quite the Market for Lemons idea. I don’t think this leads to market collapse, I think it just leads to stable mediocrity everywhere, and that’s what we have.
xg15
This is also the exact reason why all the bright-eyed pieces claiming that some technology would increase workers' productivity and therefore allow more leisure time for the worker (20 hour workweek etc) are either hopelessly naive or pure propaganda.
Increased productivity means that the company has a new option to either reduce costs or increase output at no additional cost, one of which it has to do to stay ahead in the rat-race of competitors. Investing the added productivity into employee leisure time would be in the best case foolish and in the worst case suicidal.
diputsmonro
Which is why government regulations that set the boundaries for what companies can and can't get away with (such as but not limited to labor laws) are so important. In absence of guardrails, companies will do anything to get ahead of the competition. And once one company breaks a norm or does something underhanded, all their competitors must do the same thing or they risk ceding a competitive advantage. It becomes a race to the bottom.
Of course we learned this all before a century ago, it's why we have things like the FDA in the first place. But this new generation of techno-libertarians and DOGE folks who grew up in a "move fast and break things" era, who grew up in the cleanest and safest times the world has ever seen, have no understanding or care of the dangers here and are willing to throw it all away because of imagined inefficiencies. Regulations are written in blood, and those that remove them will have new blood on their hands.
musicale
> 20 hour workweek etc
We have that already. It's called part-time jobs. Usually they don't pay as much as full-time jobs, provide no health insurance or other benefits, etc.
const_cast
Yes, this is an observation I've made about the illusion of choice in so-called free markets.
In actuality, everyone is doing the same thing and their decisions are already made for them. Companies don't just act evil because they are evil. They act evil because all they can ever be is evil. If they don't, then they lose. So what's left?
Facebook becoming an ad-ridden disaster was, in a way, predestined. Unavoidable.
satvikpendem
Indeed, and I don't know why people keep saying that we ever thought the 20 hour workweek was feasible, because there is always more work to be done. Work expands to fill the constraints available, similar to Parkinson's Law.
panick21_
This misses a huge part of the story: an increase in productivity means a larger economy, means more efficient use of resources, means compensation goes up over time. If you want to live the life of somebody that did 40h a week 40 years ago and only work 20h, you can already have most of that, and still have many options somebody back then didn't have that are virtually free.
The actual realization is that most people would simply rather work 40h a week (or more) and spend their money on whatever they want to spend it on.
Especially many of us here can do so easily. I personally work 80% and could reduce it further if my goal was maximum leisure.
By far the biggest reason it doesn't feel that way is that housing policies in most of the Western world have been utterly and completely braindead. That and the ever increasing cost of health care as people get older and older.
bruce511
You're on the right track, but missing an important aspect.
In most cases the company making the inferior product didn't spend less. But they did spend differently. As in, they spent a lot on marketing.
You were focused on quality, and hoped for viral word of mouth marketing. Your competitors spent the same as you, but half their budget went to marketing. Since people buy what they know, they won.
Back in the day MS made Windows 95. IBM made OS/2. MS spent a billion $ on marketing Windows 95. That's a billion back when a billion was a lot. Just for the launch.
Techies think that Quality leads to sales. It does not. Marketing leads to sales. There literally is no secret to business success other than internalizing that fact.
nostrademons
Quality can lead to sales - this was the premise behind the original Google (they never spent a dime on advertising their own product until the Parisian Love commercial [1] came out in 2009, a decade after founding), and a few other tech-heavy startups like Netscape or Stripe. Microsoft certainly didn't spend a billion $ marketing Altair Basic.
The key point to understand is the only effort that matters is that which makes the sale. Business is a series of transactions, and each individual transaction is binary: it either happens or it doesn't. Sometimes, you can make the sale by having a product which is so much better than alternatives that it's a complete no-brainer to use it, and then makes people so excited that they tell all their friends. Sometimes you make the sale by reaching out seven times to a prospect that's initially cold but warms up in the face of your persistence. Sometimes, you make the sale by associating your product with other experiences that your customers want to have, like showing a pretty woman drinking your beer on a beach. Sometimes, you make the sale by offering your product 80% off to people who will switch from competitors and then jacking up the price once they've become dependent on it.
You should know which category your product fits into, and how and why customers will buy it, because that's the only way you can make smart decisions about how to allocate your resources. Investing in engineering quality is pointless if there is no headroom to deliver experiences that will make a customer say "Wow, I need to have that." But if you are sitting on one of those gold mines, capitalizing on it effectively is orders of magnitude more efficient than trying to market a product that doesn't really work.
jcadam
It's not just software -- My wife owns a restaurant. Operating a restaurant you quickly learn the sad fact that quality is just not that important to your success.
We're still trying to figure out the marketing. I'm convinced the high failure rate of restaurants is due largely to founders who know how to make good food and think their culinary skills plus word-of-mouth will get them sales.
Lio
Pure marketing doesn’t always win. There are counter examples.
Famously Toyota beat many companies that were basing their strategy on marketing rather than quality.
They were able to use quality as part of their marketing.
My father in law worked in a car showroom and talks about when they first installed carpet there.
No one did that previously. The subtle point to customers being that Toyotas didn’t leak oil.
jolt42
IIRC, Microsoft was also charging Dell for a copy of Windows even if they didn't install it on the PC! And yeah OS/2 was ahead by miles.
panick21_
This is a massive oversimplification of the Windows and OS/2 story. Anybody that has studied this understands that it wasn't just marketing. I can't actually believe that anybody who has read deeply about this believes it was just marketing.
And it's also a cherry-picked example. There are so many counter-examples: how Sun out-competed HP, IBM, Apollo and DEC. Or how AMD in the last 10 years out-competed Intel; sure, it's all marketing. I could go on with 100s of examples just in computer history.
Marketing is clearly an important aspect in business, nobody denies that. But there are many other things that are important as well. You can have the best marketing in the world; if you fuck up your production and your supply chain, your company is toast. You can have the best marketing in the world; if your product sucks, people will reject it (see the BlackBerry Storm as a nice example). You can have the best marketing in the world; if your finance people fuck up, the company might go to shit anyway.
Anybody that reaches for simple explanations like 'marketing always wins' is just talking nonsense.
naasking
> What I realized is that lower costs, and therefore lower quality,
This implication is the big question mark. It's often true but it's not at all clear that it's necessarily true. Choosing better languages, frameworks, tools and so on can all help with lowering costs without necessarily lowering quality. I don't think we're anywhere near the bottom of the cost barrel either.
I think the problem is focusing on improving the quality of the end products directly when the quality of the end product for a given cost is downstream of the quality of our tools. We need much better tools.
For instance, why are our languages still obsessed with manipulating pointers and references as a primary mode of operation, just so we can program yet another linked list? Why can't you declare something as a "Set with O(1) insert" and the language or its runtime chooses an implementation? Why isn't direct relational programming more common? I'm not talking programming in verbose SQL, but something more modern with type inference and proper composition, more like LINQ, eg. why can't I do:
let usEmployees = from x in Employees where x.Country == "US";
func byFemale(Query<Employees> q) =>
from x in q where x.Sex == "Female";
let femaleUsEmployees = byFemale(usEmployees);
These abstract over implementation details that we're constantly fiddling with in our end programs, often for little real benefit. Studies have repeatedly shown that humans can write less than 20 lines of correct code per day, so each of those lines should be as expressive and powerful as possible to drive down costs without sacrificing quality.
ndriscoll
You can do this in Scala[0], and you'll get type inference and compile time type checking, informational messages (like the compiler prints an INFO message showing the SQL query that it generates), and optional schema checking against a database for the queries your app will run. e.g.
case class Person(name: String, age: Int)
inline def onlyJoes(p: Person) = p.name == "Joe"
// run a SQL query
run( query[Person].filter(p => onlyJoes(p)) )
// Use the same function with a Scala list
val people: List[Person] = ...
val joes = people.filter(p => onlyJoes(p))
// Or, after defining some typeclasses/extension methods
val joesFromDb = query[Person].onlyJoes.run
val joesFromList = people.onlyJoes
This integrates with a high-performance functional programming framework/library that has a bunch of other stuff like concurrent data structures, streams, an async runtime, and a webserver[1][2]. The tools already exist. People just need to use them.
[0] https://github.com/zio/zio-protoquill?tab=readme-ov-file#sha...
mike_hearn
Hm, you could do that quite easily but there isn't much juice to be squeezed from runtime selected data structures. Set with O(1) insert:
  var set = new HashSet<Employee>();
Done. Don't need any fancy support for that. Or if you want to load from a database, using the repository pattern and Kotlin this time instead of Java:
  @JdbcRepository(dialect = ANSI)
  interface EmployeeQueries : CrudRepository<Employee, String> {
    fun findByCountryAndGender(country: String, gender: String): List<Employee>
  }
  val femaleUSEmployees = employees.findByCountryAndGender("US", "Female")
That would turn into an efficient SQL query that does a WHERE ... AND ... clause. But you can also compose queries in a type safe way client side using something like jOOQ or Criteria API.
bflesch
Your argument makes sense. I guess now it's your time to shine and to be the change you want to see in the world.
gus_massa
Isn't this comprehension in Python https://www.w3schools.com/python/python_lists_comprehension.... ?
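(For comparison, here is a rough Python sketch of the query-composition example from upthread using comprehensions/generators; the Employee fields and data are invented for illustration.)

  from dataclasses import dataclass

  @dataclass
  class Employee:  # field names mirror the pseudo-LINQ example upthread, invented here
      name: str
      country: str
      sex: str

  employees = [Employee("Alice", "US", "Female"), Employee("Bob", "DE", "Male")]

  # Compose lazily over any iterable, much like the pseudo-LINQ query
  us_employees = (e for e in employees if e.country == "US")

  def by_female(q):
      return (e for e in q if e.sex == "Female")

  female_us_employees = list(by_female(us_employees))  # [Employee(name='Alice', ...)]

This gives the same kind of composition over in-memory collections, but unlike LINQ's IQueryable (or the Quill/Scala example above) a plain comprehension doesn't get translated into SQL, which was part of the original ask.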
dragandj
Clojure, friend. Clojure.
Other functional languages, too, but Clojure. You get exactly this, minus all the <'s =>'s ;'s and other irregularities, and minus all the verbosity...
rjbwork
I consider functional thinking and ability to use list comprehensions/LINQ/lodash/etc. to be fundamental skills in today's software world. The what, not the how!
jg0r3
Could you link any of these studies?
I couldn't find anything specific when searching.
caseyy
> I don’t think this leads to market collapse
You must have read that the Market for Lemons is a type of market failure or collapse. Market failure (in macroeconomics) does not yet mean collapse. It describes a failure to allocate resources in the market such that the overall welfare of the market participants decreases. With this decrease may come a reduction in trade volume. When the trade volume decreases significantly, we call it a market collapse. Usually, some segment of the market that existed ceases to exist (example in a moment).
There is a demand for inferior goods and services, and a demand for superior goods. The demand for superior goods generally increases as the buyer becomes wealthier, and the demand for inferior goods generally increases as the buyer becomes less wealthy.
In this case, wealthier buyers cannot buy the superior relevant software previously available, even if they create demand for it. Therefore, we would say a market fault has developed as the market could not organize resources to meet this demand. Then, the volume of high-quality software sales drops dramatically. That market segment collapses, so you are describing a market collapse.
> There’s probably another name for this
You might be thinking about "regression to normal profits" or a "race to the bottom." The Market for Lemons is an adjacent scenario to both, where a collapse develops due to asymmetric information in the seller's favor. One note about macroecon — there's never just one market force or phenomenon affecting any real situation. It's always a mix of some established and obscure theories.
dahart
The Wikipedia page for Market for Lemons more or less summarizes it as a condition of defective products caused by information asymmetry, which can lead to adverse selection, which can lead to market collapse.
https://en.m.wikipedia.org/wiki/The_Market_for_Lemons
The Market for Lemons idea seems like it has merit in general but is too strong and too binary to apply broadly, that’s where I was headed with the suggestion for another name. It’s not that people want low quality. Nobody actually wants defective products. People are just price sensitive, and often don’t know what high quality is or how to find it (or how to price it), so obviously market forces will find a balance somewhere. And that balance is extremely likely to be lower on the quality scale than what people who care about high quality prefer. This is why I think you’re right about the software market tolerating low quality; it’s because market forces push everything toward low quality.
mtalantikite
My wife has a perfume business. She makes really high quality extrait de parfums [1] with expensive materials and great formulations. But the market is flooded with eau de parfums -- which are far more diluted than an extrait -- using cheaper ingredients, selling for about the same price. We've had so many conversations about whether she should dilute everything like the other companies do, but you lose so much of the beauty of the fragrance when you do that. She really doesn't want to go the route of mediocrity, but that does seem to be what the market demands.
codethief
> [1] https://studiotanais.com/
First, honest impression: At least on my phone (Android/Chromium) the typography and style of the website don't quite match that "high quality & expensive ingredients" vibe the parfums are supposed to convey. The banners (3 at once on the very first screen, one of them animated!), italic text, varying font sizes, and janky video header would be rather off-putting to me. Maybe it's also because I'm not a huge fan of flat designs, partially because I find they make it difficult to visually distinguish important and less important information, but also because I find them a bit… unrefined and inelegant. And, again, this is on mobile, so maybe on desktop it comes across differently.
Disclaimer: I'm not a designer (so please don't listen only to me and take everything with a grain of salt) but I did work as a frontend engineer for a luxury retailer for some time.
jimbokun
She should double the price so customers wonder why hers costs so much more. Then have a sales pitch explaining the difference.
Some customers WANT to pay a premium just so they know they’re getting the best product.
LordGrignard
To be blunt,
this website looks like a scam website redirector, the kind where you have to click on 49 ads and wait for 3 days before you get to your link. The video playing immediately makes me think it's a Google ad unrelated to what the website is about. The different font styles remind me of the middle school HTML projects we had to do, with each line in a different size and font face to prove that we knew how to use <font face> and <font size>. All it's missing is a Jokerman font.
esafak
Offer an eau de parfum line for price anchoring, and market segmentation. Win win.
_puk
Is that what the market demands, or is the market unable to differentiate?
From the site there's a huge assumption that potential customers are aware of what extrait de parfum is vs eau de parfum (or even eau de toilette!).
Might be worth a call out that these fragrances are in fact a standard above the norm.
"The highest quality fragrance money can buy" kind of thing.
ayewo
> But the market is flooded with eau de parfums -- which are far more diluted than a extrait -- using cheaper ingredients, selling for about the same price.
Has she tried raising prices? To signal that her product is high quality and thus more expensive than her competition?
rom16384
I had the same realization but with car mechanics. If you drive a beater you want to spend the least possible on maintenance. On the other hand, if the car mechanic cares about cars and their craftsmanship they want to get everything to tip-top shape at high cost. Some other mechanics are trying to scam you and get the most amount of money for the least amount of work. And most people looking for car mechanics want to pay the least amount possible, and don't quite understand if a repair should be expensive or not. This creates a downward pressure on price at the expense of quality and penalizes the mechanics that care about quality.
AtlasBarfed
Luckily for mechanics, the supply of actual blue-collar hands-on labor is so small that good mechanics actually can charge more.
The issue is that you have to be able to distinguish a good mechanic from a bad mechanic cuz they all get to charge a lot because of the shortage. Same thing for plumbing, electrical, HVAC, etc etc etc
But I understand your point.
brundolf
Exactly. People on HN get angry and confused about low software quality, compute wastefulness, etc, but what's happening is not a moral crisis: the market has simply chosen the trade-off it wants, and industry has adapted to it
If you want to be rewarded for working on quality, you have to find a niche where quality has high economic value. If you want to put effort into quality regardless, that's a very noble thing and many of us take pleasure in doing so, but we shouldn't act surprised when we aren't economically rewarded for it
rpnx
I actually disagree. I think that people will pay more for higher quality software, but only if they know the software is higher quality.
It's great to say your software is higher quality, but the question I have is whether or not it is higher quality with the same or similar features, and second, whether the better quality is known to the customers.
It's the same way that I will pay hundreds of dollars for Jetbrains tools each year even though ostensibly VS Code has most of the same features, but the quality of the implementation greatly differs.
If a new company made their IDE better than jetbrains though, it'd be hard to get me to fork over money. Free trials and so on can help spread awareness.
dsr_
The Lemon Market exists specifically when customers cannot tell, prior to receipt and usage, whether they are buying high quality or low quality.
wang_li
> but only if they know the software is higher quality.
I assume all software is shit in some fashion because every single software license includes a "no fitness for any particular purpose" clause. Meaning, if your word processor doesn't process words, you can't sue them.
When we get consumer protection laws that require that software does what it says on the tin, quality will start mattering.
mjr00
> the market sells as if all goods were high-quality
The phrase "high-quality" is doing work here. The implication I'm reading is that poor performance = low quality. However, the applications people are mentioning in this comment section as low performance (Teams, Slack, Jira, etc) all have competitors with much better performance. But if I ask a person to pick between Slack and, say, a fast IRC client like Weechat... what do you think the average person is going to consider low-quality? It's the one with a terminal-style UI, no video chat, no webhook integrations, and no custom avatars or emojis.
Performance is a feature like everything else. Sometimes, it's a really important feature; the dominance of Internet Explorer was destroyed by Chrome largely because it was so much faster than IE when it was released, and Python devs are quickly migrating to uv/ruff due to the performance improvement. But when you start getting into the territory of "it takes Slack 5 seconds to start up instead of 10ms", you're getting into the realm where very few people care.
dgb23
You are comparing applications with wildly different features and UI. That's neither an argument for nor against performance as an important quality metric.
How fast you can compile, start and execute some particular code matters. The experience of using a program that performs well if you use it daily matters.
Performance is not just a quantitative issue. It leaks into everything, from architecture to delivery to user experience. Bad performance has expensive secondary effects, because we introduce complexity to patch over it like horizontal scaling, caching or eventual consistency. It limits our ability to make things immediately responsive and reliable at the same time.
kristofferR
> You are comparing applications with wildly different features and UI. That's neither an argument for nor against performance as an important quality metric.
Disagree, the main reason so many apps are using "slow" languages/frameworks is precisely that it allows them to develop way more features way quicker than more efficient and harder languages/frameworks.
mjr00
> You are comparing applications with wildly different features and UI. That's neither an argument for nor against performance as an important quality metric.
I never said performance wasn't an important quality metric, just that it's not the only quality metric. If a slow program has the features I need and a fast program doesn't, the slow program is going to be "higher quality" in my mind.
> How fast you can compile, start and execute some particular code matters. The experience of using a program that performs well if you use it daily matters.
Like any other feature, whether or not performance is important depends on the user and context. Chrome being faster than IE8 at general browsing (rendering pages, opening tabs) was very noticeable. uv/ruff being faster than pip/poetry is important because of how the tools integrate into performance-sensitive development workflows. Does Slack taking 5-10 seconds to load on startup matter? -- to me not really, because I have it come up on boot and forget about it until my next system update forced reboot. Do I use LibreOffice or Word and Excel, even though LibreOffice is faster? -- I use Word/Excel because I've run into annoying compatibility issues enough times with LO to not bother. LibreOffice could reduce their startup and file load times to 10 picoseconds and I would still use MS Office, because I just want my damn documents to keep the same formatting my colleagues using MS Office set on their Windows computers.
Now of course I would love the best of all worlds; programs to be fast and have all the functionality I want! In reality, though, companies can't afford to build every feature, performance included, and need to pick and choose what's important.
homebrewer
If you're being honest, compare Slack and Teams not with weechat, but with Telegram. Its desktop client (along with other clients) is written by an actually competent team that cares about performance, and it shows. They have enough money to produce a native client written in C++ that has fantastic performance and is high quality overall, but these software behemoths with budgets higher than most countries' GDP somehow never do.
bri3d
This; "quality" is such an unclear term here.
In an efficient market people buy things based on a value which in the case of software, is derived from overall fitness for use. "Quality" as a raw performance metric or a bug count metric aren't relevant; the criteria is "how much money does using this product make or save me versus its competition or not using it."
In some cases there's a Market of Lemons / contract / scam / lack of market transparency issue (ie - companies selling defective software with arbitrary lock-ins and long contracts), but overall the slower or more "defective" software is often more fit for purpose than that provided by the competition. If you _must_ have a feature that only a slow piece of software provides, it's still a better deal to acquire that software than to not. Likewise, if software is "janky" and contains minor bugs that don't affect the end results it provides, it will outcompete an alternative which can't produce the same results.
caseyy
That's true. I meant it in a broader sense. Quality = {speed, function, lack of bugs, ergonomics, ... }.
davidw
I don't think it's necessarily a market for lemons. That involves information asymmetry.
Sometimes that happens with buggy software, but I think in general, people just want to pay less and don't mind a few bugs in the process. Compare and contrast what you'd have to charge to do a very thorough process with multiple engineers checking every line of code and many hours of rigorous QA.
I once did some software for a small book shop where I lived in Padova, and created it pretty quickly and didn't charge the guy - a friend - much. It wasn't perfect, but I fixed any problems (and there weren't many) as they came up and he was happy with the arrangement. He was patient because he knew he was getting a good deal.
graemep
I do think there is an information problem in many cases.
It is easy to get information on features. It is hard to get information on reliability or security.
The result is worsened because vendors compete on features, therefore they all make the same trade off of more features for lower quality.
HideousKojima
Some vendors even make it impossible to get information. See Oracle and Microsoft forbidding publishing benchmarks for their SQL databases.
davidw
There's likely some, although it depends on the environment. The more users of the system there are, the more there are going to be reviews and people will know that it's kind of buggy. Most people seem more interested in cost or features though, as long as they're not losing hours of work due to bugs.
genghisjahn
I have worked for large corporations that have foisted awful HR, expense reporting, time tracking and insurance "portals" that were so awful I had to wonder if anyone writing the checks had ever seen the product. I brought up the point several times that if my team tried to tell a customer that we had their project all done but it was full of as many bugs and UI nightmares as these back office platforms, I would be chastised, demoted and/or fired.
hamburglar
I used to work at a large company that had a lousy internal system for doing performance evals and self-reviews. The UI was shitty, it was unreliable, it was hard to use, it had security problems, it would go down on the eve of reviews being due, etc. This all stressed me out until someone in management observed, rather pointedly, that the reason for existence of this system is that we are contractually required to have such a system because the rules for government contracts mandate it, and that there was a possibility (and he emphasized the word possibility knowingly) that the managers actually are considering their personal knowledge of your performance rather than this performative documentation when they consider your promotions and comp adjustments. It was like being hit with a zen lightning bolt: this software meets its requirements exactly, and I can stop worrying about it. From that day on I only did the most cursory self-evals and listed minimal accomplishments, and my career progressed just fine.
You might not think about this as “quality” but it does have the quality of meeting the perverse functional requirements of the situation.
Ajedi32
> I had to wonder if anyone writing the checks had ever seen the product
Probably not, and that's like 90% of the issue with enterprise software. Sadly enterprise software products are often sold based mainly on how many boxes they check in the list of features sent to management, not based on the actual quality and usability of the product itself.
api
What you're describing is Enterprise(tm) software. Some consultancy made tens of millions of dollars building, integrating, and deploying those things. This of course was after they made tens of millions of dollars producing reports exploring how they would build, integrate, and deploy these things and all the various "phases" involved. Then they farmed all the work out to cheap coders overseas and everyone went for golf.
Meanwhile I'm a founder of startup that has gotten from zero to where it is on probably what that consultancy spends every year on catering for meetings.
econ
If they think it is unimportant talk as if it is. It could be more polished. Do we want to impress them or just satisfy their needs?
monkeyelite
The job it’s paid to do is satisfy regulation requirements.
regularfry
Across three jobs, I have now seen three different HR systems from the same supplier which were all differently terrible.
mamcx
> the market buys bug-filled, inefficient software about as well as it buys pristine software
In fact, the realization is that the market buys support.
And that includes Google and other companies that don't offer much human support.
This is the key.
Support is manifested in many ways:
* There is information about it (docs, videos, blogs, ...)
* There are people that help me ('look ma, this is how you use google')
* There is support for the thing I use ('OS, Browser, Formats, ...')
* And for my way of working ('Excel let me do any app there...')
* And finally, actual people (that is the #1 thing that keeps alive even the worst ERP on earth). This also includes marketing, sales people, etc. These are signals of having support, even if it is not exactly the best. If I go to an enterprise and they only have engineers, that will be a bad signal, because, well, developers tend to be terrible at other stuff, and the other stuff is the support that matters.
If you have a good product but there is no support, it is dead.
And if you wanna fight a worse product, it is smart to reduce the need for support ('bugs, performance issues, platforms, ...') for YOUR TEAM because you wanna reduce YOUR COSTS, but you NEED to add support in other dimensions!
The easiest way for a small team is to just add humans (that is the MOST scarce source of support). After that, it needs to get creative.
(Also, this means you need to communicate your advantages well, because there are people that value some kinds of support more than others. 'Having the code vs proprietary' is a good example: a lot prefer the proprietary option with support over having the code, I mean.)
archargelod
So you're telling me that if companies want to optimize profitability, they’d release inefficient, bug-ridden software with bad UI—forcing customers to pay for support, extended help, and bug fixes?
Suddenly, everything in this crazy world is starting to make sense.
tliltocatl
Afaik, SAS does exactly that (I don't have any experience with them personally, just retelling gossip). Also Matlab. Not that they are BAD, it's just that 95% of matlab code could be python or even fortran with less effort. But matlab has really good support (aka telling the people in charge how they are tailored to solve this exact problem).
hermitShell
Suddenly, Microsoft makes perfect sense!
lifeisstillgood
This really focuses on the single metric that can be used throughout the lifetime of a product … a really good point that keeps unfolding.
Starting an OSS product - write good docs. Got a few enterprise people interested - a "customer success person" is the most important marketing you can do …
hombre_fatal
Even if end-users had the data to reasonably tie-break on software quality and performance, as I scroll my list of open applications not a single one of them can be swapped out with another just because it were more performant.
For example: Docker, iterm2, WhatsApp, Notes.app, Postico, Cursor, Calibre.
I'm using all of these for specific reasons, not for reasons so trivial that I can just use the best-performing solution in each niche.
So it seems obviously true that it's more important that software exists to fill my needs in the first place than it pass some performance bar.
kasey_junk
I’m surprised in your list because it contains 3 apps that I’ve replaced specifically due to performance issues (docker, iterm and notes). I don’t consider myself particularly performance sensitive (at home) either. So it might be true that the world is even _less_ likely to pay for resource efficiency than we think.
defen
What did you replace Docker with?
hombre_fatal
Podman might have some limited API compatibility, but it's a completely different tool. Just off the bat it's not compatible with Skaffold, apparently.
That an alternate tool might perform better is compatible with the claim that performance alone is never the only difference between software.
Podman might be faster than Docker, but since it's a different tool, migrating to it would involve figuring out any number of breakage in my toolchain that doesn't feel worth it to me since performance isn't the only thing that matters.
jpalawaga
Except you’ve already swapped terminal for iterm, and orbstack already exists in part because docker left so much room for improvement, especially on the perf front.
hombre_fatal
I swapped Terminal for iTerm2 because I wanted specific features, not because of performance. iTerm2 is probably slower for all I care.
Another example is that I use oh-my-zsh, which adds a weirdly long startup time to a shell session, but it lets me use plugins that add things like git status and kubectl context to my prompt instead of fiddling with that myself.
cogman10
> But IC1-3s write 99% of software, and the 1 QA guy in 99% of tech companies
I'd take this one step further, 99% of the software written isn't being done with performance in mind. Even here in HN, you'll find people that advocate for poor performance because even considering performance has become a faux pas.
That means your L4/5 and beyond engineers are fairly unlikely to have any sort of sense when it comes to performance. Businesses do not prioritize efficient software until their current hardware is incapable of running their current software (and even then, they'll prefer to buy more hardware if possible).
reidrac
The user tolerance has changed as well because of the web 2.0 "perpetual beta" and SaaS replacing other distribution models.
Also Microsoft has educated now several generations to accept that software fails and crashes.
Because "all software is the same", customers may not appreciate good software when they're used to living with bad software.
azemetre
Is this really tolerance and not just monopolistic companies abusing their market position? I mean workers can't even choose what software they're allowed to use, those choices are made by the executive/management class.
titzer
I like to point out that since ~1980, computing power has increased about 1000X.
If dynamic array bounds checking cost 5% (narrator: it is far less than that), and we turned it on everywhere, we could have computers that are just a mere 950X faster.
If you went back in time to 1980 and offered the following choice:
I'll give you a computer that runs 950X faster and doesn't have a huge class of memory safety vulnerabilities, and you can debug your programs orders of magnitude more easily, or you can have a computer that runs 1000X faster and software will be just as buggy, or worse, and debugging will be even more of a nightmare.
People would have their minds blown at 950X. You wouldn't even have to offer 1000X. But guess what we chose...
Personally I think the 1000Xers kinda ruined things for the rest of us.
_aavaa_
Except we've squandered that 1000x not on bounds checking but on countless layers of abstractions and inefficiency.
Gigachad
Am I taking crazy pills or are programs not nearly as slow as HN comments make them out to be? Almost everything loads instantly on my 2021 MacBook and 2020 iPhone. Every program is incredibly responsive. 5 year old mobile CPUs load modern SPA web apps with no problems.
The only thing I can think of that’s slow is Autodesk Fusion starting up. Not really sure how they made that so bad but everything else seems super snappy.
maccard
Slack, teams, vs code, miro, excel, rider/intellij, outlook, photoshop/affinity are all applications I use every day that take 20+ seconds to launch. My corporate VPN app takes 30 seconds to go from a blank screen to deciding if it's going to prompt me for credentials or remember my login, every morning. This is on an i9 with 64GB ram, and 1Gb fiber.
On the website front - Facebook, twitter, Airbnb, Reddit, most news sites, all take 10+ seconds to load or be functional, and their core functionality has regressed significantly in the last decade. I'm not talking about features that I prefer, but as an example, if you load two links in Reddit in two different tabs my experience has been that it's 50/50 whether they'll actually both load or one gets stuck showing loading skeletons.
viraptor
What timescale are we talking about? Many DOS stock and accounting applications were basically instantaneous. There are some animations on iPhone that you can't disable that take longer than a series of keyboard actions of a skilled operator in the 90s. Windows 2k with a stripped shell was way more responsive than today's systems as long as you didn't need to hit the harddrives.
The "instant" today is really laggy compared to what we had. Opening Slack takes 5s on a flagship phone and opening a channel which I just had open and should be fully cached takes another 2s. When you type in JIRA the text entry lags and all the text on the page blinks just a tiny bit (full redraw). When pages load on non-flagship phones (i.e. most of the world), they lag a lot, which I can see on monitoring dashboards.
flohofwoe
I guess you don't need to wrestle with Xcode?
Somehow the Xcode team managed to make startup and some features in newer Xcode versions slower than older Xcode versions running on old Intel Macs.
E.g. the ARM Macs are a perfect illustration that software gets slower faster than hardware gets faster.
After a very short 'free lunch' right after the Intel => ARM transition we're now back to the same old software performance regression spiral (e.g. new software will only be optimized until it feels 'fast enough', and that 'fast enough' duration is the same no matter how fast the hardware is).
Another excellent example is the recent release of the Oblivion Remaster on Steam (which uses the brand new UE5 engine):
On my somewhat medium-level PC I have to reduce the graphics quality in the Oblivion Remaster so much that the result looks worse than 14-year old Skyrim (especially outdoor environments), and that doesn't even result in a stable 60Hz frame rate, while Skyrim runs at a rock-solid 60Hz and looks objectively better in the outdoors.
E.g. even though the old Skyrim engine isn't by far as technologically advanced as UE5 and had plenty of performance issues at launch on a ca. 2010 PC, the Oblivion Remaster (which uses a "state of the art" engine) looks and performs worse than its own 14 years old predecessor.
I'm sure the UE5-based Oblivion remaster can be properly optimized to beat Skyrim both in looks and performance, but apparently nobody cared about that during development.
tjader
I just clicked on the network icon next to the clock on a Windows 11 laptop. A gray box appeared immediately, about one second later all the buttons for wifi, bluetooth, etc appeared. Windows is full of situations like this, that require no network calls, but still take over one second to render.
KapKap66
There's a problem when people who aren't very sensitive to latency try to track it, and that is that their perception of what "instant" actually means is wrong. For them, instant is like, one second. For someone who cares about latency, instant is less than 10 milliseconds, or whatever threshold makes the difference between input and result imperceptible. People have the same problem judging video game framerates because they don't compare them back to back very often (there are perceptual differences between framerates of 30, 60, 120, 300, and 500, at the minimum, even on displays incapable of refreshing at these higher speeds), but you'll often hear people say that 60 fps is "silky smooth," which is not true whatsoever lol.
If you haven't compared high and low latency directly next to each other then there are good odds that you don't know what it looks like. There was a twitter video from awhile ago that did a good job showing it off that's one of the replies to the OP. It's here: https://x.com/jmmv/status/1671670996921896960
Sorry if I'm too presumptuous, however; you might be completely correct and instant is instant in your case.
_aavaa_
I'd wager that a 2021 MacBook, like the one I have, is stronger than the laptop used by the majority of people in the world.
Life on an entry or even mid level windows laptop is a very different world.
mschild
A mix of both. There are a large number of websites that are inefficiently written, using up unnecessary amounts of resources. Semi-modern devices make up for that by just having a massive amount of computing power.
However, you also need to consider 2 additional factors. Macbooks and iPhones, even 4 year old ones, have usually been at the upper end of the scale for processing power. (When compared to the general mass-market of private end-consumer devices)
Try doing the same on a 4 year old 400 Euro laptop and it might look a bit different. Also consider your connection speed and latency. I usually have no loading issue either. But I have a 1G fiber connection. My parents don't.
makeitdouble
To note, people will have wildly different tolerance to delays and lag.
On the extreme, my retired parents don't feel the difference between 5s or 1s when loading a window or clicking somewhere. I offered a switch to a new laptop, cloning their data, and they didn't give a damn and just opened the laptop the closest to them.
Most people aren't that desensitized, but for some a 600ms delay is instantaneous when for others it's 500ms too slow.
pydry
Most of it was exchanged for abstractions which traded runtime speed for the ability to create apps quickly and cheaply.
The market mostly didn't want 50% faster code as much as it wanted an app that didn't exist before.
If I look at the apps I use on a day to day basis that are dog slow and should have been optimized (e.g. slack, jira), it's not really a lack of the industry's engineering capability to speed things up that was the core problem, it is just an instance of the principal-agent problem - i.e. I'm not the one buying, I don't get to choose not to use it, and dog-slow is just one of the many dimensions in which they're terrible.
fsloth
I don’t think abundance vs speed is the right lens.
No user actually wants abundance. They use few programs and would benefit if those programs were optimized.
Established apps could be optimized to the hilt.
But they seldom are.
pona-a
Did people make this exchange or did __the market__? I feel like we're assigning a lot of intention to a self-accelerating process.
You add a new layer of indirection to fix that one problem on the previous layer, and repeat it ad infinitum until everyone is complaining about having too many layers of indirection, yet nobody can avoid interacting with them, so the only short-term solution is a yet another abstraction.
ffsm8
> Most of it was exchanged for abstractions which traded runtime speed for the ability to create apps quickly and cheaply.
Really? Because while abstractions like that exist (i.e. webserver frameworks, reactivity, SQL and ORMs, etc.), I would argue that these aren't the abstractions that cause the most maintenance and performance issues. Those are usually in the domain/business application and often not something that made anything quicker to develop, but instead created by a developer that just couldn't help themselves.
bitmasher9
The major slowdown of modern applications is network calls. Spend 50-500ms a pop for a few kilos of data. Many modern applications will spin up a half dozen blocking network calls casually.
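(A minimal sketch of why this hurts, in Python, with asyncio.sleep standing in for a ~100 ms network round-trip; the numbers are illustrative, not measured from any real application:)

  import asyncio, time

  async def fake_call():
      await asyncio.sleep(0.1)  # pretend network round-trip of ~100 ms

  async def sequential():
      start = time.perf_counter()
      for _ in range(6):
          await fake_call()  # each call waits for the previous one
      print("sequential:", round(time.perf_counter() - start, 2), "s")  # ~0.6 s

  async def concurrent():
      start = time.perf_counter()
      await asyncio.gather(*(fake_call() for _ in range(6)))  # overlap the waits
      print("concurrent:", round(time.perf_counter() - start, 2), "s")  # ~0.1 s

  asyncio.run(sequential())
  asyncio.run(concurrent())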
grumpymuppet
This is something I've wished to eliminate too. Maybe we just cast the past 20 years as the "prototyping phase" of modern infrastructure.
It would be interesting to collect a roadmap for optimizing software at scale -- where is there low hanging fruit? What are the prime "offenders"?
Call it a power saving initiative and get environmentally-minded folks involved.
sgarland
IMO, the prime offender is simply not understanding fundamentals. From simple things like “a network call is orders of magnitude slower than a local disk, which is orders of magnitude slower than RAM…” (and moreover, not understanding that EBS et al. are networked disks, albeit highly specialized and optimized), or doing insertions to a DB by looping over a list and writing each row individually.
I have struggled against this long enough that I don’t think there is an easy fix. My current company is the first I’ve been at that is taking it seriously, and that’s only because we had a spate of SEV0s. It’s still not easy, because a. I and the other technically-minded people have to find the problems, then figure out how to explain them b. At its heart, it’s a culture war. Properly normalizing your data model is harder than chucking everything into JSON, even if the former will save you headaches months down the road. Learning how to profile code (and fix the problems) may not be exactly hard, but it’s certainly harder than just adding more pods to your deployment.
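(The row-by-row vs. batched insert difference, sketched with Python's stdlib sqlite3; the table and data are made up, and against a real networked database each execute() in the loop would also be a separate round-trip:)

  import sqlite3

  conn = sqlite3.connect(":memory:")
  conn.execute("CREATE TABLE employees (name TEXT, country TEXT)")
  rows = [("Alice", "US"), ("Bob", "DE"), ("Carol", "US")]

  # Looping over the list: one statement per row
  for row in rows:
      conn.execute("INSERT INTO employees VALUES (?, ?)", row)

  # Batched: hand the driver the whole list at once
  conn.executemany("INSERT INTO employees VALUES (?, ?)", rows)
  conn.commit()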
mike_hearn
Use of underpowered databases and abstractions that don't eliminate round-trips is a big one. The hardware is fast but apps take seconds to load because on the backend there's a lot of round-trips to the DB and back, and the query mix is unoptimized because there are no DBAs anymore.
It's the sort of thing that can be handled via better libraries, if people use them. Instead of Hibernate use a mapper like Micronaut Data. Turn on roundtrip diagnostics in your JDBC driver, look for places where they can be eliminated by using stored procedures. Have someone whose job is to look out for slow queries and optimize them, or pay for a commercial DB that can do that by itself. Also: use a database that lets you pipeline queries on a connection and receive the results asynchronously, along with server languages that make it easy to exploit that for additional latency wins.
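(A sketch of the round-trip problem being described, in Python with stdlib sqlite3 rather than the Java tooling mentioned above; the schema is invented for illustration:)

  import sqlite3

  conn = sqlite3.connect(":memory:")
  conn.executescript("""
      CREATE TABLE teams (id INTEGER PRIMARY KEY, name TEXT);
      CREATE TABLE employees (id INTEGER PRIMARY KEY, team_id INTEGER, name TEXT);
  """)

  # N+1 pattern: one query for the teams, then one more round-trip per team
  teams = conn.execute("SELECT id, name FROM teams").fetchall()
  for team_id, _name in teams:
      conn.execute("SELECT name FROM employees WHERE team_id = ?", (team_id,)).fetchall()

  # One round-trip: let the database do the join
  rows = conn.execute("""
      SELECT t.name, e.name FROM teams t
      LEFT JOIN employees e ON e.team_id = t.id
  """).fetchall()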
josefx
> on countless layers of abstractions
Even worse, our bottom most abstraction layers pretend that we are running on a single core system from the 80s. Even Rust got hit by that when it pulled getenv from C instead of creating a modern and safe replacement.
TiredOfLife
And text that is not a pixely or blurry mess. And Unicode.
anthk
Unicode has worked since Plan 9. And antialiasing is from the early 90's.
billfor
I made a vendor run their buggy and slow software on a Sparc 20 against their strenuous complaints to just let them have an Ultra, but when they eventually did optimize their software to run efficiently (on the 20) it helped set the company up for success in the wider market. Optimization should be treated as competitive advantage, perhaps in some cases one of the most important.
MonkeyClub
> Optimization should be treated as competitive advantage
That's just so true!
The right optimizations at the right moment can have a huge boost for both the product and the company.
However the old tenet regarding premature optimization has been cargo-culted and expanded to encompass any optimization, and the higher-ups would rather have ICs churn out new features instead, shifting the cost of the bloat to the customer by insisting on more and bigger machines.
It's good for the economy, surely, but it's bad for both the users and the eventual maintainers of the piles of crap that end up getting produced.
monkeyelite
> If dynamic array bounds checking cost 5% (narrator: it is far less than that)
It doesn’t work like that. If an image processing algorithm takes 2 instructions per pixel, adding a check to every access could 3-4x the cost.
This is why if you dictate bounds checking then the language becomes uncompetitive for certain tasks.
The vast majority of cases it doesn’t matter at all - much less than 5%. I think safe/unsafe or general/performance scopes are a good way to handle this.
miloignis
It's not that simple either - normally, if you're doing some loops over a large array of pixels, say, to perform some operation to them, there will only be a couple of bounds checks before the loop starts, checking the starting and ending conditions of the loops, not re-doing the bounds check for every pixel.
So very rarely should it be anything like 3-4x the cost, though some complex indexing could cause it to happen, I suppose. I agree scopes are a decent way to handle it!
monkeyelite
You’re describing a situation where I - or a very smart compiler can choose when to bounds check or not to make that intelligent realization.
Aurornis
> It doesn’t work like that. If an image processing algorithm takes 2 instructions per pixel, adding a check to every access could 3-4x the cost.
Your understanding of how bounds checking works in modern languages and compilers is not up to date. You're not going to find a situation where bounds checking causes an algorithm to take 3-4X longer.
A lot of people are surprised when the bounds checking in Rust is basically negligible, maybe 5% at most. In many cases if you use iterators you might not see a hit at all.
Then again, if you have an image processing algorithm that is literally reading every single pixel one-by-one to perform a 2-instruction operation and calculating bounds check on every access in the year 2025, you're doing a lot of things very wrong.
> This is why if you dictate bounds checking then the language becomes uncompetitive for certain tasks.
Do you have any examples at all? Or is this just speculation?
monkeyelite
> Your understanding of how bounds checking works in modern languages and compilers is not up to date.
One I am familiar with is Swift - which does exactly this because it’s a library feature of Array.
Which languages will always be able to determine through function calls, indirect addressing, etc whether it needs to bounds check or not?
And how will I know if it succeeded or whether something silently failed?
> if you have an image processing algorithm that is literally reading every single pixel one-by-one to perform a 2-instruction operation and calculating bounds check on every access in the year 2025, you're doing a lot of things very wrong
I agree. And note this is an example of a scenario you can encounter in other forms.
> Do you have any examples at all? Or is this just speculation?
Yes. Java and python are not competitive for graphics and audio processing.
timbit42
Your argument is exactly why we ended up with the abominations of C and C++ instead of the safety of Pascal, Modula-2, Ada, Oberon, etc. Programmers at the time didn't realize how little impact safety features like bounds checking have. The bounds only need to be checked once for a for loop, not on each iteration.
monkeyelite
> The bounds only need to be checked once for a for loop, not on each iteration.
This is a theoretical argument. It depends on the compiler being able to see that’s what you’re doing and prove that there is no other mutation.
> abominations of C and C++
Sounds like you don't understand the design choices that made these languages successful.
CyberDildonics
Clock speeds are 2000x higher than the 80s.
IPC could be 80x higher when taking into account SIMD and then you have to multiply by each core. Mainstream CPUs are more like 1 to 2 million times faster than what was there in the 80s.
You can get full refurbished office computers that are still in the million times faster range for a few hundred dollars.
The things you are describing don't have much to do with computers being slow and feeling slow, but they are happening anyway.
Scripting languages that are constantly allocating memory for any small operation and pointer chasing every variable because the type is dynamic are part of the problem, and then you have people writing extremely inefficient programs in an already terrible environment.
Most programs are written now in however way the person writing them wants to work, not how someone using it wishes they were written.
Most people have actually no concept of optimization or what runs faster than something else. The vast majority of programs are written by someone who gets it to work and thinks "this is how fast this program runs".
The idea that the same software can run faster is a niche thought process, not even everyone on hacker news thinks about software this way.
nitwit005
The cost of bounds checking, by itself, is low. The cost of using safe languages generally can be vastly higher.
Garbage collected languages often consume several times as much memory. They aren't immediately freeing memory no longer being used, and generally require more allocations in the first place.
ngneer
I agree with the sentiment and analysis that most humans prefer short term gains over long term ones. One correction to your example, though. Dynamic bounds checking does not solve security. And we do not know of a way to solve security. So, the gains are not as crisp as you are making them seem.
bluGill
Bounds checking solves one tiny subset of security. There are hundreds of other subsets that we know how to solve. However these days the majority of the bad attacks are social and no technology is likely to solve them - as more than 10,000 years of history of the same attack has shown. Technology makes the attacks worse because they now scale, but social attacks have been happening for longer than recorded history (well, there is every reason to believe that - there is unlikely to be evidence going back that far).
titzer
> However these days the majority of the bad attacks are social
You're going to have to cite a source for that.
Bounds checking is one mechanism that addresses memory safety vulnerabilities. According to MSFT and CISA[1], nearly 70% of CVEs are due to memory safety problems.
You're saying that we shouldn't solve one (very large) part of the (very large) problem because there are other parts of the problem that the solution wouldn't address?
[1] https://www.cisa.gov/news-events/news/urgent-need-memory-saf...
HappMacDonald
You don't have to "solve" security in order to improve security hygiene by a factor of X, and thus cut the risk of negative consequences by that same factor of X.
ngneer
I am not suggesting we refuse to close one window because another window is open. That would be silly. Of course we should close the window. Just pointing out that the "950X" example figure cited fails to account for the full cost (or overestimates the benefit).
fsflover
> And we do not know of a way to solve security.
The security-through-compartmentalization approach actually works. Compare the number of CVEs of your favorite OS with those for Qubes OS: https://www.qubes-os.org/security/qsb/
ngneer
Playing devil's advocate, compare their popularity. You may have fallen prey to the base rate fallacy.
dijit
Maybe since 1980.
I recently watched a video that can be summarised quite simply as: "Computers today aren't that much faster than the computers of 20 years ago, unless you specifically code for them".
https://www.youtube.com/watch?v=m7PVZixO35c
It's a little bit ham-fisted, as the author was also shirking decades of compiler optimisations, and it's not apples to apples as he's comparing desktop-class hardware with what is essentially laptop hardware; but it's also interesting to see that a lot of the performance gains really weren't that great. He observes a doubling of performance in 15 years! Truth be told, most people use laptops now, and 20 years ago most people used desktops, so it's not totally unfair.
Maybe we've bought a lot into marketing.
dist-epoch
It's more like 100,000X.
Just the clockspeed increased 1000X, from 4 MHz to 4 GHz.
But then you have 10x more cores, 10x more powerful instructions (AVX), 10x more execution units per core.
cletus
So I've worked for Google (and Facebook) and it really drives the point home of just how cheap hardware is and how not worth it optimizing code is most of the time.
More than a decade ago Google had to start managing their resource usage in data centers. Every project has a budget. CPU cores, hard disk space, flash storage, hard disk spindles, memory, etc. And these are generally convertible to each other so you can see the relative cost.
Fun fact: even though at the time flash storage was ~20x the cost of hard disk storage, it was often cheaper net because of the spindle bottleneck.
Anyway, all of these things can be turned into software engineer hours, often called "mili-SWEs" meaning a thousandth of the effort of 1 SWE for 1 year. So projects could save on hardware and hire more people or hire fewer people but get more hardware within their current budgets.
I don't remember the exact number of CPU cores that amounted to a single SWE but IIRC it was in the thousands. So if you spend 1 SWE year working on optimization across your project and you're not saving 5000 CPU cores, it's a net loss.
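(To make the break-even concrete with made-up but plausible numbers: if a fully loaded SWE-year is on the order of $300k and an internal core-year on the order of $60, one SWE-year buys roughly 5,000 core-years, so a year spent optimizing has to free up more than ~5,000 cores, sustained, before it pays for itself.)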
Some projects were incredibly large and used much more than that so optimization made sense. But so often it didn't, particularly when whatever code you wrote would probably get replaced at some point anyway.
The other side of this is that there is (IMHO) a general usability problem with the Web in that it simply shouldn't take the resources it does. If you know people who had to or still do data entry for their jobs, you'll know that the mouse is pretty inefficient. The old terminals from 30-40+ years ago that were text-based had some incredibly efficient interfaces at a tiny fraction of the resource usage.
I had expected that at some point the Web would be "solved" in the sense that there'd be a generally expected technology stack and we'd move on to other problems but it simply hasn't happened. There's still a "framework of the week" and we're still doing dumb things like reimplementing scroll bars in user code that don't work right with the mouse wheel.
I don't know how to solve that problem or even if it will ever be "solved".
mike_hearn
I worked there too and you're talking about performance in terms of optimal usage of CPU on a per-project basis.
Google DID put a ton of effort into two other aspects of performance: latency, and overall machine utilization. Both of these were top-down directives that absorbed a lot of time and attention from thousands of engineers. The salary costs were huge. But, if you're machine constrained you really don't want a lot of cores idling for no reason even if they're individually cheap (because the opportunity cost of waiting on new DC builds is high). And if your usage is very sensitive to latency then it makes sense to shave milliseconds off because of business metrics, not hardware $ savings.
cletus
The key part here is "machine utilization" and absolutely there was a ton of effort put into this. I think before my time servers were allocated to projects, but even early on in my time at Google, Borg had already adopted shared machine usage and there was a whole system of resource quotas implemented via cgroups.
Likewise there have been many optimization projects and they used to call these out at TGIF. No idea if they still do. One I remember was reducing the health checks via UDP for Stubby and given that every single Google product extensively uses Stubby then even a small (5%? I forget) reduction in UDP traffic amounted to 50,000+ cores, which is (and was) absolutely worth doing.
I wouldn't even put latency in the same category as "performance optimization" because often you decrease latency by increasing resource usage. For example, you may send duplicate RPCs and wait for the fastest to reply. That can double or triple the effort.
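A minimal Go sketch of that duplicate-RPC ("hedged request") pattern; the call signature and the fake backend are made-up stand-ins for a real RPC stub:

  package main

  import (
      "context"
      "fmt"
      "time"
  )

  // hedged fires the same request at two replicas and returns whichever
  // answers first. Latency drops to the faster of two samples, but the
  // total work (and cost) roughly doubles.
  func hedged(ctx context.Context, call func(context.Context) (string, error)) (string, error) {
      ctx, cancel := context.WithCancel(ctx)
      defer cancel() // abandon the slower attempt once we have a winner

      type reply struct {
          val string
          err error
      }
      ch := make(chan reply, 2)
      for i := 0; i < 2; i++ {
          go func() {
              v, err := call(ctx)
              ch <- reply{v, err}
          }()
      }
      first := <-ch
      if first.err == nil {
          return first.val, nil
      }
      second := <-ch // first attempt failed; fall back to the other reply
      return second.val, second.err
  }

  func main() {
      // fakeBackend is a hypothetical stand-in for a real RPC.
      fakeBackend := func(ctx context.Context) (string, error) {
          time.Sleep(50 * time.Millisecond)
          return "ok", nil
      }
      fmt.Println(hedged(context.Background(), fakeBackend))
  }

The point is that this trades resources for latency rather than optimizing anything: you take the faster of two samples and throw the other one away.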
xondono
Except you’re self-selecting for a company that has high engineering costs, big fat margins to accommodate expenses like additional hardware, and lots of projects for engineers to work on.
The evaluation needs to happen in the margins: even if it saves pennies/year on the dollar, it’s better to have those engineers doing that than to have them idling.
The problem is that almost no one is doing it, because the way we make these decisions has nothing to do with the economic calculus behind it; most people just do “what Google does”, which explains a lot of the dysfunction.
bjourne
I think the parent's point is that if Google with millions of servers can't make performance optimization worthwhile, then it is very unlikely that a smaller company can. If salaries dominate over compute costs, then minimizing the latter at the expense of the former is counterproductive.
> The evaluation needs to happen in the margins: even if it saves pennies/year on the dollar, it's better to have those engineers doing that than to have them idling.
That's debatable. Performance optimization almost always leads to increased complexity. Doubled performance can easily cause quadrupled complexity. Then one has to consider whether the maintenance burden is worth the extra performance.
makeitdouble
> it is very unlikely that a smaller company can.
I think it's the reverse: a small company doesn't have the liquidity, buying power, or ability to convert more resources into more money like Google does.
And of course a lot of small companies will be paying Google with a fat margin to use their cloud.
Getting by with fewer resources, or even with reduced on-premise hardware, will be a way bigger win. That's why they'll pay a full-time DBA to optimize their database usage when it reduces costs by 2 to 3x the salary, or have a full team of infra people mostly dealing with SRE and performance.
maccard
> If salaries dominate over compute costs, then minimizing the latter at the expense of the former is counterproductive.
And with client side software, compute costs approach 0 (as the company isn’t paying for it).
arp242
> I don't remember the exact number of CPU cores amounted to a single SWE but IIRC it was in the thousands.
I think this probably holds true for outfits like Google because 1) on their scale "a core" is much cheaper than average, and 2) their salaries are much higher than average. But for your average business, even large businesses? A lot less so.
I think this is a classic "Facebook/Google/Netflix/etc. are in a class of their own and almost none of their practices will work for you"-type thing.
morepork
Maybe not to the same extent, but an AWS EC2 m5.large VM with 2 cores and 8 GB RAM costs ~$500/year (1 year reserved). Even if your engineers are being paid $50k/year, that's the same as 100 VMs or 200 cores + 800 GB RAM.
smikhanov
I don't know how to solve that problem or even if it will ever be "solved".
It will not be “solved” because it’s a non-problem. You can run a thought experiment imagining an alternative universe where human resources were directed towards optimization, and that alternative universe would look nothing like ours. One extra engineer working on optimization means one less engineer working on features. For what exactly? To save some CPU cycles? Don’t make me laugh.
karmakaze
Google doesn't come up with better compression and binary serialization formats just for fun--it improves their bottom line.
SilverSlash
The title made me think Carmack was criticizing poorly optimized software and advocating for improving performance on old hardware.
When in fact, the tweet is absolutely not about either of the two. He's talking about a thought experiment where hardware stopped advancing and concludes with "Innovative new products would get much rarer without super cheap and scalable compute, of course".
Cordiali
It's related to a thread from yesterday, I'm guessing you haven't seen it:
https://news.ycombinator.com/item?id=43967208 https://threadreaderapp.com/thread/1922015999118680495.html
ngangaga
> "Innovative new products would get much rarer without super cheap and scalable compute, of course".
Interesting conclusion—I'd argue we haven't seen much innovation since the smartphone (18 years ago now), and it's entirely because capital is relying on the advances of hardware to sell what is to consumers essentially the same product that they already have.
Of course, I can't read anything past the first tweet.
jayd16
We have self driving cars, amazing advancement in computer graphics, dead reckoning of camera position from visual input...
In the meantime, hardware has had to go wide on threads as single core performance has not improved. You could argue that's been a software gain and a hardware failure.
cogman10
> single core performance has not improved.
Single core performance has improved, but at a much slower rate than I experienced as a kid.
Over the last 10 years, we've seen something like a 120% improvement in single-core performance.
And, not for nothing, efficiency has become much more important. More CPU performance hasn't been a major driving factor vs having a laptop that runs for 12 hours. It's simply easier to add a bunch of cores and turn them all off (or slow them down) to gain power efficiency.
Not to say the performance story would be vastly different with more focus on performance over efficiency. But I'd say it does have an effect on design choices.
voidspark
Single core performance has improved about 10x in 20 years
HappMacDonald
And I'd argue that we've seen tons of innovation in the past 18 years aside from just "the smartphone" but it's all too easy to take for granted and forget from our current perspective.
First up, the smartphone itself had to evolve a hell of a lot over 18 years or so. Go try to use an iPhone 1 and you'll quickly see all of the roadblocks and what we now consider poor design choices littered everywhere, vs improvements we've all taken for granted since then.
18 years ago was 2007? Then we didn't have (for better or for worse on all points):
* Video streaming services
* Decent video game marketplaces or app stores. Maybe "Battle.net" with like 5 games, lol!
* VSCode-style IDEs (you really would not have appreciated Visual Studio or Eclipse of the time..)
* Mapping applications on a phone (there were some stand-alone solutions like Garmin and TomTom just getting off the ground)
* QR Codes (the standard did already exist, but mass adoption would get nowhere without being carried by the smartphone)
* Rideshare, food, or grocery delivery services (aside from taxis and whatever pizza or chinese places offered their own delivery)
* Voice-activated assistants (including Alexa and other standalone devices)
* EV Cars (that anyone wanted to buy) or partial autopilot features aside from 1970's cruise control
* Decent teleconferencing (Skype's featureset was damn limited at the time, and any expensive enterprise solutions were dead on the launchpad due to lack of network effects)
* Decent video displays (flatscreens were still busy trying to mature enough to push CRTs out of the market at this point)
* Color printers were far worse during this period than today, though that tech will never run out of room for improvement.
* Average US Internet speeds to the home were still ~1Mbps, with speeds to cellphone of 100kbps being quite luxurious. Average PCs had 2GB RAM and 50GB hard drive space.
* Naturally: the tech everyone loves to hate such as AI, Cryptocurrencies, social network platforms, "The cloud" and SaaS, JS Frameworks, Python (at least 3.0 and even realistically heavy adoption of 2.x), node.js, etc. Again "Is this a net benefit to humanity" and/or "does this get poorly or maliciously used a lot" doesn't speak to whether or not a given phenomenon is innovative, and all of these objectively are.
piperswe
> * Video streaming services
Netflix video streaming launched in 2007.
> * VSCode-style IDEs (you really would not have appreciated Visual Studio or Eclipse of the time..)
I used VS2005 a little bit in the past few years, and I was surprised to see that it contains most of the features that I want from an IDE. Honestly, I wouldn't mind working on a C# project in VS2005 - both C# 2.0 and VS2005 were complete enough that they'd only be a mild annoyance compared to something more modern.
> partial autopilot features aside from 1970's cruise control
Radar cruise control was a fairly common option on mid-range to high-end cars by 2007. It's still not standard in all cars today (even though it _is_ standard on multiple economy brands). Lane departure warning was also available in several cars. I will hand it to you that L2 ADAS didn't really exist the way it does today though.
eesmith
The future is unevenly distributed.
> Video streaming services
We watched a stream of the 1994 World Cup. There was a machine at MIT which forwarded the incoming video to an X display window
xhost +machine.mit.edu
and we could watch it from several states away. (The internet was so trusting in those days.) To be sure, it was only a couple of frames per second, but it was video, and an audience collected to watch it.
> EV Cars (that anyone wanted to buy)
People wanted to buy the General Motors EV1 in the 1990s. Quoting Wikipedia, "Despite favorable customer reception, GM believed that electric cars occupied an unprofitable niche of the automobile market. The company ultimately crushed most of the cars, and in 2001 GM terminated the EV1 program, disregarding protests from customers."
I know someone who managed to buy one. It was one of the few which had been sold rather than leased.
zelos
...TomTom just getting off the ground
TomTom was founded in 1991 and released their first GPS device in 2004. By 2007 they were pretty well established.
00N8
I worked for a 3rd party food delivery service in the summer of 2007. Ordering was generally done by phone, then the office would text us (the drivers) order details for pickup & delivery. They provided GPS navigation devices, but they were stand-alone units that were slower & less accurate than modern ones, plus they charged a small fee for using it that came out of our pay.
xnorswap
Your post seems entirely anachronistic.
2007 is the year we did get video streaming services: https://en.wikipedia.org/wiki/BBC_iPlayer
Steam was selling games, even third party ones, for years by 2007.
I'm not sure what a "VS-Code style IDE" is, but I absolutely did appreciate Visual Studio ( and VB6! ) prior to 2007.
2007 was in fact the peak of TomTom's profit, although GPS navigation isn't really the same as general purpose mapping application.
Grocery delivery was well established, Tesco were doing that in 1996. And the idea of takeaways not doing delivery is laughable, every establishment had their own delivery people.
Yes, there are some things on that list that didn't exist, but the top half of your list is dominated by things that were well established by 2007.
conorjh
Most of that list is iteration, not innovation. Like going from "crappy colour printer" to "not-so-crappy colour printer".
casey2
>netflix
>steam
>Sublime (Of course ed, vim, emacs, sam, acme already existed for decades by 2007)
>No they weren't TomTom already existed for years, GPS existed for years
>You're right that they already existed
>Again, already existed, glad we agree
>Tech was already there just putting it in a phone doesn't count as innovation
>NASA was driving electric cars on the moon while Elon Musk was in diapers
>I was doing that in the early 80s, but Skype is a fine pre-2007 example, thanks again
>You're right, we didn't have 4k displays in 2007, but that's not exactly a software innovation. This is a good example of a hardware innovation used to sell essentially the same product
>? Are you sure you didn't just have a bad printer? There have been good color printers since the 90s, let alone 2007. The price-to-performance arguably hasn't changed since 2007; you are just paying more in running costs than upfront.
>This is definitely hardware.
>Scripting language 3.0 or FOTM framework isn't innovative in that there is no problem being solved and no economic gain; if they didn't exist people would use something else and that would be that. With AI the big story was that there WASN'T a software innovation, and what few innovations do exist will die to the Bitter Lesson
bluGill
There has been a lot of innovation - but it is focused on some niche, and so if you are not in that niche you don't see it and wouldn't care if you did. Most of the major things you need have already been invented - I recall word processors as a kid, so they for sure date back to the 1970s - we still need word processors and there is a lot of polish that can be added, but all the innovation is in niche things that the majority of us wouldn't have a use for if we knew about it.
Of course innovation is always in bits and spurts.
franktankbank
I think it's a bad argument though. If we had to stop with the features for a little while and create some breathing room, the features would come roaring back. There'd be a downturn, sure, but not a continuous one.
MrBuddyCasino
This is exactly the point. People ignore that "bloat" is not (just) "waste", it is developer productivity increase motivated by economics.
The ability to hire and have people be productive in a less complicated language expands the market for workers and lowers cost.
gwern
A subtext here may be his current AI work. In OP, Carmack is arguing, essentially, that 'software is slow because good smart devs are expensive and we don't want to pay for them to optimize code and systems end-to-end as there are bigger fish to fry'. So, an implication here is that if good smart devs suddenly got very cheap, then you might see a lot of software suddenly get very fast, as everyone might choose to purchase them and spend them on optimization. And why might good smart devs become suddenly available for cheap?
agentultra
I heartily agree. It would be nice if we could extend the lifetime of hardware 5, 10 years past its, "planned obsolescence." This would divert a lot of e-waste, leave a lot of rare earth minerals in the ground, and might even significantly lower GHG emissions.
The market forces for producing software however... are not paying for such externalities. It's much cheaper to ship it sooner, test, and iterate than it is to plan and design for performance. Some organizations in the games industry have figured out a formula for having good performance and moving units. It's not spread evenly though.
In enterprise and consumer software there's not a lot of motivation to consider performance criteria in requirements: we tend to design for what users will tolerate and give ourselves as much wiggle room as possible... because these systems tend to be complex and we want to ship changes/features continually. Every change is a liability that can affect performance and user satisfaction. So we make sure we have enough room in our budget for an error rate.
Much different compared to designing and developing software behind closed doors until it's, "ready."
hermitShell
Point 1 is why growth/debt is not a good economic model in the long run. We should have a care & maintenance focused economy and center our macro scale efforts on the overall good of the human race, not perceived wealth of the few.
If we focused on upkeep of older vehicles, re-use of older computers, etc. our landfills would be smaller proportional to 'growth'.
I'm sure there's some game theory construction of the above that shows that it's objectively an inferior strategy to be a conservationist though.
agentultra
I sometimes wonder how the game theorist would argue with physics.
bob1029
We've been able to run order matching engines for entire exchanges on a single thread for over a decade by this point.
I think this specific class of computational power - strictly serialized transaction processing - has not grown at the same rate as other metrics would suggest. Adding 31 additional cores doesn't make the order matching engine go any faster (it could only go slower).
If your product is handling fewer than several million transactions per second and you are finding yourself reaching for a cluster of machines, you need to back up like 15 steps and start over.
HolyLampshade
> We've been able to run order matching engines for entire exchanges on a single thread for over a decade by this point.
This is the bit that really gets me fired up. People (read: system “architects”) were so desperate to “prove their worth” and leave a mark that many of these systems have been over complicated, unleashing a litany of new issues. The original design would still satisfy 99% of use cases and these days, given local compute capacity, you could run an entire market on a single device.
queuebert
Why can you not match orders in parallel using logarithmic reduction, the same way you would sort in parallel? Is it that there is not enough other computation being done other than sorting by time and price?
mike_hearn
It's an inherently serial problem and regulations require it to be that way. Users who submit first want their orders to be the one that crosses.
perlgeek
Stupid question, since I know next to nothing about exchanges and regulations... but couldn't you just process serially by security?
E.G. if user A wants to buy Apple stock and user B wants to buy Facebook stock, does it matter which order came first? And if yes, why?
tossandthrow
I think it is the temporal aspect of order matching - for exchanges it is an inherently serial process.
bluGill
You are only able to do that because you are doing simple processing on each transaction. If you had to do more complex processing on each transaction it wouldn't be possible to do that many. Though it is hard for me to imagine what more complex processing would be (I'm not in your domain)
bob1029
The order matching engine is mostly about updating an in-memory order book representation.
It is rarely the case that high volume transaction processing facilities also need to deal with deeply complex transactions.
I can't think of many domains of business wherein each transaction is so compute intensive that waiting for I/O doesn't typically dominate.
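A toy Go sketch of that shape (my own, not any real exchange's code): one goroutine owns the book, orders arrive on a channel, and matching is nothing but in-memory updates, so throughput is bounded by the serial loop rather than by core count:

  package main

  import "fmt"

  type Order struct {
      ID    uint64
      Buy   bool
      Price int64 // price in ticks
      Qty   int64
  }

  // Book is deliberately naive: real engines keep sorted price levels with
  // time-priority queues, but the overall shape is the same.
  type Book struct {
      bids, asks []Order
  }

  func minQty(a, b int64) int64 {
      if a < b {
          return a
      }
      return b
  }

  // match crosses an incoming order against resting orders on the opposite
  // side (here simply the oldest resting order; a real book picks best price
  // first), then rests any remainder on the book. Pure in-memory work.
  func (b *Book) match(o Order) {
      opp := &b.asks
      if !o.Buy {
          opp = &b.bids
      }
      for o.Qty > 0 && len(*opp) > 0 {
          best := &(*opp)[0]
          if (o.Buy && best.Price > o.Price) || (!o.Buy && best.Price < o.Price) {
              break // prices no longer cross
          }
          fill := minQty(o.Qty, best.Qty)
          o.Qty -= fill
          best.Qty -= fill
          fmt.Printf("trade: %d @ %d\n", fill, best.Price)
          if best.Qty == 0 {
              *opp = (*opp)[1:]
          }
      }
      if o.Qty > 0 {
          if o.Buy {
              b.bids = append(b.bids, o)
          } else {
              b.asks = append(b.asks, o)
          }
      }
  }

  func main() {
      in := make(chan Order, 16)
      go func() {
          in <- Order{ID: 1, Buy: false, Price: 100, Qty: 5}
          in <- Order{ID: 2, Buy: true, Price: 101, Qty: 3}
          close(in)
      }()
      var book Book
      for o := range in { // strictly serial: arrival order is matching order
          book.match(o)
      }
  }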
bluGill
HFT would love to do more complex calculations for some of their trades. They often make the compromise of using a faster algorithm that is known to be right only 60% of the time vs the better but slower algorithm that is right 90% of the time.
That is a different problem from yours though and so it has different considerations. In some areas I/O dominates, in some it does not.
agentultra
I work in card payments transaction processing and IO dominates. You need to have big models and lots of data to authorize a transaction. And you need that data as fresh as possible and as close to your compute as possible... but you're always dominated by IO. Computing the authorization is super cheap.
Tends to scale vertically rather than horizontally. Give me massive caches and wide registers and I can keep them full. For now though a lot of stuff is run on commodity cloud hardware so... eh.
AndrewDucker
Well, yes. It's an economic problem (which is to say, it's a resource allocation problem). Do you have someone spend extra time optimising your software or do you have them produce more functionality. If the latter generates more cash then that's what you'll get them to do. If the former becomes important to your cashflow then you'll get them to do that.
xondono
I think you’re right in that it’s an economics problem, but you’re wrong about which one.
For me this is a clear case of negative externalities inflicted by software companies against the population at large.
Most software companies don’t care about optimization because they’re not paying the real costs of that energy, lost time, or additional e-waste.
magarnicle
Is there any realistic way to shift the payment of hard-to-trace costs like environmental clean-up, negative mental or physical health, and wasted time back to the companies and products/software that cause them?
tgv
It's the kind of economics that shifts the financial debt to accumulating waste, and technical debt, which is paid for by someone else. It's basically stealing. There are --of course-- many cases in which thorough optimizing doesn't make much sense, but the idea of just adding servers instead of rewriting is a sad state of affairs.
xyzzy123
It doesn't seem like stealing to me? Highly optimised software generally takes more effort to create and maintain.
The tradeoff is that we get more software in general, and more features in that software, i.e. software developers are more productive.
I guess on some level we can feel that it's morally bad that adding more servers or using more memory on the client is cheaper than spending developer time but I'm not sure how you could shift that equilibrium without taking away people's freedom to choose how to build software?
HappMacDonald
I feel like the argument is similar to that of all corporate externality pushes.
For example, "polluting the air/water, requiring end-users to fill landfills with packaging and planned obsolescence" allows a company to more cheaply offer more products to you as a consumer... but now everyone collectively has to live in a more polluted world with climate change and wasted source material converted to expensive and/or dangerous landfills, plus environmental damage from fracking and strip mining.
But that's still not different from theft. A company that sells you things that "fell off the back of a truck" is in a position to offer you lower costs and greater variety as well, isn't it?
Our shared resources need to be properly managed: neither siphoned wastefully nor ruined via pollution. That proper management is a cost, and it either has to be borne by those using the resources and creating the waste, or it is theft of a shared resource and a tragedy of the commons.
inetknght
> It's basically stealing.
This is exactly right. Why should the company pay an extra $250k in salary to "optimize" when they can just offload that salary to their customers' devices instead? The extra couple of seconds, extra megabytes of bandwidth, and shittery of the whole ecosystem has been externalized to customers in search of ill-gotten profits.
3036e4
It's like ignoring backwards compatibility. That is really cheap since all the cost is pushed to end-users (that have to relearn the UI) or second/third-party developers (that have to rewrite their client code to work with a new API). But it's OK since everyone is doing it and also without all those pointless rewrites many of us would not have a job.
gwern
> has been externalized to customers in search of ill-gotten profits.
'Externality' does not mean 'thing I dislike'. If it is the customers running the software or waiting the extra couple of seconds, that's not an externality. By definition. (WP: "In economics, an externality is an indirect cost (external cost) or benefit (external benefit) to an uninvolved third party that arises as an effect of another party's (or parties') activity.") That is just the customers picking their preferred point on the tradeoff curves.
franktankbank
Also offloaded to the miserable devs maintaining the system.
esperent
> It's basically stealing
This feels like hyperbole to me. Who is being stolen from here? Not the end user, they're getting the tradeoff of more features for a low price in exchange for less optimized software.
skydhash
From what I’m seeing people do on their computers, it has barely changed from what they were doing on their Pentium 4 machines. But now, with Electron-based software and the general state of Windows, you can’t recommend something older than 4 years. It’s hard not to see it as stealing when you have to buy a $1000+ laptop, when a $400 one could easily do the job if the software were a bit better.
cosmic_cheese
It’s only a tradeoff for the user if the user find the added features useful.
Increasingly, this is not the case. My favorite example here is the Adobe Creative Suite, where for many users useful new features became few and far between some time ~15 years ago. For those users, all they got was a rather absurd degree of added bloat and slowness for essentially the same thing they were using in 2010. These users would’ve almost certainly been happier had 80-90% of the feature work done in that time instead been bug fixes and optimization.
knowitnone
Would you spend 100 years writing the perfect editor, optimizing every single function, continuously optimizing, and when would it ever be complete? No, you wouldn't. Do you use Python or Java or C? Obviously, that could be optimized further if you wrote it in assembly. Practice what you preach, otherwise you'd be stealing.
victorbjorklund
Not really stealing. You could of course build software that is more optimized and with the same features, but at a higher cost. Would most buyers pay twice the price for a web app that loads in 1 sec instead of 2? Probably not.
skydhash
Try loading Slack and YouTube on a 4-year-old laptop. It’s more like tens of seconds, and good luck if you only have 8GB of RAM.
pier25
> Do you have someone spend extra time optimising your software or do you have them produce more functionality
Depends. In general, I'd rather have devs optimize the software rather than adding new features just for the sake of change.
I don't use most of the new features in macOS, Windows, or Android. I mostly want an efficient environment to run my apps and security improvements. I'm not that happy about many of the improvements in macOS (eg the settings app).
Same with design software. I don't use most of the new features introduced by Adobe. I'd be happy using Illustrator or Photoshop from 10 years ago. I want less bloat, not more.
I also do audio and music production. Here I do want new features because the workflow is still being improved but definitely not at the cost of efficiency.
Regarding code editors I'm happy with VSCode in terms of features. I don't need anything else. I do want better LSPs but these are not part of the editor core. I wish VSCode was faster and consumed less memory though.
MattSayar
Efficiency is critical to my everyday life. For example, before I get up from my desk to grab a snack from the kitchen, I'll bring any trash/dishes with me to double the trip's benefits. I do this kind of thing often.
Optimizing software has a similar appeal. But when the problem is "spend hours of expensive engineering time optimizing the thing" vs "throw some more cheap RAM at it," the cheaper option will prevail. Sometimes, the problem is big enough that it's worth the optimization.
The market will decide which option is worth pursuing. If we get to a point where we've reached diminishing returns on throwing hardware at a problem, we'll optimize the software. Moore's Law may be slowing down, but evidently we haven't reached that point yet.
conductr
Ultimately it's a demand problem. If consumers demanded more performant software, they would pay a premium for it. However, the opposite is more true: they would prefer an even less performant version if it came with a cheaper price tag.
criticalfault
You have just explained how enshitification works.
fdr
One of the things I think about sometimes, a specific example rather than a rebuttal to Carmack.
The Electron Application is somewhere between tolerated and reviled by consumers, often on grounds of performance, but it's probably the single innovation that made using my Linux laptop in the workplace tractable. And it is genuinely useful to, for example, drop into a MS Teams meeting without installing.
So, everyone laments that nothing is as tightly coded as Winamp anymore, without remembering the first three characters.
bigstrat2003
> So, everyone laments that nothing is as tightly coded as Winamp anymore, without remembering the first three characters.
I would far, far rather have Windows-only software that is performant than the Electron slop we get today. With Wine there's a decent chance I could run it on Linux anyway, whereas Electron software is shit no matter the platform.
jeroenhd
Wine doesn't even run Office, there's no way it'd run whatever native video stack Teams would use. Linux has Teams purely because Teams decided to go with web as their main technology.
Even the Electron version of Teams on Linux has a reduced feature set because there's no Office to integrate with.
kristianp
Of course Wine can run Office! I have used at least Word and Excel under Wine. They're probably some of the primary targets of compatibility work.
inetknght
I was working as a janitor, moonlighting as an IT director, in 2010. Back then I told the business that laptops for the past five years (roughly since Nehalem) have plenty of horsepower to run spreadsheets (which is basically all they do) with two cores, 16 GB of RAM, and a 500GB SATA SSD. A couple of users in marketing did need something a little (not much) beefier. Saved a bunch of money by not buying the latest-and-greatest laptops.
I don't work there any more. I am convinced that's still true today: those computers should still be great for spreadsheets. Their workflow hasn't seriously changed. It's the software that has. If they've continued with updates (can that hardware even "run" MS Windows 10 or 11 today? No idea, I've since moved on to Linux) then there's a solid chance that the amount of bloat, and especially the move to online-only spreadsheets, would tank their productivity.
Further, the internet at that place was terrible. The only offerings were ~16Mbit asymmetric DSL (for $300/mo just because it's a "business", when I could get the same speed for $80/mo at home), or Comcast cable 120Mbit for $500/mo. 120Mbit is barely enough to get by with an online-only spreadsheet, and 16Mbit definitely not. But worse: if the internet goes down, then the business ceases to function.
This is the real theft that another commenter [0] mentioned that I wholeheartedly agree with. There's no reason whatsoever that a laptop running spreadsheets in an office environment should require internet to edit and update spreadsheets, or crazy amounts of compute/storage, or even huge amounts of bandwidth.
Computers today have zero excuse for terrible performance except only to offload costs onto customers - private persons and businesses alike.
alkonaut
"The world" runs on _features_ not elegant, fast, or bug free software. To the end user, there is no difference between a lack of a feature, and a bug. Nor is there any meaningful difference between software taking 5 minutes to complete something because of poor performance, compared to the feature not being there and the user having to spend 5 minutes completing the same task manually. It's "slow".
If you keep maximizing value for the end user, then you invariably create slow and buggy software. But also, if you ask the user whether they would want faster and less buggy software in exchange for fewer features, they - surprise - say no. And even more importantly: if you ask the buyer of software, which in the business world is rarely the end user, then they want features even more, and performance and elegance even less.
Given the same feature set, a user/buyer would opt for the fastest/least buggy/most elegant software. But if it lacks any features - it loses. The reason to keep software fast and elegant is that it's the most likely path to being able to _keep_ adding features, so as not to become the less feature-rich offering.
People will describe the fast and elegant solution with great reviews, praising how good it feels to use. Which might lead people to think that it's an important aspect. But in the end - they wouldn't buy it at all if it didn't do what they wanted. They'd go for the slow frustrating buggy mess if it has the critical feature they need.
xg15
Almost all of my nontechnical friends and family members have at some point complained about bloated and overly complicated software that they are required to use.
Also remember that Microsoft at this point has to drag their users kicking and screaming into using the next Windows version. If users were let to decide for themselves, many would have never upgraded past Windows XP. All that despite all the pretty new features in the later versions.
I'm fully with you that businesses and investors want "features" for their own sake, but definitely not users.
jeroenhd
Every time I offer alternatives to slow hardware, people find a missing feature that makes them stick to what they're currently using. Other times the features are there but the buttons for it are in another place and people don't want to learn something new. And that's for free software, with paid software things become even worse because suddenly the hours they spend on loading times is worthless compared to a one-time fee.
Complaining about slow software happens all the time, but when given the choice between features and performance, features win every time. Same with workflow familiarity; you can have the slowest, most broken, hacked together spreadsheet-as-a-software-replacement mess, but people will stick to it and complain how bad it is unless you force them to use a faster alternative that looks different.
sabellito
Every software you use has more bloat than useful features? Probably not. And what's useless to one user might be useful to another.
light_hue_1
No way.
You've got it totally backwards. Companies push features onto users who do not want them in order to make sales through forced upgrades because the old version is discontinued.
If people could, no one would ever upgrade anything anymore. Look at how hard MS has to work to force anyone to upgrade. I have never heard of anyone who wanted a new version of Windows, Office, Slack, Zoom, etc.
This is also why everything (like Photoshop) is being forced into the cloud. The vast majority of people don't want the new features that are being offered. Including buyers at businesses. So the answer to keep revenue up is to force people to buy regardless of what features are being offered or not.
alkonaut
> You've got it totally backwards. Companies push features onto users who do not want them in order to make sales through forced upgrades because the old version is discontinued.
I think this is more a consumer perspective than a B2B one. I'm thinking about the business case. I.e. businesses purchase software (or has bespoke software developed). Then they pay for fixes/features/improvements. There is often a direct communication between the buyer and the developer (whether it's off-the shelf, inhouse or made to spec). I'm in this business and the dialog is very short "great work adding feature A. We want feature B too now. And oh the users say the software is also a bit slow can you make it go faster? Me: do you want feature B or faster first? Them (always) oh feature B. That saves us man-weeks every month". Then that goes on for feature C, D, E, ...Z.
In this case, I don't know how frustrated the users are, because the customer is not the user - it's the users' managers.
In the consumer space, the user is usually the buyer. That's one huge difference. You can choose the software that frustrates you the least, perhaps the leanest one, and instead have to do a few manual steps (e.g. choose vscode over vs, which means less bloated software but also many fewer features).
bee_rider
Agree WRT the tradeoff between features and elegance.
Although, I do wonder if there’s an additional tradeoff here. Existing users, can apparently do what they need to do with the software, because they are already doing it. Adding a new feature might… allow them to get rid of some other software, or do something new (but, that something new must not be so earth shattering, because they didn’t seek out other software to do it, and they were getting by without it). Therefore, I speculate that existing users, if they really were introspective, would ask for those performance improvements first. And maybe a couple little enhancements.
Potential new users on the other hand, either haven’t heard of your software yet, or they need it to do something else before they find it useful. They are the ones that reasonably should be looking for new features.
So, the “features vs performance” decision is also a signal about where the developers’ priorities lie: adding new users or keeping old ones happy. So, it is basically unsurprising that:
* techies tend to prefer the latter—we’ve played this game before, and know we want to be the priority for the bulk of the time using the thing, not just while we’re being acquired.
* buggy slow featureful software dominates the field—this is produced by companies that are prioritizing growth first.
* history is littered with beautiful, elegant software that users miss dearly, but which didn’t catch on broadly enough to sustain the company.
However, the tradeoff is real in both directions; most people spend most of their time as users instead of potential users. I think this is probably a big force behind the general perception that software and computers are incredibly shit nowadays.
sabellito
Perfectly put. People who try to argue that more time should be spent on making software perform better probably aren't thinking about who's going to pay for that.
For the home/office computer, the money spent on more RAM and a better CPU enables all software it runs to be shipped more cheaply and with more features.
nottorp
Unfortunately, bloated software passes the costs to the customer and it's hard to evaluate the loss.
Except your browser taking 180% of available ram maybe.
By the way, the world could also have some bug free software, if anyone could afford to pay for it.
jillesvangurp
What cost? The hardware is dirt cheap. Programmers aren't cheap. The value of being able to use cheap software on cheap hardware is basically not having to spend a lot of time optimizing things. Time is the one thing that isn't cheap here. So there's a value in shipping something slightly sub optimal sooner rather than something better later.
> Except your browser taking 180% of available ram maybe.
For most business users, running the browser is pretty much the only job of the laptop. And using virtual memory for open tabs that aren't currently visible is actually not that bad. There's no need to fit all your gazillion tabs into memory; only the ones you are looking at. Browsers are pretty good at that these days. The problem isn't that browsers aren't efficient but that we simply push them to the breaking point with content. Content creators simply expand their resource usage whenever browsers get optimized. The point of optimization is not saving cost on hardware but getting more out of the hardware.
The optimization topic triggers the OCD of a lot of people and sometimes those people do nice things. John Carmack built his career when Moore's law was still on display. Everything he did to get the most out of CPUs was super relevant and cool, but it also became dated in a matter of a few years. One moment we were running Doom on simple 386 computers and the next we were running Quake and Unreal with shiny new Voodoo GPUs on a Pentium II. I actually had the Riva 128 as my first GPU, which was one of the first products that Nvidia shipped, running Unreal and other cool stuff. And while CPUs have increased enormously in performance, GPUs have increased even more by some ridiculous factor. Nvidia has come a long way since then.
I'm not saying optimization is not important but I'm just saying that compute is a cheap commodity. I actually spend quite a bit of time optimizing stuff so I can appreciate what that feels like and how nice it is when you make something faster. And sometimes that can really make a big difference. But sometimes my time is better spent elsewhere as well.
wtetzner
> Time is the one thing that isn't cheap here.
Right, and that's true of end users as well. It's just not taken into account by most businesses.
I think your take is pretty reasonable, but I think most software is too far towards slow and bloated these days.
Browsers are pretty good, but developers create horribly slow and wasteful web apps. That's where the optimization should be done. And I don't mean they should make things as fast as possible, just test on an older machine that a big chunk of the population might still be using, and make it feel somewhat snappy.
The frustrating part is that most web apps aren't really doing anything that complicated, they're just built on layers of libraries that the developers don't understand very well. I don't really have a solution to any of this, I just wish developers cared a little bit more than they do.
cube00
> I just wish developers cared a little bit more than they do.
Ask the nice product owner to stop crushing me with their deadlines and I'll happily oblige.
inetknght
> The hardware is dirt cheap.
Maybe to you.
Meanwhile plenty of people are living paycheck-to-paycheck and literally cannot afford a phone, let alone a new phone and computer every few years.
nottorp
> The hardware is dirt cheap.
It's not, because you multiply that 100% extra CPU time by all of an application's users and only then you come to the real extra cost.
And if you want to pick on "application", think of the widely used libraries and how much any non optimization costs when they get into everything...
kreco
Your whole reply is focused at the business level, but not everybody can afford 32GB of RAM just to have a smooth experience in a web browser.
branko_d
> The hardware is dirt cheap. Programmers aren't cheap.
That may be fine if you can actually improve the user experience by throwing hardware at the problem. But in many (most?) situations, you can't.
Most of the user-facing software is still single-threaded (and will likely remain so for a long time). The difference in single-threaded performance between CPUs in wide usage is maybe 5x (and less than 2x for desktop), while the difference between well optimized and poorly optimized software can be orders of magnitude easily (milliseconds vs seconds).
And if you are bottlenecked by network latency, then the CPU might not even matter.
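A small, hedged Go illustration (mine, with arbitrary sizes) of how software choices alone can span orders of magnitude on the same core, independent of the hardware:

  package main

  import (
      "fmt"
      "strings"
      "time"
  )

  // concatNaive re-allocates and copies the whole string on every append,
  // so the total work grows quadratically with n.
  func concatNaive(n int) string {
      s := ""
      for i := 0; i < n; i++ {
          s += "x"
      }
      return s
  }

  // concatBuilder pre-sizes a buffer and appends in place: linear work.
  func concatBuilder(n int) string {
      var b strings.Builder
      b.Grow(n)
      for i := 0; i < n; i++ {
          b.WriteByte('x')
      }
      return b.String()
  }

  func main() {
      const n = 200_000
      start := time.Now()
      _ = concatNaive(n)
      fmt.Println("naive:  ", time.Since(start))
      start = time.Now()
      _ = concatBuilder(n)
      fmt.Println("builder:", time.Since(start))
  }

On a typical machine the naive version ends up orders of magnitude slower at this size, and the gap keeps growing with n.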
ManlyBread
I have been thinking about this a lot ever since I played a game called "Balatro". In this game nothing extraordinary happens in terms of computing - some computations get done, some images are shuffled around on the screen, the effects are sparse. The hardware requirements aren't much by modern standards, but still, this game could be ported 1:1 to a Pentium II machine with a 3dfx graphics card. And yet it demands so much more - not a lot by today's standards, but still. I am tempted to try to run it on a 2010 netbook to see if it even boots up.
reidrac
It is made in Lua using love2d. That helped the developers, but it comes at a cost in minimum requirements (even if they aren't much for a game released in 2024).
0xDEAFBEAD
One way to think about it is: If we were coding all our games in C with no engine, they would run faster, but we would have far fewer games. Fewer games means fewer hits. Odds are Balatro wouldn't have been made, because those developer hours would've been allocated to some other game which wasn't as good.
gwern
Balatro was started in vacation time and underwent a ton of tweaking: https://localthunk.com/blog/balatro-timeline-3aarh So if it had to be written in C, probably neither of those would have happened.
thn-gap
The game has been ported to the Switch, and it does run slowly when you do big combos. You can feel it visually, to the point that it's a bit annoying.
There is an argument to be made that the market buys bug-filled, inefficient software about as well as it buys pristine software. And one of them is the cheapest software you could make.
It's similar to the "Market for Lemons" story. In short, the market sells as if all goods were high-quality but underhandedly reduces the quality to reduce marginal costs. The buyer cannot differentiate between high and low-quality goods before buying, so the demand for high and low-quality goods is artificially even. The cause is asymmetric information.
This is already true and will become increasingly more true for AI. The user cannot differentiate between sophisticated machine learning applications and a washing machine spin cycle calling itself AI. The AI label itself commands a price premium. The user overpays significantly for a washing machine[0].
It's fundamentally the same thing when a buyer overpays for crap software, thinking it's designed and written by technologists and experts. But IC1-3s write 99% of software, and the 1 QA guy in 99% of tech companies is the sole measure to improve quality beyond "meets acceptance criteria". Occasionally, a flock of interns will perform an "LGTM" incantation in hopes of improving the software, but even that is rarely done.
[0] https://www.lg.com/uk/lg-experience/inspiration/lg-ai-wash-e...