Users don't care about your tech stack

257 comments

February 21, 2025

gizmo

This argument always feels like a motte and bailey to me. Users don't literally care what tech is used to build a product. Of course not, why would they?

But that's not how the argument is used in practice. In practice this argument is used to justify bloated apps, bad engineering, and corner-cutting. When people say “users don’t care about your tech stack,” what they really mean is that product quality doesn’t matter.

Yesterday File Pilot (no affiliation) hit the HN frontpage. File Pilot is written from scratch and it has a ton of functionality packed in a 1.8mb download. As somebody on Twitter pointed out, a debug build of "hello world!" in Rust clocks in at 3.7mb. (No shade on Rust)

Users don't care what language or libraries you use. Users care only about functionality, right? But guess what? These two things are not independent. If you want to make something that starts instantly, you can't use Electron or Java. You can't use bloated libraries. Because users do notice. All else being equal, users will absolutely choose the zippiest products.

talksik

I like this take, though deadlines do force you to make some tradeoffs. That's the conclusion I've come to.

I do think people nowadays over-index on iteration/shipping speed over quality. It's an escape. And it shows, when you "ship".

nindalf

> a debug build of "hello world!" in Rust clocks in at 3.7mb. (No shade on Rust)

This isn't true. It took me two seconds to create a new project, run `cargo build` followed by `ls -hl ./target/debug/helloworld`. That tells me it's 438K, not 3.7MB.

Also, this is a debug build, one that contains debug symbols to help with debugging. Release builds would be configured to strip them, and a release binary of hello world clocks in at 343K. And for people who want even smaller binaries, they can follow the instructions at https://github.com/johnthagen/min-sized-rust.

Older Rust versions used to include more debug symbols in the build, but they're now stripped out by default.
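
For reference, the same knobs can be sketched straight on the command line (an illustrative one-liner mirroring the profile settings that guide walks through; exact sizes will vary by platform and toolchain):

$ rustc -C opt-level=z -C lto=fat -C codegen-units=1 -C panic=abort -C strip=symbols hello.rs && ls -alh hello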

gizmo

$ rustc --version && rustc hello.rs && ls -alh hello

rustc 1.84.1 (e71f9a9a9 2025-01-27)
-rwxr-xr-x 1 user user 9.1M hello

So 9.1 MB on my machine. And as I pointed out in a comment below, your release binary of 440k is still larger than necessary by a factor of 2000 or so.

Windows 95 came on thirteen 3.5" floppies, so about 22 MB. The Rust compiler package takes up 240 MB on my machine. That means Rust is about 10x larger than a fully functional desktop OS from 30 years ago.

Lvl999Noob

Fwiw, in something like hello world, most of the size is just the rust standard library that's statically linked in. Unused parts don't get removed as it is precompiled (unless there's some linker magic I am unaware of). A C program dynamically links to the system's libc so it doesn't pay the same cost.

steveklabnik

Before a few days ago I would have told you that the smallest binary rustc has ever produced is 137 bytes, but I told that to someone recently and they tried to reproduce and got it down to 135.

The default settings don’t optimize for size, because for most people, this doesn’t matter. But if you want to, you can.

Aurornis

> And as I pointed out in a comment below, your release binary of 440k is still larger than necessary by a factor of 2000 or so.

This is such a silly argument because nobody is optimizing compilers and standard libraries for Hello World utilities.

It's also ridiculous to compare debug builds in rust against release builds for something else.

If you want a minimum-sized Hello World app in Rust then you'd use no_std, a no-op panic handler, and make the syscalls manually.
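
A minimal sketch of what that looks like (assuming x86-64 Linux; the file name min_hello.rs and the build command are illustrative, e.g. `rustc -C panic=abort -C link-arg=-nostartfiles min_hello.rs`):

    #![no_std]
    #![no_main]

    use core::arch::asm;
    use core::panic::PanicInfo;

    // No-op panic handler, so no unwinding machinery gets pulled in.
    #[panic_handler]
    fn panic(_: &PanicInfo) -> ! {
        loop {}
    }

    #[no_mangle]
    pub extern "C" fn _start() -> ! {
        let msg = b"hello world!\n";
        unsafe {
            // write(1, msg, len); rax is clobbered by the return value,
            // and the syscall instruction clobbers rcx and r11.
            asm!(
                "syscall",
                inout("rax") 1usize => _,
                in("rdi") 1usize,
                in("rsi") msg.as_ptr(),
                in("rdx") msg.len(),
                out("rcx") _,
                out("r11") _,
            );
            // exit(0)
            asm!(
                "syscall",
                in("rax") 60usize,
                in("rdi") 0usize,
                options(noreturn),
            )
        }
    }

The point isn't that anyone should ship software this way; it's just what "making the syscalls manually" looks like once you opt out of the defaults.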

> The rust compiler package takes up 240mb on my machine. That means rust is about 10x larger than a fully functional desktop OS from 30 years ago.

Fortunately for all of us, storage costs and bandwidth prices have improved by multiple orders of magnitude since then.

Which is why we don't care. The added benefits of modern software are great.

You're welcome to go back and use a 30 year old desktop OS if you'd like, though.

nindalf

rustc --version && rustc main.rs && ls -alh main

rustc 1.85.0 (4d91de4e4 2025-02-17)
-rwxr-xr-x 1 user user 436K 21 Feb 17:17 main

What's your output for `rustup default`?

Also, what's your output when you follow min-sized-rust?

bitbasher

For me,

3.8M Feb 21 11:56 target/debug/helloworld

metaltyphoon

Why is --release not being passed to cargo? It's not like the File Pilot mentioned by GP is released with debug symbols.

thfuran

>When people say “users don’t care about your tech stack,” what they really mean is that product quality doesn’t matter.

No, it means that product quality is all that matters. The users don't care how you make it work, only that it works how they want it to.

jeltz

I have never seen it used like that. I have always seen it used like parent said: to justify awful technical choices which hurt the user.

I have written performant high quality products in weird tech stacks where performance can be a bit tricky to get: Ruby, PL/pgSQL, Perl, etc. But it was done by a team who cared a lot about technology and their tech stack. Otherwise it would not have been possible to do.

JohnFen

This is a genuinely fascinating difference in perception to me. I don't remember ever hearing it used in the way you have. I've always heard it used to point out that devs often give more focus on what tools they use than they do on what actually matters to their customers.

kube-system

TFA uses the phrase that way.

> What truly makes a difference for users is your attention to the product and their needs.

> Learn to distinguish between tech choices that are interesting to you and those that are genuinely valuable for your product and your users.

v3xro

Would like to echo this. I've seen this used to justify extracting more value from the user rather than spending time doing things that you can't ship next week with a marketing announcement.

bdcravens

I've also seen it used when discussing solutions that aren't stack pure (for instance, whether to stick with the ORM or write a more performant pure SQL version that uses database-engine specific features)

jasonlotito

> I have never seen it used like that.

Then you need to read more, because that's what it means. The tech stack doesn't matter. Only the quality of the product. That quality is defined by the user. Not you. Not your opinion. Not your belief. But the user of the product.

> which hurt the user.

This will self correct.

Horrible tech choices have led to world class products that people love and cherish. The perfect tech choices have led to things people laugh at and serve as a reminder that the tech stack doesn't matter, and in fact, may be a red flag.

wink

Look at every single discussion about Electron ;)

"It's a basic tool that sits hidden in my tray 99.9% of the time and it should not use 500MB of memory when it's not doing anything" is part of product quality.

elktown

Only 500MB? Now you're being charitable.

foldr

Using 500MB of memory while not doing anything isn’t really a problem. If RAM is scarce then it will get paged out and used by another app that is doing something.

bluefirebrand

Businesses need to learn that, like it or not, code quality and architecture quality is a part of product quality

You can have a super great product that makes a ton of money right now that has such poor build quality that you become too calcified to improve in a reasonable amount of time

This is why startups can outcompete incumbents sometimes

Suddenly there's a market shift, and a startup can actually build your entire product and the new competitive edge in less time than it takes you to add just the new competitive edge, because your code and architecture have atrophied to the point that it takes longer to update than it would to rebuild from scratch

Maybe this isn't as common as I think, I don't know. But I am pretty sure it does happen

thfuran

>You can have a super great product that makes a ton of money right now that has such poor build quality that you become too calcified to improve in a reasonable amount of time

While it's true that that can be partially due to tech debt, there are generally other factors as well. The more years you've had to accrue customers in various domains, the more years of decisions you have to maintain backwards compatibility with, the more regulatory regimes you conduct business under and build process around, the slower you're going to move compared to someone trying to move fast and break things.

dartos

> No, it means that product quality is all that matters

But it says that in such a roundabout way that non technical people use it as an argument for MBAs to dictate technical decisions in the name of moving fast and breaking things.

commandlinefan

> product quality is all that matters

I don't know what technology was used to build the audio mixer that I got from Temu. I do know that it's a massive pile of garbage because I can hear it when I plug it in. The tech stack IS the product quality.

jmcqk6

I don't think that's broadly true. The unfortunate truth about our profession is that there is no floor to how bad code can be while yet generating billions of dollars.

thfuran

If it's making billions of dollars, somebody somewhere is getting a lot of what they want out of it. But it's possible that those people are actually the purchasing managers or advertisers rather than the users of the software. "Customers" probably would've been the more correct term. Or sometimes "shareholders".

portaouflop

If users care so much about product quality why is everyone using the most shitty software ever produced — such as Teams?

salomonk_mur

For 99% of users, what you describe really isn't something they know or care about.

thfuran

I might agree that 99% of users don't know what they want, but not that they don't care.

an-unknown

> Yesterday File Pilot (no affiliation) hit the HN frontpage. File Pilot is written from scratch and it has a ton of functionality packed in a 1.8mb download. As somebody on Twitter pointed out, a debug build of "hello world!" in Rust clocks in at 3.7mb. (No shade on Rust)

While the difference is huge in your example, it doesn't sound too bad at first glance, because that hello world just includes some Rust standard libraries, so it's a bit bigger, right? But I remember a post here on HN about some fancy "terminal emulator" with GPU acceleration and written in Rust. Its binary size was over 100MB ... for a terminal emulator which didn't pass vttest and couldn't even do half of the things xterm could. Meanwhile xterm takes about 12MB including all its dependencies, which are shared by many programs. The xterm binary itself is just about 850kB of these 12MB. That is where binary size starts to hurt, especially if you have multiple such insanely bloated programs installed on your system.

> If you want to make something that starts instantly you can't use electron or java.

Of course you can make something that starts instantly and is written in Java. That's why AOT compilation for Java is a thing now, with SubstrateVM (aka "GraalVM native-image"), precisely to eliminate startup overhead.
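
For illustration, the workflow is roughly this (app.jar is a placeholder; assumes a GraalVM distribution with the native-image tool installed):

$ native-image -jar app.jar app
$ ./app

The resulting executable runs without the usual JVM startup, which is the overhead the parent comment is talking about.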

bmicraft

alacritty (in the arch repo) is 8MB decompressed

Alacritty is also written in Rust and GPU accelerated, so the other VTE must just be plain bad

Edit: Just tried turning on a couple bin-size optimizations which yielded a 3.3M binary

matula

> In practice this argument is used to justify bloated apps

Speaking of motte-and-bailey. But I actually disagree with the article's "what should you focus on". If you're a public-facing product, your focus should be on making something the user wants to use, and WILL use. And if your tech stack takes 30 seconds to boot up, that's probably not the case. However, if you spend much of your time trying to eke out an extra millisecond of performance, that's also not focusing on the right thing (disclaimer: obviously if you have a successful, proven product/app already, performance gains are a good focus).

It's all about balance. Of course on HN people are going to debate microsecond optimizations, and this is the perfect place to do so. But every so often, a post like this pops up as semi-rage bait, but mostly to reset some thinking. This post is simplistic, but that's what gets attention.

I think gaming is a good example that illustrates a lot of this. The purpose of games is to appeal to others, and to actually get played. And there are SO many examples of very popular games built on slow, non-performant technologies because that's what the developer knew or could pick up easily. Somewhere else in this thread there is a mention of Minecraft. There are also games like Undertale, or even last year's most popular game, Balatro. Those devs didn't build the games focusing on "performance", they made them focusing on playability.

gjsman-1000

I saw an HN post recently where a classic HN commentator was angry that another person was using .NET Blazor for a frontend; with the mandatory 2MB~3MB WASM module download.

He responded by saying that he wasn’t a front-end developer, and to build the fancy lightweight frontend would be extremely demanding of him, so what’s the alternative? His customers find immensely more value in the product existing at all, than by its technical prowess. Doing more things decently well is better than doing few things perfectly.

Although, look around here - the world's greatest tech stack would be shredded here because the images weren't perfectly resized to pixel-perfect fit their frames, forcing the browser to resize the image, which is slower and wastes CPU cycles every time, when it could have been done only once server side, oh the humanity, think about how much polar ice you've melted with your carelessness.

itishappy

> As somebody on Twitter pointed out, a debug build of "hello world!" in Rust clocks in at 3.7mb. (No shade on Rust)

Debug symbols aren't cheap. A release build with a minimal configuration (linked below) gets that down to 263kb.

https://stackoverflow.com/questions/29008127/why-are-rust-ex...

gizmo

My point was that many programmers have no conception of how much functionality you can fit in a program of a few megabytes.

Here is a pastebin[1] of a Python program that creates a "Hello world" x64 elf binary.

How large do you think the ELF binary is? Not 1kb. Not 10kb. Not 100kb. Not 263kb.

The executable is 175 bytes.

[1] https://pastebin.com/p7VzLYxS

(Again, the point is not that Rust is bad or bloated but that people forget that 1 megabyte is actually a lot of data.)

Mawr

And your point is completely wrong. It makes no sense for a language to by default optimize for the lowest possible binary size of a "hello world"-sized program. Nobody's in the business of shipping "hello world" to binary-size-sensitive customers.

Non-toy programs tend to be big and the size of their code will dwarf whatever static overhead there is, so your argument does not scale.

Even then, binary size is a low priority item for almost all use cases.

But then even if you do care about it, guess what, every low level language, Rust, C, whatever, will let you get close to the lowest size possible if you put in the effort.

So no, on no level does your argument make sense with any of the examples you've given.

Aurornis

> My point was that many programmers have no conception of much functionality you can fit in a program of a few megabytes.

Many of my real-world Rust backend services are in the 1-2MB range.

> Here is a pastebin[1] of a Python program that creates a "Hello world" x64 elf binary.

> How large do you think the ELF binary is? Not 1kb. Not 10kb. Not 100kb. Not 263kb.

> The executable is 175 bytes.

You can also disable the standard library and a lot of Rust features and manually write the syscall assembly into a Rust program. With enough tweaking of compiler arguments you'd probably get it to be a very small binary too.

But who cares? I can transfer a 10MB file in a trivial amount of time. Storage is cheap. Bandwidth is cheap. Playing code golf for programs that don't do anything is fun as a hobby, but using it as a debate about modern software engineering is nonsensical.

itishappy

No disagreement here! Just curious how big the impact of debug symbols was and wanted to share my findings.

pythonaut_16

Thanks for pointing this out.

It does seem weird to complain about the file size of a debug build not a release build.

marcinzm

In my experience, the people who make these arguments often don't even know their own tech stack of choice well enough to make it work halfway efficiently. They say 10ms, but that assumes someone who knows the tech stack and the tradeoffs and can optimize it. In their hands it's going to be 1+ seconds, and it becomes such a tangled mess that it can't be optimized down the line.

bloomingkales

> If you want to make something that starts instantly, you can't use Electron or Java.

This technical requirement is only on the spec sheet created by HN goers. Nobody else cares. Don't take tech specs from your competitors, but do pay attention. The user is always enchanted by a good experience, and they will never even perceive what's underneath. You'd need a competitor to get in their ear about how it's using Electron. Everyone has a motive here, don't get it twisted.

morcus

> They won’t notice those extra 10 milliseconds you save

They won't notice if this decision happens once, no. But if you make a dozen such decisions over the course of developing a product, then the user will notice. And if the user has e.g. old hardware or slow Internet, they will notice before a dozen such decisions are made.

austin-cheney

In my career of writing software most developers are fully incapable of measuring things. They instead make completely unfounded assertions that are wrong more than 80% of the time and when they are wrong they tend to be wrong by several orders of magnitude.

And yes, contrary to many comments here, users will notice that 10ms saved if it’s on every key stroke and mouse action. Closer to reality though is sub-millisecond savings that occurs tens of thousands of times on each user interaction that developers disregard as insignificant and users always notice. The only way to tell is to measure things.

jakevoytko

When I was at Google, our team kept RUM metrics for a bunch of common user actions. We had a zero regression policy and a common part of coding a new feature was running benchmarks to show that performance didn't regress. We also had a small "forbidden list" of JavaScript coding constructs that we measured to be particularly slow in at least one of Chrome/Firefox/Internet Explorer.

Outside contributors to our team absolutely hated us for it (and honestly some of the people on the team hated it too); everyone likes it when their product is fast, and nobody likes being held to the standard of keeping it that way. When you ask them to rewrite their functional code as a series of `for` loops because the function call overhead is measurably 30% slower across browsers[0], they get so mad.

[0] This was in 2010, I have no idea what the performance difference is in the Year of Our Lord 2025.

morkalork

Have you had the chance to interact with any of the web interfaces for their cloud products like GCP Console, Looker Studio, BigQuery, etc.? It's painful, like when clicking a button or link you can feel a Cloud Run instance initializing itself in the background before processing your request.

otterley

Boy, do I wish more teams worked this way. Too many product leaders are tasked with improving a single KPI (for example, reducing service calls) but without requiring other KPIs such as user satisfaction to remain constant. The end result is a worse experience for the customer, but hey, at least the leader’s goal was met.

jjice

> They instead make completely unfounded assertions that are wrong more than 80% of the time and when they are wrong they tend to be wrong by several orders of magnitude.

I completely agree. It blows my mind how fifteen minutes of testing something gets replaced with a guess. The most common situation I see this in (over and over again) is with DB indexes.

The query is slow? Add a bunch of random indexes. Let's not look at the EXPLAIN and make sure the index improves the situation.

I just recently worked with a really strong engineer who kept saying we were going to need to shard our DB soon, but we're way too small of a company for that to be justified. Our DB shouldn't be working that hard (it was all CPU load); there had to be a bad query in there. He even started drafting plans for sharding because he was adamant that it was needed. Then we checked RDS Performance Insights and saw it was one rogue query (as one should expect). It was about a 45-minute fix, and after downsizing one notch on RDS, we're sitting at about 4% most of the time on the DB.

But this is a common thing. Some engineers will _think_ there's going to be an issue, or when there is one, completely guess what it is without getting any data.

Another anecdote from a past company was them upsizing their RDS instance way more than they should need for their load because they dealt with really high connection counts. There was no way this number of connections should be going on based on request frequency. After a very small amount of digging, I found that they would open a new DB connection per object they created (this was PHP). Sometimes they'd create 20 objects in a loop. All the code was synchronous. You ended up with some very simple HTTP requests that would cause 30 DB connections to be established and then dropped.

finnthehuman

My Plex server was down and my lazy solution was to connect directly to the NAS. I was surprised just how much I noticed the responsiveness after getting used to web players. A week ago I wouldn't have said the web player bothered me at all. Now I can't not notice.

exhaze

Can you show me any longitudinal studies that show examples of a causal connection between incrementality of latency and churn? It’s easy to make such a claim and follow up with “go measure it”. That takes work. There are numerous other things a company may choose to measure instead that are stronger predictors of business impact.

There is probably some connection. Anchoring to 10ms is a bit extreme IMO because it’s indirectly implying that latency is incredibly important which isn’t universally true - each product’s metrics that are predictive of success are much more nuanced and may even have something akin to the set of LLM neurons called “polysemantic” - it may be a combination of several metrics expressed via some nontrivial function that are the best predictor.

For SaaS, if we did want to simplify things and pick just one - usage. That’s the strongest churn signal.

Takeaway: don’t just measure. Be deliberate about what you choose to measure. Measuring everything creates noise and can actually be detrimental.

bluGill

Human factors has a long history of studying this. I'm 30 years out of school and wouldn't know where to find my notes (and thus references), but there are places where users will notice 5ms. There are other places where seconds are not noticed.

The web forced people to get used to very long latency, so they no longer comment on 10+ seconds, but the old studies prove they notice them and that shorter waits would drive better "feelings". Back in the old days (of 25MHz CPUs!) we had numbers for how long your application could take to do various things before users would become dissatisfied. Most of the time the dissatisfaction is not something they would blame on the latency even though the lab test proved that was the issue; instead it was a general 'feeling' they would be unable to explain.

There are many many different factors that UI studies used to measure. Lag in the mouse was a big problem, and not just the pointer movement either: if the user clicks, you have only so long before it must be obvious that the application saw the click (my laptop fails at this when I click on a link), but you didn't have to bring up the response nearly as fast, so long as users could tell it was processing.

austin-cheney

Here is a study on performance that I did for JavaScript in the browser: https://github.com/prettydiff/wisdom/blob/master/performance...

TLDR; full state restoration of an OS GUI in the browser in under 80ms from page request. I was eventually able to get that exact scenario down to 67ms. Not only is the state restoration complete but it covers all interactions and states of the application in a far more durable and complete way than big JavaScript frameworks can provide.

Extreme performance showed me two things:

1. Have good test automation. With a combination of good test automation and types/interfaces on everything you can refactor absolutely massive applications in about 2 hours with almost no risk of breaking anything.

2. Tiny performance improvements mean massive performance gains overall. The difference in behavior is extreme. Imagine pressing a button and what you want is just there before your brain can process screen flicker. This results in a wildly different set of user behaviors than slow software that causes users to wait between interactions.

Then there are downstream consequences to massive performance improvements, the second order consequences. If your software is extremely fast across the board then your test automation can be extremely fast across the board. Again, there is a wildly different set of expectations around quality when you can run end-to-end testing across 300 scenarios in under 8 seconds as compared to waiting 30 minutes to fully validate software quality. In the latter case nobody runs the tests until they are forced to as some sort of CI step, and even then people will debate whether a given change is worth the effort. When testing takes less than 8 seconds everybody and their dog, including the completely non-technical people, runs the tests dozens of times a day.

I wrote my study of performance just a few months before being laid off from JavaScript land. Now, I will never go back for less than half a million in salary per year. I got tired of people repeating the same mistakes over and over. God forbid you know what the answer is to cure world hunger and bring in world peace, because any suggestion to make things better is ALWAYS met with hostility if it challenges a developer's comfort bubble. So, now I do something else where I can make just as much money without all the stupidity.

arkh

Soooooo, my "totally in the works" post about how a direct connection to your RDBMS is the next API may not be so tongue in cheek. No REST, no GraphQL, no HTTP overhead. Just plain SQL over the wire.

Authentication? Already baked-in. Discoverability? Already in. Authorization? You get it almost for free. User throttling? Some offer it.

Caching is for weak apps.

Aurornis

I find it fascinating that HN comments always assert that 10ms matters in the context of user interactions.

60Hz screens don't even update every 10ms.

What's even more amazing is that the average non-tech person probably won't even notice the difference between a 60Hz and a 120Hz screen. I've held 120Hz and 60Hz phones side by side in front of many people, scrolled on both of them, and had the other person shrug because they don't really see a difference.

The average user does not care about trivial things. As long as the app does what they want in a reasonable amount of time, it's fine. 10ms is nothing.

tuna74

60Hz screens update every 16.7 ms. So if you add 10 ms to your frame time you will probably miss that 16.7 ms window.

Almost everyone can see the difference between 60 and 120 fps. Most probably don't care, though.

alex77456

Multiply that by the number of users and total hours your software is used, and suddenly it's a lot of wasted Watts of energy people rarely talk about.

nprateem

But they still don't care about your stack. They care that you made something slow.

Fix that however you like, but don't pretend your non-technical users directly care that you used Go vs Java or whatever. The only time that's relevant to them is if you can use it for marketing.

morcus

That's fine, but I am responding directly to the article.

> [Questions like] “Is this framework more performant than the other?” keep coming up. But the truth is: users don’t care about any of that.

marcinzm

> extra 10 milliseconds you saved

Straw-man arguments are no fun, so let's look at an actual example:

https://www.rippling.com/blog/the-garbage-collector-fights-b...

P99 of 3 SECONDS. App stalls for 2-4 SECONDS. All due to Python.

Their improved p99 is 1.5 seconds. Tons of effort and still could only get 1.5 seconds.

https://www.gigaspaces.com/blog/amazon-found-every-100ms-of-...

> Amazon Found Every 100ms of Latency Cost them 1% in Sales

I've seen e-commerce companies with 1 second p50 latencies due to language choices. Not good for sales.

bearjaws

> Amazon Found Every 100ms of Latency Cost them 1% in Sales

I see this quoted, but Amazon has become 5x slower (guesstimate) and it doesn't seem like they are working on it as much. Sure, the home page loads "fast" at ~800ms over fiber, but clicking on a product routinely takes 2-3 seconds to load.

marcinzm

Amazon nowadays has a near monopoly powered by ad money due to the low margin on selling products versus ad spend. So unless you happen to be in the same position using them nowadays as an example isn't going to be very helpful. If they increased sales 20% at the cost of 1% less ad spend then they'd probably be at a net loss as a result.

scott_w

So you're kinda falling into a fallacy here. You're taking a specific example and trying to make a general rule out of it. I also think the author of the article is doing the same thing, just in a different way.

Users don't care about the specifics of your tech stack (except when they do) but they do care about whether it solves their problem today and in the future. So they indirectly care about your tech stack. So, in the example you provided, the user cares about performance (I assume Rippling know their customer). In other examples, if your tech stack is stopping you from easily shipping new features, then your customer doesn't care about the tech debt. They do care, however, that you haven't given them any meaningful new value in 6 months but your competitor has.

I recall an internal project where a team discussed switching a Python service with Go. They wanted to benchmark the two to see if there was a performance difference. I suggested from outside that they should just see if the Python service was hitting the required performance goals. If so, why waste time benchmarking another language? It wasn't my team, so I think they went ahead with it anyway.

codelion

I think there's a balance to be struck. While users don't directly care about the specific tech, they do care about the results – speed, reliability, features. So, the stack is indirectly important. Picking the right tools (even if there are several "good enough" options) can make a difference in delivering a better user experience. It's about optimizing for the things users do notice.

st3fan

All modern tech stacks have those properties in 2025.

bigstrat2003

They absolutely do not. In fact, relatively few do. Every single Electron app (which is a depressing number of apps) is a bloated mess. Most web pages are a bloated mess where you load the page and it isn't actually loaded, visibly loading more elements as the "loaded" page sits there.

Software sucks to use in 2025 because developers have stopped giving a shit about performance.

benrutter

This is so true, and yet... Bad, sluggish performance is everywhere. I sometimes use my phone for online shopping, and I'm always amazed how slow ecommerce companies can make something as simple as opening a menu.

vivzkestrel

Happens when you use a React library with 30,000 lines to show a simple select menu.

za3faran

Having worked on similar solutions that use Java and Python, I can't say I agree (the former obviously being much faster).

benrutter

Yeah, those languages do a really good job of demonstrating the original point! Java would lead to a lot better performance in a lot of cases (like building a native application, say), but Python, despite being slow, has great FFI (which Java doesn't), so it's a good shout for use cases like data science where you really just want a high-level controller for some C or Rust code.

Point being, Python, despite being slow as a snail or a cruise ship, will lead to faster performance in some specific contexts, so context really is everything.

evidencetamper

This is a mixed bag of advice. While it seems wise on the surface, and certainly works as an initial model, the reality is a bit more complex than aphorisms.

For example, what you know might not provide the cost-benefit ratio your client needs. Or the performance. If you only know Cloud Spanner but now there is a need for a small relational table? These maxims have obvious limitations.

I do agree that the client doesn't care about the tech stack. Or that seeking a gold standard is a MacGuffin. But it goes much deeper than that. Maybe a great solution will be a mix of OCaml, Lua, Python, Perl, low latency and batches.

A good engineer balances tradeoffs and solves problems in a satisfying way that meets all requirements. That can be MySQL and Node. But it can also be C++ and Oracle Coherence. Shying away from a tool just because it has a reputation is just as silly as using it for the hype.

DrScientist

> Maybe a great solution will be a mix of OCaml, Lua, Python, Perl, low latency and batches.

Your customer does care about how quickly you can iterate new features over time, and product stability. A stack with a complex mix of technologies is likely to be harder to maintain over the longer term.

evidencetamper

That's also an aphorism that may or may not correspond to reality.

Not only are there companies with highly capable teams that are able to move fast using a complex mix of technologies, but there are also customers who have very little interest in new features.

This is the point of my comment: these maxims are not universal truths, and taking them as such is a mistake. They are general models of good ideas, but they are just starter models.

A company needs to attend to its own needs and solve its own problems. The way this goes might be surprisingly different from common sense.

DrScientist

Sure, universal truths are rare - though I think there are many more people using such an argument to justify an overly complex stack than there are cases where it truly is the best solution long term.

Remember even if you have an unchanging product, change can be forced on you in terms of regulatory compliance, security bugs, hardware and OS changes etc.

I think the point of the original post is that the most important part of the context is the people (developers) and what they know how to use well, and I'd agree.

I'd just say that one thing I've learnt is that even if the developer who has to add some feature or fix some bug in the future is the developer who originally wrote it, life is so much easier if the original is as simple as possible - but hey, maybe that's just me.

mexicocitinluez

> these maxims are not universal truths, and taking them as such is a mistake.

Amen.

999900000999

How big is your team?

One person writing a stack in 6 languages is different from a team of 100 using 6 languages.

The problem emerges if you have some eccentric person who likes using a niche language no one else on the team knows. Three months into development they decide they hate software engineering and move to a farm in North Carolina.

Who else is going to be able to pick up their tasks? Are you going to be able to quickly onboard someone else, or are you going to have to hire someone new with a specialty in this specific language?

This is a part of why NodeJS quickly ate the world. A lot of web studios had a bunch of front end programmers who were already really good with JavaScript. While NodeJS and frontend JS aren't 100% the same, it's not hard to learn both.

Try to get a front end dev to learn Spring in a week...

evidencetamper

Excellent comment. What you raised are two important aspects of the analysis that the article didn't bother thinking about:

- how to best leverage the team you currently have

- what is the most likely shape your team will have in the future

Jane Street has enough resources and experts to be able to train developers on OCaml; Nubank and Clojure also come to mind. If one leaves, the impact is not devastating. Hiring is not straightforward, but they are able to hire engineers willing to learn and train them.

This is not true for a lot of places, that have tighter teams and budgets, whose product is less specialized, and so on.

But this is where the article fails and your comment succeeds: actually setting out parameters to establish a strategy.

DrScientist

> This is a part of why NodeJS quickly ate the world

And the other part is you can share, say, data validation code between client and server easily - or move logic either side of the network without having to rewrite it.

ie Even if you are an expert in Java and Javascript - there are still benefits to running the same both ends.

tstrimple

Very much this. The concerns with running a six person team are quite a bit different from the concerns of directing hundreds to thousands of developers across multiple projects. No matter how good the team is and how well they are paid and treated, there will be churn. Hiring and supporting folks until they are productive is very expensive, and it gets more expensive the more complicated your stacks are and the more of them you have to maintain.

If you want to have efficient portability of developers between teams you've got to consolidate and simplify your stacks as much as possible. Yeah your super star devs already know most of the languages and can pick up one more in stride no problem. But that's not your average developer. That average dev in very large organizations has worked on one language in one capacity for the last 5-15 years and knows almost nothing else. They aren't reading HN or really anything technology related not directly assigned via certification requirements. It's just a job. They aren't curious about the craft. How are you able to get those folks as productive as possible within your environment while still building institutional resiliency and, when possible, improving things?

That's why the transition from small startup with a couple pizza teams to large organizations with hundreds of developers is so difficult. They are able to actually hire full teams of amazing developers who are curious about the craft. The CTO has likely personally interviewed every single developer. At some point that doesn't become feasible and HR processes become involved. So inevitably the hiring bar will drop. And you'll start getting in more developers who are better about talking through an interview process than jumping between tech stacks fluidly. At some point, you have to transition to a "serious business" with processes and standards and paperwork and all that junk that startup devs hate. Maybe you can afford to have a skunkworks team that can play like startups. But it's just not feasible for the rest of Very Large Organizations. They have to be boring and predictable.

JohnFen

> Your customer does care about how quickly you can iterate new features over time

How true this is depends on your particular target market. There is a very large population of customers that are displeased by frequent iterations and feature additions/changes.

phantomathkg

The author didn't say to listen to the opinion of others, hype or not. The author said "set aside time to explore new technologies that catch your interest ... valuable for your product and your users. Finding the right balance is key to creating something truly impactful.".

It means we should make our own independent, educated judgement based on the need of the product/project we are working on.

mexicocitinluez

> , the reality is a bit more complex than aphorisms.

This is the entire tech blog, social media influencer, devx schtick though. Nuance doesn't sell. Saying "It depends" doesn't get clicks.

thecleaner

> Shying away from a tool just because it has a reputation is just as silly as using it for a hype.

Trying to explain this to a team is one of the most frustrating things ever. Most of the time people pick / reject tools because of "feels".

On a related note, I never understood the hype around GraphQL for example.

evidencetamper

I heavily dislike GraphQL for all of the reasons. But I'll say that for a lot of developers, if you are already setting up an API gateway, you might as well batch the calls, and simplify the frontend code.

I don't buy it :) but I can see the reasoning.

xandrius

I'd say nowadays C++ is rarely the best answer, especially for the users.

bluGill

C++ is often the best answer for users, but this is about how bad the other options are, and not that C++ is good. Options like Rust don't have the mature frameworks that C++ does. (rust-qt is often used as a hack instead of a pure Rust framework.) There is a big difference between modern C++ and the old C++98 as well, and the more you force your code to be modern C++ the less the footguns in C++ will hit you. The C++ committee is also driving forward in eliminating the things people don't like about C++.

Users don't care about your tech stack. They care about things like battery life, how fast your program runs, and how fast your program starts - places where C++ does really well. (C, Rust... also do very well.) Remember this is about real world benchmarks; you can find micro benchmarks where Python is just as fast as well written C, but if you write a large application in Python it will be 30-60 times slower than the same thing written in C++.

Note however that users only care about security after it is too late. C++ can be much better than C, but since it is really easy to write C style code in C++ you need a lot more care than you would want.

If for your application Rust or Ada does have mature enough frameworks to work with then I wouldn't write C++, but all too often the long history of C++ means it is the best choice. In some applications managed languages like Java work well, but in others the limits of the runtime (startup, worse battery life) make it a bad choice. Many things are scripts you won't run very much, and so Python is just fine despite how slow it is. Make the right choice, but don't call C++ a bad choice just because for you it is bad.

williamcotton

For real-time audio synthesis or video game engines, C++ is the industry standard.

evidencetamper

It's true, and of course, all models are wrong, especially as you go into deeper detail, so I can't really argue an edge case here. Indeed, C++ is rarely the best answer. But we all know of trading systems and gaming engines that rely heavily on C++ (for now, may Rust keep growing).

ChrisMarshallNY

...unless you do HFT...

austin-cheney

It would be funny if it weren’t tragic. So many of the comments here echo the nonsense of my software career: developers twisting themselves in knots to justify writing slow software.

hnthrow90348765

I've not seen a compelling reason to start the performance fight in ordinary companies doing CRUD apps. Even if I was good at performance, I wouldn't give that away for free, and I'd prefer to go to companies where it's a requirement (HFT or games), which only furthers the issue of slowness being ubiquitous.

For example, I dropped a 5s paginated query doing a weird cross join to ~30ms and all I got for that is a pat on the back. It wasn't skill, but just recognizing we didn't need the cross join part.

We'd need to start firing people who write slow queries, forcing them to become good, or paying more for developers who know how to measure and deliver performance, which I also don't think is happening.

zwnow

For 99% of apps slow software is compensated by fast hardware. In almost all cases, the speed of your software does not matter anymore. Unless speed is critical, you can absolutely justify writing slow software if its more maintainable that way.

bluGill

And thus when I clicked on a link to an NPR story just now, it was 10 seconds before the page was readable on my computer.

Now my computer (a Pinebook Pro) was never known as fast, but it still runs circles around the first computer I ever ran a browser on. (I'm not sure which computer that was, but the CPU was likely running at 25MHz - could have been an 80486 or a SPARC CPU though. Now get off my lawn, you kids.)

austin-cheney

Those are things developers say to keep themselves employable.

bigstrat2003

Your users feel otherwise. If you actually care at all about the quality of the software you produce, stop rationalizing slow software.

ChrisMarshallNY

This is a fairly classic rant. Many have gone before, and many will come after.

I have found that it's best to focus on specific tools and become good with them, but always be ready to change. ADHD-style "buzzword Bingo" means that you can impress a lot of folks at tech conferences, but you may have difficulty reliably shipping.

I have found that I can learn new languages and "paradigms" fairly quickly, but becoming really good at them takes years.

That said, it's a fast-changing world, and we need to make sure that we keep up. Clutching onto old tech, like My Precioussss, is not likely to end well.

manmal

What do you think of Elixir in that regard? It seems to be evolving in parallel to current trends, but it still seems a bit too niche for my taste. I'm asking because I'm on the fence on whether I should/want to base my further server-side career on it. My main income will likely come from iOS development for at least a few more years, but some things feel off in the Apple ecosystem, and I feel the urge to divest.

HalcyonicStorm

I've been working in Elixir since 2015. I love the ecosystem and think it's the best choice for building a web app from a pure tech/stability/scalability/productivity perspective (I also have a decade+ of experience in Ruby on Rails, Node.js, and PHP Laravel, plus Rust to a lesser extent).

I am however having trouble with the human side of it. I've got a strong resume, but I was laid off in Nov 2024 and I'm having trouble even getting Elixir interviews (with 9+ years of production Elixir experience!). Hiring people with experience was also hard when I was the hiring manager. It is becoming less niche these days. I love it too much to leave for other ecosystems in the web sphere.

manmal

Thanks for sharing your perspective. FWIW, I hope you'll find a nice position soon!

ChrisMarshallNY

I couldn't even begin to speak to Elixir. Never used it.

Most of my work is client-side (native Apple app development, in Swift).

For server-side stuff, I tend to use PHP (not a popular language, hereabouts). Works great.

cess11

I like PHP. It has a raw power that is really nice to have in web development.

Elixir is similar but concurrency is a 'first class citizen', processes instead of objects, kind of. It's worth a look. I've never used it but there's a project for building iOS applications with the dominant Elixir web framework, https://github.com/liveview-native/live_view_native .

cess11

Elixir can be used for scripting tasks, config and test rig are usually scripts. In theory you can use the platform for desktop GUI too, one of the bespoke monitoring tools is built that way. Since a few years back there are libraries for numeric and ML computing too.

dzonga

Users don't care - but users care how reliable your software is, users care about how quickly you can ship the features they request.

Tech stack determines software quality depending on the authors of the software of course.

But certain stacks allow devs to ship faster, fix bugs faster and accommodate user needs.

Look at what 37Signals is able to do with 5 Product Software Engineers [1]. Their output is literally 100x that of their competitors.

[1]: https://x.com/jorgemanru/status/1889989498986958958

Aurornis

> Look at what 37Signals is able to do with [1] 5 Product Software Engineers. their output is literally 100x of their competitors.

The linked Tweet thread says they have 16 Software Engineers and a separate ops team that he's not counting for some reason.

There are also comments further down that thread about how their "designers" also code, so there is definitely some creative wordplay happening to make the number of programmers sound as small as possible.

Basecamp (37Signals) also made headlines for losing a lot of employees in recent years. They had more engineers in the past when they were building their products.

Basecamp is also over 20 years old and, to be honest, not very feature filled. It's okay-ish if your needs fit within their features, but there's a reason it's not used by a lot of people.

DHH revealed their requests per second rate in a Twitter argument a while ago and it was a surprisingly low number. This was in the context of him claiming that he could host it all on one or two very powerful servers, if I recall correctly.

When discussing all things Basecamp (37Signals) it's really important to remember that their loud internet presence makes them seem like they have a lot more users than they really do. They've also been refining basically the same product for two decades and had larger teams working in the past.

benrutter

Just joining all the other comments to say there's a split between:

- users don't care about your tech stack
- you shouldn't care about your tech stack

I don't on-paper care what metal my car is going to be made of, I don't know enough information to have an opinion. But I reeaaally hope the person designing it has a lot of thoughts on the subject.

buss_jan

I find it funny that this message resurfaces on the front page once or twice a year, and has for at least 10 years now. Product quality is often not the main argument advanced when deciding on a tech stack, only indirectly. Barring any special technical requirements, in the beginning what matters is:

- Can we build quickly without making a massive mess?
- Will we find enough of the right people who can and want to work with this stack?
- Will this tech stack continue to serve us in the future?

Imagine it's 2014 and you're deciding between two hot new frameworks, Ember and React; this is not just a question of what is hot or shiny and new.

mrkeen

There's an obvious solution to "language doesn't matter". Let the opinionated people pick the stack. Then you satisfy the needs of the people who care and those who don't care.

bonoboTP

The opinionated people disagree.

mrkeen

The opinionated people I disagree with sure like saying "language doesn't matter", as long as it preserves their status quo.

danjl

This discussion is not about technology. It's about technical people learning that business, product and users are actually important. The best advice I can give technical people about working at startups is that you should learn everything you can about business. You can do that at a startup much easier than at a big tech company. Spend as much time as you can with your actual users, watching them use your product. It will help you communicate with the rest of the team, prioritize your technical tasks, and help you elevate your impact.

mmarian

Problem is your hiring manager at a startup will still care whether you're an expert in the stack-du-jour. So technical people aren't incentivised to care about the business.

null

[deleted]