PaulHoule
resonious
As a counter example (re: agents), I routinely delegate simple tasks to Claude Code and get near-perfect results. But I've also had experiences like yours where I ended up wasting more time than saved. I just kept trying with different types of tasks, and narrowed it down to the point where I have a good intuition for what works and what doesn't. The benefit is I can fire off a request on my phone, stick it in my pocket, then do a code review some time later. This process is very low mental overhead for me, so it's a big productivity win.
SchemaLoad
Sounds like a slot machine. Insert api tokens, get something that's pretty close to right, insert more tokens and hope it works this time.
resonious
Except the tokens you insert have meaning, and some yield better results than others. Not like a slot machine at all, really. Last I checked, those only have 1 possible input, no way to improve your odds.
Aeolun
That’s fine if your expectations are commensurate.
PaulHoule
How's that different from a human developer? Give the same task to different developers and you'll get different levels of correctness and quality. Give the same task to the same developer on different days and you'll see the same variation.
DHRicoF
The cost is in the context switching. Fire off 3 tasks and they come back 15, 20, and 30 minutes later. The first is mostly OK, so you finish it by hand. The second has some problems, so you ask for a rework. Then the third comes back and, while OK, it has some design problems, so you ask for another rework. Then the second one comes back, and you have to remember the original task and what changes you asked for.
xyzzy123
That's cool, how are you integrating your phone with your Claude workflow?
discordance
You can set up hooks: https://docs.anthropic.com/en/docs/claude-code/hooks-guide
And use something like ntfy to get notifications on your phone.
I’ve also seen people assign Claude code issues on GitHub and then use the GitHub mobile app on their phone to get notifications and review PRs.
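As a rough sketch of the hook + ntfy combination (the event name and settings layout are as I understand them from the hooks guide above; the ntfy topic is made up), something like this in `~/.claude/settings.json` pings your phone when Claude finishes a turn:

```json
{
  "hooks": {
    "Stop": [
      {
        "hooks": [
          {
            "type": "command",
            "command": "curl -s -d 'Claude Code is done' ntfy.sh/my-claude-topic"
          }
        ]
      }
    ]
  }
}
```

Then subscribe to the same topic in the ntfy app on your phone.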
ChadNauseam
I don't know how to do it with Claude Code, but I was at a beach vacation for the past few days and I was studying French on my phone with a web app that I made. Sometimes I'd notice something bugging me, and I used Cursor's "background agents" tool to ask it to make a change. This is essentially just a website where you can type in your request, and they allocate a VM, check out your repository, then run the Cursor LLM agent inside that VM to implement your requested changes, then push it and create a pull request to your repo. Because I have CI/CD set up, I then just merged the change and waited for it to deploy (usually going for a swim in between).
I realized as I was doing it that I wouldn't be able to tell anyone about it because I would sound like the most obnoxious AI bro ever. But it worked! (For the simple requests I used it on.) The most annoying part was that I had to tell it to run rustfmt every time, because otherwise it would fail CI and I wouldn't be able to merge it. And then it would take forever to install a rust toolchain and figure out how to run clippy and stuff. But it did feel crazy to be able to work on it from the beach. Anyway, I'm apparently not very good at taking vacations, lol
resonious
My dev environment works perfectly on Termux, and so does Claude Code. So I just run `claude` like normal, and everything is identical to how I do it on desktop.
Edit: clarity
Aeolun
I just SSH into my CC machine from the phone, then use CC.
cycomanic
I've already written about this several times here. I think the current trend of LLMs chasing benchmark scores is going in the wrong direction, at least for programming tools. In my experience they get it wrong often enough that I always need to check the work. So I end up in a back and forth with the LLM, and because of the slow responses it becomes a really painful process; I could often have done the task faster if I'd sat down and thought about it. What I want is an agent that responds immediately (and I mean in subseconds), even if some benchmark score is 60% instead of 80%.
pron
Programmers (and I'm including myself here) often go to great lengths to not think, to the point of working (with or without a coding assistant) for hours in the hope of avoiding one hour of thinking. What's the saying? "An hour of debugging/programming can save you minutes of thinking," or something like that. In the end, we usually find that we need to do the thinking after all.
I think coding assistants would end up being more helpful if, instead of trying to do what they're asked, they would come back with questions that help us (or force us) to think. I wonder if a context prompt that says, "when I ask you to do something, assume I haven't thought the problem through, and before doing anything, ask me leading questions," would help.
I think Leslie Lamport once said that the biggest resistance to using TLA+ - a language that helps you, and forces you to think - is because that's the last thing programmers want to do.
cycomanic
> Programmers (and I'm including myself here) often go to great lengths to not think, to the point of working (with or without a coding assistant) for hours in the hope of avoiding one hour of thinking. What's the saying? "An hour of debugging/programming can save you minutes of thinking," or something like that. In the end, we usually find that we need to do the thinking after all.
This is such a great observation. I'm not quite sure why this is. I'm not a programmer, but a signal-processing/systems engineer/researcher. The weird thing is that it seems to be the process of programming itself that causes the "not-thinking" behaviour. E.g., when I program a simulation and find that I must have a sign error somewhere in my implementation (sometimes you can see this from the results), I end up switching every possible sign around instead of taking pen and paper and comparing theory and implementation. If I do other work, e.g. theory, that's not the case. I suspect we try to avoid the cost of the context switch and try to stay in the "programming flow".
ChrisMarshallNY
I do both. I like to develop designs in my head, and there’s a lot of trial and error.
I think the results are excellent, but I can hit a lot of dead ends, on the way. I just spent several days, trying out all sorts of approaches to PassKeys/WebAuthn. I finally settled on an approach that I think will work great.
I have found that the old-fashioned “measure twice, cut once” approach is highly destructive. It was how I was trained, so walking away from it was scary.
PaulHoule
Sometimes thinking and experimenting go together. I had to do some maintenance on some TypeScript/yum code that I didn't write but had done a little maintenance on before.
TypeScript can produce astonishingly complex error messages when types don't match up, so I went through a couple of rounds of showing the errors to the assistant and getting suggested fixes that were wrong, but I got some ideas and did more experiments. Over the course of two days (making the desired changes along the way) I figured out what was going wrong and cleared up the use of types such that I was really happy with my code; when I saw a red squiggle I usually knew right away what was wrong, and if I did ask the assistant it would also get it right straight away.
I think there's no way I would have understood what was going on without experimenting.
pjmlp
I agree with your comment in general; however, I would say that in my field the resistance to TLA+ isn't having to think, but rather having to code things twice without guarantees that the code actually maps to the theoretical model.
Tools like Lean and Dafny are much more appreciated, as they generate code from the model.
creamyhorror
> "An hour of debugging/programming can save you minutes of thinking,"
I get what you're referring to here, when it's tunnel-vision debugging. Personally I usually find that coding/writing/editing is thinking for me. I'm manipulating the logic on screen and seeing how to make it make sense, like a math problem.
LLMs help because they immediately think through a problem and start raising questions and points of uncertainty. Once I see those questions in the <think> output, I cancel the stream, think through them, and edit my prompt to answer the questions beforehand. This often causes the LLM's responses to become much faster and shorter, since it doesn't need to agonise over those decisions any more.
panarky
> assume I haven't thought the problem through
This is the essence of my workflow.
I dictate rambling, disorganized, convoluted thoughts about a new feature into a text file.
I tell Claude Code or Gemini CLI to read my slop, read the codebase, and write a real functional design doc in Markdown, with a section on open issues and design decisions.
I'll take a quick look at its approach and edit the doc to tweak its approach and answer a few open questions, then I'll tell it to answer the remaining open questions itself and update the doc.
When that's about 90% good, I'll tell the local agent to write a technical design doc to think through data flow, logic, API endpoints and params and test cases.
I'll have it iterate on that a couple more rounds, then tell it to decompose that work into a phased dev plan where each phase is about a week of work, and each task in the phase would be a few hours of work, with phases and tasks sequenced to be testable on their own in frequent small commits.
Then I have the local agent read all of that again, the codebase, the functional design, the technical design, and the entire dev plan so it can build the first phase while keeping future phases in mind.
It's cool because the agent isn't only a good coder, it's also a decent designer and planner too. It can read and write Markdown docs just as well as code and it makes surprisingly good choices on its own.
And I have complete control to alter its direction at any point. When it methodically works through a series of small tasks it's less likely to go off the rails at all, and if it does it's easy to restore to the last commit and run it again.
makeitdouble
In general agreement about the need to think it through, though she should be careful not to praise the other extreme.
> "An hour of debugging/programming can save you minutes of thinking"
The trap so many devs fall into is assuming the code behaves the way they think it does. Or believing documentation or seemingly helpful comments. We really want to believe.
People's mental image is more often than not wrong, and debugging tremendously helps bridge the gap.
alfalfasprout
it's funny, I feel like I'm the opposite and it's why I truly hate working with stuff like claude code that constantly wants to jump into implementation. I want to be in the driver's seat fully and think about how to do something thoroughly before doing it. I want the LLM to be, at most, my assistant. Taking on the task of being a rubber duck, doing some quick research for me, etc.
It's definitely possible to adapt these tools to be more useful in that sense... but it definitely feels counter to what the hype bros are trying to push out.
quarkcarbon279
World of LLMs or not, development should always strive to be fast. In the world of LLMs, users should always have control over the accuracy-vs-speed trade-off (though we can try to improve both rather than one at the expense of the other). For example, at rtrvr.ai we use Gemini Flash as our default and benchmarked on Flash too, at 0.9 min per task, while still yielding top results. That said, I have to accept there are certain web tasks on tail-end sites that need Pro to navigate accurately at this point. This is a limitation of relying on Gemini models straight up; once we move to our own models trained on web trajectories, this hopefully won't be a problem.
If you use off-the-shelf LLMs, you will always be bottlenecked by their speed.
markasoftware
GitHub copilot's inline completions still exist, and are nearly instant!
citizenpaul
The only thing I've found where an LLM speeds up my work is a sort of advanced find-and-replace.
A prompt like " I want to make this change in the code where any logic deals with XXX. To be/do XXX instead/additionally/somelogicchange/whatever"
It has been pretty decent at these types of changes and saves the time of poking through and finding all the places I would have updated manually, in a way that find/replace never could. Though I've never tried this on a huge code base.
zahlman
> A prompt like " I want to make this change in the code where any logic deals with XXX. To be/do XXX instead/additionally/somelogicchange/whatever"
If I reached a point where I would find this helpful, I would take this as a sign that I have structured the code wrongly.
baq
You would be right about the code but probably wrong about the you. I’ve done such requests to clean up code written over the years by dozens of other people copying patterns around because ship was king… until it wasn’t. (They worked quite well, btw.)
rtpg
Sometimes you want a cutpoint for a refactor and only that refactor. And it turns out that there is no nice abstraction that is useful beyond that refactor.
skydhash
I suppose you haven't tried emacs grep mode or vim quickfix? If the change is mechanical, you create a macro and are done in seconds. If it's not, you still get the high-level overview and quick navigation.
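For instance (illustrative pattern and file glob, adjust to your project), a quickfix pass looks something like:

```vim
" collect every hit into the quickfix list
:vimgrep /old_name/gj **/*.ts

" walk the list, applying the same edit at each location (c = confirm each one)
:cdo s/old_name/new_name/gc | update
```

And if the edit is too irregular for a substitution, you record a macro once and run it from each quickfix entry instead.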
kfajdsl
Finding and jumping to all the places is usually easy, but non trivial changes often require some understanding of the code beyond just line based regex replace. I could probably spend some time recording a macro that handles all the edge cases, or use some kind of AST based search and replace, but cursor agent does it just fine in the background.
citizenpaul
I'm decent at that kind of stuff. However, that's not really what I'm talking about. For instance, today I needed two logic flows: one for data flowing in one direction, then a basically-but-not-quite reversed version of the same logic for when the data comes back. I was able to write the first version, then tell the LLM:
"Now duplicate this code but invert the logic for data flowing in the opposite direction."
I'm simplifying this whole example obviously, but that was the basic task I was working on. It was able to spit out in a few seconds what would have taken me probably more than an hour and at least one tedium-headache break. I'm not aware of any pre-LLM way to do something like that.
Or a little while back I was implementing a basic login/auth for a website. I was experimenting with high-output-token LLMs (I'm not sure that's the technical term) and asked one to make a very comprehensive login handler. I had to stop it somewhere in the triple digits of cases and functions. Perhaps not a great "pro" example of LLMs, but even though it was a hilariously overcomplex setup it did give me some ideas I hadn't thought about. I didn't use any of the code though.
It's far from the magic LLM sellers want us to believe in, but it can save time, the same way various emacs/vim tricks can for devs who want to learn them.
Karrot_Kream
emacs macros aren't the same. You need to look at the file, observe a pattern, then start recording the macro and hope the pattern holds. An LLM can just do this.
Karrot_Kream
I guess it depends? The "refactor" stuff, if your IDE or language server can handle it, then yeah, I find the LLM slower for sure. But there are other cases where an LLM helps a lot.
I was writing some URL canonicalization logic yesterday. Because we rolled this out as an MVP, customers put URLs in all sorts of ways and we stored them in the DB. My initial pass at the logic failed on some cases. Luckily URL canonicalization is pretty trivially testable. So I took URLs from our most active customers in the DB, sent them to Claude, and told Claude to come up with the "minimum spanning test cases" that cover this behavior. This took maybe 5-10 sec. I then told Zed's agent mode, using Opus, to make me a test file and use these test cases to call my function. I audited the test cases and ended up removing some silly ones. I iterated on my logic and that was that. Definitely faster than having to do this myself.
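For a sense of what that looks like, here's a rough, self-contained sketch of the kind of table-driven test it produces - the canonicalization rules and cases here are illustrative, not the actual customer data or production logic:

```python
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

def canonicalize(raw: str) -> str:
    """Illustrative rules: force https, lowercase host, drop www and default
    ports, strip trailing slashes, sort query params."""
    if "://" not in raw:
        raw = "https://" + raw
    parts = urlsplit(raw)
    host = (parts.hostname or "").removeprefix("www.")
    path = parts.path.rstrip("/")
    query = urlencode(sorted(parse_qsl(parts.query)))
    return urlunsplit(("https", host, path, query, ""))

# (stored input, expected canonical form) -- made-up cases
CASES = [
    ("HTTP://Example.com/",           "https://example.com"),
    ("https://example.com:443/path/", "https://example.com/path"),
    ("example.com/path?b=2&a=1",      "https://example.com/path?a=1&b=2"),
]

for raw, expected in CASES:
    assert canonicalize(raw) == expected, (raw, canonicalize(raw))
```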
tomrod
I'm consistently seeing personal and shared anecdotes of a 40%-60% speedup on targeted senior work.
As much as I like agents, I am not convinced the human using them can sit back and get lazy quite yet!
stavros
Eeeh, I spend less time writing code, but way more time reviewing and correcting it. I'm not sure I come out ahead overall, but it does make development less boilerplate-heavy and more high-level, which leads to code that otherwise wouldn't have been written.
tomrod
I wonder if you observe this when you use it in a domain you know well versus a domain you know less well.
I think LLM assistants help you become functional across a broader context -- and I completely agree that testing and reviewing become much, much more important.
E.g., a front-end dev optimizing database queries, but also being handed nonsensical query parameters that don't exist.
toenail
That sounds plausible if the senior does lots of simple coding tasks and moves that work to an agent. Then the senior basically has to be a team lead and do code reviews/QA.
michaelsalim
Curious, what do you count as senior work?
tomrod
Roughly:
A senior can write, test, deploy, and possibly maintain a scalable microservice or similar sized project without significant hand-holding in a reasonable amount of time.
A junior might be able to write a method used by a class but is still learning significant portions and concepts either in the language, workflow orchestration, or infrastructure.
A principal knows how each microservice fits into the larger domain it serves, whether or not they understand all the services and all the domains they serve.
A staff engineer has significant principal-level understanding across many or all domains an organization uses, builds, and maintains.
AI code assistants help increase breadth and, with oversight, improve depth. One can move from the "T"-shaped to the "V"-shaped skillset far more easily, but one must never fully trust AI code assistants.
roncesvalles
All the references to LLMs in the article seemed out-of-place like poorly done product placement.
LLMs are the antithesis of fast. In fact, being slow is a perceived virtue with LLM output. Some sites like Google and Quora (until recently) simulate the slow typed-output effect for their pre-cached LLM answers, just for credibility.
pjmlp
Not only that, I am already typing enough for coding; I don't want to type in chat windows as well, and so far the voice assistance is so-so.
cornfieldlabs
I switch to VS Code from Cursor many times a day just to use its Python refactoring feature. The Pylance server that comes with Cursor doesn't support refactoring.
old-gregg
Fun story time!
Early in my career as a software engineer, I developed a reputation for speeding things up. This was back in the day when algorithm knowledge was just as important as the ability to examine the output of a compiler, every new Intel processor was met with a ton of anticipation, and Carmack and Abrash were rapidly becoming famous.
Anyway, the 22-year-old me unexpectedly gets invited to a customer meeting with a large multinational. I go there not knowing what to expect. Turns out, they were not happy with the speed of our product.
Their VP of whatever said, quoting: "every saved second here adds $1M to our yearly profit". I was absolutely floored. Prior to that moment I couldn't even dream of someone placing a dollar amount on speed, and so directly. Now 20+ years later it still counts as one of the top 5 highlights of my career.
P.S. Mentioning as a reaction to the first sentence in the blog post. But the author is correct when she states that this happens rarely.
P.P.S. There was another engineer in the room, who had the nerve to jokingly ask the VP: "so if we make it execute in 0 seconds, does it mean you're going to make an infinite amount of money?". They didn't laugh, although I thought it was quite funny. Hey, Doug! :)
emmelaich
Working with a task scheduling system, we were told that every minute an airplane is delayed costs $10k. This was back in the 90s, so adjust accordingly.
ctenb
Why do you count it as a highlight if your product failed to meet expectations?
asimovDev
if you ever remember that engineer's name you should tell them that I found the joke funny
ensemblehq
RE: P.P.S... God I love that humour. Actually was very funny.
felideon
So, did you make it faster?
old-gregg
Unfortunately, there wasn't a single bottleneck. A bunch of us, not just me, worked our asses off improving performance by a little bit in several places. The compounded improvement IIRC was satisfactory to the customer.
adwn
> "so if we make it execute in 0 seconds, does it mean you're going to make an infinite amount of money?"
I don't get it. Wouldn't going from 1 second to 0 seconds add the same amount of money to the yearly profit as going from 2 seconds to 1 second did? Namely, $1M.
old-gregg
> I don't get it. Wouldn't going from 1 second to 0 seconds add the same amount of money to the yearly profit as going from 2 seconds to 1 second did? Namely, $1M
Of course the joke was silly. But perhaps I should have provided some context. We were making industrial automation software. This stuff runs in factories. Every saved second shrinks the manufacturing time of a part, leading to an increase in total factory output. Extrapolating to absurd levels, zero time to manufacture means infinite output per factory (sans raw materials).
stronglikedan
yeah it's one of those things that are funny to the people saying it because they don't yet realize it doesn't make sense. I bet they felt that later, in the hotel room, in the shower, probably with a bottle of scotch.
Otek
> I bet they felt that later, in the hotel room, in the shower, probably with a bottle of scotch.
Geez, life in my opinion is not so serious. It’s okay to say stupid things and not feel bad about it, as long as you are not trying to hurt anyone.
I bet they felt great and immediately forgot about this bad joke.
betterhealth12
Earlier in my career it was appealing to make jokes like that, or include one in an email. Eventually you realize that people - especially "older" ones or those already a few years into their career - mostly don't want to joke around and just want to actually get done the thing you are meeting about.
flobosg
A process taking 0 seconds means that, in one year, it can be run 31540000 sec/0 sec = ∞ times, multiplying the profit by ∞.
willsmith72
Since when is the constraint "how many times can I run this thing"?
jodrellblank
It's well known, but this video[1] is a proof-of-concept demonstration from 4 years ago. Casey Muratori called out Microsoft's new Windows Terminal for slow performance, and people argued that it wasn't possible, practical, or maintainable to make a faster terminal and that his claims of "thousands of frames per second" were hyperbolic; one person said it would be a "PhD level research project".
In response, Casey spent <1 week making RefTerm, a skeleton proto-terminal with the same constraints the Microsoft people had - using Windows APIs for things, using DirectDraw with GPU rendering, handling terminal escape codes, colours, blinking, custom fonts, missing-font character fallback, line wrap, scrollback, Unicode and right-to-left Arabic combining characters, etc. RefTerm had 10x faster throughput than Windows Terminal and ran at 6-7000 frames per second. It was single-threaded, not profiled, not tuned, with no advanced algorithms and no cheating by sending some data to /dev/null; all it had to speed it up was simple code without tons of abstractions and a Least Recently Used (LRU) glyph cache to avoid re-rendering common characters, written the first way that he thought of. Around that time he did a video series on that YouTube channel about optimization, arguing that even talking about 'optimization' was too hopeful - we should be talking about 'non-pessimization' - and that most software is not slow because it has unavoidable complexity and abstractions needed to help maintenance; it's slow because it's choked by a big pile of do-nothing code and abstraction layers added for ideological reasons, which hurt maintenance as well as performance.
[1] https://www.youtube.com/watch?v=hxM8QmyZXtg - "How fast should an unoptimized terminal run?"
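The glyph cache part is simple enough to sketch (illustrative Python, not Casey's actual code): rasterize a glyph once, key it by codepoint/style, and evict the least recently used entry when full, so the hot path for common characters is a lookup instead of a re-render.

```python
from collections import OrderedDict

class GlyphCache:
    """Tiny LRU cache mapping (codepoint, style) to a rendered bitmap."""

    def __init__(self, render, capacity=1024):
        self.render = render          # the expensive rasterization step
        self.capacity = capacity
        self._cache = OrderedDict()

    def get(self, key):
        if key in self._cache:
            self._cache.move_to_end(key)      # mark as most recently used
            return self._cache[key]
        bitmap = self.render(key)             # only pay for a render on a miss
        self._cache[key] = bitmap
        if len(self._cache) > self.capacity:
            self._cache.popitem(last=False)   # drop the least recently used
        return bitmap
```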
This video[2] is another one with specific details: Jason Booth talking about his experience of game development, with practical examples of changing data layout and C++ code to make it do less work, be more cache-friendly, have better memory access patterns, and run orders of magnitude faster without adding much complexity - sometimes even removing complexity.
[2] https://www.youtube.com/watch?v=NAVbI1HIzCE - "Practical Optimizations"
jodrellblank
Someone posted their word game Cobble[1] on HN recently. The game gives you some letters, and the challenge is to find two English words which together use up all the given letters, with the combined two words being as short as possible.
A naive brute-force solver takes the Cobble wordlist of 64k words and compares every word against every other word - 64k x 64k = 4Bn iterations - and in the inner loop body it loops over the combined characters. If the combined words average 10 characters long, that's 40 billion operations just for the code structure, plus the character testing and counting and the data structures to store the counts. Seconds or minutes of work for a puzzle that feels like any modern computer should solve in microseconds.
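The naive version is only a few lines, which is part of why it's so tempting (Python sketch; the exact matching rule the game uses may differ slightly):

```python
from collections import Counter

def solve_naive(letters, words):
    """Brute force: try every pair of words, keep the shortest pair whose
    combined letters cover the given letters."""
    need = Counter(letters)
    best = None
    for a in words:                                # ~64k iterations
        for b in words:                            # x ~64k = ~4bn pairs
            combined = Counter(a) + Counter(b)     # inner loop over ~10 chars
            if all(combined[c] >= n for c, n in need.items()):
                if best is None or len(a) + len(b) < len(best[0]) + len(best[1]):
                    best = (a, b)
    return best
```

Even just precomputing a Counter per word up front removes a big chunk of that inner-loop work.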
It's always mildly interesting to me how a simple-to-explain problem, a tiny amount of data, and four lines of nested loops can generate enough work to choke a modern CPU for minutes. Then consider how much work 3D games do in milliseconds. It highlights how impressive the algorithmic research of the 1960s was: finding ways to get early computers to do anything in a reasonable time, let alone find fast paths through complex problem patterns. Or perhaps, of all the zillions of possible problems which could exist, finding any which can be approached by human minds and computers.
sgarland
I simultaneously love and hate watching Casey Muratori. Love because he routinely does things like this, hate because I have conversations like this entirely too often at work, except no one cares.
phtrivier
I would have loved to live in a universe where we could replace the Windows Terminal with RefTerm - if only to measure how many hours would pass before a Fortune 500 company has to halt operations because RefTerm does not properly re-implement one of the subtle bugs creeping in from one of the bazillion features that had made WinTerm slow over the years. [1]
jodrellblank
I sighed when I read your comment, a comment which is exemplary of what Casey Muratori was ranting against - casual lazy dismissal of the idea that software can be faster, based on misunderstanding and lack of knowledge and/or interest, and throwing out the first objection that comes to mind as if it's an impassable obstacle. There were no bazillion features that made WinTerm slow over the years because Windows Terminal was a new product for Windows 10, released in 2019.[1]. There were piles of problems in Windows Terminal, Casey calls out that it didn't render Right-to-Left Arabic combining glyphs and it wasn't a perfect highly polished program from the outset. And it was an optional download, Fortune 500s wouldn't run it if they didn't want to.
RefTerm was explicitly not a production quality terminal and was not intended to be a replacement for Windows Terminal. RefTerm was a lower bound for performance of an untuned single-thread terminal. RefTerm was a proof of concept that if Microsoft had spent money and engineering skill on performance they could have profiled and used fancy algorithms and shortcuts and reimplemented slow Windows APIs with faster local ones, used threading, and improved on RefTerm's performance. A proof that "significantly faster terminals are unrealistic" is not true, that all the casual dismissals of why it's impossible are not the reasons for slowness, and that 10x better is an easily achievable floor, not a distant unreachable ceiling.
As a result of Casey's public shaming, Windows Terminal developers did improve performance.
9rx
> Rarely in software does anyone ask for “fast.”
They don't explicitly ask for it, but they won't take you seriously if you don't at least pretend to be. "Fast" is assumed. Imagine if Rust had shown up, identical in every other way, but said "However, it is slower than Ruby". Nobody would have given it the time of day. The only reason it was able to gain attention was because it claimed to be "Faster than C++".
Watch HN for a while and you'll start to notice that "fast" is the only feature that is necessary to win over mindshare. It is like moths to a flame as soon as something says it is faster than what came before it.
mvieira38
Only in the small subset of programmers that post on HN is that the case. Most users or even most developers don't mind slow stuff or "getting into flow state" or anything like that, they just want a nice UI. I've seen professional data scientists using Github Desktop on Windows instead of just learning to type git commands for an easy 10x time save
lukevp
GitHub Desktop is way better for reviewing diffs than the git CLI. Everyone I've ever worked with who preferred CLI tools also just added and committed everything, and their PRs always had more errors overall - errors that would have been caught before even being committed if they'd reviewed visual diffs while committing.
jeremyjh
The best interface is magit, IMO. I use a clone of it in VS Code that is nearly as good. You get the speed of the CLI while it's still very easy to stage/unstage individual chunks, which is probably the thing that doesn't get done enough by CLI users.
sebmellen
Sublime Merge gets you all those benefits, PLUS it’s really fast!
SchemaLoad
They do mind, which is why we see such a huge drop-off in retention if pages load even seconds too slowly. They just don't describe it in the same way.
They don't say they buy the iPhone because it has the fastest CPU and most responsive OS, they just say it "just works".
0wis
Not everyone is conscious of it, but I feel like it's something that people will always want.
Like the "evergreen" things Amazon decided to focus on: faster delivery, greater selection, lower cost.
didibus
You're drawing the wrong conclusion. "Fast" is a winning differentiator only when you offer the same feature set, but faster.
Your example says it: people will go "this is like X (meaning it does/has the same features as X), but faster", and then people will flock from X to your X-but-faster thing.
Which tells us nothing about whether people would also move to an X+more-features, or an X+nicer-UX, or an X+cheaper, etc., without it being any faster than X, or even possibly slower.
gherkinnn
I hate it but it's true. Look at me, my fridge has an integrated tablet that tells me the weather outside. Never mind that it is a lil louder and the doors are creaky. It tells me the weather!
willvarfar
And is your fridge within line of sight of a window? :)
emmelaich
Really not sure about that. People will give up features for speed all the time. See git vs bzr/hg/svn/darcs/monotone,...
didibus
Hmm, personally I've always found git to have more features than those, though I don't know them all. At least when git was released it distinguished itself mostly by its features, specifically the distributed nature and rebase. And hg/bzr never looked to me like they had more features - more like similar features, give or take - so they'd be a good example of git having the same features + faster, so it won.
Dylan16807
Maybe for languages, but fast is easily left behind when looking for frameworks. People want features, people want compatibility, people will use electron all over.
9rx
> fast is easily left behind when looking for frameworks.
Nah. React, for example, only garnered attention because it said "Look how much faster the virtual DOM is!". We could go on all day.
> People want features, people want compatibility
Yes, but under the assumption that it is already built to be as "fast" as possible. "Fast" is assumed. That's why "faster" is such a great marketing trick, as it tunes people into "Hold up. What I'm currently using actually sucks. I'd better reconsider."
"Fast" is deemed important, but it isn't asked for as it is considered inconceivable that you wouldn't always make things "fast". But with that said, keep in mind that the outside user doesn't know what "fast" is until there is something to compare it with. That is how some products can get away with not being "fast" — until something else comes along to show that it needn't be that way.
NohatCoder
It is only fast compared to a really dumb baseline. But you are right that the story of React being fast was a big part of selling it.
PaulHoule
"Look how quickly it can render the component 50 times!"
timeon
Isn't React one of the slower frameworks?
https://krausest.github.io/js-framework-benchmark/current.ht...
atq2119
And yet we live in a world of (especially web) apps that are incredibly slow, in the sense that an update in response to user input might take multiple seconds.
Yes, fast wins people over. And yet we live in a world where the actual experience of every day computing is often slow as molasses.
benrutter
Molasses can be fast if you leave it in the packet and hurl it!
Seriously though, you're so right - I often wonder why this is. Is it that people genuinely don't care, or is it more that, say, ecommerce websites already compete on so many things (or in some cases maintain monopolies) that fast doesn't come into the picture?
9rx
The trouble is that "fast" doesn't mean anything without a point of comparison. If all you have is a slow web app, you have to assume that the web app is necessarily slow — already as fast as it can be. We like to give people the benefit of the doubt, so there is no reason to think that someone would make something slower than is necessary.
"Fast" is the feature people always wanted, but absent better information, they have to assume that is what they already got. That is why "fast" marketing works so well. It reveals that what they thought was pretty good actually wasn't. Adding the missing kitchen sink doesn't offer the same emotional reaction.
renlo
> The trouble is that "fast" doesn't mean anything without a point of comparison.
This is what people are missing. Even those "slow" apps are faster than their alternatives. People demand and seek out "fast", and I think the OP article misses this.
Even the "slow" applications are faster than their alternatives or have an edge in terms of speed for why people use them. In other words, people here say "well wait a second, I see people using slow apps all the time! People don't care about speed!", without realizing that the user has already optimized for speed for their use case. Maybe they use app A which is 50% as fast as app B, but app A is available on their toolbar right now, and to even know that app B exists and to install it and learn how to use it would require numerous hours of ramp up time. If the user was presented with app A and app B side by side, all things equal, they will choose B every time. There's proficiency and familiarity; if B is only 5% faster than A, but switching to B has an upfront cost in days to able to utilize that speed, well that is a hidden speed cost and why the user will choose A until B makes it worth it.
Speed is almost always the universal characteristic people select for, all things equal. Just because something faster exists, and it's niche, and hard to use (not equal for comparison to the common "slow" option people are familiar with), it doesn't mean that people reject speed, they just don't want to spend time learning the new thing, because it is _slower_ to learn how to use the new thing at first.
lblume
> you have to assume
We don't have to assume. We know that JavaScript is slow in many cases, that shipping more bundles instead of less will decrease performance, and that with regard to the amount of content served generally less is more.
Whether this amount of baggage every web app seems to come with these days is seen as "necessary" or not is subjective, but I would tend to agree that many developers are ignorant of different methods or dislike the idea of deviating from the implied norms.
whartung
I’ll tell you what fast is.
I’ve mentioned this before.
Quest Diagnostics, their internal app used by their phlebotomists.
I honestly don’t know how this app is done, I can only say it appears to run in the tab of a browser. For all I know it’s a VB app running in an ActiveX plugin, if they still do that on Windows.
The L&F looks like a classic Windows GUI app; it interfaces with a signature pad, scanner, and a label printer.
And this app flies. Dialogs come and go, the operator rarely waits on this UI, when she is keying in data (and they key in quite a bit), the app is waiting for the operator.
Meanwhile, if I want to refill a prescription, it's fraught with beach balls, those shimmering boxes, and, of course, lots of friendly whitespace and scrolling. All to load a med name, a drugstore address, and ask 4 yes/no questions.
I look at that Quest app mouth agape, it’s so surprisingly fast for an app in this day and age.
atq2119
This is a disingenuous response because I made it plenty clear what I meant with "fast": interactive response times.
And for that, we absolutely do have points of comparison, and yeah, pretty much all web apps have bad interactivity because they are limited too much by network round trip times. It's an absolute unicorn web app that does enough offline caching.
It's also absurd to assume that applications are as fast as they could be. There is basically always room for improvement, it's just not being prioritised. Which is the whole point here.
underdeserver
Eh, I think the HN crowd likes fast because most tech today is unreasonably slow, when we know it could be fast.
RandomBacon
It's infuriating when I have to use a chatbot, and it pretends to be typing (or maybe looking up a pre-planned generic response or question)...
I'm already pissed I have to use the damn thing, please don't piss me off more.
FridgeSeal
Press enter.
Wait.
Wait for typing indicator.
Wait for cute text-streaming.
Skip through the paragraph of restating your question and being pointlessly sycophantic.
Finally get to the meat of the response.
It’s wrong.
qingcharles
What's sad is that I always open grok.com if it's a quick simple query because their UI loads about 10X faster than GPT/Gemini/Claude.
blub
The claim was not that Rust was faster than C++, they said it’s about as fast.
C and C++ were and are the benchmark, it would have been revolutionary to be faster and offer memory safety.
Today, in some cases Rust can be faster, in others slower.
asa400
To a first approximation HN is a group of people who have convinced themselves that it's a high quality user experience to spend 11 seconds shipping 3.8 megabytes of Javascript to a user that's connected via a poor mobile connection on a cheap dual-core phone so that user can have a 12 second session where they read 150 words and view 1 image before closing the tab.
Fast is _absolutely not_ the only thing we care about. Not even top 5. We are addicted to _convenience_.
9rx
It's not that they convinced themselves, but that they don't know how to do any better. It is as fast as it can be to the extent of their knowledge, skill, and ability.
You see some legendary developers show up on HN from time to time, sure, but it is quite obvious that the typical developer isn't very good. HN is not some kind of exclusive club for the most prestigious among us. It is quite representative of a general population where you expect that most aren't very good.
lblume
The fact that this article and similar ones get upvoted very frequently on this platform is strong evidence against this claim.
Considering the current state of the Web and user application development, I tend to agree with regard to its developers, but HN seems to still abide by other principles.
alt227
This kind of slop is often imposed on developers by execs demanding things.
I imagine a large chunk of us would gladly throw all that out the window and only write super fast efficient code structures, if only we could all get well paid jobs doing it.
hn_throwaway_99
Just want to say how much I thank YCom for not f'ing up the HN interface, and keeping it fast.
I distinctly remember when Slashdot committed suicide. They had an interface that was very easy for me to scan and find high value comments, and in the name of "modern UI" or some other nonsense needed to keep a few designers employed, completely revamped it so that it had a ton of whitespace and made it basically impossible for me to skim the comments.
I think I tried it for about 3 days before I gave up, and I was a daily Slashdot reader before then.
FlyingSnake
HN is literally the website I open to check if I have internet connectivity. HN is truly a shining beacon in the trashy landscape of web bloat.
theandrewbailey
I usually load my blog to check internet connectivity.
I work at an e-waste recycling company. Earlier this week, I had to test a bunch of laptop docking stations, so I kept force refreshing my blog to see if the Ethernet port worked. Thing is, it loads so fast, I kept the dev tools open to see if it actually refreshed.
inopinatus
I like to use example.com/net/org
Bonus: these have both http & https endpoints if you need a differential diagnosis, or just a means to trip some shitty airline/hotel walled garden into saying hello.
abrookewood
yep, I do exactly the same thing. If HN isn't loading, something is definitely fckd.
dang
Except when HN itself is fckd.
It does happen less than it used to, but still.
kikoreis
Oh it's lwn.net for me!
throwawayexmple
I find pinging localhost a bit more reliable, and faster too.
I blame HN switching to AWS. Downtime also increased after the switch.
dang
When did you notice HN switching to AWS, and what changed?
(Those are trick questions, because we haven't switched to AWS. But I genuinely would like to hear the answers.)
(We did switch to AWS briefly when our hosting provider went down because of a bizarre SSD self-bricking incident a few years ago..but it was only for a day or two!)
frutiger
The HN UI could do with some improvements, especially on mobile devices. The low contrast and small tap areas for common operations make it less than ideal, as well as the lack of dark mode.
I wrote my take on an ideal UI (purely clientside, against the free HN firebase API, in Elm): https://seville.protostome.com/.
hn_throwaway_99
To each their own, but I find the text for the number of points and "hours ago" extremely low contrast and hard to read on your site. More importantly, I think it emphasizes the wrong thing. I almost never really care who submitted a post, but I do care about its vote count.
frutiger
That’s all totally fair.
I actually never care about the vote count but have been on this site long enough to recognise the names worth paying attention to.
Also the higher contrast items are the click/tap targets.
dang
Anyone who goes to the trouble of making their own HN front end is entitled to complain as much as they want, in my book! Nicely done.
apaprocki
It’s hilarious to me that I find this thread. I read the comment you’re replying to before I saw who wrote it. I exclusively read HN on iOS using https://hackerweb.app/ in dark mode precisely because I found it to be the most pleasing mobile experience. And here’s dang replying to my co-worker who commented that he wrote his own HN reader because the actual site isn’t the best on mobile. I could literally reach out my hand, show my phone and share my mobile HN experience with him, except I’m 99% remote. (But I did sit at his desk just last Thursday when he was remote.)
Just goes to show that all of us reading HN don’t actually share with each other how we’re reading HN :)
Too funny… thank you!!
Eji1700
Information density and ease of identification are the antithesis of "engagement", which usually comes with some time-on-site metric they're hunting.
If you can find what you want and read it you might not spend 5 extra seconds lost on their page and thus they can pad their stats for advertisers. Bonus points if the stupid page loads in such a way you accidentally click on something and give them a "conversion".
Sadly financial incentive is almost always towards tricking people into doing something they don't want to do instead of just actually giving them what they fucking want.
andsoitis
> Sadly financial incentive is almost always towards tricking people into doing something they don't want to do instead of just actually giving them what they fucking want.
The north star should be user satisfaction. For some products that might be engagement (e.g. an entertainment service), while for others it is accomplishing a task as quickly as possible and exiting the app.
KPGv2
The one and only thing I'd do is make the font bigger and increase padding. There's overwhelming consensus that you should have (for English) about 50–70 characters per line of text for the best, fastest, most accurate readability. That's why newspapers pair a small font with multiple columns: to limit the number of characters per line of text.
HN might have over 100 chars per line of text. It could be better. I know I could do it myself, and I do. But "I fixed it for me" doesn't fix it for anyone else.
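For reference, the "fixed it for me" user-style is tiny (a sketch; I believe `.commtext` is the class HN uses for comment text, but treat the selector as an assumption):

```css
/* cap the measure at roughly 70 characters and give the text some air */
.commtext {
  display: block;
  max-width: 70ch;
  line-height: 1.4;
}
```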
amiga-workbench
I use HN zoomed in at 133%. It's a lot more comfortable, even when I'm wearing my glasses.
stevage
Increased padding comes at the cost of information density.
I think low density UIs are more beginner friendly but power users want high density.
AnonC
I agree. In my experience, the default HN is terrible for accessibility (in many ways). I’ve just been waiting for dang and tomhow to get a lot older so that they face the issues themselves enough times to care.
NobodyNada
A narrow column of text can make it easier to read individual sentences, but it does so by sacrificing vertical space, which makes it harder to skim a page for relevant content and makes it easier for me to lose track of my place since I can't see as much context, images, and headings on screen all at once. I also find it much harder to read text when the paragraphs form monotonous blocks spanning 10 lines of text rather than being irregularly shaped and covering 3-5 lines. I find Wikipedia articles much harder to read in "standard" mode compared to "wide" mode for this reason.
Different people process visual information differently, and people reading articles have different goals, different eyesight, and different hardware setups. And we already have a way for users to tell a website how wide they want its content to be: resizing their browser window. I set the width of my browser window based on how wide I want pages to be; and web designers who ignore this preference and impose unreadable narrow columns because they read about the "optimal" column width in some study or another infuriate me to no end. Optimal is not the same for everyone, and pretending otherwise is the antithesis of accessibility.
KronisLV
I’d very much prefer more padding between the clickable UI elements on mobile in particular, because the zoom in -> click upvote -> zoom out, or the click downvote by accident -> try to unvote -> try to upvote again, well, it gets pretty old pretty fast.
The text density, however, I rather like.
portaouflop
There are dozens of alternative HN front ends that would satisfy your needs
HarHarVeryFunny
I don't think it was UI that killed Slashdot. The value was always in the comments, and in the very early years often there would be highly technical SMEs commenting on stories.
The site seemed to start to go downhill when it was sold, and got into a death spiral of less informed users, poor moderation, people leaving, etc. It's amazing that it's still around.
phkahler
It's not bad. I still read it, but less than HN.
cruffle_duffle
For me, Slashdot became full of curmudgeons. It's pretty tiring when every "+5 Insightful" on a hard drive article questions why you'd ever want so big a drive, or why you'd require more than 256 colors, or whatever the new thing was… like, why are you even on a technology enthusiast site when you bitterly complain about every new thing? Basically, either accept change or get left in the dust, and Slashdot's crowd seemed determined to be left in the dust… forever losing its relevance in the tech community.
Plus Rusty just pushed out Kuro5hin and it felt like “my scene” kind of migrated over.
As an aside, Kuro5hin was the only “large” forum that I ever bothered remembering people’s usernames. Every other forum it’s all just random people. (That isn’t entirely true, but true enough)
sien
Kuro5hin was far less about technology though.
It was interesting in a different way, though.
Like Adequacy.
Did you also move over to MetaFilter?
wldlyinaccurate
It brings me genuine joy to use websites like HN or Rock Auto that haven't been React-ified. The lack of frustration I feel when using fast interfaces is noticeable.
I don't really get why so many websites are slow and bloated these days. There are tools like SpeedCurve which have been around for years yet hardly anyone I know uses them.
postalcoder
It’s not modern UIs that prevent websites from being performant. Look at old.reddit.com, for instance. It’s the worst of both worlds. An old UI that, although much better than its newer abomination, is fundamentally broken on mobile and packed to the gills with ad scripts.
qingcharles
What changes have been made to the HN design since it was launched?
I know there are changes to the moderation that have taken place many times, but not to the UI. It's one of the most stable sites in terms of design that I can think of.
What other sites have lasted this long without giving in to their users' whims?
Over the last 4 years my whole design ethos has transformed to "WWHND" (What Would Hacker News Do?) every time I need to make any UI changes to a project.
jq-r
Slashdot looked a lot like HN, with high information density. It was fast and easy to read all the comments. Then a redesign happened because of Web 2.0 or "mobile-first" hype, and most of the comments got hidden/collapsed by default, sorted almost randomly, etc. So a new user would come there and say "wtf, this is a dead conversation" or would have to click too many times to get to the full conversation. So new users would leave, and so would the old ones, because the page was so hard to use. It just lost users and that was that. All because of a redesign which they never wanted to revert. Sad really, because I still think it had/has the best comment moderation by far.
SatvikBeri
The only one I remember is adding the ability to collapse comment threads
MawKKe
Similar thing happened (to me) with Hackaday around 2010-2011. I used to check it almost daily, and then never again after the major re-design.
ilyakaminsky
Fast is also cheap. Especially in the world of cloud computing, where you pay by the second. The only way I could create a profitable transcription service [1] that undercuts the rest was by optimizing every little thing along the way. For instance, just yesterday I learned that the image size I've put together is 2.5× smaller than the next open source variant. That means faster cold boots, which reduces the cost (and provides a better service).
sipjca
I've approached the same thing but slightly differently: I can run it on consumer hardware for vastly cheaper than the cloud and don't have to worry about image sizes at all (bare metal is 'faster'), offering 20,000 minutes of transcription for free up to the rate limit (1 request every 5 seconds).
I contributed "whisperfile" as a result of this work:
* https://github.com/Mozilla-Ocho/llamafile/tree/main/whisper....
* https://github.com/cjpais/whisperfile
If you ever want to chat about making transcription virtually free or very cheap for everyone, let me know. I've been working on various projects related to it for a while, including an open source/cross-platform superwhisper alternative: https://handy.computer
ilyakaminsky
> i can run it on consumer hardware for vastly cheaper than the cloud
Woah, that's really cool, CJ! I've been toying with the idea of standing up a cluster of older iPhones to run Apple's Speech framework. [1] The inspiration came from this blog post [2] where the author is using it for OCR. A couple of things are holding me back: (1) the OSS models are better according to the current benchmarks, and (2) I have customers all over the world, so geographical load-balancing is a real factor. With that said, I'll definitely spend some time checking out your work. Thanks for sharing!
[1] https://developer.apple.com/documentation/speech
[2] https://terminalbytes.com/iphone-8-solar-powered-vision-ocr-...
austin-cheney
Fast is cheap everywhere. The only reasons software isn’t faster:
* developer insecurity and pattern lock in
* platform limitations. This is typically software execution context and tool chain related more than hardware related
* most developers refuse to measure things
Even really slow languages can result in fast applications.
mlhpdx
Is S3 slow or fast? It's both, as far as I can tell, and it represents a class of systems (mine included) that go slow to go fast.
S3 is “slow” at the level of a single request. It’s fast at the level of making as many requests as needed in parallel.
Being “fast” is sometimes critical, and often aesthetic.
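A sketch of what "go slow to go fast" looks like in practice (hypothetical bucket and keys, assuming boto3): each GET is individually high-latency, so you issue lots of them at once and let the aggregate throughput do the work.

```python
from concurrent.futures import ThreadPoolExecutor
import boto3

s3 = boto3.client("s3")

def fetch(key):
    # a single request is "slow": tens of milliseconds of latency
    return s3.get_object(Bucket="my-bucket", Key=key)["Body"].read()

keys = [f"shard-{i:04d}" for i in range(1000)]      # hypothetical object keys

# many requests in flight at once is where S3 is fast
with ThreadPoolExecutor(max_workers=64) as pool:
    blobs = list(pool.map(fetch, keys))
```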
claytonjy
We have common words for those two flavors of “fast” already: latency and throughput. S3 has high latency (arguable!), but very very high throughput.
zahlman
Yep. I'm hoping that installed copies of PAPER (at least on Linux) will be somewhere under 2MB total (including populating the cache with its own dependencies etc). Maybe more like 1, although I'm approaching that line faster than I'd like. Compare 10-15 for pip (and a bunch more for pipx) or 35 for uv.
HarHarVeryFunny
Fast doesn't necessarily mean efficient/lightweight and therefore cheaper to deploy. It may just mean that you've thrown enough expensive hardware at the problem to make it fast.
b_e_n_t_o_n
Your CSS is broken fyi
willsmith72
Not in development and maintenance dollars it's not
ilyakaminsky
Hmm… That's a good point. I recall a few instances where I went too far to the detriment of production. Having a trusty testing and benchmarking suite thankfully helped with keeping things more stable. As a solo developer, I really enjoy the development process, so while that bit is costly, I didn't really consider that until you mentioned it.
nu11ptr
This is interesting. It got me to think. I like it when articles provoke me to think a bit more on a subject.
I have found this true for myself as well. I changed back over to Go from Rust mostly for the iteration speed benefits. I would replace "fast" with "quick", however. It isn't so much that I think about raw throughput as about "perceived speed". That is why things like input latency matter in editors, etc. If something "feels fast" (ie Go compiles), we often don't even feel the need to measure. Likewise, when things "feel slow" (ie Java startup), we just don't enjoy using them, even if in some ways they actually are fast (like Java throughput).
christophilus
I feel the same way about Go vs Rust. Compilation speed matters. Also, Rust projects resemble JavaScript projects in that they pull in a million deps. Go projects tend to be much less dependency happy.
kettlecorn
One of the Rust ecosystem's biggest mistakes, in my opinion, was not establishing a fiercely defensive mindset around dependency-bloat and compilation speed.
Whatever Rust's strongest defenders like to claim, compilation speed and avoiding bloat just really weren't goals. That's cascaded down into most of the ecosystem's most-used dependencies, and so most Rust ecosystem projects just adopt the mindset of "just use the dependency". It's quite difficult to build a substantial project without pulling in hundreds of dependencies.
I went on a lengthy journey of building my own game engine tools to avoid bloat, but it's tremendously time consuming. I reinvented the Mac / Windows / Web bindings by manually extracting auto-generated bindings instead of using crates that had thousands of them, significantly cutting compile time. For things like derive macros and serialization I avoided using crates like Serde that have a massive parser library included and emit lots of code. For web bindings I sorted out simpler ways of interacting with Javascript that didn't require a heavier build step and separate build tool. That's just the tip of the iceberg I can remember off the top of my head.
In the end I had a little engine that could do 3D scenes, relatively complex games, and decent GPU-driven UI across Mac, Windows, and Web, and that built in a fraction of the time of other Rust game engines. I used it to build a bunch of small game jam entries and some web demos. A clean release build of the engine on my older laptop took about 3-4 seconds, vastly faster than most Rust projects.
The problem is that it was just a losing battle. If I wanted Linux support, or to use pretty much any other crate in the Rust ecosystem, I'd have to pull in dependencies that alone would multiply the compile time.
In some ways that's an OK tradeoff for an ecosystem to make, but compile times do impede iteration loops, and they do tend to reflect complexity. The more stuff you're building on top of, the greater the chances that bugs are hard to pin down, that maintainers will burn out and move on, or that you can't reasonably understand your stack deeply.
Looking completely past the languages themselves I think Zig may accrue advantages simply because its initial author so zealously defined a culture that cares about driving down compile times, and in turn complexity. Pardon the rant!
dist1ll
It's fascinating to me how the values and priorities of a project's leaders affect the community and its dominant narrative. I always wondered how it was possible for so many people in the Rust community to share such a strong view on soundness, undefined behavior, thread safety etc. I think it's because people driving the project were actively shaping the culture.
Meanwhile, compiler performance just didn't have a strong advocate with the right vision of what could be done. At least that's my read on the situation.
nu11ptr
And that leads to dependency hell once you realize that those dependencies all need different versions of the same crate. Most of the time this "just works" (at the cost of more dependencies, longer compile times, and bigger binaries)... until it doesn't, and then it can be tough to figure out.
In general, I like cargo a lot better than the Go tooling, but I do wish the Rust stdlib was a bit more "batteries included".
noisy_boy
I feel like Rust could have added commonly used stuff as extensions and provided separate builds with them baked in, for those who want to avoid dependency hell, while still providing the standard builds like they currently do. Sure, the versions would diverge somewhat, but I'm not sure how big a problem that would be.
asa400
It's all well and good that we developers have opinions on whether Go compiles faster than Rust or whatever, but the real question is: which is faster for your users?
nu11ptr
...and that sounds nice to me as well, but if I never get far enough to give it to my users, then what good are fast binaries? (Implying that I quit, not that Rust can't deliver.) The holy grail would be to have both. Go is generally "fast enough", but I wish the language was a bit more expressive.
nakedneuron
The website is super fast. The reason I usually go for the comments first on HN is exactly this: they're fast. THIS site is notably different.
On interfaces:
It's not only the slowness of the software or machine we have to wait for; the act of moving your limb adds a delay too. Navigating to a button (mouse) adds more friction than hitting a shortcut (keyboard). It's a needless feedback loop. If you master your tool, all menus should go away. People who live in the terminal know this.
As a personal anecdote, I use custom rofi menus (think Raycast for Linux) extensively for all kinds of interaction with data or the file system (starting scripts, opening notes, renaming/moving files). It's notable how your interaction changes once you remove friction.
Venerable tools in this vein: vim, i3, kitty (formerly tmux), ranger (on the brim), qutebrowser, visidata, nsxiv, sioyek, mpv...
The essence of these tools is always this: move fast, select quickly and efficiently, launch your tool/script/function seamlessly. Be able to do it blindly. Prefer peripheral feedback.
I wish more people saw what could be and built more bicycles for the mind.
Cthulhu_
The website is fast because it's minimal: just under 80 kB, of which 55 kB is the custom font. This is fine for plain content sites, but others will have other requirements.
There's never a reason to make a content website use heavyweight JS or CSS though.
nvarsj
That’s actually why I don’t like Discourse at all. If your community site needs loading icons, I don’t want to use it.
SatvikBeri
I've noticed over and over again at various jobs that people underestimate the benefit of speed, because they imagine doing the same workflow faster rather than doing a different workflow.
For example, if you're running experiments in one big batch overnight, making that faster doesn't seem very helpful. But with a big enough improvement, you can now run several batches of experiments during the day, which is much more productive.
IshKebab
I think people also vastly underestimate the cost of context switching. They look at a command that takes 30 seconds and say "what's the point of making it take 3 seconds? you only run it 10 times in a day; it's only 5 minutes". But the cost is definitely way more than that.
owlbite
Whenever we make our code faster the users just run bigger models :P.
01HNNWZ0MV43FF
Me, looking at multi-hour CI pipelines, thinking how many little lint warnings I'd fix up if CI could run in like 20 minutes
zavg
Pavel Durov (founder of Telegram) totally nailed this concept.
He pays special attention to application speed. The Russian social network VK worked blazingly fast, and the same goes for Telegram.
I always noticed it, but not many people verbalize it explicitly.
But I am pretty sure that people notice it subconsciously, and it affects user behaviour metrics positively.
dominicq
Telegram is pretty slow, both the web interface and the Android app. For example, reactions to a message always take a long time to load (both when leaving one, and when looking at one). Just give me emoji, I don't need your animated emoji!
hu3
Can't agree.
These operations are near-instant for me on Telegram, on both mobile and desktop.
It's the fastest IM app I use, by an order of magnitude.
bravesoul2
In most jobs I've had, speed only becomes a big issue once things are too slow. Or too expensive.
It's a retroactively fixed thing. Imagine forgetting to make a UI, shipping just an API to a customer, then thinking "oh shit, they need a UI, they're not programmers", and only noticing from customer complaints. That is how performance is often treated.
This is probably because performance problems usually require load or unusual traffic patterns, which require sales, which require demos, which don't require performance tuning since there is only one user!
If you want to speed up your web service, the first thing is to invest time, and maybe money, in really good observability. It should be easy for anyone on the team to find a log, see what the CPU is at, etc. Then set up proxy metrics around the speed you care about, talk about them every week, and take action.
Proxy metrics means you likely can't (and probably shouldn't) measure how fast Harold can sum his spreadsheet every minute, but you can check the latency of the major calls involved. If something is slow but the metrics look good, then profiling might be needed.
Sometimes there is an easy speed up. Sometimes you need a new architecture! But at least you know what's happening.
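(A minimal sketch of the proxy-metric idea in Rust; handle_request is a hypothetical stand-in for one of the "major calls", and a real setup would ship the timing to your observability stack rather than printing it:)

    use std::time::Instant;

    // Hypothetical stand-in for a major call: a DB query, a downstream
    // request, whatever actually dominates the user-visible latency.
    fn handle_request(id: u32) -> u32 {
        id * 2
    }

    fn main() {
        for id in 0..3 {
            let start = Instant::now();
            let _result = handle_request(id);
            // The proxy metric: latency of the major call, not "how fast
            // Harold can sum his spreadsheet every minute".
            println!("handle_request latency_ms={}", start.elapsed().as_millis());
        }
    }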
FridgeSeal
In addition to all this, I’m also of the opinion that most users just have software “lumped on them” and have little to no recourse for complaint, so they’re just forced/trained to put up and shut up about it.
As a result, performance (and a few other things) functionally never gets “requested”. Throw in the fact that, for many mid-to-large orgs, software is not bought by the people who are forced to use it, and you have the perfect storm for never hearing about performance complaints.
This in turn, justifies never prioritising performance.
bodhi_mind
I’m a senior developer on a feature-bloated civil engineering web app that has two back-end servers (one just proxies to the other), 8k lines of stored procedures as the data layer, and many multi-thousand-line React components that intentionally break React best practices.
I loathe working on it but don’t have the time to refactor legacy code.
———————-
I have another project where I am principal engineer; it uses Django, Next.js, Docker Compose for dev, and Ansible to deploy, and it’s a dream to build in and push features to prod. Maybe I’m more invested so it’s more interesting to me, but also, not waiting 10 seconds for a React change to register and hot-reload is much more enjoyable.
Kinda funny but I think LLM-assisted workflows are frequently slow -- that is, if I use the "refactor" features in my IDE it is done in a second, if I ask the faster kind of assistant it comes back in 30 seconds, if I ask the "agentic" kind of assistant it comes back in 15 minutes.
I asked an agent to write an HTTP endpoint at the end of the work day when I had just 30 min left -- my first thought was "it took 10 minutes to do what would have taken a day", but then I thought, "maybe it was 20 minutes for 4 hours worth of work". The next day I looked at it and found the logic was convoluted; it tried to write good error handling but didn't succeed. I went back and forth and ultimately wound up recoding a lot of it manually. In 5 hours I had it done for real, certainly with a better test suite than I would have written on my own and probably better error handling.
See https://www.reddit.com/r/programming/comments/1lxh8ip/study_...