Performance optimization is hard because it's fundamentally a brute-force task
88 comments
April 29, 2025
AaronAPU
I spent 10 years straight doing C++ and assembly optimization. My work is still fun these days but that was probably the most enjoyable work of my career in terms of the actual day to day coding.
Code cleanup in general is the same for me, but it’s really hard to justify putting much time into that when running your own company solo.
jasonthorsness
What tools did you use to assess the results of your changes?
AaronAPU
The routines were individually benchmarked using some custom tools (iterate repeatedly and use statistical analysis to converge on an estimate). Always compared against a plain C reference implementation.
Then there was a system for benchmarking the software as a whole on a wide variety of architectures, including NUMA. With lots of plots and statistics.
Usually you’d eventually end up at a point where the improvements are below the noise floor, or they help on some systems and cause regressions on others. The rule was usually “no regressions.”
VTune for multithreading optimization. Built a fiber-based, lock-free system for efficient scheduling.
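A minimal sketch of the repeat-and-summarize idea described above (an illustration, not the commenter's actual tooling; the two lambdas are stand-ins for an optimized routine and its plain C-style reference):

    #include <algorithm>
    #include <chrono>
    #include <cstdio>
    #include <numeric>
    #include <vector>

    // Time many runs of a callable and return the median in nanoseconds,
    // so a few noisy samples don't skew the estimate.
    template <typename F>
    double median_ns(F&& f, int iters = 1001) {
        std::vector<double> samples;
        samples.reserve(iters);
        for (int i = 0; i < iters; ++i) {
            auto t0 = std::chrono::steady_clock::now();
            f();
            auto t1 = std::chrono::steady_clock::now();
            samples.push_back(
                std::chrono::duration<double, std::nano>(t1 - t0).count());
        }
        std::nth_element(samples.begin(), samples.begin() + iters / 2,
                         samples.end());
        return samples[iters / 2];
    }

    int main() {
        std::vector<int> data(1 << 16, 1);
        volatile long long sink = 0;  // keeps the work from being optimized away

        auto reference = [&] { long long s = 0; for (int x : data) s += x; sink = s; };
        auto optimized = [&] { sink = std::accumulate(data.begin(), data.end(), 0LL); };

        double ref = median_ns(reference);
        double opt = median_ns(optimized);
        std::printf("reference: %.0f ns  optimized: %.0f ns  ratio: %.2fx\n",
                    ref, opt, ref / opt);
    }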
01HNNWZ0MV43FF
I remember one fun time between jobs, that I stayed up till 4 or 5 am optimizing something. It always felt like I was making progress and about to beat the original implementation
Unfortunately I had to give up when I was still 10 times slower than the reference lol
senderista
Same here, last time I was between jobs I optimized my defunct startup's database from ~50K TPS to nearly 5M TPS (no durability, if you're wondering), and that was unbelievably rewarding.
optymizer
I see you.
jmull
> I dislike the “intuition doesn’t work, profile your code” mantra because it seemingly says profiling is a viable replacement for theoretical calculations, which it isn’t.
This seems like a nonsensical statement to me. How could measuring be a substitute for thinking/analyzing/predicting/forming a plan?
Measuring/profiling just means observing the system you want to optimize in a systematic way. You certainly won't be very effective at optimizing anything if you don't observe it.
Theoretical calculations means you've formed a model of what's happening and you're devising a plan to optimize against that model. But of course, a useful model needs to represent the significant aspects of your system (and a good model should exclude most insignificant ones). Failing to observe your system means your model could be bad -- focused on insignificant aspects and missing significant ones -- and you'd never know.
purplesyringa
You're right, I could've phrased that better.
Profiling to find suboptimal code is perfectly fine. Then you need to figure out how to fix it. Many people don't understand how performance optimization works, so they blindly add caching, shave constant factors by invoking lower-level methods, and so on. This obviously doesn't work, yet intuitively (to those people, anyway) it should produce good results.
That's why the mantra exists: don't trust your intuition, don't believe it when it says these changes improve performance, instead measure that performance and only apply changes that work. This is also perfectly fine, but this is a double-edged sword, and I've seen people go too far in this direction.
For example, they refuse to make any changes that don't immediately improve performance according to the profiler. If they modify one computation and performance decreases, they abandon that path altogether. They treat optimization as a game with a dense fog of war, and they refuse to apply deductive reasoning, or intuition, to changes that, according to the profiler at least, are not immediately rewarding.
mnahkies
I think there's a related problem where profiling/measurements can be made poorly and not reflect the real world.
Eg: indexing or partitioning a database table may appear to make things slower if you don't have both a representative amount of data and representative query patterns when you're measuring the change.
You should still measure your changes, but sometimes you need to be careful about measuring them in the right way, and possibly simulating a future context (eg: more scale) before drawing a conclusion.
Intuition about how the context will evolve and what effect that might have on the tradeoffs of different approaches is helpful
necovek
"Intuitively" literally means without having to learn something.
Adding caches or switching to lower-level calls is definitely something learned, and I wouldn't call it "intuitive".
What I think you are referring to is that sometimes, simply reading and understanding the code can tell you where the problem really is — still, my experience is that you want to measure the before and after to at least identify the general area you should be looking at, and more often than not, I could figure out what needs optimizing and how without having to get detailed profiles.
I did end up being surprised a few times, but it was mostly due to external code like buggy library implementations that didn't do the things they promised (eg. async library really synchronizing everything with a badly placed mutex).
At the same time, it's wrong to focus on a single case or a single profile (executing one code path in one set of circumstances), but what you are describing simply sounds like bad engineering — the fact that you can misuse the tool does not make the tool bad, it's still the engineer wielding it who's at fault.
scratcheee
And yet their statement makes perfect sense to me.
Caching and lower-level calls are generic solutions that work everywhere, but they are also generally the last and worst way to optimise (which is why they need such careful analysis: they so often have the opposite effect).
Better is to optimise the algorithms, where actual profiling is a lesser factor. Not a zero factor, of course: as a rule of thumb it’s probably still wise to test your improvements, but if you manage to delete an n^2 loop you really don’t need a profiler to tell you that you’ve made things better.
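For a concrete (hypothetical) instance of that kind of algorithmic change, where no profiler is needed to believe the result: replacing a pairwise scan with a hash set turns O(n^2) into expected O(n).

    #include <cstddef>
    #include <unordered_set>
    #include <vector>

    // O(n^2): compare every pair.
    bool has_duplicate_quadratic(const std::vector<int>& v) {
        for (std::size_t i = 0; i < v.size(); ++i)
            for (std::size_t j = i + 1; j < v.size(); ++j)
                if (v[i] == v[j]) return true;
        return false;
    }

    // Expected O(n): remember what has been seen.
    bool has_duplicate_linear(const std::vector<int>& v) {
        std::unordered_set<int> seen;
        for (int x : v)
            if (!seen.insert(x).second) return true;  // insert failed: already present
        return false;
    }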
Capricorn2481
Sounds like a tricky balancing act. There are things that are extremely difficult to "game out." CPUs are very complicated. There are optimizations that seem like they could be cache friendly in theory, but aren't in practice.
jandrewrogers
I think many people have seen both sides of this in practice. I’ve seen engineers follow the profiler into a dead end because they see nothing that stands out in the profiler, or they don’t grok the essential nature of the code or system they are profiling. I’ve seen engineers with deep domain expertise consistently make accurate estimates of how a code change will impact performance without ever using a profiler, because their mental model of the code execution maps to reality with high fidelity.
Profilers fill in gaps in our mental model of code execution but they are not a substitute for it. Computer behavior is largely knowable from first principles, albeit requiring a considerable degree of expertise and detail. For some people in some contexts, the profiler adds little information to what they already understand. I know a few people that do almost all of their optimization work using pencil and paper with great effectiveness and precision. Not something I would recommend for most software engineers but not unreasonable at the limit either.
infogulch
Optimize according to your mental model; profile to check that your mental model matches reality.
If you optimize infrequently, or haven't profiled code like this recently, or haven't profiled this specific codebase recently, then your mental model is probably due a refresh.
kaptainscarlet
A lot of the optimisation I do consists of a series of highly educated guesses, and the resulting fixes are right in 99% of cases.
hinkley
I think the confusion sneaks in because “measuring” is something you do several times, and each iteration has a different connotation.
Once you “find” a candidate change, you measure it to see if what you did made things worse, and put it back if it did; or maybe you try it in combination with other changes to see if its value is complementary.
But people fuck up all the time reading the initial telemetry, which is often where I come in. I get tired of hearing people say, “we’ve done all we can, look how flat this chart is,” and hand someone my beer. You won’t find all of the good candidates in the percentage of run time list. That’s not all the data that’s there, and not every change that works needs to even be supported by the initial data. It only needs to be supported by the delta afterward.
titzer
No amount of measuring and squeezing--not even years of it--is a substitute for high-level thinking. And vice versa.
Imagine: function F() { for (i = 0; i < 10; i++) { A(); B(); C(); } }
If we profile this code, we might find out, e.g. B takes the majority of the time--let's say 90%. So you spend hours, days, weeks, making B 2X faster. Great. Now you removed 45% of execution time. But the loop in the outer function F is just a few instructions, it is not "hot"--it won't show up in profiles except for ones that capture stacks.
If you're just stuck in the weeds optimizing hot functions that show up in profiles, it's possible to completely overlook F. That loop might be completely redundant, causing 10X the workload by repeatedly computing A, B, and C, which may not need to be recomputed.
There are bazillions of examples like this. Say you find out that a function is super, super hot. But it's just a simple function. There are calls to it all over the code. You can't make it any faster. Instead you need to figure out how to not call it at all, e.g. by caching or rethinking the whole algorithm.
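A small, self-contained illustration of that shape (not from the parent comment; strlen plays the role of the "hot simple function"): the profiler will happily point at strlen, but the real fix is to stop calling it on every iteration.

    #include <cstddef>
    #include <cstring>

    // The "hot" function is strlen, and no amount of tuning strlen helps:
    // it is re-evaluated on every iteration, making the loop O(n^2).
    std::size_t count_spaces_slow(const char* s) {
        std::size_t n = 0;
        for (std::size_t i = 0; i < std::strlen(s); ++i)
            if (s[i] == ' ') ++n;
        return n;
    }

    // The fix is at the caller: compute the invariant once.
    std::size_t count_spaces_fast(const char* s) {
        std::size_t n = 0;
        const std::size_t len = std::strlen(s);  // hoisted out of the loop
        for (std::size_t i = 0; i < len; ++i)
            if (s[i] == ' ') ++n;
        return n;
    }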
> How could measuring be a substitute for thinking/analyzing/predicting/forming a plan?
This happens more than you think. Understanding how the system works in enough detail and also at a high level to formulate a plan is in short supply. Jumping in and hacking in things, like a cache or something, is surprisingly common.
hinkley
Small functions need special attention not just because they show up as leaf nodes everywhere but also because they are difficult for profilers to account properly. You get two functions listed as each taking 4% of CPU time, and one could easily be taking up twice as much compute as the other. The sort of memory pressure that small functions can generate can end up scapegoating a big function that uses a large fraction of memory because it gets stuck with cold cache or lots of GC pressure from the piddly functions fouling the nest.
One of my best examples of this, I had a function reported as still taking 10% of cumulative run time, after I’d tweaked it as much as I could. But I’d set up a benchmark that called a code path a deterministic number of times and this function was getting called twice as much as it should. I found two sibling methods asking the same question and rearranged them so the answer came as an argument and nixed the duplicate call. I reran the benchmark and instead of getting a reduction of 5% (10/2), I got 20%. That was all memory pressure.
In the worst case of memory pressure I ever fixed, I saw a 10x improvement from removing one duplicate call. Now, there was a quadratic part of that call, but it was a small enough n that I expected 3x and hoped for 4x, and was as shocked as anyone when it went from 30s to 3s with one refactor.
dmurray
> it won't show up in profiles except for ones that capture stacks
I don't think I've ever used a profiler that couldn't report you were in F() here. One that only captures your innermost functions really doesn't seem that useful, for exactly the reasons you give.
AtNightWeCode
Agree with this, but it's not what I concluded from the OP. Architectural decisions made from the start are where most optimizations should happen. I remember from school some kids who wrote a super-optimized loop, and the teacher asked: do you really have to do that same calculation on every iteration?
But in the real world, code bases are massive, and it is hard to predict when worlds collide. Most things do not matter until they do. So measuring is the way to go, I believe.
hinkley
Measuring is also useless once someone has introduced bottom up caching.
There’s so much noise at that point that even people who would usually catch problems start to miss them.
The usual response to this is, “well, you can turn caching off to do profiling,” but that’s incorrect, because once people know they can get a value from the cache they stop passing it on the stack. So your function that calls A() three times when it should have called it twice? You now find it’s being called ten times.
And the usual response to that is, “well, it’s free now, so who cares?” Except it’s not free. Every cache miss now costs you either multiples of the original work or much more complex cache bookkeeping, which is more overhead, and every hit resets the MRU data on that entry, making it more likely that other elements get evicted.
For instance in NodeJS concurrent fetches for the same resource often go into a promise cache, but now the context of the closure for the promise is captured in the cache, and it doesn’t take much to confuse v8 into keeping a bunch of data in scope that’s not actually reachable. I’ve had to fix that a few times. Hundreds of megabytes in one case because it kept an entire request handler in scope.
klysm
I think what he’s saying here is that you can’t skip the basic math step to arrive at good performance. Staring at profiling results will lead you to a local minimum.
cogman10
Profiling doesn't mean you don't do the math. Profiling is simply there to show you that "hey, this is where the problems actually are".
You do profiling because it's WAY too easy to get obsessed about theoretical problems when a simple measurement will show you the actual problems.
You do the math on the actual problem location, not a method with O(n!) which only gets called with n=3.
You still have to look at the entire call stack when profiling (which means thinking about the overarching algorithm).
9rx
Well, he's saying that intuition does work... But does it really?
If a problem area is so intuitively obvious, why would you introduce the problem in the first place? In reality, performance optimizations are usually needed where you least expect them. Which means that you can't get there intuitively. Hence, the suggestion of using profiling to help track down where the problem is instead.
mjmahone17
Most code I’ve seen is written in an environment where people can’t afford to “intuit”: thinking through the repercussions of specific language or algorithm choices comes second to solving a problem “well enough”. Building an intuition takes learning how you’ve built something poorly, and for a specific problem can take hours to gain context and walk yourself through many paths.
Bring in a performance expert and their intuition can quickly identify many performance improvements. The tough part for the business is when and where do you pay for the experience of performant code, and when is good enough to leave alone?
mystified5016
Do you want to claim you've never written quick and ugly code to get something working to come back and fix it up later?
Pretty much everyone I know will throw down an O(n^2) algorithm or whatever in their first pass and replace it with something more thought out once they have the time to think deeply about it.
If you're fretting about optimization at every stage of development, you're really doing it wrong. This is precisely the type of early optimization that everyone and their dog will tell you is bad. You should focus first on making your program work and laying out the logic and data flows. Optimization does not matter until the program is almost finished.
absolutelastone
I think the person you are responding to is criticizing the original "mantra" quote not the author's criticism of it(?)
bqmjjx0kac
FYI, the author is a woman.
jmull
Yeah, that's why I think it's nonsense.
Measuring doesn't mean don't think. Measuring and thinking are two different things. You need to do them both to optimize effectively.
Joel_Mckay
Or optimizing a scheduling no-op loop dozens of times because local encapsulation obscures the global functional context.
The fact remains most projects that do small trivial modular prototypes first will ultimately know which paths are viable before painting themselves into a corner algorithmically.
Best of luck =3
fabian2k
While profiling and measuring are very important if you want to optimize performance, there are a lot of things you can do without any profiling. In many situations the consequences of each approach are well known or easily reasoned about. Most of the time it's simply "do less work" = "more performance", or avoiding obvious and well-known anti-patterns like N+1 database queries.
But it's also very easy to mislead yourself that way; many "optimizations" might do much less than you think. So you should avoid implementing more complex or harder-to-understand code just because you think it is faster, but otherwise I'd certainly try to write faster code by default in areas I know well enough to judge.
tmoertel
> > I dislike the “intuition doesn’t work, profile your code” mantra because it seemingly says profiling is a viable replacement for theoretical calculations, which it isn’t.
> This seems like a nonsensical statement to me. How could measuring be a substitute for thinking/analyzing/predicting/forming a plan?
I believe the mantra “intuition doesn’t work, profile your code” is understood to mean “don't rely on your intuition alone; profile your work.” It's a warning that when it comes to performance optimization, intuition is often unreliable, so you need to supplement your mental model with actual data if you want to avoid big mistakes.
I don't know why the author of the blog post is interpreting the mantra literally, as if it claims intuition is obsoleted by profiling.
kccqzy
I think theoretical calculations could mean a detailed calculation of the number of times a certain function or operation is called. We all know from tech interviews that there are big O time complexity, but this is usually very hand-wavy and not precise enough. You can usually come up with a more precise formula though it can get messy with recurrences if your algorithm involves recursion. You probably need a computer algebra system. Spending an afternoon doing these symbolic calculations might give you better intuition of why the profiling produced such a measurement.
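As a worked instance of that kind of exact counting (an illustrative example, not from the comment): the worst-case comparison count C(n) of merge sort on n = 2^k elements satisfies the recurrence

    C(n) = 2C(n/2) + (n - 1), \quad C(1) = 0,

which solves to

    C(n) = n \log_2 n - n + 1 \quad (n = 2^k),

a much sharper statement than O(n log n), and exactly the kind of recurrence a computer algebra system can grind through when the algorithm is messier.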
ttd
Good article that I agree with mostly. One interesting note is this:
> There is no way to provide both optimized assembly and equivalent C code and let the compiler use the former in the general case and the latter in special cases.
This is true, but can be seen as a failure of language and tooling. For example, Halide [1] pioneered (AFAIK) the concept of separating algorithm from implementation at the language level. This separation lets you express the algorithm once, and then "schedule" it by specifying parallelism, vectorization, etc. You can provide multiple schedules for one algorithm, which allows you to specialize / make different choices depending on varying factors.
It's a really interesting concept, though maybe limited in practice to DSLs. I'm not sure a general purpose language would be a good fit for this model, but then, for general purpose programs written in general purpose languages, perf optimization at the level TFA discusses is frequently limited to just specific hot sections. Those hot sections could be extracted out into specialized components written in such a DSL.
almostgotcaught
> Halide [1] pioneered (AFAIK) the concept of separating algorithm from implementation at the language level.
you don't need to go all the way to Halide to do what the article is claiming isn't possible - you can do it just by including a "micro-kernel" in your library and having the code branch to that impl (depending on something at runtime) instead of whatever the C code compiled down to. this is done every single day in every single GPU lib (famously, cublas ships with hundreds/thousands of such ukernels for gemms, depending on shapes).
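A minimal sketch of that dispatch pattern (hypothetical names, nothing here is cuBLAS code), using GCC/Clang's __builtin_cpu_supports on x86:

    #include <cstddef>

    // Plain C-style fallback.
    void saxpy_generic(float a, const float* x, float* y, std::size_t n) {
        for (std::size_t i = 0; i < n; ++i) y[i] += a * x[i];
    }

    // Stand-in body; a real build would provide a hand-tuned AVX2 micro-kernel here.
    void saxpy_avx2(float a, const float* x, float* y, std::size_t n) {
        saxpy_generic(a, x, y, n);
    }

    using saxpy_fn = void (*)(float, const float*, float*, std::size_t);

    saxpy_fn pick_saxpy() {
        __builtin_cpu_init();
        if (__builtin_cpu_supports("avx2")) return saxpy_avx2;
        return saxpy_generic;  // whatever the C code compiled down to
    }

    // The decision happens once; callers go through the selected pointer.
    static const saxpy_fn saxpy = pick_saxpy();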
purplesyringa
I was going for something different: I don't want to choose a different implementation in runtime, I want the compiler to see through my code and apply constant propagation -- not just for constant inputs, but inputs with known properties, like `n < 1000` or `(n & 7) == 0`. I want it to also learn facts about the output values, e.g. that my `isqrt(n) -> m` function always returns `m` such that `m^2 <= n`. None of this is possible with runtime selection because runtime selection was never the point.
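The nearest thing available today is manually asserting such facts to the optimizer, which is much weaker than what is being asked for here (it doesn't propagate facts across functions the way the comment describes). A sketch, assuming a compiler with C++23 [[assume]]:

    #include <cstddef>

    // The caller promises n is a multiple of 8; the optimizer may use this to
    // drop the scalar prologue/epilogue when vectorizing. Older compilers offer
    // __builtin_assume / __assume / __builtin_unreachable() equivalents.
    void scale(float* x, std::size_t n, float a) {
        [[assume(n % 8 == 0)]];
        for (std::size_t i = 0; i < n; ++i) x[i] *= a;
    }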
almostgotcaught
i have no idea what that has to do with what op quoted from your article:
> There is no way to provide both optimized assembly and equivalent C code and let the compiler use the former in the general case and the latter in special cases.
this is manifestly obviously possible (as i've said).
what you're talking about is something completely different that goes by many names and uses many techniques (symbex, conex, sccp, scev, blah blah blah). many of these things are implemented in eg LLVM.
ttd
Ah ok, I see what you mean (and likely the sibling comment too w.r.t. the gcc feature). Yes, that is a fair point - though it still has the substantial downside of maintaining many different implementations of any given algorithm.
secondcoming
gcc supports function multiversioning:
https://gcc.gnu.org/onlinedocs/gcc/Function-Multiversioning....
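For reference, a minimal sketch of what that looks like in C++ on x86 (illustrative; the same source body is used for both versions here, though in practice the non-default one is usually hand-tuned):

    // GCC function multiversioning: several definitions with the same
    // signature; a resolver picks one at load time based on the CPU.
    __attribute__((target("default")))
    int dot(const int* a, const int* b, int n) {
        int s = 0;
        for (int i = 0; i < n; ++i) s += a[i] * b[i];
        return s;
    }

    __attribute__((target("avx2")))
    int dot(const int* a, const int* b, int n) {
        int s = 0;  // a hand-written AVX2 body could go here instead
        for (int i = 0; i < n; ++i) s += a[i] * b[i];
        return s;
    }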
ttd
This is useful but not equivalent. Using this type of tooling you still have to write the algorithm itself in N versions. Changing the algorithm then requires changing all N implementations. This contrasts with the Halide approach where the algorithm is written once, and then schedules can be modified without worrying that you are changing the algorithm itself.
hansvm
> too many cases to analyze
One of my early-career successes was just creating a framework for generating every permutation of perf optimizations for every (log-scaled -- clz is very fast) input size and checking which was best, dropping the results into a lookup table of function pointers to branch on. The university had a large supply of heterogeneous computers, replete with all the normal problems like being able to double floating-point addition throughput on Haswell CPUs by abusing the fmadd instruction, so I made a framework (probably closer to a DSL) for encoding your algorithms in a way that you could analyze perf tradeoffs at compile time and tune your result for the given computer. It's kind of like what ATLAS does for some linear algebra tasks.
Such practices are almost never optimal, but they're pretty easy to implement, and the results are near-optimal for almost all inputs. In the tradeoff between human and computer performance, I think it's a nice option.
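A toy version of that dispatch idea (hypothetical kernels and cutoffs; the real framework would fill the table per machine from benchmark results):

    #include <cstddef>
    #include <cstdint>
    #include <cstring>

    // Two implementations of the same routine, each best in a different size
    // regime: a simple loop for tiny inputs, a bulk kernel for larger ones.
    void copy_small(std::uint8_t* dst, const std::uint8_t* src, std::size_t n) {
        for (std::size_t i = 0; i < n; ++i) dst[i] = src[i];
    }
    void copy_large(std::uint8_t* dst, const std::uint8_t* src, std::size_t n) {
        std::memcpy(dst, src, n);  // stand-in for the "big input" kernel
    }

    using copy_fn = void (*)(std::uint8_t*, const std::uint8_t*, std::size_t);

    // One slot per log2-sized bucket of n (clz is cheap).
    constexpr int kBuckets = 65;
    copy_fn table[kBuckets];

    void init_table() {  // in the described framework, chosen per machine
        for (int b = 0; b < kBuckets; ++b)
            table[b] = (b < 8) ? copy_small : copy_large;  // cutover ~128 bytes
    }

    void copy_dispatch(std::uint8_t* dst, const std::uint8_t* src, std::size_t n) {
        int bucket = 64 - __builtin_clzll(n | 1);  // log-scaled input size
        table[bucket](dst, src, n);
    }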
willvarfar
I think it is worth making a distinction between "micro" (what the blogpost is about) and "macro", or "tactical" and "strategic", optimisations.
Strategic optimisation is often basically free if you have domain expertise. It's easy to know that the business wants outcome x and that algorithm y is the right choice, etc., if it's all internal thought processes. Whereas if you don't know enough, you're likely to make decisions that are very expensive to undo.
jandrewrogers
I often refer to those as architectural optimizations. Even some of these tend to be sensitive to the details of the operating environment.
karmakaze
I agree with many of the comments here, including some of the ones that are in disagreement. The difference is context: for most situations the starting point is far from optimal, and any of N better choices is a good improvement.
That doesn't seem to be what this post is talking about. It seems to be talking about well-worn areas, trying to improve the state of the art. An example that illustrates it for me is DeepMind's AlphaTensor finding a better way to multiply matrices[0] in 2022. It wasn't a brute-force solution, but the scale of it makes it appear so.
> On 4x4 matrices, AlphaTensor unexpectedly discovered a solution with 47 multiplication steps, an improvement over the 49 required with Strassen’s algorithm of 1969, albeit restricted to mod 2 arithmetic. Similarly, AlphaTensor solved 5x5 matrices with 96 rather than Strassen's 98 steps. Based on the surprising discovery that such improvements exist, other researchers were quickly able to find a similar independent 4x4 algorithm, and separately tweaked Deepmind's 96-step 5x5 algorithm down to 95 steps in mod 2 arithmetic and to 97[24] in normal arithmetic.[25] Some algorithms were completely new: for example, (4, 5, 5) was improved to 76 steps from a baseline of 80 in both normal and mod 2 arithmetic.
This to me shows that direct profiling and observation wouldn't have led to the optimization. Improvements needed a sort-of but not actually brute-force effort of many people trying, but also being clever with their attempts.
[0] https://en.wikipedia.org/wiki/Matrix_multiplication_algorith...
jmward01
I have almost always found that simple code runs faster than complex code. I think this is because optimization is likely an NP problem, and like all NP problems, the best algorithm we have for attacking it is divide and conquer. The core thing about D&C is that you divide until you reach a level where you can actually find the optimum answer within the resources given, but accept that by dividing the problem you will likely have some high-level inefficiencies.
This means simple code, code you understand all the pieces of, can actually be locally optimum. When I see people try to optimize code, I often see them fall into the trap of optimizing too large or complex a problem, which leads to them not being able to find a true local optimum and, often, making far slower code than if they had just tried for much simpler code.
This likely-NP behavior runs rampant in software development, where we often think we can design some elaborate process for designing things, and when things fail we conclude it was because people failed to follow the process, not because the problem was NP. We all love building machines, which is why we likely do this, but unless you know something the rest of the world doesn't, D&C, and admitting you are only looking for a local optimum, is the only algorithm we have for attacking NP problems. (Shameless plug here for smaller, far more autonomous teams instead of monolithic dev shops with one big shared feature list.)
tmoertel
I think that there is a different reason that an emphasis on simple code often results in faster systems. When you write simple code, you spend less time writing code. Therefore, you have more time left to invest in optimizing the very small subset of your overall system that actually matters. You didn't burn engineering resources for speed where it didn't matter.
jmward01
I'd argue that is a big part of the point I am making. If you take too big of a bite, the time it takes to build it optimally goes up in an NP manner. If the bites are the right size, then it balances the time/resources you have against all the other bites you make, to get a locally optimal answer given all resource constraints. Long story short, cutting a problem into manageable pieces is a solid strategy. I will add one thing, though: most people think they have cut things into manageable pieces, but in reality they have left them too intertwined and they aren't really independent pieces. For divide and conquer to actually work requires that the pieces have clearly defined, and very limited, communication.
compiler-guy
From the article:
"Even Apple’s LLVM fork lacks scheduling annotations for Apple Silicon. How am I supposed to write efficient code when Apple doesn’t bother to tune their own compiler?"
In addition to its public fork of LLVM, Apple also keeps a very private fork where (one assumes) they keep all the really juicy stuff.
I agree that it is frustrating not to have the data, but don't confuse "I don't have the data" with "no one has the data".
hinkley
It’s like dieting. Everyone wants to hear your one weird trick and zones out when you tell them it’s hard work. Yes, mechanical sympathy is a thing but usually pulling off a series of wins isn’t luck or particularly bad work by your peers, it’s being methodical and persistent. It’s about being stubborn as fuck, and not taking no for an answer.
nostrademons
This is coming from the perspective of a performance engineer whose day job is squeezing every last bit of performance out of system libraries and low-level code. This is important work, and it can pay very well if you get one of the few positions in it. But for an application developer whose primary day job is cranking out features and then spending 10% of the time at the end optimizing them, the conclusion (and the headline) very much does not hold.
For them, the systematic way to optimize goes: profile your code, and then apply domain knowledge of the product to optimize the hotspots with these common techniques:
Don't do repeated work. (If you have an expensive invariant calculation in a loop or function call, move it out, to a higher level of the program where it can be done once.)
Save your work. (Caching and memoization.)
Do less work. (Alter the product requirements to use less computationally-intensive techniques in cases where users won't notice the difference.)
Do work when the user isn't looking. (Move computations to background tasks, apply concurrency, perform async requests.)
If all else fails, call in a performance engineer like the author to micro-optimize your building blocks.
You can often get speedups of 3-6 orders of magnitude by applying these techniques, simply because the original code is so brain-dead. Performance engineers like the author tend to work on code that has already been tightly optimized, and so there is less low-hanging fruit to pick.
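Of those, "save your work" is the easiest to sketch; a hypothetical memoization wrapper (not thread-safe, fine for a single-threaded hot path):

    #include <cstdint>
    #include <unordered_map>

    // Stand-in for a slow, repeatable computation (a parse, a lookup, a query).
    std::int64_t expensive_lookup(int key) {
        std::int64_t s = 0;
        for (int i = 0; i < 1000000; ++i) s += (s ^ key) + i;
        return s;
    }

    std::int64_t cached_lookup(int key) {
        static std::unordered_map<int, std::int64_t> cache;
        auto it = cache.find(key);
        if (it != cache.end()) return it->second;    // hit: reuse earlier work
        std::int64_t value = expensive_lookup(key);  // miss: do the work once
        cache.emplace(key, value);
        return value;
    }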
MatthiasWandel
An interesting aspect is data dependencies. If your next statement reuses data you just computed, that can cause pipeline bubbles, as that result you want to use just isn't available yet. I dived into that topic for a video about relative performance of old PCs I just published today.
jandrewrogers
Yes, there is non-obvious structure in some algorithms solely for the purpose of turning a single logical stream of dependent instructions into multiple concurrent streams of dependent instructions running through the same pipeline. The caveat of doing this, of course, is that it typically increases register pressure.
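An illustrative shape of that transformation (the reassociation changes floating-point rounding slightly, which is part of why compilers won't always do it for you):

    #include <cstddef>

    // One long dependent chain: each add waits on the previous add's result.
    double sum_single(const double* x, std::size_t n) {
        double s = 0.0;
        for (std::size_t i = 0; i < n; ++i) s += x[i];
        return s;
    }

    // Four independent chains kept in flight in the same pipeline,
    // at the cost of a few extra registers.
    double sum_quad(const double* x, std::size_t n) {
        double s0 = 0, s1 = 0, s2 = 0, s3 = 0;
        std::size_t i = 0;
        for (; i + 4 <= n; i += 4) {
            s0 += x[i];
            s1 += x[i + 1];
            s2 += x[i + 2];
            s3 += x[i + 3];
        }
        for (; i < n; ++i) s0 += x[i];  // leftover tail
        return (s0 + s1) + (s2 + s3);
    }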
PaulKeeble
When it comes to micro-optimisations, the issue is partly that our usual tools of algorithm analysis and our hardware intuition are very far apart. Randomly accessing memory is very slow compared to linear access, some branches are considerably worse than others, and it's actually quite hard to predict how much you can improve the low-level details before you start, especially for big changes. Our tools can show us where time is being lost to inefficiencies, but they can't help us predict how changes will improve things.
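A standard (hypothetical) illustration of the linear-vs-strided gap: the two functions below do identical arithmetic, but the second one jumps cols elements between consecutive accesses of a row-major array, so it stops cooperating with caches and prefetchers once the matrix is large.

    #include <cstddef>
    #include <vector>

    // Walks memory linearly; prefetches well.
    double sum_row_major(const std::vector<double>& m,
                         std::size_t rows, std::size_t cols) {
        double s = 0;
        for (std::size_t r = 0; r < rows; ++r)
            for (std::size_t c = 0; c < cols; ++c)
                s += m[r * cols + c];
        return s;
    }

    // Same result, but strides by `cols` elements per access.
    double sum_column_major(const std::vector<double>& m,
                            std::size_t rows, std::size_t cols) {
        double s = 0;
        for (std::size_t c = 0; c < cols; ++c)
            for (std::size_t r = 0; r < rows; ++r)
                s += m[r * cols + c];
        return s;
    }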
tikotus
I once had this kind of body recovery/stress level measuring thingy on me for a few days, and a doctor would then analyze my health and such. I was under some stress those days and (according to the measurements) I wasn't recovering properly even during the nights. But then there was this one, long, flat, deep green curve in the middle of my work day. I checked from my VCS what I was doing during that period: I was optimizing.
I've since noticed this many times. Optimizing is like meditation to me. It's very mechanical (measure), with a sprinkle of creative work (once you know what is slow, it's quite obvious how to make it faster, but just challenging enough to be engaging), and it has a very nice tight feedback loop: Something is slow. I make a change. Now it's fast. Next.
Optimizing is my happy place.