
FFmpeg devs boast of another 100x leap thanks to handwritten assembly code

AaronAPU

When I spent a decade doing SIMD optimizations for HEVC (among other things), it was sort of a joke to compare the assembly versions to plain C, because you’d get some ridiculous multipliers like 100x. It’s pretty misleading: what it really means is that the code was extremely inefficient to begin with.

The devil is in the details: microbenchmarks typically call the same function a million times in a loop, and everything gets cached, reducing the overhead to sheer CPU cycles.

But that’s not how it’s actually used in the wild. It might be called once in a sea of many many other things.

You can at least go out of your way to create a massive test region of memory to prevent the cache from being so hot, but I doubt they do that.

torginus

Sorry for the derail, but it sounds like you have a ton of experience with SIMD.

Have you used ISPC, and what are your thoughts on it?

I feel it's a bit ridiculous that in this day and age you have to write SIMD code by hand, as regular compilers suck at auto-vectorizing, especially as this has never been the case with GPU kernels.

capyba

Personally I’ve never been able to beat gcc or icx autovectorization by using intrinsics; often I’m slower by a factor of 1.5-2x.

Do you have any wisdom you can share about techniques or references you can point to?

almostgotcaught

> Have you used ISPC

No professional kernel writer uses auto-vectorization.

> I feel it's a bit ridiculous that in this day and age you have to write SIMD code by hand

You feel it's ridiculous because you've been sold a myth/lie (abstraction). In reality the details have always mattered.

izabera

ffmpeg is not too different from a microbenchmark, the whole program is basically just: while (read(buf)) write(transform(buf))

fuzztester

the devil is in the details (of the holy assembly).

thus sayeth the lord.

praise the lord!

yieldcrv

> what it really means is it was extremely inefficient to begin with

I care more about the outcome than the underlying semantics; to me that's kind of a given

Aardwolf

The article sometimes says 100x and other times says 100% speed boost. E.g. it says "boosts the app’s ‘rangedetect8_avx512’ performance by 100.73%," but the screenshot shows 100.73x.

100x would be a 9900% speed boost, while a 100% speed boost would mean it's 2x as fast.

Which one is it?

ethan_smith

It's definitely 100x (or 100.73x) as shown in the screenshot, which represents a 9973% speedup - the article text incorrectly uses percentage notation in some places.

MadnessASAP

100x to the single function 100% (2x) to the whole filter

pizlonator

The ffmpeg folks are claiming 100x not 100%. Article probably has a typo

k_roy

That would be quite the percentage difference with 100x

torginus

I'd guess the function operates on 8-bit values, judging from the name. If the previous implementation was scalar, a double-pumped AVX512 implementation can process 128 elements at a time, making the 100x speedup plausible.


tombert

Actually a bit surprised to hear that assembly is faster than optimized C. I figured that compilers are so good nowadays that any gains from hand-written assembly would be infinitesimal.

Clearly I'm wrong on this; I should probably properly learn assembly at some point...

MobiusHorizons

Almost all performance critical pieces of c/c++ libraries (including things as seemingly mundane as strlen) use specialized hand written assembly. Compilers are good enough for most people most of the time, but that’s only because most people aren’t writing software that is worth optimizing to this level from a financial perspective.

mananaysiempre

Looking at the linked patches, you’ll note that the baseline (ff_detect_range_c) [1] is bog-standard scalar C code while the speedup is achieved in the AVX-512 version (ff_detect_range8_avx512) [2] of the same computation. FFmpeg devs prefer to write straight assembly using a library of vector-width-agnostic macros they maintain, but at a glance the equivalent code looks to be straightforwardly expressible in C with Intel intrinsics if that’s more your jam. (Granted, that’s essentially assembly except with a register allocator, so the practical difference is limited.) The vectorization is most of the speedup, not the assembly.

To a first approximation, modern compilers can’t vectorize loops beyond the most trivial (say a dot product), and even that you’ll have to ask for (e.g. gcc -O3, which in other cases is often slower than -O2). So for mathy code like this they can easily be a couple dozen times behind in performance compared to wide vectors (AVX/AVX2 or AVX-512), especially when individual elements are small (like the 8-bit ones here).

Very tight scalar code, on modern superscalar CPUs... You can outcode a compiler by a meaningful margin, sometimes (my current example is a 40% speedup). But you have to be extremely careful (think dependency chains and execution port loads), and the opportunity does not come often (why are you writing scalar code anyway?..).

[1] https://ffmpeg.org/pipermail/ffmpeg-devel/2025-July/346725.h...

[2] https://ffmpeg.org/pipermail/ffmpeg-devel/2025-July/346726.h...
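
For illustration, a guess at the shape of such a scalar baseline. This is not FFmpeg's actual ff_detect_range_c (see the linked patch for that), just the obvious C version of "find the min and max byte in a plane", which is exactly the kind of loop compilers rarely vectorize well without being asked:

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical scalar range-detect sketch: track the minimum and maximum
 * pixel value over a plane of 8-bit samples, one byte per iteration. */
static void detect_range_u8(const uint8_t *src, size_t n,
                            uint8_t *out_min, uint8_t *out_max) {
    uint8_t lo = 255, hi = 0;
    for (size_t i = 0; i < n; i++) {
        uint8_t v = src[i];
        if (v < lo) lo = v;   /* compiles to cmp + cmov (or a branch) */
        if (v > hi) hi = v;   /* roughly one byte handled per iteration */
    }
    *out_min = lo;
    *out_max = hi;
}
```

With -O2 and no vectorization flags, this processes on the order of a byte per cycle, which is the baseline the 100x figure is measured against.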

kasper93

Moreover the baseline _c function is compiled with -march=generic and -fno-tree-vectorize on GCC. Hence it's the best-case comparison for the handcrafted AVX512 code. And while it is obviously faster, and that's very cool, boasting about 100x may be misinterpreted by outside readers.

I was commenting there with some suggested changes, and you can find more performance comparisons [0].

For example with small adjustment to C and compiling it for AVX512:

  after (gcc -ftree-vectorize -march=znver4)
  detect_range_8_c:                                      285.6 ( 1.00x)
  detect_range_8_avx2:                                   256.0 ( 1.12x)
  detect_range_8_avx512:                                 107.6 ( 2.65x)
Also I argued that it may be a little bit misleading to post comparison without stating the compiler and flags used for said comparison [1].

P.S. There is related work to enable -ftree-vectorize by default [2]

[0] https://ffmpeg.org/pipermail/ffmpeg-devel/2025-July/346813.h...

[1] https://ffmpeg.org/pipermail/ffmpeg-devel/2025-July/346794.h...

[2] https://ffmpeg.org/pipermail/ffmpeg-devel/2025-July/346439.h...

mafuy

If you ever dabble more closely in low-level optimization, you will find the first instance of the C compiler having a brain fart within less than an hour.

Random example: https://stackoverflow.com/questions/71343461/how-does-gcc-no...

The code in question was called quadrillions of times, so this actually mattered.

brigade

It's AVX512 that makes the gains, not assembly. This kernel is simple enough that it wouldn't be measurably faster than C with AVX512 intrinsics.

And it's 100x because a) min/max have single instructions in SIMD vs cmp+cmov in scalar and b) it's operating in u8 precision so each AVX512 instruction does 64x min/max. So unlike the unoptimized scalar that has a throughput under 1 byte per cycle, the AVX512 version can saturate L1 and L2 bandwidth. (128B and 64B per cycle on Zen 5.)

But, this kernel is operating on an entire frame; if you have to go to L3 because it's more than a megapixel then the gain should halve (depending on CPU, but assuming Zen 5), and the gain decreases even more if the frame isn't resident in L3.
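
A sketch of the intrinsics version described above (an illustration under stated assumptions, not FFmpeg's code): _mm512_min_epu8 and _mm512_max_epu8 each fold 64 unsigned bytes per instruction, with a guard so the sketch still builds and runs as scalar C on machines without AVX-512BW.

```c
#include <stdint.h>
#include <stddef.h>

#if defined(__AVX512BW__)
#include <immintrin.h>

/* 64 bytes of min/max per instruction; reduce the two vectors at the end. */
static void detect_range_u8(const uint8_t *src, size_t n,
                            uint8_t *out_min, uint8_t *out_max) {
    __m512i vlo = _mm512_set1_epi8((char)0xFF);
    __m512i vhi = _mm512_setzero_si512();
    size_t i = 0;
    for (; i + 64 <= n; i += 64) {
        __m512i v = _mm512_loadu_si512(src + i);
        vlo = _mm512_min_epu8(vlo, v);
        vhi = _mm512_max_epu8(vhi, v);
    }
    uint8_t tmp[64], lo = 255, hi = 0;
    _mm512_storeu_si512(tmp, vlo);
    for (int k = 0; k < 64; k++) if (tmp[k] < lo) lo = tmp[k];
    _mm512_storeu_si512(tmp, vhi);
    for (int k = 0; k < 64; k++) if (tmp[k] > hi) hi = tmp[k];
    for (; i < n; i++) {              /* scalar tail for the last < 64 bytes */
        if (src[i] < lo) lo = src[i];
        if (src[i] > hi) hi = src[i];
    }
    *out_min = lo; *out_max = hi;
}
#else
/* Scalar fallback so the sketch is portable. */
static void detect_range_u8(const uint8_t *src, size_t n,
                            uint8_t *out_min, uint8_t *out_max) {
    uint8_t lo = 255, hi = 0;
    for (size_t i = 0; i < n; i++) {
        if (src[i] < lo) lo = src[i];
        if (src[i] > hi) hi = src[i];
    }
    *out_min = lo; *out_max = hi;
}
#endif
```

As brigade notes, a kernel this simple written with intrinsics should land within noise of the hand-written assembly; the win is the 64-wide min/max, not the instruction selection.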

mhh__

Compilers are extremely good considering the amount of crap they have to churn through but they have zero information (by default) about how the program is going to be used so it's not hard to beat them.

haiku2077

If anyone is curious to learn more, look up "profile-guided optimization" which observes the running program and feeds that information back into the compiler


ivanjermakov

Related: ffmpeg's guide to writing assembly: https://news.ycombinator.com/item?id=43140614

cpncrunch

The article is unclear about what will actually be affected. It mentions "rangedetect8_avx512" and calls it an obscure function. So, in what situations is it actually used, and what is the real-time improvement in performance for the entire conversion process?

brigade

It's not conversion. Rather, this filter is used when you don't know whether a video's pixels are video range or full range, or whether the alpha is premultiplied, to determine that information. Usually so you can tag it correctly in metadata.

And the function in question is specifically for the color range part.

cpncrunch

It's still unclear from your explanation how it's actually used in practice. I run thousands of ffmpeg conversions every day, so it would be useful to know how/if this is likely to help me.

Are you saying that it's run once during a conversion as part of the process? Or that it's a specific flag that you give, it then runs this function, and returns output on the console?

(Either of those would be a one-time affair, so would likely result in close to zero speed improvement in the real world).

pavlov

Only for x86 / x86-64 architectures (AVX2 and AVX512).

It’s a bit ironic that for over a decade everybody was on x86 so SIMD optimizations could have a very wide reach in theory, but the extension architectures were pretty terrible (or you couldn’t count on the newer ones being available). And now that you finally can use the new and better x86 SIMD, you can’t depend on x86 ubiquity anymore.

Aurornis

AVX512 is a set of extensions. You can’t even count on an AVX512 CPU implementing all of the AVX512 instructions you want to use, unless you stick to the foundation instructions.

Modern encoders also have better scaling across threads, though not infinite. I was on an embedded project a few years ago where we spent a lot of time trying to get the SoC's video encoder working reliably, until someone ran ffmpeg and we realized we could just use several of the CPU cores for a better result anyway.

jauntywundrkind

Kind of reminds me of Sound Open Firmware (SOF), which can compile with either unoptimized GCC or the proprietary Cadence XCC compiler, which can use the Xtensa HiFi SIMD intrinsics.

https://thesofproject.github.io/latest/introduction/index.ht...

shmerl

Still waiting for Pipewire + xdg desktop portal screen/window capture support in the ffmpeg CLI. It's been dragging its feet on that forever.

askvictor

[flagged]

Arubis

This intrinsically feels like the opposite of a good use case for an LLM for code gen. This isn’t boilerplate code by any means, nor would established common patterns be helpful. A lot of what ffmpeg devs are doing at the assembly level is downright novel.

pizlonator

The hardest part of optimizations like this is verifying that they are correct.

We don’t have a reliable, general-purpose way of verifying that an arbitrary code transformation is correct.

LLMs definitely can’t do this (they will lie and say that something is correct even if it isn’t).

viraptor

But we do! For LLVM there's https://github.com/AliveToolkit/alive2
There are papers like https://people.cs.rutgers.edu/~sn349/papers/cgo19-casmverify...
There's https://github.com/google/souper
There's https://cr.yp.to/papers/symexemu-20250505.pdf
And probably other things I'm not aware of. If you're limiting the scope to a few blocks at a time, symbolic execution will do fine.

pizlonator

> limiting the scope to a few blocks

Yeah so like that doesn’t scale.

The interesting optimizations involve reasoning across thousands of blocks.

And my point is there is no reliable, general-purpose solution here. "Only works for a few blocks at a time" is not reliable. It's not general purpose.

hashishen

It doesn't matter. This is inherently better because the dev knows exactly what is being done. LLMs could cripple entire systems with assembly access.

gronglo

You could run it in a loop, asking it to improve the code each time. I know what the ffmpeg devs have done is impressive, but I would be curious to know if something like Claude 4 Opus could make any improvements.

eukara

I think if it was easy for them to improve critical projects like ffmpeg, we'd have seen some patches that mattered already. The only activity I've seen is LLMs being used to farm sec-ops bounties which get rejected because of poor quality.

https://daniel.haxx.se/blog/2024/01/02/the-i-in-llm-stands-f...

minimaxir

That can work with inefficient languages like Python, but not raw Assembly.

smj-edison

I'd be worried about compile times, lol. Final binaries are quite often tens to hundreds of megabytes, pretty sure an LLM processes tokens much slower than a compiler completes passes.

EDIT: another thought: non-deterministic compilation would also be an issue unless you were tracking the input seed, and it would still cause spooky action at a distance unless you had some sort of recursive seed. Compilers are supposed to be reliable and deterministic, though to be fair advanced optimizations can be surprising because of the various heuristics.

viraptor

There's no reason to run the optimisation discovery at compile time. Anything that changes the structure can be run to change the source ahead of time. Anything that doesn't can be generalised into a typical optimisation step in the existing compiler pipeline. Same applies to Souper for example - you really don't want everyone to run it.

smj-edison

I'm not quite understanding your comment, are you saying that ANNs are only useful for tuning compiler heuristics?

hansvm

If we're considering current-gen LLMs, approximately zero. They're bad at this sort of thing even with a human in the loop.

viraptor

https://arxiv.org/html/2505.11480v1 we're getting there. This is for general purpose code, which is going to be easier than heavy SIMD where you have to watch out for very specific timing, pipelines and architecture details. But it's a step in that direction.

LtWorf

> I wonder how many optimisations like this could be created by LLMs

Zero. There's no huge corpus of stackoverflow questions on highly specific assembly optimisations so…

astrange

You can run an agent in a loop, but for something this small you can already use a SAT solver or superoptimizer if you want to get out of the business of thinking about things yourself.

I've never seen anyone actually do it, mostly because modeling the problem is more work than just doing it.

ksclk

> you can already use a SAT solver

Could you elaborate please? How would you approach this problem, using a SAT solver? All I know is that a SAT solver tells you whether a certain formula of ANDs and ORs is true. I don't know how it could be useful in this case.

gametorch

There are literally textbooks on optimization. With tons of examples. I'm sure there are models out there trained on them.

wk_end

There are literally textbooks on computational theory, with tons of example proofs. I'm sure there are models trained on them. Why hasn't ChatGPT produced a valid P vs. NP proof yet?
