
FFmpeg School of Assembly Language

225 comments · February 22, 2025

computerbuster

Another resource on the same topic: https://blogs.gnome.org/rbultje/2017/07/14/writing-x86-simd-...

As I'm seeing in the comments here, the usefulness of handwritten SIMD ranges from "totally unclear" to "mission critical". I'm seeing a lot on the "totally unclear" side, but not as much on the "mission critical", so I'll talk a bit about that.

FFmpeg is a pretty clear use case because of how often it is used, but I think it is easier to quantify the impact of handwriting SIMD with something like dav1d, the universal production AV1 video decoder.

dav1d is used pretty much everywhere, from major browsers to the Android operating system (superseding libgav1). A massive element of dav1d's success is its incredible speed, which is largely due to how much of the codebase is handwritten SIMD.

While I think it is a good thing that languages like Zig have built-in SIMD support, there are some use cases where it becomes necessary to do things by hand because even a potential performance delta is important to investigate. There are lines of code in dav1d that will be run trillions of times in a single day, and they need to be as fast as possible. The difference between handwritten & compiler-generated SIMD can be up to 50% in some cases, so it is important.

I happen to be somewhat involved in similar use cases, where things I write will run a lot of times. To make sure these skills stay alive, resources like the FFmpeg school of assembly language are pretty important, in my opinion.

cornstalks

One of the fun things about dav1d is that since it’s written in assembly, they can use their own calling convention. And it can differ from method to method, so they have very few stack stores and loads compared to what a compiler will generate following normal platform calling conventions.

janwas

I'm curious why there are even function calls in time-critical code; shouldn't just about everything be inlined there? And if it's not time-critical, why are we interested in the savings from a custom calling convention?

rbultje

Binary size was a concern, so excessive inlining was undesirable.

And don't forget that any asm-optimized variant always has a C fallback for generic platforms that lack a hand-optimized version; that C fallback is also used to verify the asm-optimized variant with checkasm. It might not be linked into your binary/library (the linker eliminates it because it's never used), but the code exists nonetheless.
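
For readers unfamiliar with that workflow, a minimal sketch of the verify-against-the-C-reference idea (this is not checkasm's actual API; the names are illustrative):

    #include <assert.h>
    #include <stdint.h>
    #include <stdlib.h>
    #include <string.h>

    typedef void (*add_fn)(uint8_t *dst, const uint8_t *src, int n);

    /* Run the C reference and the optimized version on the same random
     * input (n <= 256 here) and require bit-identical output. */
    static void check_pair(add_fn ref, add_fn opt, int n)
    {
        uint8_t src[256], out_ref[256], out_opt[256];
        for (int i = 0; i < n; i++)
            src[i] = rand() & 0xff;
        memset(out_ref, 0, sizeof(out_ref));
        memset(out_opt, 0, sizeof(out_opt));
        ref(out_ref, src, n);
        opt(out_opt, src, n);
        assert(!memcmp(out_ref, out_opt, n));
    }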

hrydgard

Function calls are very fast (unless there's really a lot of parameter copying/saving-to-stack) and if you can re-use a chunk of code from multiple places, you'll reduce pressure on the instruction cache. Inlining is not always ideal.

ajb

Codecs often have many redundant ways of doing the same thing, which are chosen on the basis of which one uses the fewest bits, for a specific piece of data. So you can't inline them as you don't know ahead of time which will be used.

weebull

Cache misses hurt.

MortyWaves

Doesn’t this just make it harder to maintain ports to other architectures though?

epr

For what's written in assembly, lack of portability is a given. The only exceptions would presumably be high-level entry points called from C, etc. If you wanted to support multiple targets, you'd have completely separate assembly modules for each architecture at least. You'd even need to bifurcate further for each SIMD generation (within x64, for example).

antoinealb

Yes, but on projects like that, ease of maintenance is a secondary priority when compared to performance or throughput.

wolf550e

There have indeed been bugs caused by amd64 assembly code that assumed the Unix calling convention being used in Windows builds, causing data corruption. You have to be careful.

secondcoming

SIMD instructions are already architecture dependent

janwas

I'm also in the mission-critical camp, with perhaps an interesting counterpoint. If we're focusing on small details (or drowning in incidental complexity), it can be harder to see algorithmic optimizations. Or the friction of changing huge amounts of per-platform code can prevent us from escaping a local minimum.

Example: our new matmul outperforms a well-known library for LLM inference, sometimes even if it uses AMX vs our AVX512BF16. Why? They seem to have some threading bottleneck, or maybe it's something else; hard to tell with a JIT involved.

This would not have happened if I had to write per-platform kernels. There are only so many hours in the day. Writing a single implementation using Highway enabled exploring more of the design space, including a new kernel type and an autotuner able to pick not only block sizes, but also parallelization strategies and their parameters.

Perhaps in a second step, one can then hand-tune some parts, but I sure hope a broader exploration precedes micro-optimizing register allocation and calling conventions.

rbultje

> I sure hope a broader exploration precedes micro-optimizing register allocation and calling conventions.

It should be obvious that both are pursued independently whenever it makes sense. The idea that one should precede the other or is more important than the other is simply untrue.

janwas

How can tuning be independent of devising the algorithm?

Are you really suggesting writing a variant of a kernel, tuning it to the max, then discovering a new and different way to do it, and then discarding the first implementation? That seems like a lot of wasted effort.

dundarious

What does Zig offer in the way of builtin SIMD support, beyond overloads for trivial arithmetic operations? 90% of the utility of SIMD is outside of those types of simple operations. I like Zig, but my understanding is you have to reach for CPU specific builtins for the vast majority of cases, just like in C/C++.

GCC and Clang support the vector_size attribute and overloaded arithmetic operators on those "vectorized" types, and a LOT more besides -- in fact, that's how intrinsics like _mm256_mul_ps are implemented: `#define _mm256_mul_ps(a,b) (__m256)((v8sf)(a) * (v8sf)(b))`. The utility of all of that is much, much greater than what's available in Zig.
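
As a small illustration of the extension being described (a sketch, not code from any particular project):

    /* GCC/Clang vector extension: a 32-byte vector of 8 floats with
     * overloaded arithmetic operators. */
    typedef float v8sf __attribute__((vector_size(32)));

    v8sf scale(v8sf a, v8sf b)
    {
        return a * b;   /* compiles to a single vector multiply (vmulps with AVX enabled) */
    }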

anonymoushn

Zig ships LLVM's internal generic SIMD stuff, which is fairly common for newish systems languages. If you want dynamic shuffles or even moderately exotic things like maddubs or aesenc then you need to use LLVM intrinsics for specific instructions or asm.

MortyWaves

I’m also wondering what “built in” even means. Many languages have SIMD, vector, matrix, quaternion types and the like as part of the standard library, but not necessarily as their own keywords. C#/.NET and Java have SIMD by this metric.

neonsunset

Java's Panama Vectors are work in progress and are far from being competitive with .NET's implementation of SIMD abstractions, which is mostly on par with Zig, Swift and Mojo.

You can usually port existing SIMD algorithms from C/C++/Rust to C# with few changes retaining the same performance, and it's practically impossible to do so in Java.

I feel like C veterans often don't realize how unnecessarily ceremonious platform-specific SIMD code is given the progress in portable abstractions. Unless you need an exotic instruction that does not translate across architectures and/or common patterns nicely, there is little reason to have a bespoke platform-specific path.

zbobet2012

So on point. We do _a lot_ of handwritten SIMD on the other side (encoders) as well, for similar reasons. In addition, on the encoder side it's often necessary to "structure" the problem so you can perform things like early elimination of loops, and especially of loads. Compilers simply cannot generate autovectorized code that does those kinds of things.

buserror

I used to write quite a few SIMD versions of critical functions, but now I rarely do -- one thing to try is to isolate that code and run it in the Most Excellent Compiler Explorer [0].

And stare at the generated code!

More often than not, the auto-vectorisation now generates a pretty excellent SIMD version of your function, and all you have to do is 'hint' the compiler -- for example, explicitly state alignment, or provide your own vector source/destination types. You can do a lot by 'styling' your C code while thinking about what the compiler might be able to do with it -- for example, use extra intermediary variables, really break down all the operations you want, etc.

Worst case, if the compiler REALLY isn't clever enough, this gives you a good base to adapt: you can tweak the generated assembly without having to write the boilerplate bits yourself.

In most cases, the resulting C function will be vectorized as well as, or better than, the hand-coded one I'd write -- and in many other cases it's "close enough" not to matter that much. The other good news is that this code will probably vectorize fine for WASM and NEON etc. without having to maintain explicit versions.

[0] https://godbolt.org/
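
A toy example of the kind of 'styling' and hinting described above (a sketch; the restrict qualifiers and the alignment builtin are the hints, the function itself is made up):

    #include <stddef.h>

    /* Telling the compiler the pointers don't alias and the data is 32-byte
     * aligned is often enough for it to emit a clean SIMD loop on its own. */
    void scale_buffer(float *restrict dst, const float *restrict src,
                      size_t n, float gain)
    {
        const float *s = __builtin_assume_aligned(src, 32);
        float *d       = __builtin_assume_aligned(dst, 32);
        for (size_t i = 0; i < n; i++)
            d[i] = s[i] * gain;
    }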

kimixa

We did something slightly similar. For the very few isolated things where it makes sense (e.g. image upload/download and conversions in the GPU driver that weren't supported by the GPU, or weren't large enough to be worth firing off a GPU job), the routines were initially written in C, using compiler annotations to specify things like alignment or allowed pointer aliasing so the compiler would generate the code we wanted. GCC and Clang both support vector extensions that allow somewhat portable implementations of things like scatter-gather, shuffling elements around, or masking elements within a single register - operations that are hard to express in "plain" C clearly enough to be both readable for humans and guaranteed to generate the expected code across compiler versions.

But due to needing to support other compilers and platforms we actually ended up importing the generated asm from those source files in the actual build.

ack_complete

As a counterpoint, I regularly run into trivial cases that compilers are not able to autovectorize well:

https://gcc.godbolt.org/z/rjEqzf1hh

This is an unsigned byte saturating add. It is directly supported as a single instruction in both x86-64 and ARM64 as PADDUSB and UQADD.16B. But all compilers make a mess of it from a straightforward description, either failing to vectorize it or generating vectorized code that is much larger and slower than necessary.

This is with a basic, simple vectorization primitive. It's difficult to impossible to get compilers to use some of the more complex ones, like a rounded narrowing saturated right shift (UQRSHRN).
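
For comparison, a sketch of the scalar form alongside a hand-vectorized SSE2 version (_mm_adds_epu8 is exactly PADDUSB; n is assumed to be a multiple of 16 for brevity):

    #include <emmintrin.h>   /* SSE2 */
    #include <stddef.h>
    #include <stdint.h>

    /* Plain C: unsigned byte saturating add. */
    void satadd_scalar(uint8_t *dst, const uint8_t *a, const uint8_t *b, size_t n)
    {
        for (size_t i = 0; i < n; i++) {
            unsigned s = a[i] + b[i];
            dst[i] = s > 255 ? 255 : (uint8_t)s;
        }
    }

    /* Hand-vectorized: one PADDUSB per 16 bytes. */
    void satadd_sse2(uint8_t *dst, const uint8_t *a, const uint8_t *b, size_t n)
    {
        for (size_t i = 0; i < n; i += 16) {
            __m128i va = _mm_loadu_si128((const __m128i *)(a + i));
            __m128i vb = _mm_loadu_si128((const __m128i *)(b + i));
            _mm_storeu_si128((__m128i *)(dst + i), _mm_adds_epu8(va, vb));
        }
    }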

buserror

Oh, I agree it is not foolproof; in fact, I never understood why saturated math isn't 'standard' somewhere, even as an operator. Given that we have a 'normalisation' operator, there's always a way to find a natural-looking syntax of sorts.

But again, if you don't like the generated code, you can take the generated code and tweak it, and use that; I did it quite a few times.

holowoodman

Problem is, you have to take care to look at the compiler output and compare it to your expectations. Maybe fiddle with it a bit until it matches what you would have written yourself. Usually, it is quicker to just write it yourself...

Narishma

> Problem is, you have to take care to look at the compiler output and compare it to your expectations. Maybe fiddle with it a bit until it matches what you would have written yourself.

And keep redoing that for every new compiler or version of a compiler, or if you change compile options. Any of those things can prevent the auto-vectorization.

Narishma

IME, auto-vectorization is a fragile optimization that will silently fail under all sorts of conditions. I don't like to rely on it.

eddd-ddde

You can just store the generated binary / assembly and rely on that if you want stable code.

anonymoushn

I have no idea how to get the compiler to generate wider-than-16 pshufb in the general case, for example, and for the 16-wide case, writing the actual definition of pshufb prevents you from getting pshufb while writing a version with UB gets you pshufb.
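
For the 16-wide case, a sketch of the intrinsic route that sidesteps the guessing game entirely (SSSE3's _mm_shuffle_epi8 maps directly to pshufb):

    #include <tmmintrin.h>   /* SSSE3 */

    /* Dynamic byte shuffle: each byte of idx selects a byte of data
     * (a set high bit zeroes the lane). Compiles to a single pshufb. */
    __m128i shuffle_bytes(__m128i data, __m128i idx)
    {
        return _mm_shuffle_epi8(data, idx);
    }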


kierank

I am the author of these lessons.

Ask me anything.

ilyagr

As a user of an ARM Mac, I wonder: how much effort does it take to get such optimized code to work the same in all platforms? I guess you must have very thorough tests and fallback algorithms?

If it's so heavy in assembly, the fact that ffmpeg works on my Mac seems like a miracle. Is it ported by hand?

rbultje

> If it's so heavy in assembly, the fact that ffmpeg works on my Mac seems like a miracle. Is it ported by hand?

Not ported, but rather re-implemented. So: yes.

A bit more detail: during build, on x86, the FFmpeg binary would include hand-written AVX2 (and SSSE3, and AVX512, etc.) implementations of CPU-intensive functions, and on Arm, the FFmpeg binary would include hand-written Neon implementations (and a bunch of extensions; e.g. dotprod) instead.

At runtime (when you start the FFmpeg binary), FFmpeg "asks" the CPU what instruction sets it supports. Each component (decoder, encoder, etc.) - when used - will then set function pointers (for CPU-intensive tasks) which are initialized to a C version, and these are updated to the Neon or AVX2 version depending on what's included in the build and supported by this specific device.

So in practice, all CPU-intensive tasks for components in use will run hand-written Neon code for you, and hand-written AVX2 for me. For people on obscure devices, it will run the regular C fallback.
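
A minimal sketch of that dispatch pattern in C (the flag names and function names here are made up for illustration; FFmpeg's real init code is per-component and more involved):

    #include <stdint.h>

    /* Hypothetical CPU-feature flags, for illustration only. */
    enum { CPU_FLAG_AVX2 = 1 << 0, CPU_FLAG_NEON = 1 << 1 };

    typedef void (*add_pixels_fn)(uint8_t *dst, const uint8_t *src, int n);

    /* Portable C fallback: always correct, used when nothing better is available. */
    static void add_pixels_c(uint8_t *dst, const uint8_t *src, int n)
    {
        for (int i = 0; i < n; i++)
            dst[i] += src[i];
    }

    /* In FFmpeg these would be hand-written asm; stubbed as C here so the sketch compiles. */
    static void add_pixels_avx2(uint8_t *dst, const uint8_t *src, int n) { add_pixels_c(dst, src, n); }
    static void add_pixels_neon(uint8_t *dst, const uint8_t *src, int n) { add_pixels_c(dst, src, n); }

    /* Pick the best implementation the running CPU supports. */
    static add_pixels_fn select_add_pixels(unsigned cpu_flags)
    {
        add_pixels_fn f = add_pixels_c;           /* start with the C version */
        if (cpu_flags & CPU_FLAG_NEON) f = add_pixels_neon;
        if (cpu_flags & CPU_FLAG_AVX2) f = add_pixels_avx2;
        return f;
    }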

saagarjha

While the instructions are different, every platform will have some implementation of the basic operations (load, store, broadcast, etc.), perhaps with a different bit width. With those you can write an accelerated baseline implementation, typically (sometimes these are autogenerated/use some sort of portable intrinsics, but usually they don't). If you want to go past that then things get more complicated and you will have specialized algorithms for what is available.

cnt-dracula

Hi, thanks for your work!

I have a question: as someone who can just about read assembly but still doesn't intuitively understand how to write or decompose ideas to utilise assembly, do you have any suggestions for learning / improving this?

As in, at what point would someone realise this thing can be sped up by using assembly? If one found a function that would be really performant in assembly how do you go about writing it? Would you take the output from a compiler that's been converted to assembly or would you start from scratch? Does it even matter?

qingcharles

You're looking for the tiniest blocks of code that are run an exceptional number of times.

For instance, I used to work on graphics renderers. You'd find the bit that was called the most (writing lines of pixels to the screen) and try to jiggle the order of the instructions to decrease the number of cycles used to move X bits from system RAM to graphics RAM.

When I was doing it, branching (usually checking an exit condition on a loop) was the biggest performance killer. The CPU couldn't queue up instructions past the check because it didn't know whether it was going to go true or false until it got there.

booi

Don’t modern or even just not ancient cpus use branch prediction to work past a check knowing that the vast majority of the time the check yields the same result?

epr

The best answer to your question is some variant of "write more assembly".

When someone indicates to me that they want to learn programming, for example, I ask them how many programs they've written. The answer is usually zero, and in fact I've never heard an answer greater than 10. No one will answer with a larger number, because that selects out people who would even ask the question. If you write 1000 programs that solve real problems, you'll be at least okay. 10k and you'll be pretty damn good. 100k and you might be better than the guy who wrote the assembly manual.

For a fun answer, this is a $20 nand2tetris-esque game that holds your hand through creating multiple cpu architectures from scratch with verification (similarly to prolog/vhdl), plus your own assembly language. I admittedly always end up writing an assembler outside of the game that copies to my clipboard, but I'm pretty fussy about ux and prefer my normal tools.

https://store.steampowered.com/app/1444480/Turing_Complete/

otteromkram

This is one heck of a question.

I don't know assembly, but my advice would be to take the rote route by rewriting stuff in assembly.

Just like anything else, there's no quick path to the finish line (unless you're exceptionally gifted), so putting in time is always the best action to take.

HALtheWise

What's your perspective on variable-width SIMD instruction sets (like ARM SVE or the RISC-V V extension)? How does developer ergonomics and code performance compare to traditional SIMD? Are we approaching a world with fewer different SIMD instruction sets to program for?

janwas

Var-width SIMD can mostly be written using the exact same Highway code, we just have to be careful to avoid things like arrays of vectors and sizeof(vector).

It can be more complicated to write things which are vector-length dependent, such as sorting networks or transposes, but we have always found a way so far.

On the contrary, there are increasing numbers of ISAs, including the two LoongArch extensions LSX/LASX, AVX-512 (which is really, really good on Zen 5), and three versions of Arm SVE. RISC-V V also has lots of variants and extensions. In such a world, I would not want to have to write per-platform implementations.

201984

How does FFmpeg generate SEH tables for assembly functions on Windows? Is this something that x86asm.inc handles, or do you guys just not worry about it?

qingcharles

As someone who wrote x86 optimization code professionally in the 90s, do we need to do this manually still in 2025?

Can we not just write tests and have some LLM try 10,000 different algorithms and profile the results?

Or is an LLM unlikely to find the optimal solution even with 10,000 random seeds?

Just asking. Optimizing x86 by hand isn't the easiest, because to think through it you have to try to fit all the registers in your mind and work through the combinations. You also need to know how long each instruction combination will take; and some of these instructions have weird edge cases that run vastly longer or quicker, which is hard for a human to take into account.

Ecco

I guess your question could be rephrased as "couldn't we come up with better compilers?" (LLM-based or not, brute force based or not).

I don't have an answer but I believe that a lot of effort has been put in making (very smart) compilers already, so if it's even possible I doubt it's easy.

I also believe there are some cases where it's simply not possible for a compiler to beat handwritten assembly : indeed there is only so much info you can convey in a C program, and a developer who's aware of the whole program's behavior might be able to make extra assumptions (not written in the C code) and therefore beat a compiler. I'm sure people here would be able to come up with great practical examples of this.

magicalhippo

While using a LLM might not be the best approach, it would be interesting to know if there are some tools these days that can automate this.

Like, I should be able to give the compiler a hot loop and a week, and see what it can come up with.

One potential pitfall I can see is that there are a lot of non-local interactions in moderns systems. We have large out-of-order buffers, many caching layers, complex branch predictors, and an OS running other tasks at the same time, and a dozen other things.

What is optimal on paper might not be optimal in the real world.

dist-epoch

> Like, I should be able to give the compiler a hot loop and a week, and see what it can come up with.

There are optimization libraries which can find the optimum combination of parameters for an objective, like Optuna.

It would be enough to expose all the optimization knobs that LLVM has, and Optuna will find the optimum for a particular piece of code on a particular test payload.

danybittel

janwas

Collaborators have actually superoptimized some of the more complicated Highway ops on RISC-V, with interesting gains, but I think the approach would struggle with largish tasks/algorithms?

kierank

I have tried with Grok3 and Claude. They both seem to have an understanding of the algorithms and data patterns which is more than I expected but then just guess a solution that's often nonsensical.

saagarjha

You would need to be very careful about verifying the output. Having an LLM generate patterns and then running them through a SAT solver might work, but usually it's only really feasible for short sequences of code.



christiangenco

Hacker News is such a cool website.

Hi thank you for writing this!

Daniel_Van_Zant

I'm curious, from anyone who has done it: is there any "pleasure" to be had in learning or implementing assembly (like there is for LISP or RISC-V), or is it something you learn and implement because you want to do something else (like learning COBOL if you need to work with certain kinds of systems)? It has always piqued my interest, but I don't have a good reason in my day-to-day job to get into it. Wondering if it is worth committing some time to for the fun of it.

msaltz

I did the first 27 chapters of this tutorial just because I was interested in learning more and it was thoroughly enjoyable: https://mariokartwii.com/armv8/

I actually quite like coding in assembly now (though I haven’t done much more than the tutorial, just made an array library that I could call from C). I think it’s so fun because at that level there’s very little magic left - you’re really saying exactly what should happen. What you see is mostly what you get. It also helped me understand linking a lot better and other things that I understood at a high level but still felt fuzzy on some details.

Am now interested to check out this ffmpeg tutorial bc it’s x86 and not ARM :)

Daniel_Van_Zant

This looks to be very cool, will check it out. Wild to see it on a Mario Kart Wii site, but I guess modders/hackers are one of the groups of people who still need to work with assembly frequently.

crq-yml

Learning at least one assembly language is very rewarding because it puts you in touch with the most primitive forms of practical programming: while there are theoretical models like Turing machines or lambda calculus that are even more simplistic, the architectures that programmers actually work with have some forgiving qualities.

It isn't a thing to be scared of - assembly is verbose, not complex. Everything you do in it needs load and store, load and store, millions of times. When you add some macros and build-time checks, or put it in the context of a Forth system(which wraps an interpreter around "run chunks of assembly", enabling interactive development and scripting) - it's not that far off from C, and it removes the magic of the compiler.

I'm an advocate for going retro with it as well; an 8-bit machine in an emulator keeps the working model small, in a well-documented zone, and adds constraints that make it valuable to think about doing more tasks in assembly, which so often is not the case once you are using a 32-bit or later architecture and you have a lot of resources to throw around. People who develop in assembly for work will have more specific preferences, but beginners mostly need an environment where the documentation and examples are good. Rosetta Code has some good assembly language examples that are worth using as a way to learn.

btown

One “fun” thing about it is that it’s higher level than you think, because the actual chip may do things with branch prediction and pipelining that you can only barely control.

I remember a university course where we competed on who could have the most performant assembly program for a specific task; everyone tried various variants of loop unrolling to eke out the best performance and guide the processor away from bad branch predictions. I may or may not have hit Ballmer Peak the night before the due date and tried a setup that most others missed, and won the competition by a hair!

There’s also the incredible joy of seeing https://github.com/chrislgarry/Apollo-11 and quipping “this is a Unix system; I know this!” Knowing how to read the language of how we made it to the moon will never fade in wonder.

Short answer: yes!

brown

Learning assembly was profound for me, not because I've used it (I haven't in 30 years of coding), but because it completed the picture - from transistors to logic gates to CPU architecture to high-level programming. That moment when you understand how it all fits together is worth the effort, even if you never write assembly professionally.

renox

While I think that learning assembly is very useful, I think that one must be careful about applying assembly language concepts in an HLL like C/C++/Zig.

For example, an HLL pointer is different from an assembly pointer (1). Sure, the HLL pointer will eventually be lowered to an assembly language pointer, but it still has different semantics.

1: because you're relying on the compiler to use registers efficiently, HLL pointers must be restricted; otherwise programs would be awfully slow as soon as you used a single pointer.

Daniel_Van_Zant

This, out of everything, convinced me. The more I get the "full picture", the more I appreciate what a wondrous thing computers are. I've learned all the way down to Forth/C and from the bottom up to programming FPGAs with Verilog, so assembly may be just what I need to finally close that last gap.

daeken

I have spent the last ~25 years deep in assembly because it's fun. It's occasionally useful, but there's so much pleasure in getting every last byte where it belongs, or working through binaries that no one has inspected in decades, or building an emulator that was previously impossible. It's one of the few areas where I still feel The Magic, in the way I did when I first started out.

kevingadd

Learning assembly is really valuable even if you never write any. Looking at the x64 or ARM64 assembly generated by, e.g., the C or C# you write can help you understand its performance characteristics a lot better, and you can optimize based on that knowledge without having to drop down to a lower level.

Of course, most applications probably never need optimization to that degree, so it's still kind of a niche skill.

ghhrjfkt4k

I once used it to get a 4x speedup of sqrt computations, by using SIMD. It was quite fun, and also quite self contained and manageable.

The library sqrt handles all kinds of edge-cases which prevent the compiler from autovectorizing it.
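
A sketch of that kind of kernel (AVX; assumes n is a multiple of 8 and that inputs are non-negative finite values, i.e. exactly the edge cases libm's sqrt has to handle and a hand-rolled loop can ignore):

    #include <immintrin.h>
    #include <stddef.h>

    /* Eight square roots per iteration via _mm256_sqrt_ps. */
    void sqrt_buffer(float *dst, const float *src, size_t n)
    {
        for (size_t i = 0; i < n; i += 8)
            _mm256_storeu_ps(dst + i, _mm256_sqrt_ps(_mm256_loadu_ps(src + i)));
    }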

sigbottle

If you're working with C++ (and I'd imagine C), knowing how to debug the assembly comes up. And if you've written assembly it helps to be aware of basic patterns such as loops, variables, etc. to not get completely lost.

Compilers have debug symbols, you can tune optimization levels, etc. so it's hopefully not too scary of a mess once you objdump it, but I've seen people both use their assembly knowledge at work and get rewarded handsomely for it.

jupp0r

I personally don't think there's much value in writing assembly (vs using intrinsics), but it's been really helpful to read it. I have often used Compiler Explorer (https://godbolt.org/) to look at the assembly generated and understand optimizations that compilers perform when optimizing for performance.

frontfor

Your comment is directly contradicted by the article.

> To make multimedia processing fast. It’s very common to get a 10x or more speed improvement from writing assembly code, which is especially important when wanting to play videos in real time without stuttering.

TinkersW

They said they prefer intrinsics, which the article says are only about 10% slower (citation needed); you misunderstood and made a comparison against scalar code.

Personally I'd say the only good reason to use assembly over intrinsics is having control over calling convention, for example the windows CC is absolute trash and wastes many SIMD registers.

edward28

And how often are you doing multimedia processing?

slicktux

Kudos for the K&R reference! That was the book I bought to learn C and programming in general. I had initially tried C++ as my first language but I found it too abstract to learn because I kept asking what was going on underneath the hood.

lukaslalinsky

This is perfect. I used to know the x86 assembly at the time of 386, but for the more advanced processors, it was too complex. I'd definitely like to learn more about SIMD on recent CPUs, so this seems like a great resource.

foresto

> Note that the “q” suffix refers to the size of the pointer *(*i.e in C it represents *sizeof(*src) == 8 on 64-bit systems, and x86asm is smart enough to use 32-bit on 32-bit systems) but the underlying load is 128-bit.

I find that sentence confusing.

I assume that i.e is supposed to be i.e., but what is *(* supposed to mean? Shouldn't that be just an open parenthesis?

In what context would *sizeof(*src) be considered valid? As far as I know, sizeof never yields a pointer.

I get the impression that someone sprinkled random asterisks in that sentence, or maybe tried to mix asterisks-denoting-italics with C syntax.

kevingadd

Yes, this looks like something went wrong with the markdown itself or the conversion of the source material to markdown.

SavioMak

I think the first two asterisks are used like a footnote pair

sweeter

Wouldn't it return the size of the pointer? I would guess it's exclusively used to handle architecture differences

foresto

Strictly speaking, or maybe just the way I personally think of it, sizeof doesn't return anything. It's not a function, so it doesn't return at all. (At least, not at run time.)

Nitpicking aside, the result of sizeof(*src) would be the size of the object at which the pointer points. The type of that result is size_t. That's what makes this code from the lesson I quoted invalid:

*sizeof(*src)

That first asterisk tries to dereference the result of sizeof as though it were a pointer, but it's a size_t: an unsigned integer type. Not a pointer.

sweeter

Yea but that first asterisk is incorrect

wruza

I don’t care about the split, just wanted to say that this guide is so good. I wish I had this back when I was interested in low-low-level.

imglorp

Asm is 10x faster than C? That was definitely true at some point but is it still true today? Have compilers really stagnated so badly they can't come close to hand coded asm?

jsheard

C with intrinsics can get very close to straight assembly performance. The FFmpeg devs are somewhat infamously against intrinsics (IIRC they don't allow them in their codebase even if the performance is as good as the equivalent assembly), but even by TFA's own estimates the difference between intrinsics and assembly is on the order of 10-15%.

You might see a 10x difference if you compare meticulously optimized assembly to naive C in cases where vectorization is possible but the compiler fails to capitalize on that, which is often, because auto-vectorization still mostly sucks beyond trivial cases. It's not really a surprise that expert code runs circles around naive code though.

CyberDildonics

> You might see a 10x difference if you compare meticulously optimized assembly to naive C in cases where vectorization is possible but the compiler fails to capitalize on that

I can get far more than 10x over naive C just by reordering memory accesses. With SIMD it can be 7x more, but that can be done with ISPC, it doesn't need to be done with asm.

magicalhippo

> I can get far more than 10x over naive C

However you can write better than naive C by compiling and watching the compiler output.

I stopped writing assembly back around y2k as I was fairly consistently getting beaten by the compiler when I wrote compiler-friendly high-level code. Memory organization is also something you can control fairly well on the high-level code side too.

Sure some niches remained, but for my projects the gains were very modest compared to invested time.

UltraSane

"The FFmpeg devs are somewhat infamously against intrinsics (they don't allow them in their codebase even if the performance is as good as equivalent assembly)"

Why?

Narishma

I don't know if it's their reason but I myself avoid them because I find them harder to read than assembly language.

oguz-ismail

Have you seen C code with SIMD intrinsics? They are an eyesore

schainks

Did you read lesson one?

TL;DR They want to squeeze every drop of performance out of the CPU when processing media, and maintaining a mixture of intrinsics code and assembly is not worth the trade off when doing 100% assembly offers better performance guarantees, readability, and ease of maintenance / onboarding of developers.

1propionyl

It's not a matter of compiler stagnation. The compiler simply isn't privy to the information the assembly author makes use of to inform their design.

Put more simply: a C compiler can't infer from a plain C implementation that you're trying to do certain mathematics that could alternately be expressed more efficiently with SIMD intrinsics. It doesn't have access to your knowledge about the mathematics you're trying to do.

There are also target specific considerations. A compiler is, necessarily, a general purpose compiler. Problems like resource (e.g. register) allocation are NP-complete (equivalent to knapsack) and very few people want their compiler to spend hours upon hours searching for the absolute most optimal (if indeed you can even know that statically...) asmgen.

lukaslalinsky

This is for heavily vectorized code, using every hack possible to fully utilize the CPU. Compilers are smart when it comes to normal code, but codecs are not really normal code. Not a ffmpeg programmer, but have some background dealing with audio.

PaulDavisThe1st

> codecs are not really normal code.

Not really a fair comment. They are entirely normal code in most senses. They differ in one important way: they are (frequently) perfect examples of where "single instruction, multiple data" completely makes sense. "Do this to every sample" is the order of the day, and that is a bit odd when compared with text processing or numerical computation.

But this is true of the majority of signal processing, not just codecs. As simple a thing as increasing the volume of an audio data stream means multiplying every sample by the same value - more or less the definition of SIMD.
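
In code, that volume example is about as SIMD-friendly as it gets (a sketch with AVX intrinsics; n assumed to be a multiple of 8):

    #include <immintrin.h>
    #include <stddef.h>

    /* Multiply every sample by the same gain, eight floats at a time. */
    void apply_gain(float *samples, size_t n, float gain)
    {
        __m256 g = _mm256_set1_ps(gain);
        for (size_t i = 0; i < n; i += 8)
            _mm256_storeu_ps(samples + i,
                             _mm256_mul_ps(_mm256_loadu_ps(samples + i), g));
    }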

astrange

There's a difference because audio processing is often "massively parallel", or at least like 1024 samples at once, but in video codecs operations could be only 4 pixels at once and you have to stretch to find extra things to feed the SIMD operations.

bad_username

> codecs are not really normal code.

Codecs are pretty normal code. You can get decent performance by just writing quality idiomatic C or C++, even without asm. (I implemented a commercial x.264 codec and worked on a bunch of audio codecs.)

variadix

C compilers are still pretty bad at auto vectorization. For problems where SIMD is applicable, you can reasonably expect a 2x-16x speed up over the naive scalar implementation.

astrange

Also, if you write code with intrinsics, autovectorization can make it _worse_. E.g. a common pattern is to write a SIMD main loop and then a scalar tail, but the compiler can autovectorize that tail and mess it up.
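
The pattern in question looks roughly like this (a sketch with SSE2; the hazard described above is the compiler re-vectorizing the scalar tail):

    #include <emmintrin.h>
    #include <stddef.h>
    #include <stdint.h>

    /* SIMD main loop plus scalar tail for the leftover 0-15 bytes. */
    void add_one(uint8_t *p, size_t n)
    {
        size_t i = 0;
        __m128i one = _mm_set1_epi8(1);
        for (; i + 16 <= n; i += 16) {
            __m128i v = _mm_loadu_si128((__m128i *)(p + i));
            _mm_storeu_si128((__m128i *)(p + i), _mm_add_epi8(v, one));
        }
        for (; i < n; i++)   /* scalar tail */
            p[i]++;
    }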

janwas

Given the wider availability of masking (AVX-512, RISC-V and SVE), I figure scalar tails are no longer the preferred pattern everywhere.

jki275

Probably some very niche things. I know I can't write ASM that's 10x better than C, but I wouldn't assume no one can.

CyberDildonics

It isn't very hard to write C that is 10x better than naive C, because most programs have too many memory allocations and terrible memory access patterns. Once you sort that out you are already more than 10x ahead; then you can turn on the juice with SIMD, parallelization, and possibly optimize for memory bandwidth as well.

1propionyl

It depends on what you're trying to do. I would in general only expect such substantial speedups when considering writing computation kernels (for audio, video, etc).

Compilers today are liable in most circumstances to know many more tricks than you do. Especially if you make use of hints (e.g. "this memory is almost always accessed sequentially", "this branch is almost never taken", etc) to guide it.

jki275

Oh I definitely agree that in the vast majority of cases the compiler will probably win.

But I suspect there are cases where the super experts exist who can do things better.

astrange

Mm, those hints don't matter on modern CPUs. There's no good way for the compiler to pass them down either. There are some things like prefetch instructions, but unless you know the exact machine you're targeting, you won't know when to use them.

warble

I highly doubt it's true. I can usually approach the same speed in C if I'm working with a familiar compiler. Sometimes I can do significantly better in assembly but it's rare.

I work on bare metal embedded systems though, so maybe there's some nuance when working with bigger OS libs?

umanwizard

The difference is probably that you don’t work in an environment that supports SIMD or your code can’t benefit from it.

warble

You're correct, I don't use SIMD instructions much, but I can, and with a C compiler. So still, not sure the advantage of ASM.

bob1029

This gets even more complex once you start looking at dynamic compilations. Some of the JIT compilers have the ability to hot patch functions based upon runtime statistics. In very large, enterprisey applications with unknowns regarding how they will actually be used at build time, this can make a difference.

You can go nuclear option with your static compilations and turn on all the optimizations everywhere, but this kills inner loop iteration speed. I believe there are aspects of some dynamic compiling runtimes that can make them superior to static compilations - even if we don't care how long the build takes.

astrange

Statistics aren't magic and it's not going to find superoptimizing cases like this by using them. I think this is only helpful when you get a lot of incoming poorly written/dynamic code needing a lot of inlining, that maybe just got generated in the first place. So basically serving ads on websites.

In ffmpeg's case you can just always do the correct thing.

epolanski

I remember a series of lectures from an Intel engineer that went into how difficult it was writing assembly code for x86. He basically stated that the number of cases you can really write code that is faster than what a compiler would do is close to none.

Essentially, people think they are writing low-level code, but in reality that's not how CPUs interpret that code, so he explained how writing manual assembly pretty much always kills performance (at least on modern x86).

iforgotpassword

That's for random "I know asm so it must be faster".

If you know it really well, have already optimized everything on an algorithmic level and have code that can benefit from simd, 10x is real.

FarmerPotato

You have to consider that modern CPUs don't execute code in-order, but speculatively, in multiple instruction pipelines.

I've used Intel's icc compiler and profiler tools in an iterative fashion. A compiler like Intel's might be made to profile cache misses, pipeline utilization, branches, stalls, and supposedly improve in the next compilation.

The assembly programmer has to consider those factors. Sure would be nice to have a computer check those things!

In the old days, we only worried about cycle counts, wait states, and number of instructions.

saagarjha

That's assembly by people who learned it in 1990. Intel very much does want you writing assembly for their processors and in many ways the only way to push them hard is by doing so.

xuhu

"Assembly language of FFmpeg" leads me to think of -filter_complex. It's not for human consumption even once you know many of its gotchas (-ss and keyframes, PTS, labeling and using chain outputs, fading, mixing resolutions etc).

But then again no-one is adjusting timestamps manually in batch scripts, so a high-level script on top of filter_complex doesn't have much purpose.

chgs

I use filter-complex all the time, often in batch scripts

pdyc

What do you mean by no purpose? You can adjust them programmatically in batch scripts.

agumonkey

I remember kempf saying most of the recent development on codecs is in raw asm. Only logical that they can write some tutorials :)