Introducing architecture variants

105 comments · October 30, 2025

mobilio

The announcement is here: https://discourse.ubuntu.com/t/introducing-architecture-vari...

and key point: "Previous benchmarks we have run (where we rebuilt the entire archive for x86-64-v3) show that most packages show a slight (around 1%) performance improvement and some packages, mostly those that are somewhat numerical in nature, improve more than that."

ninkendo

> show that most packages show a slight (around 1%) performance improvement

This takes me back to arguing with Gentoo users 20 years ago who insisted that compiling everything from source for their machine made everything faster.

The consensus at the time was basically "theoretically, it's possible, but in practice, gcc isn't really doing much with the extra instructions anyway".

Then there's stuff like glibc which has custom assembly versions of things like memcpy/etc, and selects from them at startup. I'm not really sure if that was common 20 years ago but it is now.
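
For reference, the mechanism there is GNU ifunc: a resolver function runs once when the binary is loaded, picks an implementation, and regular calls pay no per-call dispatch cost afterwards. A minimal sketch in C, with made-up function names rather than glibc's actual memcpy machinery:

  #include <string.h>

  /* Two hypothetical implementations of the same routine. */
  static void frob_generic(char *dst, const char *src, size_t n) { memcpy(dst, src, n); }

  __attribute__((target("avx2")))
  static void frob_avx2(char *dst, const char *src, size_t n) { memcpy(dst, src, n); }

  /* Resolver: runs once at load time; the dynamic linker records the chosen
     pointer, so ordinary calls to frob() involve no branch. */
  static void (*resolve_frob(void))(char *, const char *, size_t)
  {
      __builtin_cpu_init();
      return __builtin_cpu_supports("avx2") ? frob_avx2 : frob_generic;
  }

  /* GNU ifunc attribute (GCC/Clang on glibc targets). */
  void frob(char *dst, const char *src, size_t n)
      __attribute__((ifunc("resolve_frob")));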

It's cool that after 20 years we can finally start using the newer instructions in binary packages, but it definitely seems to not matter all that much, still.

Amadiro

It's also because around 20 years ago there was a "reset" when we switched from x86 to x86_64. When AMD introduced x86_64, it made a bunch of the previously optional extensions (SSE up to a certain version, etc.) a mandatory part of x86_64. Gentoo systems could already be optimized for those instructions on x86, but now (2004ish) every system using x86_64 was automatically taking full advantage of all of these instructions*.

Since then we've slowly started accumulating optional extensions again; newer SSE versions, AVX, encryption and virtualization extensions, probably some more newfangled AI stuff I'm not on top of. So very slowly it might have started again to make sense for an approach like Gentoo to exist**.

* usual caveats apply; if the compiler can figure out that using the instruction is useful etc.

** but the same caveats as back then apply. A lot of software can't really take advantage of these new instructions, because newer instructions have been getting increasingly use-case-specific; and applications that can greatly benefit from them will already have alternative code paths to take advantage of them anyway. Also, a lot of the hardware acceleration work has moved to GPUs, which have a feature discovery process independent of the CPU instruction set anyway.

slavik81

The llama.cpp package on Debian and Ubuntu is also rather clever in that it's built for x86-64-v1, x86-64-v2, x86-64-v3, and x86-64-v4. It benefits quite dramatically from using the newest instructions, but the library doesn't have dynamic instruction selection itself. Instead, ld.so decides which version of libggml.so to load depending on your hardware capabilities.
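
For the curious, that's presumably glibc's hwcaps mechanism: the loader searches subdirectories like .../glibc-hwcaps/x86-64-v3/ before the generic library path. A rough way to check which level your own CPU satisfies (a sketch assuming GCC 12 or newer, which accepts the level names in __builtin_cpu_supports):

  #include <stdio.h>

  int main(void)
  {
      __builtin_cpu_init();
      const char *level = "x86-64 (baseline)";
      if (__builtin_cpu_supports("x86-64-v2")) level = "x86-64-v2";
      if (__builtin_cpu_supports("x86-64-v3")) level = "x86-64-v3";
      if (__builtin_cpu_supports("x86-64-v4")) level = "x86-64-v4";
      printf("highest x86-64 level supported: %s\n", level);
      return 0;
  }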

mikepurvis

> AVX, encryption and virtualization

I would guess that these are domain-specific enough that they can also mostly be enabled by the relevant libraries employing function multiversioning.
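
For anyone unfamiliar, GCC and Clang can generate that multiversioning automatically: target_clones emits one clone per listed target plus a resolver that picks the best one at load time. A small illustrative sketch (hypothetical function, not taken from any particular library):

  #include <stdio.h>

  /* The compiler emits a baseline clone, an AVX2 clone, an AVX-512 clone,
     and a resolver; callers just call dot() and get the best clone for
     their CPU. */
  __attribute__((target_clones("default", "avx2", "avx512f")))
  double dot(const double *a, const double *b, int n)
  {
      double s = 0.0;
      for (int i = 0; i < n; i++)
          s += a[i] * b[i];   /* auto-vectorized differently in each clone */
      return s;
  }

  int main(void)
  {
      double a[4] = {1, 2, 3, 4}, b[4] = {4, 3, 2, 1};
      printf("%f\n", dot(a, b, 4));
      return 0;
  }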

ploxiln

FWIW the cool thing about gentoo was the "use-flags", to enable/disable compile-time features in various packages. Build some apps with GTK or with just the command-line version, with libao or pulse-audio, etc. Nowadays some distro packages have "optional dependencies" and variants like foobar-cli and foobar-gui, but not nearly as comprehensive as Gentoo of course. Learning about some minor custom CFLAGS was just part of the fun (and yeah some "funroll-loops" site was making fun of "gentoo ricers" way back then already).

I used Gentoo a lot, jeez, between 20 and 15 years ago, and the install guide guiding me through partitioning disks, formatting disks, unpacking tarballs, editing config files, and running grub-install etc, was so incredibly valuable to me that I have trouble expressing it.

mpyne

I still use Gentoo for that reason, and I wish some of those principles around handling of optional dependencies were more popular in other Linux distros and package ecosystems.

There's lots of software applications out there whose official Docker images or pip wheels or whatever bundle everything under the sun to account for all the optional integrations the application has, and it's difficult to figure out which packages can be easily removed if we're not using the feature and which ones are load-bearing.

suprjami

I somehow have the memory that there was an extremely narrow time window where the speedup was tangible and quantifiable for Gentoo, as they were the first distro to ship some very early gcc optimisation. However it's open source software so every other distro soon caught up and became just as fast as Gentoo.

oivey

This should build a lot more incentive for compiler devs to try and use the newer instructions. When everyone uses binaries compiled without support for optional instruction sets, why bother putting much effort into developing for them? It’ll be interesting to see if we start to see more of a delta moving forward.

Seattle3503

And application developers to optimize with them in mind?

harha

Would it make a difference if you compile the whole system vs. just the programs you want optimized?

As in, are there any common libraries or parts of the system that typically slow things down? Or was this more targeted at a time when hardware was more limited, so improving everything would have made things feel faster in general?

juujian

Are there any use cases where that 1% is worth any hassle whatsoever?

wongarsu

If every computer built in the last decade gets 1% faster and all we have to pay for that is a bit of one-off engineering effort and a doubling of the storage requirement of the Ubuntu mirrors, that seems like a huge win.

If you aren't convinced by your Ubuntu being 1% faster, consider how many servers, VMs and containers run Ubuntu. Millions of servers using a fraction of a percent less energy multiplies out to a lot of energy.

vladms

Don't have a clear opinion, but you have to factor in all the issues that can be due to different builds of the same software. Think of unexposed bugs in the whole stack (that can include compiler bugs but also software bugs related to numerical computation or just uninitialized memory). There are enough heisenbugs without worrying that half the servers run on slightly different software.

It's not for nothing that some time ago "write once, run everywhere" was a selling proposition (not that it was actually working in all cases, but definitely working better than alternatives).

sumtechguy

That comes out to roughly 1.7 hours saved per week (1% of 168 hours) if you are running full tilt. That seems like an easy win.

Aissen

You need 100 servers. Now you need to only buy 99. Multiply that by a million, and the economies of scale really matter.

iso1631

1% is less than the difference between negotiating with a hangover or not.

PeterStuer

A lot of improvements are very incremental. In aggregate, they often compound and are very significant.

If you would only accept 10x improvements, I would argue progress would be very small.

ilaksh

They did say some packages were more. I bet some are 5%, maybe 10 or 15. Maybe more.

Well, one example could be llama.cpp. It's critical for it to use every single extension the CPU has to move more bits at a time. When I installed it, I had to compile it.

This might make it more practical to start offering OS packages for things like llama.cpp.

I guess people that don't have newer hardware aren't trying to install those packages. But maybe the idea is that packages should not break on certain hardware.

Blender might be another one like that, which really needs the extensions for many things. But maybe you do want to allow it to be used on some oldish hardware anyway, because it still has uses that are valid on those machines.

locknitpicker

> Are there any use cases where that 1% is worth any hassle whatsoever?

I don't think this is a valid argument to make. If you were doing the optimization work then you could argue tradeoffs. You are not, Canonical is.

Your decision is which image you want to use, and Canonical is giving you a choice. Do you care about which architecture variant you use? If you do, you can now pick the one that works best for you. Do you want to win an easy 1% performance gain? Now you have that choice.

godelski

  > where that 1% is worth any hassle
You'll need context to answer your question, but yes there are cases.

Let's say you have a process that takes 100hrs to run and costs $1k/hr. You save an hour and $1k every time you run the process. You're going to save quite a bit. You don't just save the time to run the process, you save literal time and everything that that costs (customers, engineering time, support time, etc).

Let's say you have a process that takes 100ns and similarly costs $1k/hr. You now run in 99ns. Running the process 36 million times is going to be insignificant. In this setting even a 50% optimization probably isn't worthwhile (unless you're a high frequency trader or something)

This is where the saying "premature optimization is the root of all evil" comes from! The "premature" part is often disregarded and the rest of the context goes with it. Here's more context to Knuth's quote[0].

  There is no doubt that the holy grail of efficiency leads to abuse. Programmers waste enormous amounts of time thinking about, or worrying about, the speed of noncritical parts of their programs, and these attempts at efficiency actually have a strong negative impact when debugging and maintenance are considered. We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil.

  Yet we should not pass up our opportunities in that critical 3%. A good programmer will not be lulled into complacency by such reasoning, he will be wise to look carefully at the critical code; but only after that code has been identified.
Knuth said: "Get a fucking profiler and make sure that you're optimizing the right thing". He did NOT say "don't optimize".

So yes, there are plenty of times where that optimization will be worthwhile. The percentages don't mean anything without the context. Your job as a programmer is to determine that context. And not just in the scope of your program, but in the scope of the environment you expect a user to be running on. (i.e. their computer probably isn't entirely dedicated to your program)

[0] https://dl.acm.org/doi/10.1145/356635.356640 (alt) https://sci-hub.se/10.1145/356635.356640

adgjlsfhk1

it's very non-uniform: 99% see no change, but 1% see 1.5-2x better performance

2b3a51

I'm wondering if 'somewhat numerical in nature' relates to LAPACK/BLAS and similar libraries that are actually dependencies of a wide range of desktop applications?

Insanity

I read it as, across the board a 1% performance improvement. Not that only 1% of packages get a significant improvement.

gwbas1c

> some packages, mostly those that are somewhat numerical in nature, improve more than that

Perhaps if you're doing CPU-bound math you might see an improvement?

dang

Thanks - we've merged the comments from https://news.ycombinator.com/item?id=45772579 into this thread, which had that original source.

pizlonator

That 1% number is interesting but risks missing the point.

I bet you there is some use case of some app or library where this is like a 2x improvement.

theandrewbailey

A reference for x86-64 microarchitecture levels: https://en.wikipedia.org/wiki/X86-64#Microarchitecture_level...

x86-64-v3 is AVX2-capable CPUs.

jsheard

> x86-64-v3 is AVX2-capable CPUs.

Which unfortunately extends all the way to Intel's newest client CPUs, since they're still struggling to ship their own AVX-512 instructions, which are required for v4. Meanwhile AMD has been on v4 for two generations already.

theandrewbailey

At least Intel and AMD have settled on a mutually supported subset of AVX-512 instructions.

wtallis

The hard part was getting Intel and Intel to agree on which subset to keep supporting.

zozbot234

What are the changes to dpkg and apt? Are they being shared with Debian? Could this be used to address the pesky armel vs. armel+hardfloat vs. armhf issue, or for that matter, the issue of i486 vs. i586 vs. i686 vs. the many varieties of MMX and SSE extensions for 32-bit?

(There is some older text in the Debian Wiki https://wiki.debian.org/ArchitectureVariants but it's not clear if it's directly related to this effort)

Denvercoder9

Even if technically possible, it's unlikely this will be used to support any of the variants you mentioned in Debian. Both i386 and armel are effectively dead: i386 is reduced to a partial architecture only for backwards compatibility reasons, and armel has been removed entirely from development of the next release.

zozbot234

What you said is correct wrt. official support, but Debian also has an unofficial ports infrastructure that could be repurposed towards enabling Debian for older architecture variants.

bobmcnamara

This would allow mixing armel and softvfp ABIs, but not hard float ABIs, at least across compilation unit boundaries (that said, GCC never seems to optimize ABI bottlenecks within a compilation unit anyway)

Hasz

Getting a 1% across the board general purpose improvement might sound small, but is quite significant. Happy to see Canonical invest more heavily in performance and correctness.

Would love to see which packages benefited the most in terms of percentile gain and install base. You could probably back out a kWh/tons of CO2 saved metric from it.

brucehoult

So now they can support RISC-V RVA20 and RVA23 in the same distro?

All the fuss about Ubuntu 25.10 and later being RVA23 only was about nothing?

rock_artist

So if I got it right, this is mostly a way to have branches within a specific release for various levels of CPUs and their support of SIMD and other modern opcodes.

And if I have it right, the main advantage should come with the package manager and open-source software, where the compiled binaries would be branched to benefit from and optimize for newer CPU features.

Still, this would be most noticeable for apps that benefit from those features, such as audio DSP or, as mentioned, SSL and crypto.

jeffbee

I would expect compression, encryption, and codecs to have the least noticeable benefit because these already do runtime dispatch to routines suited to the CPU where they are running, regardless of the architecture level targeted at compile time.

WhyNotHugo

OTOH, you can remove the runtime dispatching logic entirely if you compile separate binaries for each architecture variant.

Especially the binaries for the newest variant, since they can entirely drop the conditionals/branching for all older variants.

jeffbee

That's a lot of surgery. These libraries do not all share one way to do it. For example zstd will switch to static BMI2 dispatch if it was targeting Haswell or later at compile time, but other libraries don't have that property and will need defines.
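
Roughly the pattern being described, with hypothetical function names rather than zstd's actual internals: if the build baseline already implies BMI2, dispatch collapses to a direct call at compile time; otherwise the CPU is probed once at runtime.

  #include <stdio.h>

  /* Stand-ins for a generic and a BMI2-tuned code path. */
  static int frob_generic(int x) { return x * 2; }

  __attribute__((target("bmi2")))
  static int frob_bmi2(int x) { return x * 2; }  /* imagine BMI2 intrinsics here */

  #if defined(__BMI2__)
  /* Built with -march=x86-64-v3 (or -mbmi2): dispatch resolved at compile time. */
  static int frob(int x) { return frob_bmi2(x); }
  #else
  /* Generic build: probe the CPU once, then branch on a cached flag. */
  static int frob(int x)
  {
      static int has_bmi2 = -1;
      if (has_bmi2 < 0) {
          __builtin_cpu_init();
          has_bmi2 = __builtin_cpu_supports("bmi2");
      }
      return has_bmi2 ? frob_bmi2(x) : frob_generic(x);
  }
  #endif

  int main(void) { printf("%d\n", frob(21)); return 0; }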

dfc

> you will not be able to transfer your hard-drive/SSD to an older machine that does not support x86-64-v3. Usually, we try to ensure that moving drives between systems like this would work. For 26.04 LTS, we’ll be working on making this experience cleaner, and hopefully provide a method of recovering a system that is in this state.

Does anyone know what the plans are to accomplish this?

dmoreno

If I were them, I would make sure the v3 instructions are not used until late in the boot process, and provide some apt command that makes sure all installed programs are in the right subarchitecture for the running system, reinstalling as necessary.

But that does not sound like a simple solution for non-technical users.

Anyway, non-technical users moving an installation to another, less capable computer? That sounds weird.

physicsguy

This is quite good news, but it's worth remembering that it's a rare piece of software in the modern scientific/numerical world that can be compiled against the versions in distro package managers, as distro versions can lag upstream by months after a release.

If you're doing that sort of work, you also shouldn't use pre-compiled PyPI packages for the same reason - you leave a ton of performance on the table by not targeting the micro-architecture you're running on.

PaulHoule

My RSS reader trains a model every week or so and takes 15 minutes total with plain numpy, scikit-learn and all that. Intel MKL can do the same job in about half the time as the default BLAS. So you are looking at a noticeable performance boost, but a zero-bullshit install with uv is worth a lot. If I was interested in improving the model then yeah, I might need to train 200 of them interactively and I'd really feel the difference. Thing is, the model is pretty good as it is, and to make something better I'd have to think long and hard about what ‘better’ means.

ciaranmca

Out of interest, what reader is this? Sounds interesting

zipy124

Yup, if you're using OpenCV, for instance, compiling instead of using pre-built binaries can result in 10x or more speed-ups once you take into account AVX/threading/math/BLAS libraries, etc...

oofbey

Yup. The irony is that the packages which are difficult to build are the ones that most benefit from custom builds.

niwtsol

Thanks for sharing this. I'd love to learn more about micro-architectures and instruction sets - would you have any recommendations for books or sources that would be a good starting place?

colechristensen

Most of the scientific numerical code I ever used had been in use for decades and would compile on a unix variant released in 1992, much less the distribution version of dependencies that were a year or two behind upstream.

owlbite

Very true, but a lot of stuff builds on a few core optimized libraries like BLAS/LAPACK, and picking up a build of those targeted at a modern microarchitecture can give you 10x or more compared to a non-targeted build.

That said, most of those packages will just read the hardware capability from the OS and dispatch an appropriate codepath anyway. You maybe save some code footprint by restricting the number of codepaths it needs to compile.

jeffbee

I wonder who downvoted this. The juice you are going to get from building your core applications and libraries to suit your workload is going to be far larger than the small improvements available from microarchitectural targeting. For example, on Ubuntu I have some ETL pipelines that need libxml2. Linking it statically into the application cuts the ETL runtime by 30%. Essentially none of the practices of Debian/Ubuntu Linux are what you'd choose for efficiency. Their practices are designed around some pretty old and arguably obsolete ideas about ease of maintenance.

smlacy

I presume the motivation is performance optimization? It would be more compelling to include some of the benefits in the announcement?

embedding-shape

They do mention it in the linked announcement, although not really highlighted, just as a quick mention:

> As a result, we’re very excited to share that in Ubuntu 25.10, some packages are available, on an opt-in basis, in their optimized form for the more modern x86-64-v3 architecture level

> Previous benchmarks we have run (where we rebuilt the entire archive for x86-64-v3) show that most packages show a slight (around 1%) performance improvement and some packages, mostly those that are somewhat numerical in nature, improve more than that.

stabbles

Seems like this is not using glibc's hwcaps (where shared libraries are located in microarch-specific subdirs).

To me hwcaps feels like very unfortunate feature creep in glibc now. I don't see why it was ever added, given that it's hard to compile only shared libraries for a specific microarch, and it does not benefit executables. Distros seem to avoid it. All it does is cause unnecessary stat calls when running an executable.

ElijahLynn

I clicked on this article expecting an M series variant for Apple hardware...