Elementary Functions and Not Following IEEE754 Floating-Point Standard (2020)
32 comments · February 11, 2025 · jcranmer
magicalhippo
From the referenced slides[1]:
The 2012 discovery of the Higgs boson at the ATLAS experiment in the LHC relied crucially on the ability to track charged particles with exquisite precision (10 microns over a 10m length) and high reliability (over 99% of roughly 1000 charged particles per collision correctly identified).
In an attempt to speed up the calculation, researchers found that merely changing the underlying math library (which should only affect at most the last bit) resulted in some collisions being missed or misidentified.
I was a user contributing to the LHC@Home BOINC project[2], where they ran into similar problems. They simulated beam stability, so iterated on the position of the simulated particles for millions of steps. As normal in BOINC each work unit is computed at least three times and if the results don't match the work unit is queued for additional runs.
They noticed that they got a lot of work units that failed the initial check compared to other BOINC projects. Digging into it, they noticed that if a work unit was computed by CPUs from the same manufacturer, i.e. all Intel, then the results matched as expected. But if the work unit had been processed by mixed CPUs, i.e. at least one run on Intel and one on AMD, they very often disagreed.
That's when they discovered[3] this very issue about how the rounding of various floating point functions differed between vendors.
After switching to crlibm[4] for the elementary functions they used, the mixed-vendor problem went away.
[1]: https://www.davidhbailey.com/dhbtalks/dhb-icerm-2020.pdf
[2]: https://en.wikipedia.org/wiki/LHC@home
[3]: https://accelconf.web.cern.ch/icap06/papers/MOM1MP01.pdf
[4]: https://ens-lyon.hal.science/ensl-01529804/file/crlibm.pdf
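As a toy sketch of the failure mode (mine, not the actual LHC@Home code): iterate a sensitive computation once with the system sin() and once with a copy nudged by one ULP, standing in for a second vendor's libm that differs only in the last bit.

    #include <math.h>
    #include <stdio.h>

    /* Stand-in for "the same function from a different vendor's libm":
       still accurate to about a ULP, but not always the same bits. */
    static double sin_vendor_b(double x) {
        return nextafter(sin(x), INFINITY);
    }

    int main(void) {
        const double pi = 3.141592653589793;
        double a = 0.3, b = 0.3;                /* identical starting points */
        for (int i = 0; i < 1000; i++) {
            a = 0.99 * sin(pi * a);             /* trajectory on "vendor A" */
            b = 0.99 * sin_vendor_b(pi * b);    /* trajectory on "vendor B" */
        }
        /* The chaotic iteration amplifies the last-bit disagreement until the
           trajectories are completely different, which is why mixed-vendor
           results failed validation. */
        printf("vendor A: %.17g\nvendor B: %.17g\n", a, b);
        return 0;
    }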
zokier
That Zimmermann paper is far more useful than the article. Notably LLVM libm is correctly rounded for most single precision ops.
Notable omissions are the crlibm/rlibm/core-math etc. libraries, which claim to be more correct, but I suppose we can already be pretty confident about them.
lifthrasiir
CORE-MATH is working directly with LLVM devs to get their routines to the LLVM libm, so no additional column is really required.
AlotOfReading
I've found that LLVM is slightly worse than GCC on reproducibility once you get past the basic issues in libm. For example, LLVM will incorrectly round sqrt on ppc64el in some unusual circumstances:
https://github.com/J-Montgomery/rfloat/blob/8a58367db32807c8...
dzaima
On the ppc64 thing, some ways to avoid it: https://godbolt.org/z/fx88rYz5v
dzaima
...on -ffast-math. Of course you'll have arbitrary behavior of (at least) a couple ULPs on -ffast-math, but it'll be faster (hopefully)! That's, like, the flag's whole idea.
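As a toy illustration of that arbitrariness (my example, nothing to do with the godbolt link): under -ffast-math the compiler is free to reassociate this reduction, e.g. to vectorize it, and since floating-point addition isn't associative the printed sum can differ by a few ULPs across flags, compilers, and targets.

    #include <stdio.h>

    int main(void) {
        double sum = 0.0;
        /* Partial sums of 1/i^2 (tends toward pi^2/6).  With strict FP
           semantics the additions happen in source order; with -ffast-math
           the compiler may reorder or split them, changing the rounding. */
        for (int i = 1; i <= 1000000; i++)
            sum += 1.0 / ((double)i * (double)i);
        printf("%.17g\n", sum);
        return 0;
    }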
pclmulqdq
The current correctly rounded libraries work well with 32-bit float, but only provably produce correct rounding with 64-bit float in some cases. It turns out it's rather difficult to prove that you will produce a correctly rounded result in all cases, even if you have an algorithm that works. That means that each of these libraries has only a subset of operations.
lifthrasiir
> All the standard Maths libraries which claim to be IEEE compliant I have tested are not compliant with this constraint.
It is possible to read the standard in a way that they still remain compliant. The standard, as of IEEE 754-2019, does not require recommended operations to be implemented in accordance with the standard, in my understanding; implementations are merely recommended to ("should") define recommended operations with the required ("shall") semantics. So if an implementation doesn't claim that a given recommended operation is indeed compliant, the implementation remains compliant, in my understanding.
One reason for which I think this might be the case is that not all recommended operations have a known correctly rounded algorithm, in particular bivariate functions like pow (in fact, pow is the only remaining one at the moment IIRC). Otherwise no implementation would ever be compliant as long as those operations are defined!
stephencanon
Right; IEEE 754 (2019) _recommends_ that correctly rounded transcendental functions be provided, but does not require it. The next revision of the standard may require that a subset of correctly-rounded transcendental functions be provided (this is under active discussion), but would still not specify how they are bound in the language (i.e. a C stdlib might provide both `pow` and `cr_pow` or similar), and might not require that all functions be correctly rounded over their entire range (e.g. languages might require only that `cr_sin` be correctly rounded on [-π,π]).
adgjlsfhk1
It's really hard for me to think that this is a good solution. Normal users should be using Float64 (where there is no similar solution), and Float32 should only be used when Float64 computation is too expensive (e.g. GPUs). In such cases, it's hard to believe that doing the math in Float64 and converting will make anyone happy.
jefftk
> Float32 should only be used when Float64 computation is too expensive
Or when you're bottlenecked on memory and want to store each number in four bytes instead of eight.
saagarjha
Single-precision is too expensive for GPUs, unfortunately.
pclmulqdq
It may be too expensive for ML (or really not Pareto-optimal for ML), but people use GPUs for a lot of things.
winocm
Off topic but tangentially related, here’s a fun fact: DEC Alpha actually ends up transparently converting IEEE single-precision floats (S-float) to double-precision floats (T-float, or register format) when performing register loads and operations.
dzaima
For 32-bit float single-operand ops it's simple enough to rely on a brute-force check. For 64-bit floats, though, while the goal of "sin(x) in library A really should match sin(x) in library B" is nice, it essentially ends up as "sin(x) in library A really should match... ...well, there's no option other than library A if I want sin(x) to not take multiple microseconds of bigint logic".
Though, a potentially useful note for two-argument functions: a correctly-rounded implementation means that it's possible to specialize certain constant operands to much better implementations while preserving the same result (log(2,x), log(10,x), pow(x, 0.5), pow(x, 2), pow(x, 3), etc.; floor(log(int,x)) being potentially especially useful if an integer log isn't available).
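As a sketch of what that specialization can look like (assuming a correctly-rounded pow as the baseline, which stock libms are not, and with made-up function names):

    #include <math.h>

    /* pow(x, 2.0): a single IEEE multiplication is itself correctly rounded,
       so x*x matches the correctly-rounded x^2 bit for bit. */
    double pow_const_2(double x) { return x * x; }

    /* pow(x, 0.5): IEEE-754 sqrt must be correctly rounded, so for positive
       finite x it matches the correctly-rounded pow(x, 0.5) exactly (pow and
       sqrt still differ on the special cases -0 and -infinity). */
    double pow_const_half(double x) { return sqrt(x); }

    /* pow(x, 3.0) is trickier: x*x*x involves two roundings and can differ
       from the correctly-rounded x^3, so it needs a dedicated routine,
       which is still far cheaper than a general pow. */

The point is that these substitutions only provably preserve results when the generic routine is correctly rounded; against a 1-ulp libm they can silently change the answer.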
vogu66
Would working with unums side-step the issue for math problems?
jcranmer
No, the fundamental issue is the difficulty of proving correctly-rounded results, which means implementations end up returning different results in practice. Unums do nothing to address that issue, except possibly by virtue of not having multiple implementations in the first place.
stncls
Floating-point is hard, and standards seem like they cater to lawyers rather than devs. But a few things are slightly misleading in the post.
1. It correctly quotes the IEEE754-2008 standard:
> A conforming function shall return results correctly rounded for the applicable rounding direction for all operands in its domain
and even points out that the citation is from "Section 9. *Recommended* operations" (emphasis mine). But then it goes on to describe this as a "*requirement*" of the standard (it is not). This is not just a typo; the post actually implies that implementations not following this recommendation are wrong:
> [...] none of the major mathematical libraries that are used throughout computing are actually rounding correctly as demanded in any version of IEEE 754 after the original 1985 release.
or:
> [...] ranging from benign disregard for the standard to placing the burden of correctness on the user who should know that the functions are wrong: “It is following the specification people believe it’s following.”
As far as I know, IEEE754 mandates correct rounding for elementary operations and sqrt(), and only for those.
2. All the mentions of 1 ULP in the beginning are a red herring. As the article itself mentions later, the standard never cares about 1 ULP. Some people do care about 1 ULP, just because it is something that can be achieved at a reasonable cost for transcendentals, so why not do it. But not the standard.
3. The author seems to believe that 0.5 ULP would be better than 1 ULP for numerical accuracy reasons:
> I was resounding told that the absolute error in the numbers are too small to be a problem. Frankly, I did not believe this.
I would personally also tell that to the author. But there is a much more important reason why correct rounding would be a tremendous advantage: reproducibility. There is always only one correct rounding. As a consequence, with correct rounding, different implementations return bit-for-bit identical results. The author even mentions falling victim to FP non-reproducibility in another part of the article.
4. This last point is excusable because the article is from 2020, but "solving" the fp32 incorrect-rounding problem by using fp64 is naive (not guaranteed to always work, although it will with high probability) and inefficient. It also does not say what to do for fp64. We can do correct rounding much faster now [1, 2]. So much faster that it is getting really close to non-correctly-rounded performance, so some libm may one day decide to switch to that.
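For concreteness, the fp64 pattern being called naive above looks like this (my sketch, not the article's code):

    #include <math.h>

    /* Compute sinf by going through double and rounding back to float.
       This usually yields the correctly rounded float, but it is not
       guaranteed: when the true sin(x) lies very close to a float rounding
       boundary, the intermediate rounding to double can tip the final float
       the wrong way (double rounding), and the double sin() used here is
       itself only about 1 ulp accurate, which makes failures more likely. */
    float naive_cr_sinf(float x) {
        return (float)sin((double)x);
    }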
SuchAnonMuchWow
3. >> I was resounding told that the absolute error in the numbers are too small to be a problem. Frankly, I did not believe this.
> I would personally also tell that to the author. But there is a much more important reason why correct rounding would be a tremendous advantage: reproducibility.
This is also what the author wants, based on his own experiences, but failed to realize/state explicitly: "People on different machines were seeing different patterns being generated which meant that it broke an aspect of our multiplayer game."
So yes, the reasons mentioned as a rationale for more accurate functions are in fact a rationale for reproducibility across hardware and platforms. For example, going from 1 ulp errors to 0.6 ulp errors would not help the author at all, but having reproducible behavior would (even with an increased worst-case error).
Correctly rounded functions mean the rounding error is the smallest possible, and as a consequence every implementation will always return exactly the same results: this is the main reason why people (and the author) advocate for correctly rounded implementations.
jefftk
> "solving" the fp32 incorrect-rounding problem by using fp64 is naive (not guaranteed to always work, although it will with high probability)
The key thing is there are only 2^32 float32s, so you can check all of them. It sounds to me like the author did this, and realized they needed some tweaks for correct answers with log.
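A sketch of that exhaustive check (the "reference" here is the naive double-based version, purely for illustration; a real harness would compare against a correctly-rounded reference such as MPFR or CORE-MATH):

    #include <math.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    int main(void) {
        uint64_t mismatches = 0;
        for (uint64_t bits = 0; bits <= 0xFFFFFFFFull; bits++) {
            uint32_t b = (uint32_t)bits;
            float x;
            memcpy(&x, &b, sizeof x);           /* reinterpret bits as float */
            if (isnan(x)) continue;             /* skip NaN payloads */
            float got = sinf(x);
            float ref = (float)sin((double)x);  /* stand-in reference */
            if (memcmp(&got, &ref, sizeof got) != 0)
                mismatches++;                   /* bitwise disagreement */
        }
        printf("inputs where sinf() differs from the reference: %llu\n",
               (unsigned long long)mismatches);
        return 0;
    }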
It's worth noting that the C standard explicitly disclaims correct rounding for these IEEE 754 functions (C23§F.3¶20).
Also, there's a group of people who have been running tests on common libms, reporting their current accuracy states here: https://members.loria.fr/PZimmermann/papers/accuracy.pdf (that paper is updated ~monthly).