C and C++ prioritize performance over correctness (2023)
126 comments
· March 28, 2025
meling
I love the last sentence: “…, if you set yourself the goal of crossing an 8-lane freeway blindfolded, it does make sense to focus on doing it as fast as you possibly can.”
ptsneves
Most comments like this fall into a form of straw man fallacy. They assume performance in C/C++ can only come at the cost of correctness, then go on to show examples of such failures to prove the point, and there are many. But the event space also contains sets of cases where you can be correct and faster.
There are also sets of cases where the risk of failure is an acceptable tradeoff. Even the joke can come up empty: if you want to cross the 8-lane freeway as fast as possible and don't mind failing/dying sometimes, it might be worth it.
Also, the overemphasis on security is starting to be a pet peeve of mine. It sounds like all software should be secure by default, and that is also false. When I develop a private project, or a project with an irrelevant threat scenario, I don't want to pay the security setup price, but it seems nowadays security has become a tax. Cases in point:
I cannot move my hard disk from one computer to another without jumping through hoops, because Secure Boot was enabled by default.
I cannot install self-signed certificates for my localhost without jumping through hoops.
I cannot access many browser APIs from an HTTP endpoint, even if that endpoint is localhost. In that case I cannot do anything about it; the browser just knows better, for my own safety.
I cannot have a localhost server serving mixed content. I mean, come on, why should I care about CORS locally for some Google font?
I cannot use Docker BuildKit with a private registry over HTTP, and to use a self-signed certificate I need to rebuild the intermediate container.
I must be nagged to use the latest compatibility-breaking library version for my local picture server because of a new DoS vulnerability.
[...] On and on, and being a hacker/tinkerer becomes a nightmare of proselytizing tools and communities. I am a build engineer at heart, and even I sometimes just want to develop and hack, not create the next secure thing that doesn't even start up.
This is like being in my own home and the contractor forcing me to use keys to open every door to the kitchen, bedroom, or toilet. The threat model is just not applicable; let me be.
mananaysiempre
That is more glib than insightful, I think: the programming equivalent of “as fast as you can” in this metaphor would likely be measured in lines of code, not CPU-seconds.
z3phyr
Always think from the user's perspective. For a lot of applications, when a user does something, it should happen with minimal latency.
skavi
in certain situations, latency is an aspect of correctness. HFT and robotics come to mind.
dang
Discussed at the time:
C and C++ prioritize performance over correctness - https://news.ycombinator.com/item?id=37178009 - Aug 2023 (543 comments)
gavinhoward
As a pure C programmer [1], let me post my full agreement: https://gavinhoward.com/2023/08/the-scourge-of-00ub/ .
[1]: https://gavinhoward.com/2023/02/why-i-use-c-when-i-believe-i...
muldvarp
To quote your article:
> The question is: should compiler authors be able to do whatever they want? I argue that they should not.
My question is: I see so many C programmers bemoaning the fact that modern compilers exploit undefined behavior to the fullest extent. I almost never see those programmers actually writing a "reasonable"/"friendly"/"boring" C compiler. Why is no one willing to put their ~money~ time where their mouth is?
gavinhoward
I was willing to, but the consensus was that people wouldn't use it.
C and C++ programmers complain about UB, but they don't really care.
TCC is probably the closest thing we have to that, and for me personally, I made all of my stuff build on it. I even did extra work to add a C99 (instead of C11) mode to make TCC work.
muldvarp
Shouldn't your blog post then condemn the C community at large for failing to use a more "reasonable" C compiler instead of complaining about compiler authors that "despite holding the minority world view, [...] have managed to force it on us by fiat because we have to use their compilers"?
You don't have to use their compilers. Most people do, because they either share this "minority" world view or don't care.
bsder
> I almost never see those programmers actually writing a "reasonable"/"friendly"/"boring" C compiler. Why is no one willing to put their ~money~ time where their mouth is?
Because it is not much harder to simply write a new language, and you get to discard all the baggage? Lots of verbiage gets spilled about undefined behavior, but things like the preprocessor and the lack of slices are far bigger faults of C.
Proebsting's Law posits that compiler optimizations double program performance only every 18 years or so. That means you can implement the smallest handful of compiler optimizations in your new language and still be within a factor of 2 of the best compilers. And people are doing precisely that (see: Zig, Jai, Odin, etc.).
WalterGillman
I'm willing to write a C compiler that detects all undefined behavior but instead of doing something sane like reporting it or disallowing it just adds the code to open a telnet shell with root privileges. Can't wait to see the benchmarks.
pcwalton
I was disappointed that Russ didn't mention the strongest argument for making arithmetic overflow UB. It's a subtle thing that has to do with sign extension and loops. The best explanation is given by ryg here [1].
As a summary: the most common way given in C textbooks to iterate over an array is "for (int i = 0; i < n; i++) { ... array[i] ... }". The problem comes from these three facts: (1) i is a signed integer; (2) i is 32-bit; (3) pointers nowadays are usually 64-bit. That means a compiler that can't prove the increment on "i" won't overflow (perhaps because "n" was passed in as a function parameter) has to sign-extend "i" on every loop iteration, which adds extra instructions in what could be a hot loop, especially since you can't fold a sign-extending index into an addressing mode on x86. Since this pattern is so common, compiler developers are loath to change the semantics here--even a 0.1% fleet-wide slowdown has a cost to FAANG measured in the millions.
Note that the problem goes away if you use pointer-width indices for arrays, which many other languages do. It also goes away if you use C++ iterators. Sadly, the C-like pattern persists.
[1]: https://gist.github.com/rygorous/e0f055bfb74e3d5f0af20690759...
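A minimal sketch of the two patterns (function names mine; assuming 32-bit int and 64-bit pointers):

```c
#include <stddef.h>

/* Textbook pattern: 32-bit signed index, 64-bit pointers. Unless the
   compiler may assume i++ never overflows (which signed-overflow UB
   grants), it has to re-sign-extend i to 64 bits on every iteration
   before computing the address of a[i]. */
long sum_int(const long *a, int n) {
    long s = 0;
    for (int i = 0; i < n; i++)
        s += a[i];
    return s;
}

/* Pointer-width index: the sign-extension question never arises. */
long sum_size(const long *a, size_t n) {
    long s = 0;
    for (size_t i = 0; i < n; i++)
        s += a[i];
    return s;
}
```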
AlotOfReading
There are half a dozen better ways this could have been addressed anytime in the past decade.
Anything from making it implementation-defined, to unspecified behavior, to just throwing a diagnostic warning, or having a clang-tidy performance rule.
I'm also incredibly suspicious of the idea that FAANG in particular won't accept minor compiler slowdowns for useful safety. Google and Apple for example have both talked publicly about how they're pushing bounds checking by default internally and you can see that in the Apple Buffer hardening RFC and the Abseil hardening modes.
pcwalton
> Anything from making it implementation defined to unspecified behavior to just throwing a diagnostic warning or having a clang-tidy performance rule.
To be clear, you're proposing putting a warning on "for (int i = 0; i < n; i++)"? The most common textbook way to write a loop in C?
> I'm also incredibly suspicious of the idea that FAANG in particular won't accept minor compiler slowdowns for useful safety.
I worked on compilers at FAANG for quite a while and know quite well how these teams justify their existence. Telling executives "we cost the company $1M a quarter, but good news, we made the semantics of the language easier for programming language nerds to understand" instead of "we saved the company $10M last quarter" is an excellent strategy for getting the team axed next time downsizing comes around.
UncleMeat
I know the team at Google that is doing exactly this. They've very explicitly accepted a small but significant performance overhead in order to improve safety.
AlotOfReading
No, I'm saying that the fix could be anything from a one-word change to the standard that doesn't affect compilers at all, to safety by default with a clang-tidy performance warning.
Clang-tidy and the Core Guidelines have already broken the textbook Hello, World! with the performance-avoid-endl warning, so I don't see why the common textbook way to write things should be our guiding principle here. Of course, the common textbook way would continue working regardless; it'd just have a negligible performance cost.
frumplestlatz
Even if it is the most common method in textbooks (I'm not sure that's true), it's also almost always wrong. The index must always be sized to fit what you're indexing over.
As for your compiler statement — yes. At least at Apple, there is ongoing clang compiler work, focused on security, that actively makes things slower, and there has been for years.
agwa
> I worked on compilers at FAANG for quite a while and know quite well how these teams justify their existence. Telling executives "we cost the company $1M a quarter, but good news, we made the semantics of the language easier for programming language nerds to understand" instead of "we saved the company $10M last quarter" is an excellent strategy for getting the team axed next time downsizing comes around.
And yet, Google is willing to take a performance hit of not 0.1% but 0.3% for improved safety: https://security.googleblog.com/2024/11/retrofitting-spatial...
And obviously there are better justifications for this than "we made the semantics of the language easier for programming language nerds to understand".
dooglius
As mentioned in the linked post, the compiler could in fact prove the increment on i won't overflow, and in my testing, -fwrapv does produce identical assembly (YMMV). The post talks about hypothetical more complex cases where the compiler could not prove the loop bound. But if -fwrapv semantics were mandated by the spec, then presumably compilers would at least hardcode a check for such a common optimization (if they are not doing so already).
pcwalton
> But if -fwrapv semantics were mandated by spec, then presumably compilers would at least hardcode a check for such a common optimization (if they are not doing so already).
I don't know what this means. The optimization becomes invalid if fwrapv is mandated, so compilers can't do it anymore.
dooglius
The optimization is still valid under -fwrapv semantics. To see this, observe the invariant (0 <= i && i < count) when entering the loop body, and (i==0 || (0 < i && i <= count)) when doing the loop test -- in particular, 0<=i<INT_MAX when entering the loop body (since count <= INT_MAX), so wraparound cannot occur.
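Spelled out on the canonical loop (my sketch, not from the post):

```c
/* Even under -fwrapv, the compiler can widen i here: i == 0 on entry,
   the body only runs while i < count, and count <= INT_MAX, so i stays
   in [0, INT_MAX) at every increment and can never wrap. */
void scale(float *a, int count) {
    for (int i = 0; i < count; i++)
        a[i] *= 2.0f;
}
```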
dcrazy
The C language does not specify that `int` is 32 bits. That is a choice made by compiler developers to make compiling non-portable code written for 32-bit platforms easier, because most codebases wind up baking in assumptions about variable sizes.
In Swift, for example, `Int` is 64 bits wide on 64-bit targets. If we ever move to 128-bit CPUs, the Swift project will be forced to decide to stick to their guns or make `Int` 64-bits on 128-bit targets.
pcwalton
> The C language does not specify that `int` is 32 bits. That is a choice made by compiler developers to make compiling non-portable code written for 32-bit platforms easier, because most codebases wind up baking in assumptions about variable sizes.
Making int 32-bit also results in not-insignificant memory savings.
bobmcnamara
And it even wastes cycles on MCUs with a 16-bit size_t.
sapiogram
Thank you so much for this comment. I think Russ Cox (along with many others) is way too quick to declare that removing one source of UB is worth a (purportedly) minuscule performance reduction. While I'm sure that's sometimes true, he hasn't measured it, and even a 1% slowdown of all C/C++ would have huge costs globally.
Someone
> Sadly, the C-like pattern persists.
I think that’s the main problem: C-style “arrays are almost identical to pointers” and C-style for loops may be good ideas for the first version of your compiler, but once you’ve bootstrapped your compiler, you should ditch them.
tmoravec
size_t has been in the C standard since C89. "for (int i = 0..." might have its uses, so it doesn't make sense to disallow it. But I'd argue that it's not really a common textbook way to iterate over an array.
pcwalton
The first example program that demonstrates arrays in The C Programming Language 2nd edition (page 22) uses signed integers for both the induction variable and the array length (the literal 10 becomes int).
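Paraphrasing that example from memory (not a verbatim quote), its shape is roughly:

```c
int i;
int ndigit[10];

for (i = 0; i < 10; ++i)   /* signed int induction variable and bound */
    ndigit[i] = 0;
```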
frumplestlatz
The language has evolved significantly, and we’ve learned a lot about how to write safer C, since that was published in 1988.
Maxatar
From what I see, that book was published in 1988.
uecker
You can implement C in completely different ways. For example, I like that signed overflow is UB because it is trivial to catch it, while unsigned wraparound - while defined - leads to extremely difficult to find bugs.
AlotOfReading
There's 3 reasonable choices for what to do with unsigned overflow: wraparound, saturation, and trapping. Of those, I find wrapping behavior by far the most intuitive and useful.
Saturation breaks the successor relation S(x) != x. Sometimes you want that, but it's extremely situational and rarely do you want saturation precisely at the type max. Saturation is better served by functions in C.
Trapping is fine conceptually, but it means all your arithmetic operations can now error. That's a severe ergonomic issue, isn't particularly well defined for many systems, and introduces a bunch of thorny issues with optimizations. Again, better as functions in C.
On the other hand, wrapping is the mathematical basis for CRCs, error-correcting codes, cryptography, bitwise math, and more. There are no wasted bits, it's the natural implementation in hardware, it's familiar behavior to students from a young age as "clock arithmetic", compilers can easily insert debug-mode checks for it (the way Rust does when you forget to use Wrapping<T>), etc.
It's obviously not perfect either, as it has the same problem of all fixed size representations in diverging from infinite math people are actually trying to do, but I don't think the alternatives would be better.
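For concreteness, the three options sketched in C with the GCC/Clang builtin __builtin_add_overflow (function names mine):

```c
#include <limits.h>
#include <stdbool.h>

/* Wraparound: what C already defines for unsigned arithmetic. */
unsigned add_wrap(unsigned a, unsigned b) {
    return a + b;                              /* reduces modulo 2^N */
}

/* Saturation: clamp at the type maximum. */
unsigned add_sat(unsigned a, unsigned b) {
    unsigned r;
    return __builtin_add_overflow(a, b, &r) ? UINT_MAX : r;
}

/* Trapping: every addition becomes fallible; the caller must check. */
bool add_checked(unsigned a, unsigned b, unsigned *r) {
    return !__builtin_add_overflow(a, b, r);
}
```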
jcranmer
> There's 3 reasonable choices for what to do with unsigned overflow: wraparound, saturation, and trapping.
There's a 4th reasonable choice: pretend it doesn't happen. Now, before you crucify me for daring to suggest that undefined behavior can be a good thing, let me explain:
When you start working on a lot of peephole optimizations, you quickly come to the discovery that there are quite a few cases where two pieces of code are almost equivalent, except that they end up giving different answers if someone overflowed (or some other edge case you don't really care about). Rather interestingly, even if you put a lot of effort into a compiler to make it aggressively infer that code can't overflow, you still run into problems because those assumptions don't really compose well (e.g., knowing that (A + (B + C)) can't overflow doesn't mean that ((A + B) + C) can't overflow--imagine B = INT_MAX and C = INT_MIN to see why).
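Concretely (a minimal sketch of that edge case):

```c
#include <limits.h>

/* With B = INT_MAX and C = INT_MIN: B + C == -1, so a + (B + C) is fine
   for any a > INT_MIN, but (a + B) + C overflows in its first addition
   whenever a >= 1. The two forms are only "almost" equivalent. */
int assoc_right(int a) { return a + (INT_MAX + INT_MIN); }
int assoc_left(int a)  { return (a + INT_MAX) + INT_MIN; }  /* UB if a >= 1 */
```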
And sure, individual peephole optimizations don't make much of a performance difference. But they can sometimes have want-of-a-nail side effects, where a failure to assume nonoverflow in one place causes another optimization to fail to kick in, and the domino effect results in measurable slowdowns. In one admittedly extreme example, I've seen a single this-might-overflow result in a 10× slowdown, since it alone kept the autoparallelization framework from kicking in. This has happened to me often enough that there are times I just want to shake the computer and scream "I DON'T FUCKING CARE ABOUT EDGE CASES, JUST GIVE ME THE DAMN FASTEST CODE."
The problem with undefined behavior isn't that it risks destroying your code if you hit it (that's a good thing!); the problem is that it too frequently comes without a way to opt-out of it. And there is room to argue if it should be opt-in or opt-out, but completely absent is a step too far for me.
(Slight apologies for the rant, I'm currently in the middle of tracking down a performance hit caused by... inability to infer non-overflow of an operation.)
amavect
I feel like this all motivates a very expressive type system for integers. Add distinct types for wraparound, saturation, trapping, and undefined. Probably require theorem proving in the language to show that undefined-overflow integers provably never overflow.
>knowing that (A + (B + C)) can't overflow doesn't mean that ((A + B) + C) can't overflow
Here, the associative property works for unsigned integers, but those don't get the optimizations for assuming overflow can't happen, which feels very disappointing. Again, adding more types would make this an option.
AlotOfReading
You can get exactly the same "benefits" without the side effects by simply making signed overflow unspecified rather than undefined. There are better alternatives of course, but this is the one that has essentially no tradeoffs.
I don't consider destroying code semantics if you hit it a good thing, especially when there's no reliable and automatic way to observe it.
itishappy
Does wrapping not break the successor relationship as well? I suppose it's a different problem than saturation, but the symptoms are the same: the relationship between a number and its successor is no longer intuitive (not injective for saturation, not ordered for wrapping).
cwzwarich
> it's the natural implementation in hardware
The natural implementation in hardware is that addition of two N-bit numbers produces an N+1-bit number. Most architectures even expose this extra bit as a carry bit.
AlotOfReading
Addition of two 1-bit numbers produces a 1-bit number, which is simple and fundamental enough that we call it XOR. If you take that N-bit adder and drop the final carry (a.k.a use a couple XORs instead of the full adder), you get wrapping addition. It's a pretty natural implementation, especially for fixed width circuits where a carry flag to hold the "Nth+1" bit may not exist, like RISC-V.
uecker
I am fine with unsigned wraparound, I just think one should avoid using these types for indices and sizes, and use them only for the applications where modulo arithmetic makes sense mathematically.
tsimionescu
Signed overflow being UB does not make it easier to find in any way. Any detection for signed overflow you can write given that it's UB could equally be written if the behavior were defined. There are plenty of sanitizers for behaviors that are not UB, at least in other languages, so it's not even an ecosystem advantage.
uecker
One can have sanitizers for defined behavior too. The issue is that a sanitizer with no false positives is about 100x more useful than a sanitizer with false positives. You can treat each case where a sanitizer detects signed overflow as an error, even in production. You cannot do the same when the behavior is defined and not an error. (You can make it an error and still define it, but there is not much of a practical difference.)
tsimionescu
If you think signed overflow is a mistake, you could forbid it from your code base even if it weren't UB, and then any instance of it that a sanitizer finds would not be a false positive, because your code style forbids signed integer overflow.
Strilanc
Could you provide the code you use to trivially catch signed overflows? My impression is the opposite: unsigned is trivial (just test `a+b < b`) while signed is annoying (especially because evaluating a potentially-overflowing expression would cause the UB I'm trying to avoid).
uecker
Oh, I meant that it is trivial to catch bugs caused by overflow by using a sanitizer, and difficult to find bugs caused by wraparound.
But checking for signed overflow is also simple with C23: https://godbolt.org/z/ebKejW9ao
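For reference, a minimal sketch of the C23 version using the standard <stdckdint.h> macros:

```c
#include <stdckdint.h>
#include <stdio.h>

int main(void) {
    int r;
    /* ckd_add returns true iff the mathematical sum did not fit in r */
    if (ckd_add(&r, 2000000000, 2000000000))
        puts("overflow detected");
    else
        printf("%d\n", r);
    return 0;
}
```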
amavect
>unsigned is trivial (just test `a+b < b`)
Nitpicking, the test itself should avoid overflowing. Instead, test "a <= UINT_MAX - b" to prove no overflow occurs.
For signed integers, we need to prove the following without overflowing in the test: "a+b <= INT_MAX && a+b >= INT_MIN". The algorithm follows: test "b >= 0", which implies "INT_MAX-b <= INT_MAX && a+b >= INT_MIN", so then test "a <= INT_MAX-b". Otherwise, "b < 0", which implies "INT_MIN-b >= INT_MIN && a+b <= INT_MAX", so then test "a >= INT_MIN-b".
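That algorithm written out as a sketch (function names mine; every subexpression is itself overflow-free):

```c
#include <limits.h>
#include <stdbool.h>

/* True iff a + b would not overflow. */
bool uadd_ok(unsigned a, unsigned b) {
    return a <= UINT_MAX - b;
}

bool sadd_ok(int a, int b) {
    return b >= 0 ? a <= INT_MAX - b    /* b >= 0: only overflow possible  */
                  : a >= INT_MIN - b;   /* b < 0: only underflow possible  */
}
```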
lelanthran
> Nitpicking, the test itself should avoid overflowing.
Why? Overflowing is well defined for unsigned.
steveklabnik
You're right that this test would be UB for signed integers.
See here for that in action, as well as one way to test it that does work: https://godbolt.org/z/sca6hxer4
If you're on C23, uecker's advice to use these standardized functions is the best, of course.
o11c
Avoiding UB for performing the signed addition/subtraction/multiplication is trivial - just cast to unsigned, do the operation, cast back. In standard C23 or GNU C11, you can write a `make_unsigned` and `make_signed` macro using `typeof` and `_Generic`.
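A minimal sketch of that trick for addition (name mine):

```c
/* The addition itself can't hit UB because unsigned wraparound is
   defined. Converting the out-of-range result back to int is
   implementation-defined in C (every mainstream compiler wraps;
   C++20 additionally pins it down as modulo 2^N). */
int wrapping_add(int a, int b) {
    return (int)((unsigned)a + (unsigned)b);
}
```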
ndiddy
Compiling with -ftrapv will cause your program to trap on signed overflow/underflow, so when you run it in a debugger you can immediately see where and why the overflow/underflow occurred.
AlotOfReading
It's worth mentioning that GCC's -ftrapv has been unreliable and partially broken for 20+ years at this point. It's recommended that you use the -fsanitize traps instead, and there's an open ticket to switch the -ftrapv implementation over to using them under the hood.
dehrmann
Some version of ints doing bad things plagues lots of other languages. Java, Kotlin, C#, etc. silently overflow, and JavaScript numbers can look and act like ints until they don't. Python is the notable exception.
gblargg
The infinite loops example doesn't make sense. If count and count2 are volatile, I don't see how the compiler could legally merge the loops. If they aren't volatile, it can merge the loops because the program can't tell the difference (it doesn't even have to update count or count2 in memory during the loops). Only code executing after the loops could even see the values in those variables.
indigoabstract
After perusing the article, I'm thinking that maybe Ferraris should be more like Volvos, because crashing at high speed can be dangerous.
But if one doesn't find that exciting, at least they'd better blaze through the critical sections as fast as possible. And double-check that -O2 is enabled (/LTCG too, if on Windows).
agentultra
If you don't write a specification then any program would suffice.
We're at C23 now and I don't think that section has changed? Anyone know why the committee won't revisit it?
Is it purely, "pragmatism," or dogma? (Are they even distinguishable in our circles...)
dataflow
It's not really that they prioritize performance over correctness (your code becomes no more correct if out-of-bounds write was well-defined to reboot the machine...), it's that they give unnecessary latitude to UB instead of constraining the valid behaviors to the minimal set that are plausibly useful for maximizing performance. E.g. it is just complete insanity to allow signed integer overflow to format your drive. Simply reducing it to "produces an undefined result" would seem plenty adequate for performance.
hn-acct
The author points out near the bottom that "performance" was not one of the original justifications for its UB decisions, afaict.
Your example is a slippery slope, but I get your point. I agree that there needs to be a "reasonable UB".
But I’ve moved on from c++.
tialaramex
> I agree that there needs to be a “reasonable UB”
What could you possibly want "reasonable UB" for? If what you want is actually just Implementation Defined that already exists and is fine, no need to invent some notion of "reasonable UB".
netbioserror
There's a way I like to phrase this:
In C and C++, it's easy to write incorrect code, and difficult to write correct code.
In Rust, it's also difficult to write correct code, but near-impossible to write incorrect code.
The new crop of languages that include useful correctness-assuring features such as iterators, fat-pointer collections, and GC/RC (Go, D, Nim, Crystal, etc.) make incorrect code hard but correct code easy. And with a minimal performance penalty! In the best-case scenarios (for example, Nim with its RC and no manual heap allocations, which is very easy to achieve since it defaults to hidden unique pointers), we're talking about paying only a 20% penalty for bounds-checking compared to raw C performance. For the ease of development, maintenance, and readability, that's easy to pay.
grandempire
> but near-impossible to write incorrect code.
Except most bugs are about unforeseen states (solved by limiting code paths and states) or a disconnect between the real world and the program.
So it’s very possible to write incorrect code in rust…
kstrauser
> Except most bugs are about unforeseen states
Study after study shows that's not true, unless you include buffer overflows and the various "reading and writing memory I didn't mean to because this pointer is wrong now" classes of bugs.
It's possible to write logic errors, of course. You have to try much harder to write invalid state or memory errors.
gpderetta
Parent said bugs, not security bugs.
grandempire
Imagine a backlog full of buffer overflows and nil pointers.
jayd16
They should have said safe/unsafe. Correct implies it also hits the business need, among other baggage.
netbioserror
True, but I think errors in real-world modeling logic are part of our primary problem domain, while managing memory and resources are a secondary domain that obfuscates the primary one. Tools such as exceptions and contract programming go a long way towards handling the issues we run into while modeling our domains.
grandempire
Indeed, but you said something more extreme in the first comment.
spacechild1
> but near-impossible to write incorrect code.
Rust makes it near-impossible to make typos in strings or errors in math formulas? That's amazing! So excited to try this out!
ajross
This headline is badly misunderstanding things. C/C++ date from an era where "correctness" in the sense the author means wasn't a feasible feature. There weren't enough cycles at build time to do all the checking we demand from modern environments (e.g. building medium-scale Rust apps on a SPARCstation would take literally *weeks* of build time).
And more: the problem faced by the ANSI committee wasn't something where they were tempted to "cheat" by defining undefined behavior at all. It's that there was live C code in the world that did this stuff, for real and valid reasons. And they knew if they published a language that wasn't compatible no one would use it. But there were also variant platforms and toolchains that didn't do things the same way. So instead of trying to enumerate them all individually (which probably wasn't possible anyway), they identified the areas where they knew they could define firm semantics and allowed the stuff outside that boundary to be "undefined", so existing environments could continue to implement them compatibly.
Is that a good idea for a new language? No. But ANSI wasn't writing a new language. They were adding features to the language in which Unix was already written.
rocqua
> So instead of trying to enumerate them all individually (which probably wasn't possible anyway), they identified the areas where they knew they could define firm semantics and allowed the stuff outside that boundary to be "undefined", so existing environments could continue to implement them compatibly.
These things didn't become undefined behavior. They became implementation-defined behavior. The distinction is that for implementation-defined behavior, a compiler has to make a decision consistently and document it.
The big example of implementation-defined behavior is ones' complement vs. two's complement. I believe shifting bits off the end of an unsigned int is also considered implementation-defined.
For implementation-defined behavior, the optimization of "assume it never happens" isn't allowed by the standard.
bluGill
They did have implementation-defined behavior, but a large part of undefined behavior was exactly that: never defined anywhere, and it could have been raised to implementation-defined if they had thought to mention it.
moefh
I don't doubt what you're saying is true, I have heard similar things many many times over the years. The problem is that it's always stated somewhat vaguely, never with concrete examples, and it doesn't match my (perhaps naive) reading of any of the standards.
For example, I just checked C99[1]: it says in many places "If <X>, the behavior is undefined". It also says in even more places "<X> is implementation-defined" (although from my cursory inspection, most -- but not all -- of these seem to be about the behavior of library functions, not the compiler per se).
So it seems to me that the standards writers were actually very particular about the difference between implementation-defined behavior and undefined behavior.
bgirard
Did anything prevent them from transitioning undefined behavior towards defined behavior over time?
> It's that there was live C code in the world that did this stuff, for real and valid reasons.
If you allow undefined behavior, then you can move towards a more strictly defined behavior without any forward compatibility risk without breaking all live C code. For instance in the `EraseAll` example you can define the behavior in a more useful way rather than saying 'anything at all is allowed'.
bluGill
No, and that has been happening over time. C++26, for example, looked at uninitialized variables and defined them. The chosen value is intentionally unreasonable for all cases where this would happen, forcing everyone to initialize (and because the value is unreasonable, it is easy for runtime tools to detect the issue when the compiler cannot).
pjmlp
The author is a famous compiler writer, including work on C and C++ compilers as a GCC contributor. Regardless of how Go is designed, he does know what he is talking about.
ajross
It's still a bad headline. UB et al. weren't added to the language for "performance" reasons, period. They were then, and remain today, compatibility features.
fooker
You are wrong. The formalized concept of UB was introduced exactly because of this.
Let's take something as simple as divide by zero. Now, suppose you have a bunch of code with random arithmetic operations.
A compiler cannot optimize this code at all without somehow proving that all denominators are nonzero. What UB buys you is that you can optimize the program based on the assumption that UB never occurs. If it actually does, who cares; the program would have done something bogus anyway.
Now think about pointer dereferences, etc etc.
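A hypothetical sketch of the kind of optimization this enables (not a claim about any particular compiler's output):

```c
int quotient(int n, int d) {
    int q = n / d;   /* division by zero is UB, so assume d != 0 ... */
    if (d == 0)      /* ... which makes this branch dead code,       */
        return -1;   /*     free to be deleted                       */
    return q;
}
```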
pjmlp
That is what implementation defined were supposed to be.
jayd16
But we write new code in C and C++ today. We make these tradeoffs today. So it's not some historical oddity; that is the tradeoff we make.
VWWHFSfQ
I don't think the headline is misunderstanding anything.
These things are both true:
> C and C++ Prioritize Performance over Correctness
> C/C++ date from an era where "correctness" in the sense the author means wasn't a feasible feature.
So correctness wasn't feasible, and therefore wasn't a priority. The machines were limited, and so performance was the priority.
/* I can't help but remember a joke on the topic. One guy says: "I can operate on big numbers with insane speed!" The other says: "Really? Compute me 97408743 times 738423". The first guy, immediately: "987958749583333". The second guy takes out a calculator, checks the answer, and says: "But it's incorrect!". The first guy objects: "Despite that, it was very fast!" */