
Compiler Options Hardening Guide for C and C++

mid-kid

While all of these are very useful, you'll find that a lot of them are already enabled by default in many distributions' builds of GCC. Sometimes they're baked into the compiler itself through a patch or configure flag, and sometimes they're added through CFLAGS variables during the compilation of distribution packages. I can only really speak for Gentoo, but here's a non-exhaustive list:

* -fPIE is enabled with --enable-default-pie in GCC's ./configure script

* -fstack-protector-strong is enabled with --enable-default-ssp in GCC's ./configure script

* -Wl,-z,relro is enabled with --enable-relro in Binutils' ./configure script

* -Wp,-D_FORTIFY_SOURCE=2, -fstack-clash-protection, -Wl,-z,now and -fcf-protection=full are enabled by default through patches to GCC in Gentoo.

* -Wl,--as-needed is enabled through the default LDFLAGS

For reference, here are the default compiler flags for a few other distributions. Note that these don't include GCC patches (a quick compile-time check of your own toolchain's defaults is sketched after the list):

* Arch Linux: https://gitlab.archlinux.org/archlinux/packaging/packages/pa...

* Alpine Linux: https://gitlab.alpinelinux.org/alpine/abuild/-/blob/master/d...

* Debian: It's a tiny bit more obscure, but running `dpkg-buildflags` on a fresh container returns the following: CFLAGS=-g -O2 -Werror=implicit-function-declaration -ffile-prefix-map=/home/<myuser>=. -fstack-protector-strong -fstack-clash-protection -Wformat -Werror=format-security -fcf-protection
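If you want a quick sanity check of what your own toolchain enables by default, GCC and Clang expose predefined macros for several of these. A minimal sketch (the macro names are real; the file name is made up, and `gcc -v` will also print the ./configure flags the compiler was built with):

    /* check_defaults.c - build with just -O2 and no other flags, e.g.
     *   cc -O2 check_defaults.c && ./a.out
     * to see which hardening-related defaults your toolchain applies. */
    #include <stdio.h>

    int main(void) {
    #ifdef __PIE__
        printf("PIE: on by default (__PIE__=%d)\n", __PIE__);
    #else
        printf("PIE: not on by default\n");
    #endif
    #ifdef _FORTIFY_SOURCE
        printf("_FORTIFY_SOURCE=%d by default\n", _FORTIFY_SOURCE);
    #else
        printf("_FORTIFY_SOURCE: not set by default\n");
    #endif
    #ifdef __SSP_STRONG__
        printf("stack protector: strong by default\n");
    #else
        printf("stack protector: not strong by default\n");
    #endif
        return 0;
    }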

westurner

From https://news.ycombinator.com/item?id=38505448 :

> There are default gcc and/or clang compiler flags in distros' default build tools; e.g. `make` specifies additional default compiler flags (that e.g. cmake, ninja, gn, or bazel/buck/pants may not also specify for you).

Is there a good reference for comparing these compile-time build flags and their defaults with Make, CMake, Ninja Build, and other build systems, on each platform and architecture?

From https://news.ycombinator.com/item?id=41306658 :

> From "Building optimized packages for conda-forge and PyPI" at EuroSciPy 2024: https://pretalx.com/euroscipy-2024/talk/JXB79J/ :

>> Since some time, conda-forge defines multiple "cpu-levels". These are defined for sse, avx2, avx512 or ARM Neon. On the client-side the maximum CPU level is detected and the best available package is then installed. This opens the doors for highly optimized packages on conda-forge that support the latest CPU features.

But those are per-arch performance flags, not security flags.

dapperdrake

Further reading:

Fabien Sanglard on driving compilers: https://fabiensanglard.net/dc/

GNU binutils with their own take on how to process static archives (libfoo.a): https://sourceware.org/bugzilla/show_bug.cgi?id=32006

Linkers: Mold: https://news.ycombinator.com/item?id=26233244

Wild: https://news.ycombinator.com/item?id=42814683

List of FOSS C linkers:

GNU ld

GNU gold

LLVM lld

mold (by the LLVM lld author)

wild

EDIT: typesetting

plainOldText

The 5-part series by Fabien Sanglard is really good. Thanks for sharing!

fuhsnn

If you are taking notes, add `-fzero-init-padding-bits=all` to the list. Without this flag, GCC 15 onwards will not zero-initialize the full union if you wrote a pre-C23 style `={0}` and the largest member is not the first one. `-ftrivial-auto-var-init` cannot help in this case. https://godbolt.org/z/7zKccfnea
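For anyone collecting these in code form, here's a minimal sketch of that failure mode (type names are made up; behavior is as described above for GCC 15+ unless -fzero-init-padding-bits=all is passed):

    #include <stdint.h>

    union U {
        uint8_t  small;   /* first member: 1 byte                 */
        uint64_t big;     /* largest member, and it is not first  */
    };

    int main(void) {
        union U u = {0};  /* pre-C23 style: only `small` is guaranteed zeroed;
                             the remaining bytes of the union may be left as
                             uninitialized padding by GCC 15+ defaults        */
        return (int)(u.big >> 8);   /* may be non-zero without the flag */
    }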

naitgacem

I have always been stuck with C99; what is the "post-C23" way that will zero-initialize a full union?

Or am I misunderstanding this?

dzaima

`={}` in place of `={0}` is the new option in C23.
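A tiny sketch of the two spellings side by side (the second form needs a C23-capable compiler; type names are made up):

    #include <stdint.h>

    union V { uint8_t small; uint64_t big; };

    int main(void) {
        union V a = {0};  /* pre-C23: zeroes the first member; the rest of the
                             union's bytes are not guaranteed to be zero      */
        union V b = {};   /* C23: the whole object, padding bits included,
                             is zero-initialized                              */
        (void)a; (void)b;
        return 0;
    }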

derriz

Sane defaults should be table stakes for toolchains but C++ has "history".

All significant C++ code-bases and projects I've worked on have had 10s of lines (if not screens) of compiler and linker options - a maintenance nightmare particularly with stuff related to optimization. This stuff is so brittle; who knows when (with which release of the compiler or linker) a particular combination of optimization flags was actually beneficial? How do you regression test this stuff? So everyone is afraid to touch it.

Other compiled languages have similar issues but none to the extent of C++ that I've experienced.

motorest

> Sane defaults should be table stakes for toolchains but C++ has "history".

Yes, it has. By "history" you actually mean "production software that is expected to not break just because someone upgrades a compiler". Yes, C++ does have a lot of that.

> All significant C++ code-bases and projects I've worked on have had 10s of lines (if not screens) of compiler and linker options - a maintenance nightmare particularly with stuff related to optimization.

No, not really. That is definitely not the norm, at all. I can tell you as a matter of fact that release builds of some production software that's even a household name are built with only a couple of basic custom compiler flags, such as specifying the exact version of the target language.

Moreover, if your project uses a build system such as CMake and your team is able to spend 5 minutes reading an onboarding guide on modern CMake, you do not even need or care to set compiler flags. You set a few high-level target properties and never look at them again.

rowanG077

> Yes, it has. By "history" you actually mean "production software that is expected to not break just because someone upgrades a compiler". Yes, C++ does have a lot of that.

I disagree. Disproportionately often in my career, random C and C++ code bases have failed to build because some new warning was introduced. And this is precisely because compiler defaults are so bad that a lot of projects resort to -Wall, -Wextra and -Werror.

Also, the way undefined behavior is exploited means that you don't really know whether your software that worked fine 10 years ago will actually work fine today, unless you have exhaustive tests.

nly

I've rarely seen more than a handful of compiler options, even on very large codebases.

If anything, there are tonnes of options people should be using more of.

The problem with all these hardening options, though, is that they noticeably reduce performance.

grandempire

> The problem with all these hardening options, though, is that they noticeably reduce performance.

Yep. What I would really like is 2 lists, one for debug/checked mode and one for release.

rollcat

It's because the UB must be continuously exploited by compilers for that extra 1% perf gain.

I've been eyeing Zig recently. It makes a lot of choices straightforward yet explicit, e.g. you choose between four optimisation strategies: debug, safety, size, perf. Individual programs/libraries can have a default or force one (for the whole program or a compilation unit), but it's customary to delegate that choice to the person actually building from source.

Even simpler story with Go. It's been designed by people who favour correctness over performance, and most compiler flags (like -race, -asan, -clobberdead) exist to help debug problems.

I've been observing a lot of people complain about declining software quality; yearly update treadmills delivering unwanted features and creating two bugs for each one fixed. Simplicity and correctness still seem to be a niche thing; I salute everyone who actually cares.

nayuki

> It's because the UB must be continuously exploited by compilers for that extra 1% perf gain.

Your framing of a compiler exploiting UB in programs to gain performance has an undeserved negative connotation. The fact is, programs are mathematical structures/arguments, and if any single step in the program code or execution is wrong, no matter how small, it can render the whole program invalid. Drawing from math analogies where one wrong step leads to an absurd conclusion:

* https://en.wikipedia.org/wiki/All_horses_are_the_same_color

* https://en.wikipedia.org/wiki/Principle_of_explosion

* https://proofwiki.org/wiki/False_Statement_implies_Every_Sta...

* https://en.wikipedia.org/wiki/Mathematical_fallacy#Division_...

Back to programming, hopefully this example will not be controversial: If a program contains at least one write to an arbitrary address (e.g. `*(char*)0x123 = 0x456;`), the overall behavior will be unpredictable and effectively meaningless. In this case, I would fully agree with a compiler deleting, reordering, and manipulating code as a result of that particular UB.
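For a concrete (hypothetical) illustration of that kind of license: because dereferencing a null pointer is UB, GCC and Clang may assume that a pointer which has already been dereferenced is non-null and delete a later check (this is the -fdelete-null-pointer-checks behavior; the function is made up):

    int read_len(const int *p) {
        int len = *p;       /* UB if p == NULL */
        if (p == NULL)      /* the compiler may drop this branch entirely,
                               since p was already dereferenced above      */
            return -1;
        return len;
    }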

You could argue that C shouldn't have been designed so that reading out of bounds is UB. Instead, it should read some arbitrary value without crashing or cleanly segfault at that instruction, with absolutely no effects on any surrounding code.

You could argue that C/C++ shouldn't have made it UB to dereference a null pointer for reading, but I fully agree that dereferencing a null pointer for a method call or writing a field must be UB.

Another analogy in programming is, let's forget about UB. Let's say you're writing a hash table in Java (in the normal safe subset without using JNI or Unsafe). If you get even one statement wrong in the data structure implementation, there still might be arbitrarily large consequences like dropping values when you shouldn't, miscounting how many values exist, duplicating values when you shouldn't, having an incorrect state that causes subtle failures far in the future, etc. The consequences are not as severe and pervasive as UB at the language level, but it will still result in corrupt data and/or unpredictable behavior for the user of that library code, which can in turn have arbitrarily large consequences. I guess the only difference compared to C/C++ UB is that for C/C++, there is more "spooky action at a distance", where some piece of UB can have very non-local consequences. But even incorrect code in safe Java can produce large consequences, maybe just not as large on average.

I am not against compilers "exploiting" UB for performance gain. But these are the ways forward that I believe in, for any programming language in general:

* In the language specification, reduce the number of cases/places that are undefined. Not only does it reduce the chances of bad things happening, but it also makes the rules easier to remember for humans, thus making it easier to avoid triggering these cases.

* Adding to that point, favor compile-time errors over run-time UB. For example, reading from an uninitialized local variable is a compile error in Java but UB in C. Rust's whole shtick about lifetimes and borrowing is one huge transformation of run-time problems into compile-time problems.

* Overwhelmingly favor safety by default. For example, array accesses should be bounds-checked using the convenient operator like `array[index]`, whereas the unsafe unchecked version should be something obnoxious and ugly like `unsafe { array.get_unchecked(index) }`. Make the safe way easy and make the unsafe way hard - the exact opposite of C/C++.

* Provide good (and preferably complete) sanitizer tools to check that UB isn't triggered at run time. C/C++ did not have these for the first few decades of their lives, and you were flying blind when triggering UB.
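As a small illustration of that last point, here is the kind of thing the sanitizers catch today; a sketch assuming GCC or Clang, compiled with something like `cc -O2 -fsanitize=undefined demo.c`:

    #include <limits.h>
    #include <stdio.h>

    int main(int argc, char **argv) {
        (void)argv;
        int x = INT_MAX - argc + 1;   /* equals INT_MAX when run with no arguments */
        x = x + 1;                    /* signed overflow: UB; UBSan reports it at
                                         run time instead of leaving you flying blind */
        printf("%d\n", x);
        return 0;
    }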

motorest

> Your framing of a compiler exploiting UB in programs to gain performance has an undeserved negative connotation. The fact is, programs are mathematical structures/arguments, and if any single step in the program code or execution is wrong, no matter how small, it can render the whole program invalid.

You're failing to understand the problem domain, and consequently you're oblivious to how UB is actually a solution to problems.

There are two sides to UB: the one associated with erroneous programs, where clueless developers unwittingly do things that the standard explicitly states lead to unknown and unpredictable behavior, and the one which leads to valid programs, where developers knowingly adopt an implementation that specifies exactly what behavior they should expect from doing things that the standard leaves as UB.

Somehow, those who mindlessly criticize UB only parrot the simplistic take on UB, the "nasal demons" blurb. They don't even stop to think about what undefined behavior is and why a programming language specification would purposely leave specific behavior undefined instead of unspecified or even implementation-defined. They do not understand what they are discussing, don't invest a moment in trying to understand why things are the way they are, and never ask what problems are solved by them. They just parrot cliches.
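To make the second kind concrete: here's a sketch of code that is UB per the standard but perfectly valid for a team that has knowingly decided to build with GCC/Clang's -fwrapv, which defines signed overflow as two's-complement wrap-around (the function is made up; the flag is real):

    #include <limits.h>
    #include <stdio.h>

    int next_id(int id) {
        return id + 1;   /* UB on overflow per ISO C; defined to wrap to
                            INT_MIN when the project is built with -fwrapv */
    }

    int main(void) {
        printf("%d\n", next_id(INT_MAX));
        return 0;
    }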

duped

I mean, if you emit compiler commands from any build system, they're going to be completely illegible due to the number of -L, -l, -I, -i, -D flags, which are mostly generated by things like pkg-config and your build configuration.

There aren't many optimization flags that people get fine-grained with, the exception being floating point, because -ffast-math alone is extremely inadvisable.

dapperdrake

It goes even further.

Technically, compilers can choose to make undefined behavior implementation-defined behavior instead. But they don't.

That's kind of also how C++ std::span wound up without bounds checks in practice. And my_arr.at(i) just isn't really being used by anybody.

Seems very user-hostile to me.

dapperdrake

-ffast-math and -Ofast are inadvisable on principle:

Tl;dr: python gevent messes up your x87 float registers (yes.)

https://moyix.blogspot.com/2022/09/someones-been-messing-wit...

duped

I disagree with "on principle." There are flaws in the design of IEEE 754 and omitting strict adherence for the purposes of performance is fine, if not required for some applications.

For example, recursive filters (even the humble averaging filter) will suffer untold pain without enabling DAZ/FTZ mode.

FWIW the linked issue has been remedied in recent compilers and isn't a Python problem, it's a GCC problem. That said, if your algorithm requires subnormal numbers, for the love of numeric stability, guard your scopes and set the MXCSR register accordingly!
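A sketch of what "guard your scopes" can look like on x86 with SSE (the intrinsics come from xmmintrin.h/pmmintrin.h; the filter itself is left as a placeholder). Here the scope turns FTZ/DAZ on, as the recursive-filter case above wants, and restores the caller's MXCSR on exit:

    #include <xmmintrin.h>   /* _mm_getcsr, _mm_setcsr, _MM_SET_FLUSH_ZERO_MODE */
    #include <pmmintrin.h>   /* _MM_SET_DENORMALS_ZERO_MODE                     */

    void run_filter_with_ftz(float *buf, int n) {
        unsigned int saved = _mm_getcsr();                   /* save caller's MXCSR   */
        _MM_SET_FLUSH_ZERO_MODE(_MM_FLUSH_ZERO_ON);          /* flush results to zero */
        _MM_SET_DENORMALS_ZERO_MODE(_MM_DENORMALS_ZERO_ON);  /* treat inputs as zero  */

        /* ... run the recursive/averaging filter over buf[0..n) ... */
        (void)buf; (void)n;

        _mm_setcsr(saved);   /* restore the floating-point environment on exit */
    }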

bobmcnamara

"what kind of math does the compile usually do without this funsafemath flag? Sad dangerous math?"

vkaku

It is hard that people have to remember these options on a per-compiler basis. I'd prefer people use easy-to-remember flags like `-O2` rather than the word soup mentioned here.

Compiler writers should revisit their option matrices and come up with easy defaults today.

Disclaimer: Used to work on the GCC code for option handling back in the day. Options like -O2 map to a whole bunch of individual options, and people only needed to remember adding -O2, which corresponded to different values in every era and yet subjectively meant: decently optimized code.

teo_zero

> I'd prefer people use easy-to-remember flags

Like -fhardened?

vkaku

Sure.

-f is technically machine-independent.

-m should be used when a feature is implemented as a machine-dependent option.

So if you're telling me all these security features can be developed without requiring per-machine support, then it makes sense.

dapperdrake

The interactions between different optimization passes may have surprising consequences.

Endless loops with no side effects are technically undefined behavior: the compiler can drop them, leaving only their entry label in the assembly, which then collides with the next function's entry label (see the sketch at the end of this comment).

All because of UB.

Huge headache. Try debugging that.

And interaction loops in games sometimes wait endlessly for input.
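Here is a minimal sketch of the shape of loop that invites this. Under C11 rules, a loop with no side effects whose controlling expression is not a constant may be assumed to terminate; in C++ (before C++26), any side-effect-free infinite loop is UB:

    /* If `ready` can never become nonzero, this loop never terminates and the
     * behavior is undefined; an optimizing compiler may assume it terminates
     * and drop the wait altogether. A real input loop would read a volatile or
     * atomic flag, or do I/O, which keeps it well-defined. */
    int spin_wait(int ready) {
        while (!ready) {
            /* no volatile access, no atomics, no I/O */
        }
        return ready;
    }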

stabbles

> The keyword $ORIGIN in rpath is expanded by the dynamic loader to the path of the directory where the object is found, which may be set by an attacker (e.g., via hard links) to a directory with a malicious dependency. On Linux, the fs.protected_hardlinks sysctl can help prevent this attack.

This has nothing to do with hardlinks; the same applies to symlinks. On Linux the status quo is that the dynamic loader finds the library via a symlink; the convention is `libfoo.so.x -> libfoo.so.a.b.c`, where `x` is the ABI version and `a.b.c` the full version.

But if `libfoo.so.x -> /absolute/path/libfoo.so.a.b.c` and it has `$ORIGIN/libbar.so.y` in DT_NEEDED, those are resolved relative to the directory of the symlink, not to the realpath of the symlink.

That makes sense, because it would be a lot of startup overhead to lstat every path component of every library that uses $ORIGIN.

I don't see the point of including this gotcha in a security overview to be honest.

grandempire

> Our threat model is that all software developers make mistakes, and sometimes those mistakes lead to vulnerabilities

That’s not a threat model. What are the attackers going to do if there are vulnerabilities in your executable? Is it connected to a web server?

Does it have access to privileged resources?

steveklabnik

They're using it in the sense of "the scope of this document covers this scenario," so the answer to all of your questions are out of scope.

dailykoder

Nice, thank you! Saved this. Mastering GCC compiler options feels harder than mastering C++ UB.

dapperdrake

Succinct.

javier_e06

Last week a build broke because there was a space after the `-Wl,` in some linker option. The warning messages can be very challenging to decipher.

Most importantly: Are the warnings show-stoppers? Not part of my pay grade.

There is a pragma to ignore specific warnings: `#pragma GCC diagnostic ignored "-Wsome-warning"`, which is useful when dealing with several versions of the GCC compiler (a scoped example is sketched at the end of this comment).

Yes, it happens.
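A sketch of the scoped form, so the suppression doesn't leak past the code that needs it (GCC syntax; Clang accepts the same pragmas; the callback is made up):

    #pragma GCC diagnostic push
    #pragma GCC diagnostic ignored "-Wunused-parameter"
    int legacy_callback(int fd, void *context) {   /* `context` intentionally unused */
        return fd >= 0;
    }
    #pragma GCC diagnostic pop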

ryandrake

> Most importantly: Are the warnings show-stoppers? Not part of my pay grade.

The best places (code quality wise) I've ever worked were the strictest on compiler warnings. Turn on all warnings, turn on extra warnings, treat warnings as errors, and forbid disabling warnings via #pragma. The absolute worst was the one where compiling the software using the compiler's default warning level produced a deluge of 40,000 warnings, and the culture was to disable warnings when they became annoying (vs. you know, fixing them).

My philosophy: Compilers don't issue warnings for fun. Every one of them is a potential problem and they are almost always worth fixing.

I also adhere to this in my personal hobby projects, too. It can be challenging when integrating with third party libraries, where the library maintainer doesn't care as much. I once submitted a patch to an open source project I won't name here, which fixed a bunch of warnings that seem to be only present in macOS builds (XCode's defaults tend to be quite strict). The response was not to merge it because "I don't regularly do macOS builds, and besides, they're just warnings." Alright, bro, sorry I tried to help.

MITSardine

If my C++ project is a simple utility supposed to take some files, crunch numbers, and spit out results, is there still the possibility it can be used for nefarious purposes?

kibwen

It doesn't matter what the tool does, what matters is 1) whether it is ever exposed to untrusted input, 2) what permissions it has.

If you don't ever expose something to untrusted input, then you're probably fine. But be VERY careful, because you should defensively consider anything downloaded off the internet to be untrusted input.

As for permissions, if you run a tool inside of a sandbox inside of a virtual machine on an airgapped computer inside a Faraday cage six stories underground, then you're probably fine.

duped

Read/write access to a filesystem is a pretty large surface area for attack, so yes.

thfuran

How does it get its input files? Where does it run? What's the output used for?

rramadass

It depends on what exactly your program does and, equally important, where it is deployed and used. Security is a matter of degree based on context, i.e. there are levels of security. It is not an all-or-nothing proposition.

If your program is going to be used for some non-critical work internally, you don't have to bother much about attack surfaces/vectors etc. Just use some standard "healthy" compiler options and you are good.

If you would like to know more on this subject, I recommend reading the classic The Art of Software Security Assessment: Identifying and Preventing Software Vulnerabilities by Mark Dowd et al.

dapperdrake

Related: Rob Pike on programming style, especially his note on include files: http://doc.cat-v.org/bell_labs/pikestyle

See also: SQLite's amalgamation. Others (IIRC Philippe Gaultier) have called this a unity build: https://sqlite.org/amalgamation.html

Rob Pike on systems software research: http://doc.cat-v.org/bell_labs/utah2000/utah2000.html

EDIT: typo

z_open

His opinions on include files have fallen out of favor because compiling is faster these days and his approach adds needless work. Are there organizations that still do this? All the style guides I've seen do not.

csb6

I believe clang and gcc avoid reading in and re-processing include files that are already included, so his advice is unnecessary and creates a lot of maintenance burden, especially for C++ where a lot more code is in header files. It may still be useful for old compilers, though.

kevin_thibedeau

They recognize include guards and skip any further inclusions for those cases. There are scenarios where you may want multiple inclusion and you can still have that.
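For reference, the pattern they recognize is the conventional include guard (a hypothetical header; `#pragma once` gets similar treatment in practice):

    /* foo.h */
    #ifndef FOO_H
    #define FOO_H

    int foo_count(void);

    #endif /* FOO_H */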

dapperdrake

If your filesystem and disks are fast enough, then maybe Rob's assumptions don't apply.

ryandrake

I still adhere to this for personal hobby projects, more out of a sense of craftsmanship than anything practical at this point.