No-Panic Rust: A Nice Technique for Systems Programming
142 comments · February 3, 2025
wongarsu
The approach at the end, of declaring invariants to the compiler so the compiler can eliminate panics, seems accidentally genius. You can now add the same invariants as panicking asserts at the end of each function, and the compiler will prove to you that your functions are upholding the invariants. And of course you can add more panicking asserts to show other claims to be true, all tested at compile time. You've basically built a little proof system.
Sure, Rust is hardly the first language to include something like that, and adoption of such systems tends to be ... spotty. But if it was reliable enough and had a better interface (one that preferably allowed the rest of your program to still have panics) this might be very useful for writing correct software.
Groxx
>and the compiler will prove to you that your functions are upholding the invariants
From the article and only vague background Rust knowledge, I'm under the impression that the opposite is true: the compiler does not prove that. Hence why it's "assert_unchecked" - you are informing the compiler that you know more than it does.
You do get panics during debug, which is great for checking your assumptions, but that relies on you having adequate tests.
tga_d
That's my understanding as well. The thing I was wondering as I read it: how difficult would it be for someone to make an extension or fork of Rust that allows annotating sufficient type information to prove these kinds of invariants, like F*?
saghm
This isn't quite the same, but it reminds me of something a bit less clever (and a lot less powerful) I came up with a little while back when writing some code to handle a binary format that used a lot of 32-bit integers I needed for math on indexes into vectors. I was fairly confident that the code would never need to run on 16-bit platforms, but converting from a 32-bit integer to a usize in Rust is technically considered fallible, because you can't necessarily assume that a usize is more than 16 bits, and frustratingly `usize` only implements `TryFrom<u32>` rather than conditionally implementing `From<u32>` on 32-bit and 64-bit platforms. I wanted to avoid any casting that could silently get messed up if I happened to switch any of the integer types I used later, but I was also irrationally upset at the idea of having to check at runtime for something that should be obvious at compile time.
The solution I came up with was putting a static assertion that the target pointer width is either 32 or 64 bits inside the error-handling path, followed by an `unreachable!()` that would never get executed (because either the error-handling path wouldn't be taken, or the static assertion would have stopped the code from compiling in the first place). Even though this wasn't meaningfully different from conditionally compiling to make sure the platform was suitable and then putting `unreachable!()` unconditionally in the error-handling path, having the compile-time assertion locally in the spot where the error was being handled felt like I had magically turned the runtime error into a compile-time one; it was quite literally possible to write it as a function that could be dropped into any codebase without having to make any other changes to ensure it was used safely.
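A minimal sketch of the pattern described above (the function name and the exact assertion are illustrative, not saghm's actual code): the static assertion lives inside the error arm, so the `unreachable!()` can only be compiled on platforms where it truly is unreachable.

fn u32_to_usize(x: u32) -> usize {
    match usize::try_from(x) {
        Ok(n) => n,
        Err(_) => {
            // Static assertion: refuses to compile on targets where a
            // usize can't hold a u32, so this arm can never execute on
            // any target where the crate builds at all.
            const _: () = assert!(usize::BITS >= 32);
            unreachable!()
        }
    }
}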
lilyball
What about just doing something like
#[cfg(any(target_pointer_width = "32", target_pointer_width = "64"))]
#[inline(always)]
const fn usize_to_u32(x: usize) -> u32 {
x as u32
}
and this way you can just call this function and you'll get a compile-time error (no such function) if you're on a 16-bit platform.
saghm
Even though I can visually verify that it's safe in this context, I really don't like casting integers as a rule when there's a reasonable alternative. The solution I came up with is pretty much equally readable in my opinion, but has the distinction of not containing code that might in other contexts look like it could silently have issues (compared to an `unreachable!()` macro, which might also look sketchy but certainly wouldn't be quiet if it accidentally was reached in the wrong spot). I also prefer having a compiler error explaining the invariant that's expected rather than a missing function (which could just as easily be due to a typo or something). You could add a conditionally compiled `compile_error!()` invocation for when the pointer width isn't at least 32, but I'd argue that tilts the balance even more in favor of the solution I came up with; having a single item defined instead of two is more readable in my opinion.
This wasn't a concern for me but I could also imagine some sort of linting being used to ensure that potentially lossy casts aren't done, and while it presumably could be manually suppressed, that also would just add to the noisiness.
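For reference, the `compile_error!()` alternative mentioned above might look something like this (a sketch; the message text is mine):

#[cfg(not(any(target_pointer_width = "32", target_pointer_width = "64")))]
compile_error!("this code assumes a pointer width of at least 32 bits");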
lilyball
Oops, I flipped the conversion here, that should have been `u32_to_usize()`.
jwatte
For systems where correctness is actually important, not just a nice-to-have (in most systems, it's a nice-to-have), we have had an increasing number of options over the years.
From tools like Spin and TLA+ to proof assistants like Coq to full languages like Idris and Agda.
Some of the stronger-typed languages already give us some of those benefits (Haskell, OCaml) and with restricted effects (like Haskell) we can even make the compiler do much of this work without it leaking into other parts of the program if we don't want it to.
haberman
While I would love this to be true, I'm not sure that this design can statically prove anything. For an assert to fail, you would have to actually execute a code sequence that causes the invariant to be violated. I don't see how the compiler could prove at compile time that the invariants are upheld.
wongarsu
An assert is just a fancy `if condition { panic!(message) }`. If the optimizer can show that the condition is always false, the panic is dead code and gets eliminated. The post uses that to get the compiler to remove all panics and thereby reduce the binary size. But you can also just check whether panic code was generated (or whether the associated code was linked in), and if it was, then the optimizer wasn't able to show that your assert can't fire.
Of course this doesn't prove that the assert will ever actually fail; you would have to execute the code for that. But you can treat the optimizer's inability to eliminate your assert as a failure in itself, showing that either your code violates the assert, your preconditions in combination with the code aren't enough to show that the assert isn't violated, or the whole thing was too complicated for the optimizer to figure out and you have to restructure some code.
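A toy illustration of the mechanism (my example, not from the article): after the early return below, the optimizer can prove the indexing panic is dead code, so release builds contain no panic path, and its absence can be checked in the generated binary.

pub fn first_or_zero(v: &[u32]) -> u32 {
    if v.is_empty() {
        return 0;
    }
    // The bounds check provably passes here (v.len() > 0), so the
    // panic branch of the indexing operation is eliminated.
    v[0]
}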
haberman
Ah, I see what you are saying. Yes, if the optimizer is able to eliminate the postcondition check, I agree that it would constitute a proof that the code upholds the invariant.
The big question is how much real-world code the optimizer would be capable of "solving" in this way.
I wonder if most algorithms would eventually be solvable if you keep breaking them down into smaller pieces. Or if some would have some step of irreducible complexity that the optimizer cannot figure out, no matter how much you break it down.
vlovich123
To be clear, as there's a lot of nuance: assert_unchecked is telling the compiler the condition must always hold. The optimizer and compiler make no attempt to verify the assertion themselves. That information is then used by the compiler to optimize away checks it otherwise would have to do (e.g. making sure an Option is Some if you call unwrap).
If you have an assumption that gives unhelpful information, the optimizer will emit panic code. Worse, if the assumption is incorrect, then the compiler can easily miscompile the code (both in terms of UB because of an incorrectly omitted panic path AND because it can miscompile surprising deductions you didn’t think of that your assumption enables).
I would use the assume crate for this before it got standardized, but very carefully, in carefully profiled hotspots. Wrapping it in a safe call as in this article would have been unthinkable - the unsafe needs to live exactly where you are making the assumption; there's no safety provided by the wrapper. Indeed I see this a lot: safety is spuriously claimed at the function call boundary, instead of made the responsibility of the caller, even though the function wrapper doesn't actually guarantee that any of the safety invariants hold.
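A hedged sketch (names mine) of the boundary point being made here: the obligation should sit on an unsafe signature at the call site, not be hidden behind a safe wrapper.

use std::hint::assert_unchecked;

// Unsound: any safe caller can pass idx >= len and cause UB, so the
// safe signature promises a safety this function cannot deliver.
fn assume_in_bounds_bad(idx: usize, len: usize) {
    unsafe { assert_unchecked(idx < len) }
}

// Sounder: the caller must write `unsafe` and owns the proof.
// SAFETY: the caller must guarantee idx < len.
unsafe fn assume_in_bounds(idx: usize, len: usize) {
    unsafe { assert_unchecked(idx < len) }
}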
rtpg
I've had an unpleasant amount of crashes with Rust software because people are way too quick to grab `panic!` as an out.
This was most shocking to me in some of the Rust code Mozilla had integrated into Firefox (the CSS styling code). There was some font cache shenanigans that was causing their font loading to work only semi-consistently, and that would outright crash this subsystem, and tofu-ify CJK text entirely as a result.
And the underlying panic was totally recoverable in theory if you looked at the call stack! It's just that people had decided not to Result-ify a bunch of fallible code.
ninetyninenine
Sometimes the program is in an invalid state. You don't want to keep running the program. Better to fail spectacularly and clearly than to fail silently and try to hobble along.
jwatte
The thing with functional programming (specifically, immutable data) is that as long as the invalid state is immutable, you can just back up to some previous caller, and they can figure out whether to deal with it or to reject it up to their own caller.
This is why Result (or Maybe, or runExceptT, and so on in other languages) is a perfectly safe way of handling unexpected or invalid data. As long as you enforce your invariants in pure code (code without side effects) then failure is safe.
This is also why effects should ideally be restricted and traceable by the compiler, which, unfortunately, Rust, ML, and that chain of the evolution tree didn't quite stretch to encompass.
duped
Say a function has some return type Result<T, E>. If our only error-handling mechanism is Err(e), then we're restricted to E representing both the set of errors due to invalid arguments and state, and the set of errors due to the program itself being implemented incorrectly.
In a good software architecture (imo), panics and other hard failure mechanisms are there for splitting E into E1 and E2, where E1 is the set of errors that happen because the caller screwed up, and E2 is the set of errors that happen because the callee itself is incorrect. The caller shouldn't have to reason about the callee possibly being incorrect!
Functional programming doesn't really come into the discussion here - oftentimes this crops up in imperative or object-oriented code where function signatures are lossy because code relies on side effects or state that the type system can't/won't capture (for example, a database or file persisted somewhere). That's where you'll drop an assert or panic - not as a routine part of error handling.
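A small sketch of that split (the types and names are mine): the expected, caller-facing failure travels as an Err, while a violated internal contract panics.

#[derive(Debug)]
enum ParseError {
    UnexpectedEof, // E1: the caller handed us too little data
}

fn parse_len(buf: &[u8]) -> Result<u32, ParseError> {
    // Expected failure: reported to the caller as a value.
    let head = buf.get(..4).ok_or(ParseError::UnexpectedEof)?;
    // Contract violation: a 4-byte slice that isn't 4 bytes long is a
    // bug in this function, not the caller's problem, so it panics.
    let bytes: [u8; 4] = head.try_into().expect("slice has length 4");
    Ok(u32::from_le_bytes(bytes))
}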
ninetyninenine
The program can detect invalid state, but your intention was to never get to that state in the first place. The fact that the program arrived there is a logic error in your program. No amount of runtime shenanigans can repair it, because the error exists without your knowing where it came from. You just know the state is invalid and that you made a mistake somewhere in your code.
The best way to handle this is to crash the program. If you need constant uptime, then restart the program. If you absolutely need to keep things running, then yeah, try to recover. That last option isn't as bad for something like an HTTP server, where one request caused the error and you can just handle that error and keep the other threads running.
But for something like a 3D video game. If you arrive at erroneous state, man. Don't try to keep that thing going. Kill it now.
cardanome
> This is why Result (or Maybe, or runExceptT, and so on in other languages) is a perfectly safe way of handling unexpected or invalid data.
They are great for handling expected errors that make sense to handle explicitly.
If you try to wrap up any possible error that could ever happen in them you will generate horrendous code, always having to unwrap things, everything is a Maybe. No thanks.
I know it is tempting to think "I will write the perfect program and handle all possible errors and it will never crash", but that just results in overly complex code that ends up having more bugs and makes debugging harder. Let it crash at the point where the error happened; don't just kick the can down the road. Just log the problem and call it a day. Exceptions are an amazing tool to have for things that are... exceptional.
rtpg
I understand this belief abstractly. In the cases I was hitting, there would have been easy recovery mechanisms possible (that would have been wanted because there are many ways for the system to hit the error!), but due to the lowest level "key lookup" step just blowing up rather than Result (or Option)-ing their lookup, not only would the patch have been messy, but it would have required me to make many decisions in "unrelated" code in the meanwhile.
I understand your point in general, I just find that if you're writing a program that is running on unconstrained environments, not panic'ing (or at least not doing it so bluntly at a low level) can at the very least help with debugging.
At least have the courtesy to put the panic at a higher level to provide context beyond "key not found!"!
ratorx
Without knowing the exact situation, if you follow the guidelines in this article, this is a library bug (documentation or actual code).
Either the library should have enforced the invariant of the key existing (and returned an equivalent error, or handled it internally), or documented the preconditions at a higher level function that you could see.
gpderetta
This is correct in a memory-unsafe language like C, where you can't make any assumption about the state of the program, so the only safe boundary to escape to is the address-space separation boundary (i.e. the process). In a safe language you can usually make reasonable assumptions [1] about the blast radius of an assertion violation. And that's why Rust allows catching panics.
[1] Even safe languages have unsafe escape hatches, and safe abstractions sometimes have bugs, but in most cases you can assume that the runtime is not compromised.
dralley
At least sources of panic! are easily greppable. Cutting corners on error handling is usually pretty obvious
haberman
I don't think grepping for panics is practical, unless you are trying to depend on exclusively no-panic libraries.
Even if you are no_std, core has tons of APIs like unwrap(), index slicing, etc. that can panic if you violate the preconditions. It's not practical to grep for all of them.
wongarsu
There is panic-analyzer [1] that searches for code that needlessly panics. You can also use the no-panic macro [2] to turn possible panics in a specific function (including main) into a compile error
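Usage of the no-panic macro is a single attribute; per its documentation, the build fails to link if a panic path survives optimization (a sketch; the function body is my example, and this only works in optimized builds):

use no_panic::no_panic;

#[no_panic]
fn head_or_zero(s: &[u8]) -> u8 {
    if s.is_empty() {
        return 0;
    }
    s[0] // the bounds-check panic is provably dead, so this links
}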
sophacles
There are panics that aren't greppable that way. For instance `some_array[past_bounds]` causes a panic.
rtpg
It is interesting to consider how `panic!` serves as some documentation of explicitly giving up. Easy to see in a pull request. And having the string packed alongside it is nice.
Still miffed, but we'll get there.
pluto_modadic
I mean... Rust dependencies aren't typically in your CWD, no? They're not in some node_modules that you can grep, but in a cargo folder with /all of the libraries you ever used/, not just the ones you have for this one project.
gpm
Putting them all in the project root takes just a single `cargo vendor` command.
But I would assume that for mozilla their entire CSS subsystem is pulled in as a git (hg?) submodule or something anyways.
eru
For what it's worth, eg vscode can jump to definition even when your code is in a different crate that's not in your repository.
est31
If you run cargo vendor, they end up in a neat directory.
duped
While sure, more things could be baked as results, most of the time when you see a panic that's not the case. It's a violation of the callee's invariants that the caller fucked up.
Essentially an error means that the caller failed in a way that's expected. A panic means the caller broke some contract that wasn't expressed in the arguments.
A good example of this is array indexing. If you're using it you're saying that the caller (whoever is indexing into the array) has already agreed not to access out of bounds. But we still have to double check if that's the case.
And if you were to say that hey, that implies that the checks and branches should just be elided - you can! But not in safe rust, because safe code can't invoke undefined behavior.
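Concretely, that opt-out looks something like this (a sketch using the standard library's unchecked indexing):

fn sum_first_three(v: &[u64]) -> u64 {
    assert!(v.len() >= 3); // one checked precondition up front
    // SAFETY: the assert above guarantees indices 0..3 are in bounds.
    unsafe { *v.get_unchecked(0) + *v.get_unchecked(1) + *v.get_unchecked(2) }
}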
rtpg
I understand the value of panic when your invariants really are no longer holding. What I have seen is many cases of "oh a micro-invariant I kind of half believe to be true isn't being held, and so I will panic".
Obviously context-free this is very hand wave-y, but would you want Firefox to crash every time a website prematurely closes its connection to your browser for whatever reason? No, right? You would want Firefox to fail gracefully. That is what I wanted.
dathinab
> fn check_invariant(&self) {
>     unsafe { assert_unchecked(self.ofs < self.data.len()) }
> }
This is fundamentally unsound: `check_invariant` needs to be unsafe, as it doesn't actually check the invariant but tells the compiler to blindly assume it holds. It should probably also be named `assume_invariant_holds()` instead of `check_invariant()`.
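A sketch of the sound variant being asked for, assuming the article's field names (the surrounding struct is illustrative):

use std::hint::assert_unchecked;

struct Cursor {
    data: Vec<u8>,
    ofs: usize,
}

impl Cursor {
    // SAFETY: the caller must guarantee self.ofs < self.data.len();
    // calling this while the invariant is violated is undefined behavior.
    unsafe fn assume_invariant_holds(&self) {
        unsafe { assert_unchecked(self.ofs < self.data.len()) }
    }
}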
vlovich123
I personally would have used the assume crate, but I guess this got standardized more recently. They call out the safety requirement and that it's a sharp edge, but like you I think they understate the sharpness and danger.
andyferris
This seems to obviate a lot of Rust's advantages (like a good std library). I wonder what it would take to write a nopanic-std library?
Panics really seem bad for composability. And relying on the optimizer here seems like a fragile approach.
(And how is there no -nopanic compiler flag?)
gpm
Rust doesn't want to add any proof-system that isn't 100% repeatable, reliable, and forwards compatible to the language. The borrow checker is ok, because it meets those requirements. The optimizer based "no panic" proof system is not. It will break between releases as LLVM optimizations change, and there's no way to avoid it.
Trying to enforce no-panics without a proof system helping out is just not a very practical approach to programming. Consider code like
some_queue.push_back("new_value");
process(some_queue.pop_front().unwrap());
This code is obviously correct. It never panics. There's no better way to write it. The optimizer will instantly see that and remove the panicking branch. The language itself doesn't want to be in the business of trying to see things like that.
Or consider code like
let mut count: usize = 0;
for item in some_vec {
// Do some stuff with item
if some_cond() {
count += 1;
}
}
This code never panics. Integer arithmetic contains a hidden panic path on overflow, but that can't occur here because the length of a vector is always less than usize::MAX.
And so on.
Basically every practical language has some form of "this should never happen" root. Rust's is panics. C's is undefined behavior. Java's is exceptions.
Finally consider that this same mechanism is used for things like stack overflows, which can't be statically guaranteed to not occur short of rejecting recursion and knowledge of the runtime environment that rustc does not have.
---
Proof systems on top of Rust like Creusot or Kani do tend to try to prove the absence of panics, because they don't have the same compunctions about approving code today that they aren't absolutely sure they will still approve tomorrow.
RainyDayTmrw
To add to this, I believe that there will always be some amount of "should never happen but I can't prove it" due to Rice's Theorem[1].
swiftcoder
> This code is obviously correct. It never panics
It doesn't panic within the code you typed, but it absolutely still can panic on OOM. Which is sort of the problem with "no panic"-style code in any language - you start hitting fundamental constructs that can't be treated as infallible.
> Basically every practical language has some form of "this should never happen" root.
99% of "unrecoverable failures" like this, in pretty much every language, are because we treat memory allocation as infallible when it actually isn't. It feels like there is room in the language design space for one that treats allocation as a first-class construct, with suitable error-handling behaviour...
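The standard library has in fact started growing first-class fallible allocation, e.g. `Vec::try_reserve` (a sketch):

use std::collections::TryReserveError;

fn push_checked(v: &mut Vec<u8>, byte: u8) -> Result<(), TryReserveError> {
    v.try_reserve(1)?; // allocation failure becomes an Err, not an abort
    v.push(byte); // capacity is already reserved, so this cannot fail
    Ok(())
}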
deschutes
Assuming memory is infinite for the purposes of a program is a very reasonable assumption for the vast majority of programs. In the rare contexts where you need to deal with the allocation failure it comes at a great engineering cost.
It's not really what this is about IMV. The vast majority of unrecoverable errors are simply bugs.
A context free example many will be familiar with is a deadlock condition. The programmer's mental model of the program was incomplete or they were otherwise ignorant. You can't statically eliminate deadlocks in an arbitrary program without introducing more expensive problems. In practice programmers employ a variety of heuristics to avoid them and just fix the bugs when they are detected.
wongarsu
The standard library is slowly adding non-panicking options. The article shows some of them (like vec.push_within_capacity()) and ignores some others (vec.get_unchecked()). There is still a lot of work to do, but it is an area where a lot of work gets done. The issue is just that a) Rust is still a fairly young language, barely a decade old counting from 1.0 release, and b) Rust is really slow and methodical in adding anything to stdlib because of how hard/impossible it is to reverse bad decisions in the stdlib.
The same article written a couple years in the future would look very different
swiftcoder
To be slightly pedantic, Vec::get() is the non-panicking version. Vec::get_unchecked() is just the version thereof that elides the bounds check.
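In code (illustrative):

fn main() {
    let v = vec![1, 2, 3];
    assert_eq!(v.get(10), None); // Vec::get is the non-panicking lookup
    // v[10] would panic; unsafe { *v.get_unchecked(10) } would skip the
    // bounds check entirely and be undefined behavior.
}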
hathawsh
What I would like to see is a reliable distinction of different types of panics. In the environments where software I write is typically run, panics due to heap allocation failure are generally acceptable and rarely an indication of fragility. (By the time a heap allocation failure occurs, the computer is probably already thrashing and needs to be rebooted.) On the other hand, other kinds of panics are a bad sign. For example, I would frown on any library that panics just because it can't reach the Internet.
In other environments, like embedded or safety-critical devices, I would need a guarantee that even heap allocation failure can not cause a panic.
dathinab
> Unrecoverable
panics are very much designed to be recoverable at some well defined boundaries (e.g. the request handler of a web server, a thread in a thread pool etc.)
this is where most of its overhead comes from
you can use the panic=abort setting to abort on panics, and there is a funny (but impractical) hack with which you can somewhat make sure that no not-dead-code-eliminated code path can hit a panic (you link the panic->abort handler against an invalid symbol)
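The abort setting lives in the consuming binary's profile (standard Cargo syntax):

[profile.release]
panic = "abort" # no unwinding machinery; a panic terminates the process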
nicce
I would say that you are using them incorrectly if you assume them as recoverable. You should make everything you can so that they never happen.
However, since it is still possible to have them in a place where exiting the process is not okay, it was beneficial to add a way to recover from them. It does not mean that they are designed to be recoverable.
> this is where most of it's overhead comes from
Overhead comes from the cleaning process. If you don't clean properly, you might leak information or allocate more resources than you should.
dathinab
> I would say that you are using them incorrectly if you assume them as recoverable.
no - them being recoverable at well-defined boundaries is a _fundamental_ design aspect of Rust
> Overhead comes from the cleaning process. If you don't clean properly, you might leak information or allocate more resources than you should.
and that same cleanup process makes it recoverable
nicce
Even the book uses the word "unrecoverable". The wording is communication. The intention is not to recover them, while it could be possible.
https://doc.rust-lang.org/book/ch09-01-unrecoverable-errors-...
sunshowers
There is a social norm to treat panics as unrecoverable (in most cases — some do use panics to perform cancellation in non-async code).
staunton
This website makes my browser freeze... No idea why. Not able to read the article.
haberman
Author here -- that is surprising. What browser/OS are you on? I haven't had anyone else report this problem before.
TallonRain
I’m seeing the same problem, the page crashes on Safari on iOS, saying a problem repeatedly occurred. Haven’t seen a webpage do that in quite a while.
faitswulff
Yep, same experience, same platform. I guess straight to reader mode, it is.
EDIT - shockingly, reader mode also fails completely after the page reloads itself
flohofwoe
Same here: Chrome on a Google Pixel 4a. Page freezes while scrolling down and eventually oh-snaps.
IX-103
I'm also seeing this on Android Chrome. When I opened the page on my Linux desktop, I also saw the crashes (though they only affected the godbolt iframes).
Note that on Android process separation is not usually as good, so a crashing iframe can bring down the whole page.
anymouse123456
same for me. Chrome on Pixel 8
arijun
This happened to me too (Safari on iOS).
Here’s a archived link:
https://web.archive.org/web/20250204050500/https://blog.reve...
DemetriousJones
Same, the web view in my Android client crashed after a couple seconds.
haberman
I wonder if it's all the Godbolt iframes. Do you have the same problem on other pages, like https://blog.reverberate.org/2025/01/27/an-ode-to-header-fil... ?
IX-103
Yeah, I think it's all those iframes. I'm seeing something weird on my Linux desktop - all the godbolt iframes crash on reload unless I have another tab with godbolt open. I didn't see anything obvious in Chrome's log.
I can't replicate the crash at all on my Linux cloud VM though. Usually the only difference there is that advertisers tend to not buy ads for clients on cloud IPs.
wavemode
Other pages on the site work fine for me yeah. But the OP blog post is crashing my Android browser, like the other commenters have mentioned.
DemetriousJones
Yeah no problem on other pages
nektro
OP sounds like they'd be very interested in Zig to tackle this particular problem. They'd get to a very similar place and not have to fight the language or the standard library to get there.
pedromsrocha
This blog post is very interesting, using Rust’s compiler optimizer as a theorem prover. This makes me wonder: are there any formal specifications on the complexity of this "optimizer as theorem prover"?
Specifically, how does it handle recursion? Consider, for example, the following function, which decrements a number until it reaches zero. At each step, it asserts that the number is nonzero before recursing:
fn recursive_countdown(n: u32) -> u32 {
    assert!(n > 0, "n should always be positive");
    if n == 1 {
        return 1;
    }
    recursive_countdown(n - 1)
}
Can the compiler prove that the assertion always holds and possibly optimize it away? Or does the presence of recursion introduce limitations in its ability to reason about the program?
jerf
> This makes me wonder: are there any formal specifications on the complexity of this "optimizer as theorem prover"?
Basically, the promise here is: "We formally promise not to promise anything other than the fact that optimized code should have 'broadly' the same effects and outputs as non-optimized code, and if you want to dig into exactly what 'broadly' means, prepare to spend a lot of time on it." Not only are there no promises about complexity, there are no promises that it will work the same on the same code in later versions, nor that any given optimization will continue firing the same way as you add code.
You can program this way. Another semi-common example is taking something like Javascript code and carefully twiddling with it such that a particular JIT will optimize it in a particular way, or if you're some combination of insane and lucky, multiple JITs (including multiple versions) will do some critical optimization. But it's the sort of programming I try very, very hard to avoid. It is a path of pain to depend on programming like this and there better be some darned good reason to start down that path which will, yes, forever dominate that code's destiny.
saagarjha
Compilers can typically reason fairly well about tail recursion like this. In this case the compiler cannot remove the assertion because you could pass in 0. But if you change the assert to be a >= 0 (which is vacuously true, as the warning indicates) it will optimize the code to "return 1" and eliminate the recursive call: https://godbolt.org/z/jad3Eh9Pf
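The modified function saagarjha describes would look like this; the `n >= 0` comparison is vacuously true for a u32 (hence the compiler warning), which is what lets the whole chain collapse to `return 1` in release builds:

fn recursive_countdown(n: u32) -> u32 {
    assert!(n >= 0, "n should always be positive"); // always true for u32
    if n == 1 {
        return 1;
    }
    recursive_countdown(n - 1)
}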
davisp
Does anyone know if there's an obvious reason that adding a `no_panic` crate attribute wouldn't be feasible? It certainly seems like an "obvious" thing to add so I'm hesitant to take the obvious nerd snipe bait.
hathawsh
The standard library has a significant amount of code that panics, so a `no_panic` crate attribute would currently only work for crates that don't depend on the standard library. I imagine most interesting crates depend on the standard library.
davisp
What caught my eye in the article was the desire to have something that doesn't panic with a release profile, while allowing for panics in dev profiles. Based on other comments I think the general "allow use of std, but don't panic" seems like something that could be useful purely on the "Wait, why doesn't that exist?" reactions.
7e
You could do it, but I would prefer guarantees on a per-call chain basis using a sanitizer. It should be quite easy to write.
davisp
I'm no rustc expert, but from what little I know it seems like disabling panics for a crate would be an obvious first step. You make a great point though. Turning that into a compiler assertion of "this function will never panic" would also be useful.
7e
It’s a good first step, but half of the crates in crates.io have at least 40 transitive dependencies. Some have hundreds or thousands. A big effort.
7e
It should be possible to write a sanitizer which verifies no panic behavior on a call graph, just as you can to verify no blocking, or no races.
alkonaut
Why worry about the code size if the code size is up to the library consumer (through their choice of panic handler)? If the consumer worries about code size, then their application has a minimal panic handler. If the consuming application does not have a minimal panic handler, then it must not worry about code size?
Is there some context I'm missing here? Is this to be used from non-Rust applications for example?
hkwerf
As the author mentions, `panic!` is also not an acceptable failure mode in some applications. If you're developing safety-critical software and a process stopping is part of your risk analysis, many frameworks will ask you about the frequency of that happening. In that analysis, you may be required to set all systematic contributions to that frequency to zero. This happens, for example, if you try to control the associated risk using redundancy. If there is a code path that may panic, you may not be able to do this at all as you maybe just cannot conclude that your code does not panic systematically.
alkonaut
Yes that condition I understand. But that seems orthogonal to the code size issue. Having no panics in code where the stdlib is riddled with panics for exceptional situations (allocation failure, for example) seems like a situation where you would just always go with no_std?
hkwerf
It is orthogonal, yes. To your question, I have an example from the same domain where it is reasonable to mix unwinding panics with code that never panics.
Typically, safety-related processes are set up in two phases. First they set up, then they indicate readiness and perform their safe operation. A robot, for example, may have some process checking the position of the robot against a virtual fence. If the probability of passing through that fence exceeds some limit, the process is required to engage the brakes. The fence will need to be loaded from a configuration, communication with the position sensors will need to be established, and the fence will generally need to be transformed into coordinates that can be guaranteed to be checked safely, taking momentum and today's braking performance into account, for example. The brakes themselves may need to be checked. All of that is fine to do in an unsafe state, with panics that don't just abort but unwind, and with full std. Then that process indicates readiness to the higher-level robot control process.
Once that readiness has been established, the software must be restricted to a much simpler set of functions. If libraries can guarantee that they won't call panic!, that's one item off our checklist that we can still use them in that state.
vollbrecht
Most people are using a prebuilt standard library. That comes with the problem that it ships with the features it was built with. Most of the bloat around panics, for example, can be eliminated by just compiling the std library yourself. This is done via the `-Zbuild-std` flag.
Using this flag, one can then use `panic_abort`. This eliminates the unwinding part but still gives a "nice" printout on the panic itself. This reduces, in most cases, the mentioned bloat by a lot. Though nice printouts also cost binary space; to eliminate those too, `panic_immediate_abort` exists.
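For reference, the usual nightly invocation looks something like this (the target triple is illustrative):

cargo +nightly build --release \
    -Z build-std=std,panic_abort \
    -Z build-std-features=panic_immediate_abort \
    --target x86_64-unknown-linux-gnu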
But yeah, the above is only about bloat, not the core goal of eliminating potential paths in your program that could lead to a panic condition in the first place.
Also, building the std library yourself currently requires a nightly compiler. There is AFAIK work on bringing this to the stable compiler, but exactly how is still a work in progress.