
Things Zig comptime won't do

85 comments · April 20, 2025

pron

Yes!

To me, the uniqueness of Zig's comptime is a combination of two things:

1. comptime replaces many features that are specialised constructs in other languages, whether or not those languages have rich compile-time (or runtime) metaprogramming (see the sketch at the end of this comment), and

2. comptime is referentially transparent [1], which makes it strictly "weaker" than AST macros but simpler to understand; what's surprising is just how much you can do with a comptime mechanism that has access to introspection yet lacks the referentially opaque power of macros.

These two give Zig a unique combination of simplicity and power. We're used to seeing things like that in Scheme and other Lisps, but the approach in Zig is very different. The outcome isn't as general as in Lisp, but it's powerful enough while keeping code easier to understand.

You can like it or not, but it is very interesting and very novel (the novelty isn't in the feature itself, but in the place it has in the language). Languages with a novel design and approach that you can learn in a couple of days are quite rare.

[1]: In short, this means that you get no access to names or expressions, only the values they yield.
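
To illustrate point 1, a minimal sketch: generics in Zig aren't a dedicated feature, just ordinary comptime functions from types to types (the `Pair` example is mine, not from the post).

```zig
// A function that takes types and returns a new type; this is all
// there is to Zig "generics".
fn Pair(comptime A: type, comptime B: type) type {
    return struct { first: A, second: B };
}

const P = Pair(u32, bool);
const p = P{ .first = 1, .second = true };
```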

paldepind2

I was a bit confused by the remark that comptime is referentially transparent. I'm familiar with the term as it's used in functional programming to mean that an expression can be replaced by its value (stemming from it having no side-effects). However, from a quick search I found an old related comment by you [1] that clarified this for me.

If I understand correctly, you're using the term in a different (perhaps more correct/original?) sense where it roughly means that two expressions with the same meaning/denotation can be substituted for each other without changing the meaning/denotation of the surrounding program. This property is broken by macros. A macro in Rust, for instance, can distinguish between `1 + 1` and `2`. The comptime system in Zig in contrast does not break this property, as it only allows one to inspect values and not un-evaluated ASTs.

[1]: https://news.ycombinator.com/item?id=36154447
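
To make that concrete, a minimal sketch (the `double` function is hypothetical):

```zig
// A comptime parameter receives only the argument's *value*; the
// expression that produced it is gone by the time `double` runs.
fn double(comptime x: u32) u32 {
    return x * 2;
}

const a = double(1 + 1); // sees the value 2
const b = double(2);     // indistinguishable from the call above
```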

pron

Yes, I am using the term more correctly (or at least more generally), although the way it's used in functional programming is a special case. A referentially transparent term is one whose sub-terms can be replaced by their references without changing the reference of the term as a whole. A functional programming language is simply one where all references are values or "objects" in the programming language itself.

The expression `i++` in C is not a value in C (although it is a "value" in some semantic descriptions of C). Yet a C expression that contains `i++`, and that cannot distinguish between `i++` and any other C operation that increments `i` by 1, is referentially transparent; that covers pretty much all C expressions except those involving C macros.

Macros are not referentially transparent because they can distinguish between, say, a variable whose name is `foo` and is equal to 3 and a variable whose name is `bar` and is equal to 3. In other words, their outcome may differ not just by what is being referenced (3) but also by how it's referenced (`foo` or `bar`), hence they're referentially opaque.

deredede

Those are equivalent, I think. If you can replace an expression by its value, any two expressions with the same value are indistinguishable (and conversely a value is an expression which is its own value).

cannabis_sam

Regarding 2: how are comptime values restricted to total computations? Is it just that the compiler actually finishes, or are there restrictions on comptime evaluation?

pron

They don't need to be restricted to total computation to be referentially transparent. Non-termination is also a reference.
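
For reference, Zig doesn't demand totality; it caps comptime evaluation with a branch quota that the programmer can raise explicitly. A minimal sketch:

```zig
comptime {
    // The default quota is 1000 backward branches; a long-running
    // comptime loop must raise it or compilation fails with an error.
    @setEvalBranchQuota(100_000);
    var i: usize = 0;
    while (i < 50_000) : (i += 1) {}
}
```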

User23

Has anyone grafted Zig style macros into Common Lisp?

Conscat

The Scopes language might be similar to what you're asking about. Its notion of "spices", which complement the "sugars" feature, is a similar kind of constant evaluation. It's not a Common Lisp dialect, though; it is sexp-based.

toxik

Isn’t this kind of thing sort of the default thing in Lisp? Code is data so you can transform it.

fn-mote

There are no limitations on the transformations in lisp. That can make macros very hard to understand. And hard for later program transformers to deal with.

The innovation in Zig is the restrictions that limit the power of macros.

Zambyte

There isn't really as clear of a distinction between "runtime" and "compile time" in Lisp. The comptime keyword is essentially just the opposite of quote in Lisp. Instead of using comptime to say what should be evaluated early, you use quote to say what should be evaluated later. Adding comptime to Lisp would be weird (though obviously not impossible, because it's Lisp), because that is essentially the default for expressions.
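
A minimal sketch of that mirror-image relationship, using a hypothetical `fib`:

```zig
const std = @import("std");

fn fib(n: u64) u64 {
    return if (n < 2) n else fib(n - 1) + fib(n - 2);
}

pub fn main() void {
    // Where Lisp's `quote` defers evaluation, `comptime` forces it early:
    // fib(10) runs during compilation and the binary only sees 55.
    const ten = comptime fib(10);
    std.debug.print("{}\n", .{ten});
}
```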

Conscat

The truth of this varies between Lisp based languages.

ephaeton

zig's comptime has some (objectively: debatable? subjectively: definite) shortcomings that the zig community then overcomes with zig build to generate code-as-strings that are later @imported and compiled.

Practically, "zig build"-time-eval. As such there's another 'comptime' stage with more freedom, unlimited run-time (no @setEvalBranchQuota), can do IO (DB schema, network lookups, etc.) but you lose the freedom to generate zig types as values in the current compilation; instead of that you of course have the freedom to reduce->project from target compiled semantic back to input syntax down to string to enter your future compilation context again.

Back in the day, when I had to glue Perl and Tcl together via C, passing generated Perl as strings through Tcl is what this whole thing reminds me of. Sure, it works. I'm not happy about it. There's _another_ "macro" stage that you can't even see in your code (it's just @import).

The zig community bewilders me at times with their love for lashing themselves. The discussions about which new sort of self-harm they'd love to enforce on everybody are borderline disturbing.

bsder

> The zig community bewilders me at times with their love for lashing themselves. The discussions about which new sort of self-harm they'd love to enforce on everybody are borderline disturbing.

Personally, I find the idea that a compiler might be able to reach outside itself completely terrifying (Access the network or a database? Are you nuts?).

That should be 100% the job of a build system.

Now, you can certainly argue that generating a text file may or may not be the best way to reify the result back into the compiler. However, what the compiler gets and generates should be completely deterministic.

ephaeton

> Personally, I find the idea that a compiler might be able to reach outside itself completely terrifying (Access the network or a database? Are you nuts?).

What is "itself" here, please? Access a static 'external' source? Access a dynamically generated 'external' source? If that file is generated in the build system / build process as derived information, would you put it under version control? If not, are you as nuts as I am?

Some processes require sharp tools, and you can't always be afraid to handle one. If all you have is a blunt tool, well, you know how the saying goes for C++.

> However, what the compiler gets and generates should be completely deterministic.

The zig community treats 'zig build' as "the compile step", ergo what "the compiler" gets ultimately is decided "at compile, er, zig build time". What the compiler gets, i.e., what zig build generates within the same user-facing process, is not deterministic.

Why would it be? Generating an interface is something that you want to be part of a streamlined process. Appeasing C interfaces will move to a zig build-time multi-step process involving Zig's 'translate-c', whose output you then import into your Zig file. Do you think anybody is going to treat that output differently from what you'd get by doing this invisibly at comptime (which, btw, is what practically happens now)?

bmacho

They are not advocating for IO in the compiler, but for everything else that other languages can do with macros: run commands at compile time, generate code, read code, modify code. It has proven to be very useful.

User23

Learning XS (maybe with Swig?) was a great way to actually understand Perl.

hiccuphippo

The quote in Spanish about a Norse god is from a story by Jorge Luis Borges, here's an English translation: https://biblioklept.org/2019/04/02/the-disk-a-very-short-sto...

_emacsomancer_

And in Spanish here: https://www.poeticous.com/borges/el-disco?locale=es

(Not having much Spanish, I at first thought "Odin's disco(teque)" and then "no, that doesn't make sense about sides", but then, surely primed by English "disco", thought "it must mean Odin's record/lp/album".)

wiml

Odin's records have no B-sides, because everything Odin writes is fire!

tialaramex

Back when things really had A and B sides, it was moderately common for big artists to release a "Double A" in which both titles were heavily promoted, e.g. Nirvana's "All Apologies" and "Rape Me" are a double A, the Beatles "Penny Lane" and "Strawberry Fields Forever" likewise.

kruuuder

If you have read the story and, like me, are still wondering which part of the story is the quote at the top of the post:

"It's Odin's Disc. It has only one side. Nothing else on Earth has only one side."

pyrolistical

What makes comptime really interesting is how fluid it is as you work.

At some point you realize you need type information, so you just add it to your func params.

That bubbles all the way up and you are done. Or you realize that in certain situations it is not possible to provide the type, and you need to solve an architecture/design issue.

Zambyte

If the type that you're passing as an argument is the type of another argument, you can keep the API simpler by just using @TypeOf(arg) internally in the function instead.
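
A minimal sketch (a hypothetical `max`):

```zig
// The second argument's type is inferred from the first, so callers
// never pass a type explicitly.
fn max(a: anytype, b: @TypeOf(a)) @TypeOf(a) {
    return if (a > b) a else b;
}

const m = max(@as(u32, 3), 5); // no `comptime T: type` parameter needed
```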

karmakaze

> Zig’s comptime feature is most famous for what it can do: generics!, conditional compilation!, subtyping!, serialization!, ORM! That’s fascinating, but, to be fair, there’s a bunch of languages with quite powerful compile time evaluation capabilities that can do equivalent things.

I'm curious: what are these other languages that can do these things? I read HN regularly but don't recall them. Or maybe that includes things like Java's annotation processing, which is so clunky that I wouldn't classify it as equivalent.

foobazgt

Yeah, I'm not a big fan of annotation processing either. It's simultaneously heavyweight and unwieldy, and yet doesn't do enough. You get all the annoyance of working with a full-blown AST, and none of the power that comes with being able to manipulate an AST.

Annotations themselves are pretty great, and AFAIK, they are most widely used with reflection or bytecode rewriting instead. I get that the maintainers dislike macro-like capabilities, but the reality is that many of the nice libraries/facilities Java has (e.g. transparent spans), just aren't possible without AST-like modifications. So, the maintainers don't provide 1st class support for rewriting, and they hold their noses as popular libraries do it.

Closely related, I'm pretty excited to muck with the new class file API that just went GA in 24 (https://openjdk.org/jeps/484). I don't have experience with it yet, but I have high hopes.

pron

Java's annotation processing is intentionally limited so that compiling with them cannot change the semantics of the Java language as defined by the Java Language Specification (JLS).

Note that more intrusive changes -- including not only bytecode-rewriting agents, but also the use of those AST-modifying "libraries" (really, languages) -- require command-line flags that tell you that the semantics of code may be impacted by some other code that is identified in those flags. This is part of "integrity by default": https://openjdk.org/jeps/8305968

awestroke

Rust, D, Nim, Crystal, Julia

elcritch

Definitely, you can do most of those things in Nim without macros, using templates and compile-time stuff. It's preferable to macros when possible. Julia has fantastic compile-time abilities as well.

It’s beautiful to implement an incredibly fast serde in like 10 lines without requiring other devs to annotate their packages.

I wouldn't include Rust on that list if we're speaking of compile-time evaluation and compile-time type abilities.

Last time I tried it, Rust's const expression system was pretty limited. Rust's macro system is likewise very weak.

Primarily, you can only get type info by directly passing the type definition to a macro, which is how derive and friends work.
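
For contrast, Zig can introspect any type from anywhere, with no opt-in at the definition site; a minimal sketch:

```zig
const std = @import("std");

// Works on any struct type, whether or not its author anticipated it.
fn fieldCount(comptime T: type) usize {
    return std.meta.fields(T).len;
}

const Point = struct { x: f32, y: f32 };
comptime {
    std.debug.assert(fieldCount(Point) == 2);
}
```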

tialaramex

Rust has two macro systems, the proc macros are allowed to do absolutely whatever they please because they're actually executing in the compiler.

Now, should they do anything they please? Definitely not, but they can. That's why there's a (serious) macro which runs your Python code, and a (joke, in the sense that you should never use it, not that it wouldn't work) macro which replaces your running compiler with a different one so that code which is otherwise invalid will compile anyway...

int_19h

> Rust's macro system is likewise very weak.

How so? Rust procedural macros operate on token stream level while being able to tap into the parser, so I struggle to think of what they can't do, aside from limitations on the syntax of the macro.

rurban

Perl BEGIN blocks

ephaeton

well, the lisp family of languages surely can do all of that, and more. Check out, for example, clojure's version of zig's dropped 'async'. It's a macro.

ww520

This is a very educational blog post. I knew ‘comptime for’ and ‘inline for’ were comptime-related, but didn't know the difference. The post explains that with the inline version only the length is known at comptime. I guess it's for loop unrolling.

hansvm

The normal use case for `inline for` is when you have to close over something only known at compile time (like when iterating over the fields of a struct), but when your behavior depends on runtime information (like conditionally assigning data to those fields).
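
A minimal sketch of that use case (a hypothetical `setAll`, assuming every field can hold `value`):

```zig
const std = @import("std");

// The loop unrolls over comptime-known fields, but the assigned value
// is plain runtime data.
fn setAll(ptr: anytype, value: anytype) void {
    inline for (std.meta.fields(@TypeOf(ptr.*))) |field| {
        @field(ptr.*, field.name) = value;
    }
}

test "setAll" {
    var v: struct { x: u32, y: u32 } = undefined;
    setAll(&v, @as(u32, 0));
    try std.testing.expect(v.x == 0 and v.y == 0);
}
```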

Unrolling as a performance optimization is usually slightly different, typically working in batches rather than unrolling the entire thing, even when the length is known at compile time.

The docs suggest not using `inline` for performance without evidence it helps in your specific usage, largely because the bloated binary is likely to be slower unless you have a good reason to believe your case is special, and also because `inline` _removes_ optimization potential from the compiler rather than adding it (its inlining passes are very, very good, and despite having an extremely good grasp on which things should be inlined I rarely outperform the compiler -- I'm never worse, but the ability to not have to even think about it unless/until I get to the microoptimization phase of a project is liberating).

no_wizard

I like the Zig language and tooling. I do wish there were a safety mode that gives the same guarantees as Rust, but it's a huge step above C/C++. I am also extremely impressed with the Zig compiler.

Perhaps safety is the tradeoff for the language's comparative ease of use next to Rust, but I'd love the best of both worlds if it were possible.

ksec

>but I’d love the best of both worlds if it were possible

I am just going to quote what pcwalton said the other day; perhaps it answers your question.

>> I’d be much more excited about that promise [memory safety in Rust] if the compiler provided that safety, rather than asking the programmer to do an extraordinary amount of extra work to conform to syntactically enforced safety rules. Put the complexity in the compiler, dudes.

> That exists; it's called garbage collection.

>If you don't want the performance characteristics of garbage collection, something has to give. Either you sacrifice memory safety or you accept a more restrictive paradigm than GC'd languages give you. For some reason, programming language enthusiasts think that if you think really hard, every issue has some solution out there without any drawbacks at all just waiting to be found. But in fact, creating a system that has zero runtime overhead and unlimited aliasing with a mutable heap is as impossible as finding two even numbers whose sum is odd.

[1] https://news.ycombinator.com/item?id=43726315

the__alchemist

Maybe this is a bad place to ask, but: Those experienced in manual-memory langs: What in particular do you find cumbersome about the borrow system? I've hit some annoyances like when splitting up struct fields into params where more than one is mutable, but that's the only friction point that comes to mind.

I ask because I am obviously blind to other cases - that's what I'm curious about! I generally find the &s to be a net help even without mem safety ... They make it easier to reason about structure, and about when things mutate.

sgeisenh

Lifetime annotations can be burdensome when trying to avoid extraneous copies and they feel contagious (when you add a lifetime annotation to a frequently used type, it bubbles out to anything that uses that type unless you're willing to use unsafe to extend lifetimes). The solutions to this problem (tracking indices instead of references) lose a lot of benefits that the borrow checker provides.

The aliasing rules in Rust are also pretty strict. There are plenty of single-threaded programs where I want to be able to occasionally read a piece of information through an immutable reference, but that information can be modified by a different piece of code. This usually indicates a design issue in your program but sometimes you just want to throw together some code to solve an immediate problem. The extra friction from the borrow checker makes it less attractive to use Rust for these kinds of programs.

rc00

> What in particular do you find cumbersome about the borrow system?

The refusal to accept code that the developer knows is correct, simply because it does not fit how the borrow checker wants to see it implemented. That kind of heavy-handed and opinionated supervision is overhead to productivity. (In recent times, others have taken to saying that Rust is less "fun.")

When the purpose of writing code is to solve a problem and not engage in some pedantic or academic exercise, there are much better tools for the job. There are also times when memory safety is not a paramount concern. That makes the overhead of Rust not only unnecessary but also unwelcome.

Starlevel004

Lifetimes add an impending sense of doom to writing any sort of deeply nested code. You get this deep without writing a lifetime... uh oh, this struct needs a reference, and now you need to add a generic parameter to everything everywhere you've ever written and it feels miserable. Doubly so when you've accidentally omitted a lifetime generic somewhere and it compiles now but then you do some refactoring and it won't work anymore and you need to go back and re-add the generic parameter everywhere.

skybrian

Yes, but I’m not hoping for that. I’m hoping for something like a scripting language with simpler lifetime annotations. Is Rust going to be the last popular language to be invented that explores that space? I hope not.

hyperbrainer

I was quite impressed with Austral [0], which uses linear types and avoids the whole Rust-like implementation in favour of a more easily understandable system, albeit a slightly more verbose one.

[0]: https://borretti.me/article/introducing-austral

Philpax

You may be interested in https://dada-lang.org/, which is not ready for public consumption, but is a language by one of Rust's designers that aims to be higher-level while still keeping much of the goodness from Rust.

Ygg2

> Is Rust going to be the last popular language to be invented that explores that space? I hope not.

Seeing how most people hate the lifetime annotations, yes. For the foreseeable future.

People want unlimited freedom. Unlimited freedom rhymes with unlimited footguns.

spullara

With Java ZGC the performance aspect has been fixed (<1ms pause times and real world throughput improvement). Memory usage though will always be strictly worse with no obvious way to improve it without sacrificing the performance gained.

xedrac

I like Zig as a replacement for C, but not C++, due to its lack of RAII. Rust on the other hand is a great replacement for C++. I see Zig as filling a small niche where handling allocation failures is paramount - very constrained embedded devices, etc. Otherwise, I think you just get a lot more with Rust.

rastignack

Compile times and a painful-to-refactor codebase are Rust's main drawbacks for me, though.

It’s totally subjective but I find the language boring to use. For side projects I like having fun thus I picked zig.

To each his own of course.

nicce

> refactor codebase are rust’s main drawbacks

Hard disagree about refactoring. Rust is one of the few languages where you can actually do refactoring rather safely without having tons of tests that just exist to catch issues if code changes.

xmorse

Even better than RAII would be linear types, though they would require a borrow checker to track the lifetimes of objects. Then you would get a compiler error if you forgot to call a .destroy() method.

throwawaymaths

no you just need analysis with a dependent type system (which linear types are a subset of). it doesn't have to be in the compiler. there was a proof of concept here a few months ago:

https://news.ycombinator.com/item?id=42923829

https://news.ycombinator.com/item?id=43199265

throwawaymaths

in principle it should be doable, possibly not in the language/compiler itself; there was this POC a few months ago:

https://github.com/ityonemo/clr

hermanradtke

I wish for “strict” mode as well. My current thinking:

TypeScript is to JavaScript

as

Zig is to C

I am a huge TS fan.

rc00

Is Zig aiming to extend C or extinguish it? The embrace story is well-established at this point but the remainder is often unclear in the messaging from the community.

PaulRobinson

It's improved C.

C interop is very important, and very valuable. However, by removing undefined behaviours, replacing macros that do weird things with well-thought-through comptime, and making sure that the Zig compiler is also a C compiler, you get a nice balance across lots of factors.

It's a great language, I encourage people to dig into it.

yellowapple

The goal rather explicitly seems to be to extinguish it - the idea being that if you've got Zig, there should be no reason to need to write new code in C, because literally anything possible in C should be possible (and ideally done better) in Zig.

Whether that ends up happening is obviously yet to be seen; as it stands there are plenty of Zig codebases with C in the mix. The idea, though, is that there shouldn't be anything stopping a programmer from replacing that C with Zig, and the two languages only coexist for the purpose of allowing that replacement to be gradual.
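
A minimal sketch of why gradual coexistence is cheap (assumes libc is linked):

```zig
// Zig consumes C headers directly, so replacement can proceed one
// file at a time while the rest stays C.
const c = @cImport({
    @cInclude("stdio.h");
});

pub fn main() void {
    _ = c.printf("still C under the hood\n");
}
```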

dooglius

Zig is open source, so the analogy to Microsoft's EEE [0] seems misplaced.

[0] https://en.m.wikipedia.org/wiki/Embrace,_extend,_and_extingu...

paldepind2

This is honestly really cool! I've heard praise for Zig's comptime without really understanding what makes it tick. It initially sounds like Rust's constant evaluation, which is not particularly capable. The ability to have types represented as values at compilation time, and _only_ at compile time, is clearly very powerful. It approximates dynamic languages or run-time reflection without any of the run-time overhead, and without opening the Pandora's box that is full-blown macros, as in Lisp or Rust's procedural macros.
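
A minimal sketch of that reflection-without-overhead flavor (a hypothetical `printFields`):

```zig
const std = @import("std");

// Resolved entirely at compile time: the emitted code is the same as
// writing one print statement per field by hand.
fn printFields(value: anytype) void {
    inline for (std.meta.fields(@TypeOf(value))) |f| {
        std.debug.print("{s} = {any}\n", .{ f.name, @field(value, f.name) });
    }
}

pub fn main() void {
    printFields(.{ .x = 1, .y = true });
}
```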

forrestthewoods

> When you execute code at compile time, on which machine does it execute? The natural answer is “on your machine”, but it is wrong!

I don’t understand this.

If I am cross-compiling a program is it not true that comptime code literally executes on my local host machine? Like, isn’t that literally the definition of “compile-time”?

If there is an endian architecture change I could see Zig choosing to emulate the target machine on the host machine.

This feels so wrong to me. HostPlatform and TargetPlatform can be different. That's fine! Hiding the host platform seems wrong. Can someone explain why you would want to hide this seemingly critical fact?

Don’t get me wrong, I’m 100% on board the cross-compile train. And Zig does it literally better than any other compiled language that I know. So what am I missing?

Or wait. I guess the key is that, unlike Jai, comptime Zig code does NOT run at compile time. It merely refers to things that are KNOWN at compile time? Wait that’s not right either. I’m confused.

int_19h

The point is that something like sizeof(pointer) should have the same value in comptime code that it has at runtime for a given app. Which, yes, means that the comptime interpreter emulates the target machine.

The reason is fairly simple: you want comptime code to be able to compute correct values for use at runtime. At the same time, there's zero benefit to exposing the host platform in comptime, because, well, what use case is there for knowing, e.g., the size of a pointer on the arch on which the compiler is running?
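
A minimal sketch of what that emulation means in practice:

```zig
const std = @import("std");

// comptime sees the *target*, not the host: cross-compiling this for a
// 32-bit target (e.g. `-target arm-linux-gnueabihf`) makes it 4, even
// when the compiler itself runs on a 64-bit machine.
const ptr_bytes = @sizeOf(*u8);
const max_usize = std.math.maxInt(usize); // likewise target-dependent
```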
