
Things Zig comptime won't do


251 comments · April 20, 2025

pron

Yes!

To me, the uniqueness of Zig's comptime is a combination of two things:

1. comptime replaces many other features that would be specialised in other languages with or without rich compile-time (or runtime) metaprogramming, and

2. comptime is referentially transparent [1], which makes it strictly "weaker" than AST macros but simpler to understand; what's surprising is just how much you can do with a comptime mechanism that has access to introspection yet lacks the referentially opaque power of macros.

These two give Zig a unique combination of simplicity and power. We're used to seeing things like that in Scheme and other Lisps, but the approach in Zig is very different. The outcome isn't as general as in Lisp, but it's powerful enough while keeping code easier to understand.

You can like it or not, but it is very interesting and very novel (the novelty isn't in the feature itself, but in the place it has in the language). Languages with a novel design and approach that you can learn in a couple of days are quite rare.

[1]: In short, this means that you get no access to names or expressions, only the values they yield.
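To make the first point concrete, here is a minimal illustrative Zig sketch (not from the article): a plain function that takes and returns `type` values is all the "generics" machinery there is.

    const std = @import("std");

    // An ordinary function over `type` values stands in for a dedicated
    // generics feature: calling it at comptime yields a new struct type.
    fn Pair(comptime T: type) type {
        return struct { first: T, second: T };
    }

    test "Pair(T) behaves like a generic type" {
        const P = Pair(u32);
        const p = P{ .first = 1, .second = 2 };
        try std.testing.expectEqual(@as(u32, 3), p.first + p.second);
    }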

paldepind2

I was a bit confused by the remark that comptime is referentially transparent. I'm familiar with the term as it's used in functional programming to mean that an expression can be replaced by its value (stemming from it having no side-effects). However, from a quick search I found an old related comment by you [1] that clarified this for me.

If I understand correctly you're using the term in a different (perhaps more correct/original?) sense where it roughly means that two expressions with the same meaning/denotation can be substituted for each other without changing the meaning/denotation of the surrounding program. This property is broken by macros. A macro in Rust, for instance, can distinguish between `1 + 1` and `2`. The comptime system in Zig in contrast does not break this property as it only allows one to inspect values and not un-evaluated ASTs.

[1]: https://news.ycombinator.com/item?id=36154447
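A tiny Zig sketch of that distinction (illustrative only): inside `double`, the two calls below are indistinguishable, because comptime code only ever sees the value 2, never the expression that produced it.

    const std = @import("std");

    fn double(comptime x: u32) u32 {
        return 2 * x;
    }

    comptime {
        // `double(1 + 1)` and `double(2)` cannot be told apart from inside
        // the function; only the resulting value is visible.
        std.debug.assert(double(1 + 1) == double(2));
    }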

pron

Yes, I am using the term more correctly (or at least more generally), although the way it's used in functional programming is a special case. A referentially transparent term is one whose sub-terms can be replaced by their references without changing the reference of the term as a whole. A functional programming language is simply one where all references are values or "objects" in the programming language itself.

The expression `i++` in C is not a value in C (although it is a "value" in some semantic descriptions of C), yet a C expression that contains `i++` cannot distinguish between `i++` and any other C operation that increments i by 1; it is referentially transparent, as is pretty much every C expression except those involving C macros.

Macros are not referentially transparent because they can distinguish between, say, a variable whose name is `foo` and is equal to 3 and a variable whose name is `bar` and is equal to 3. In other words, their outcome may differ not just by what is being referenced (3) but also by how it's referenced (`foo` or `bar`), hence they're referentially opaque.

deredede

Those are equivalent, I think. If you can replace an expression by its value, any two expressions with the same value are indistinguishable (and conversely a value is an expression which is its own value).

WalterBright

It's not novel. D pioneered compile time function execution (CTFE) back around 2007. The idea has since been adopted in many other languages, like C++.

One thing it is used for is generating string literals, which then can be fed to the compiler. This takes the place of macros.

CTFE is one of D's most popular and loved features.

pron

It is novel to the point of being revolutionary. As I wrote in my comment, "the novelty isn't in the feature itself, but in the place it has in the language". It's one thing to come up with a feature. It's a whole other thing to position it within the language. Various compile-time evaluations are not even remotely positioned in D, Nim, or C++ as they are in Zig. The point of Zig's comptime is not that it allows you to do certain computations at compile-time, but that it replaces more specialised features such as templates/generics, interfaces, macros, and conditional compilation. That creates a completely novel simplicity/power balance.

If the presence of features is how we judge design, then the product with the most features would be considered the best design. Of course, often the opposite is the case. The absence of features is just as crucial for design as their presence. It's like saying that a device with a touchscreen and a physical keyboard has essentially the same properties as a device with only a touchscreen.

If a language has a mechanism that can do exactly what Zig's comptime does but it also has generics or templates, macros, and/or conditional compilation, then it doesn't have anything resembling Zig's comptime.
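A minimal illustrative sketch of the conditional-compilation case (the function is made up): the condition is comptime-known, so the branch not taken for the target is never analyzed or emitted.

    const std = @import("std");
    const builtin = @import("builtin");

    pub fn pathSeparator() u8 {
        // Comptime-known condition: resolved per target at compile time.
        return if (builtin.os.tag == .windows) '\\' else '/';
    }

    test "separator matches the compilation target" {
        const expected: u8 = if (builtin.os.tag == .windows) '\\' else '/';
        try std.testing.expectEqual(expected, pathSeparator());
    }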

WalterBright

> Various compile-time evaluations are not even remotely positioned in D, Nim, or C++ as they are in Zig.

See my other reply. I don't understand your comment.

https://news.ycombinator.com/item?id=43748490

msteffen

If I understand TFA correctly, the author claims that D’s approach is actually different: https://matklad.github.io/2025/04/19/things-zig-comptime-won...

“In contrast, there’s absolutely no facility for dynamic source code generation in Zig. You just can’t do that, the feature isn’t! [sic]

Zig has a completely different feature, partial evaluation/specialization, which, none the less, is enough to cover most of use-cases for dynamic code generation.”

WalterBright

The partial evaluation/specialization is accomplished in D using a template. The example from the link:

    fn f(comptime x: u32, y: u32) u32 {
        if (x == 0) return y + 1;
        if (x == 1) return y * 2;
        return y;
    }
and in D:

    uint f(uint x)(uint y) {
        if (x == 0) return y + 1;
        if (x == 1) return y * 2;
        return y;
    }
The two parameter lists make it a function template: the first set of parameters are the template parameters, which are compile-time; the second set are the runtime parameters. The compile-time parameters can also be types and aliased symbols.
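For comparison, here are hypothetical call sites for the Zig version above: each comptime-known `x` gets its own specialized body, much as each template argument instantiates the D function template.

    const std = @import("std");

    fn f(comptime x: u32, y: u32) u32 {
        if (x == 0) return y + 1;
        if (x == 1) return y * 2;
        return y;
    }

    test "each comptime-known x yields a specialized body" {
        try std.testing.expectEqual(@as(u32, 11), f(0, 10)); // y + 1
        try std.testing.expectEqual(@as(u32, 20), f(1, 10)); // y * 2
        try std.testing.expectEqual(@as(u32, 10), f(2, 10)); // y
    }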

baazaa

that's a comically archaic way of using the verb 'to be', not a grammatical error. you see it in phrases like "to be or not to be", or "i think, therefore i am". "the feature isn't" just means it doesn't exist.

sixthDot

Sure, CTFE can be used to generate strings that are later "mixed in" as source code, but it can also be used to execute normal functions, with the result stored in a compile-time constant (in D that's the `enum` storage class), for example generating an array using a function literal called at compile time:

   import std.algorithm, std.array, std.range; // for map, array, iota
   enum arr = { return iota(5).map!(i => i * 10).array; }();
   static assert(arr == [0,10,20,30,40]);
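For comparison, a rough Zig analogue of the same idea (illustrative only, not from the comment): a container-level initializer is evaluated at compile time, so the array is computed once and baked into the binary.

    const std = @import("std");

    const arr = blk: {
        var a: [5]usize = undefined;
        for (&a, 0..) |*x, i| x.* = i * 10;
        break :blk a;
    };

    comptime {
        std.debug.assert(arr[0] == 0 and arr[4] == 40);
    }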

CRConrad

> the feature isn’t! [sic]

To be, or not to be... The feature is not.

(IOW, English may not be the author's native language. I'm fairly sure it means "The feature doesn't exist".)

az09mugen

A little bit out of context, I just want to thank you and all the contributors for the D programming language.

WalterBright

That means a lot to us. Thanks!

Someone

> D pioneered compile time function execution (CTFE) back around 2007

Pioneered? Forth had that in the 1970s, Lisp somewhere in the 1960s (I’m not sure whether the first versions of either had it, so I won’t say 1970 and 1960 respectively), and there may be other or even older examples.

WalterBright

True, but consider that Forth and Lisp started out as interpreted languages, meaning the whole thing can be done at compile time. I haven't seen this feature before in a language that was designed to be compiled to machine code, such as C, Pascal, Fortran, etc.

BTW, D's ImportC C compiler does CTFE, too!! CTFE is a natural fit for C, and works like a champ. Standard C should embrace it.

throwawaymaths

You're missing the point. If anything D is littered with features and feature bloat (CTFE included). Zig (as the author of the blog mentions) is more than somewhat defined by what it can't do.

WalterBright

I fully agree that the difference is a matter of taste.

All living languages accrete features over time. D started out as a much more modest language. It originally eschewed templates and operator overloading, for example.

Some features were abandoned, too, like complex numbers and the "bit" data type.

baranul

Comptime is often pushed as being something extraordinarily special, when it's not. Many other languages have something similar: Jai, Vlang, Dlang, etc.

What could be argued is whether Zig's version of it is comparatively better, but that is a very difficult argument to make. Not only in terms of how the different languages are used: something like an overall comparison of features looks to be needed in order to make any kind of convincing case, beyond hyping a particular feature.

cassepipe

You didn't read the article, because that's exactly the argument being made (whether or not you think these points have merit):

> My understanding is that Jai, for example, doesn’t do this, and runs comptime code on the host.

> Many powerful compile-time meta programming systems work by allowing you to inject arbitrary strings into compilation, sort of like #include whose argument is a shell-script that generates the text to include dynamically. For example, D mixins work that way:

> And Rust macros, while technically producing a token-tree rather than a string, are more or less the same

baranul

My comment was a reply to another reader, not to the article directly. The pushback was on the nature of their comment.

> the uniqueness of Zig's comptime... > You can like it or not, but it is very interesting and very novel...

While it's true that such features in Zig can be interesting, they are not particularly novel (as other highly knowledgeable readers have pointed out). Zig's comptime is often marketed or hyped as being special, while overlooking that other languages often do something similar but have their own perspectives and reasoning about how metaprogramming and that type of feature fit into their language. Not to mention, metaprogramming has its downsides too. It's not all roses.

The article does seek to make comparisons with other languages, but arguably out of context, as to what those languages are trying to achieve with their feature sets. Comptime should not be looked at in a bubble, but as part of the language as a whole.

A language creator with an interesting take on metaprogramming in general, is Ginger Bill (of Odin). Who often has enthusiasts attempt to pressure him into making more extensive use of it in his language, but he pushes back because of various problems it can cause, and has argued he often comes up with optimal solutions without it. There are different sides to the story, in regards to usage and goals, relative to the various languages being considered.

keybored

I’ve never managed to understand your year-long[1] manic praise over this feature. Given that you’re a language implementer.

It’s very cool to be able to just say “Y is just X”. You know in a museum. Or at a distance. Not necessarily as something you have to work with daily. Because I would rather take something ranging from Java’s interface to Haskell’s typeclasses since once implemented, they’ll just work. With comptime types, according to what I’ve read, you’ll have to bring your T to the comptime and find out right then and there if it will work. Without enough foresight it might not.

That’s not something I want. I just want generics or parametric polymorphism or whatever it is to work once it compiles. If there’s a <T> I want to slot in T without any surprises. And whether Y is just X is a very distant priority at that point. Another distant priority is if generics and whatever else is all just X undernea... I mean just let me use the language declaratively.

I felt like I was on the idealistic end of the spectrum when I saw you criticizing other languages that are not installed on 3 billion devices as too academic.[2] Now I’m not so sure?

[1] https://news.ycombinator.com/item?id=24292760

[2] But does Scala technically count since it’s on the JVM though?

pron

My "manic praise" extends to the novelty of the feature as Zig's design is revolutionary. It is exciting because it's very rare to see completely novel designs in programming languages, especially in a language that is both easy to learn and intended for low-level programming.

I wait 10-15 years before judging if a feature is "good"; determining that a feature is bad is usually quicker.

> With comptime types, according to what I’ve read, you’ll have to bring your T to the comptime and find out right then and there if it will work. Without enough foresight it might not.

But the point is that all that is done at compile time, which is also the time when all more specialised features are checked.

> That’s not something I want. I just want generics or parametric polymorphism or whatever it is to work once it compiles.

Again, everything is checked at compile-time. Once it compiles it will work just like generics.

> I mean just let me use the language declaratively.

That's fine and expected. I believe that most language preferences are aesthetic, and there have been few objective reasons to prefer some designs over others, and usually it's a matter of personal preference or "extra-linguistic" concerns, such as availability of developers and libraries, maturity, etc..

> Now I’m not so sure?

Personally, I wouldn't dream of using Zig or Rust for important software because they're so unproven. But I do find novel designs fascinating. Some even match my own aesthetic preferences.

keybored

> But the point is that all that is done at compile time, which is also the time when all more specialised features are checked.

> ...

> Again, everything is checked at compile-time. Once it compiles it will work just like generics.

No. When I use a library with a comptime type in Zig, my compile is not guaranteed to succeed, because my experience can depend on whether the library writer tested with the types (or compile-time input) that I am using.[1] That’s not a problem in Java or Haskell: if the library works for Mary it will work for John, no matter what the type-inputs are.

> That's fine and expected. I believe that most language preferences are aesthetic, and there have been few objective reasons to prefer some designs over others, and usually it's a matter of personal preference or "extra-linguistic" concerns, such as availability of developers and libraries, maturity, etc..

Please don’t retreat to aesthetics. What I brought up is a concrete and objective user experience tradeoff.

[1] based on https://strongly-typed-thoughts.net/blog/zig-2025#comptime-i...

hitekker

Do you have a source for "criticizing other languages not installed on 3 billion devices as too academic" ?

Without more context, this comment sounds like rehashing old (personal?) drama.

keybored

pron has been posting about programming languages for years and years, here, in public, for all to see. I guess reading them makes it personal? (We don’t know each other)

The usual persona is the hard-nosed pragmatist[1] who thinks language choice doesn’t matter and that PL preference is mostly about “programmer enjoyment”.

[1] https://news.ycombinator.com/item?id=16889706

Edit: The original claim might have been skewed. Due to occupation the PL discussions often end up being about Java related things, and the JVM language which is criticized has often been Scala specifically. Here he recommends Kotlin over Scala (not Java): https://news.ycombinator.com/item?id=9948798

keybored

> Because I would rather take something ranging from Java’s interface to Haskell’s typeclasses since once implemented, they’ll just work. With comptime types, according to what I’ve read, you’ll have to bring your T to the comptime and find out right then and there if it will work. Without enough foresight it might not.

This was perhaps a bad comparison and I should have compared e.g. Java generics to Zig’s comptime T.

ww520

I'm sorry, but I don't understand what you're complaining about regarding comptime. All the stuff you said you want to work (generics, parametric polymorphism, slotting in <T>, etc.) just works with comptime. People praise comptime because it's a simple mechanism that replaces what in other languages requires several separate language features. Comptime is very simple and natural to use. It can just flow with your day-to-day programming without much fuss.

keybored

comptime can’t outright replace many language features, because it chooses different tradeoffs to get where it wants. You get “one thing to rule them all” at the expense of less declarative use.

Which I already said in my original comment. But here’s a source that I didn’t find last time: https://strongly-typed-thoughts.net/blog/zig-2025#comptime-i...

Academics have thought about evaluating things at compile time (or any time) for decades. No, you can’t just slot in eval at a weird place that no one ever thought of (they did) and immediately solve a suite of problems that other languages use multiple discrete features for (there’s a reason they do that).

cannabis_sam

Regarding 2: how are comptime values restricted to total computations? Is it just the fact that the compiler actually finishes, or are there any restrictions on comptime evaluation?

mppm

Yes, comptime evaluation is restricted to a configurable number of back-branches. 1000 by default, I think.
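Roughly the shape of that escape hatch, mirroring the example in the language reference (the numbers here are arbitrary): without the quota bump, this many backward branches at comptime would be a compile error.

    test "raise the comptime back-branch quota" {
        comptime {
            @setEvalBranchQuota(100_000); // default limit is 1000
            var i: usize = 0;
            while (i < 10_000) : (i += 1) {}
        }
    }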

pron

They don't need to be restricted to total computation to be referentially transparent. Non-termination is also a reference.


User23

Has anyone grafted Zig style macros into Common Lisp?

pron

That wouldn't be very meaningful. The semantics of Zig's comptime is more like that of subroutines in a dynamic language - say, JavaScript functions - than that of macros. The point is that it's executed, and yields errors, at a different phase, i.e. compile time.
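A small illustrative sketch of that phase difference (the function is made up): an ordinary check becomes a compile error, rather than a runtime failure, because it runs at comptime.

    const std = @import("std");

    fn kibibytes(comptime n: u64) u64 {
        if (n == 0) @compileError("n must be nonzero");
        return n * 1024;
    }

    test "checked at compile time" {
        try std.testing.expectEqual(@as(u64, 4096), kibibytes(4));
        // `kibibytes(0)` would fail to compile rather than throw at runtime.
    }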

User23

You can do that, I think, with EVAL-WHEN.

Conscat

The Scopes language might be similar to what you're asking about. Its notion of "spices" which complement the "sugars" feature is a similar kind of constant evaluation. It's not a Common Lisp dialect, though, but it is sexp based.

toxik

Isn’t this kind of thing sort of the default thing in Lisp? Code is data so you can transform it.

fn-mote

There are no limitations on the transformations in lisp. That can make macros very hard to understand. And hard for later program transformers to deal with.

The innovation in Zig is the restrictions that limit the power of macros.

TinkersW

Lisp is so powerful, but without static types you can't even do basic stuff like overloading, and you have to invent a way to check the type (for custom types) so you can branch on it.

Zambyte

There isn't really as clear of a distinction between "runtime" and "compile time" in Lisp. The comptime keyword is essentially just the opposite of quote in Lisp. Instead of using comptime to say what should be evaluated early, you use quote to say what should be evaluated later. Adding comptime to Lisp would be weird (though obviously not impossible, because it's Lisp), because that is essentially the default for expressions.

aidenn0

Since we are specifically speaking about Common Lisp, there certainly is; see e.g. https://www.lispworks.com/documentation/HyperSpec/Body/s_eva...

Conscat

The truth of this varies between Lisp based languages.

bunderbunder

> Zig has a completely different feature, partial evaluation/specialization, which, none the less, is enough to cover most of use-cases for dynamic code generation.

These kinds of insights are what I love about Zig. Andrew Kelley just might be the patron saint of the KISS principle.

A long time ago I had an enlightenment experience where I was doing something clever with macros in F#, and it wasn't until I had more-or-less finished the whole thing that I realized I could implement it in a lot less (and more readable) code by doing some really basic stuff with partial application and higher order functions. And it would still be performant because the compiler would take care of the clever bits for me.

Not too long after that, macros largely disappeared from my Lisp code, too.

minetest2048

Fortunately it's not just you; in the Julia community there's a thread that discusses why you shouldn't use metaprogramming as a first solution, since multiple dispatch and higher-order functions are cleaner and faster: https://discourse.julialang.org/t/how-to-warn-new-users-away...

pyrolistical

What makes comptime really interesting is how fluid it is as you work.

At some point you realize you need type information, so you just add it to your func params.

That bubbles all the way up and you are done. Or you realize that in certain situations it is not possible to provide the type, and you need to solve an arch/design issue.

Zambyte

If the type that you're passing as an argument is the type of another argument, you can keep the API simpler by just using @TypeOf(arg) internally in the function instead.
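A small sketch of that suggestion (the `max` function is hypothetical): the type is inferred from the first argument instead of being threaded through as a separate `comptime T: type` parameter.

    const std = @import("std");

    fn max(a: anytype, b: @TypeOf(a)) @TypeOf(a) {
        return if (a > b) a else b;
    }

    test "type inferred from the first argument" {
        try std.testing.expectEqual(@as(u32, 7), max(@as(u32, 3), 7));
        try std.testing.expectEqual(@as(i8, -1), max(@as(i8, -5), -1));
    }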

ephaeton

zig's comptime has some (objectively: debatable? subjectively: definite) shortcomings that the zig community then overcomes with zig build, generating code as strings to be later @imported and compiled.

Practically, it's "zig build"-time eval. As such there's another 'comptime' stage with more freedom: unlimited run-time (no @setEvalBranchQuota), and it can do IO (DB schema, network lookups, etc.), but you lose the ability to generate zig types as values in the current compilation; instead you of course have the freedom to reduce and project from the target's compiled semantics back to input syntax, down to a string, to enter your future compilation context again.

Back in the day, when I had to glue perl and tcl together via C, passing strings of perl generated through tcl is what this whole thing reminds me of. Sure it works. I'm not happy about it. There's _another_ "macro" stage that you can't even see in your code (it's just @import).

The zig community bewilders me at times with their love for lashing themselves. The sort of discussions which new sort of self-harm they'd love to enforce on everybody is borderline disturbing.

bsder

> The zig community bewilders me at times with their love for lashing themselves. The sort of discussions which new sort of self-harm they'd love to enforce on everybody is borderline disturbing.

Personally, I find the idea that a compiler might be able to reach outside itself completely terrifying (Access the network or a database? Are you nuts?).

That should be 100% the job of a build system.

Now, you can certainly argue that generating a text file may or may not be the best way to reify the result back into the compiler. However, what the compiler gets and generates should be completely deterministic.

ephaeton

> Personally, I find the idea that a compiler might be able to reach outside itself completely terrifying (Access the network or a database? Are you nuts?).

What is "itself" here, please? Access a static 'external' source? Access a dynamically generated 'external' source? If that file is generated in the build system / build process as derived information, would you put it under version control? If not, are you as nuts as I am?

Some processes require sharp tools, and you can't always be afraid to handle one. If all you have is a blunt tool, well, you know how the saying goes for C++.

> However, what the compiler gets and generates should be completely deterministic.

The zig community treats 'zig build' as "the compile step", ergo what "the compiler" gets ultimately is decided "at compile, er, zig build time". What the compiler gets, i.e., what zig build generates within the same user-facing process, is not deterministic.

Why would it be? Generating an interface is something that you want to be part of a streamlined process. Appeasing C interfaces will move to a zig build-time multi-step process involving zig's 'translate-c', whose output you then import into your zig file. Do you think anybody is going to treat that output differently from what you'd get by doing this invisibly at comptime (which, btw, is what practically happens now)?

bsder

> The zig community treats 'zig build' as "the compile step", ergo what "the compiler" gets ultimately is decided "at compile, er, zig build time". What the compiler gets, i.e., what zig build generates within the same user-facing process, is not deterministic.

I know of no build system that is completely deterministic unless you go through the process of very explicitly pinning things. Whereas practically every compiler is deterministic (gcc, for example, would rebuild itself 3 times and compare the last two to make sure they were byte identical). Perhaps there needs to be "zigmeson" (work out and generate dependencies) and "zigninja" (just call compiler on static resources) to set things apart, but it doesn't change the fact that "zig build" dispatches to a "build system" and "zig"/"zig cc" dispatches to a "compiler".

> Appeasing C interfaces will be moving to a zig build-time multi-step process involving zig's 'translate-c' whose output you then import into your zig file. You think anybody is going to treat that output differently than from what you'd get from doing this invisibly at comptime (which, btw, is what practically happens now)?

That's a completely different issue, but it illustrates the problem perfectly.

The problem is that @cImport() can be called from two different modules on the same file. What about if there are three? What about if they need different versions? What happens when a previous @cImport modifies how that file translates? How do you do link-time optimization on that?

This is exactly why your compiler needs to run on static resources that have already been resolved. I'm fine with my build system calling a SAT solver to work out a Gordian Knot of dependencies. I am not fine with my compiler needing to do that resolution.

throwawaymaths

> What is "itself"

If I understand correctly the zig compiler is sandboxed to the local directory of the project's build file. Except for possibly c headers.

The builder and linker can reach out a bit.

forrestthewoods

> Personally, I find the idea that a compiler might be able to reach outside itself completely terrifying (Access the network or a database? Are you nuts?).

It’s not the compiler per se.

Let’s say you want a build system that is capable of generating code. Ok we can all agree that’s super common and not crazy.

Wouldn’t it be great if the code that generated Zig code were also written in Zig? Why should codegen code be written in some completely unrelated language? Why should developers have to learn a brand new language to do compile-time codegen? Why yes, Rust macros, I’m staring angrily at you!

eddythompson80

> Personally, I find the idea that a compiler might be able to reach outside itself completely terrifying (Access the network or a database? Are you nuts?).

Why though? F# has this feature called TypeProviders where you can emit types to the compiler. For example, you can do:

   type DbSchema = PostgresTypeProvider<"postgresql://postgres:...">
   type WikipediaArticle = WikipediaTypeProvider<"https://wikipedia.org/wiki/Hello">

and now you have a type that references that Article or that DB. You can treat it as if you had manually written all those types. You can fully inspect it in the IDE, debugger or logger. It's a full type that's autogenerated in a temp directory.

When I first saw it, I thought it was really strange. Then I thought about it a bit, played with it, and realized it was brilliant. Literally one of the smartest ideas ever: a first-class codegen framework. There were some limitations, but still.

After using it in a real project, you figure out why it didn't catch on. It's so close, but it's missing something; just one thing is out of place. The interaction is painful for anything that's not a file source, like CsvTypeProvider, or a public internet URL. It also creates this odd dependency in your code that can't be source-controlled or reproduced. There were hacks and workarounds, but nothing felt right for me.

It was, however, the best attempt at a statically typed language imitating Python or JavaScript scripting syntax, where you just put in a DB URI and start assuming types.

panzi

> Personally, I find the idea that a compiler might be able to reach outside itself completely terrifying (Access the network or a database? Are you nuts?).

Yeah, although so can build.rs or whatever you call in your Makefile. If something like cargo would have built-in sandboxing, that would be interesting.

jenadine

You can run cargo in a sandbox.

SleepyMyroslav

>Personally, I find the idea that a compiler might be able to reach outside itself completely terrifying (Access the network or a database? Are you nuts?).

In gamedev, code is a small part of the end product. "Data-driven" is the term if you want to look it up. Doing an optimization pass that partially evaluates data and code together as part of the build is normal. Code has a 'development version' that supports data modifications and a 'shipping version' that can assume the data is known.

The more traditional example of PGO+LTO is just another example how code can be specialized for existing data. I don't know a toolchain that survives change of PGO profiling data between builds without drastic changes in the resulting binary.

bsder

Is the PGO data not a static file which is then fed into the compiler? That still gives you a deterministic compiler, no?

naasking

> That should be 100% the job of a build system.

What is the primary difference between build system and compiler in your mind? Why not have the compiler know how to build things, and so compile-time codegen you want to put in the build system, happens during compilation?

bmacho

They are not advocating for IO in the compiler, but for everything else that other languages can do with macros: run commands at compile time, generate code, read code, modify code. It's proven to be very useful.

bsder

I'm going to make you defend that statement that they are "useful". I would counter that macros are "powerful".

However, "macros" are a disaster to debug in every language in which they appear. "comptime" sidesteps that because you can generally force it to run at runtime, where your normal debugging mechanisms work just fine (returning a type being an exception).

"Macros" generally impose extremely large cognitive overhead and making them hygienic has spawned the careers of countless CS professors. In addition, macros often impose significant compiler overhead (how many crates do Rust's proc-macros pull in?).

It is not at all clear that the full power of general macros is worth the downstream grief that they cause (I also hold this position for a lot of compiler optimizations, but that's a rant for a different day).

CRConrad

Personally, I find the idea of needing something called a "build system" completely terrifying.

fxtentacle

I actually like build-time code generation MUCH MORE than, let's say, run-time JVM bytecode patching. Using an ORM in Java is like playing with magic, you never know what works or how. Using an ORM with code generation is much nicer, suddenly my IDE can show me what each function does, I can debug them and reason about them.

jmull

You're complaining about generating code...

While I agree that's typically a bad idea, this seems to have nothing to do specifically with zig.

I get how you start with the idea that there's something deficient in zig's comptime causing this, but... what?

I also have some doubts about how commonly used free-form code generation is with zig.

rk06

I consider it a feature, as the comparable feature in C# requires me to dabble in MSBuild props and targets, which are very unfriendly. Moreover, this kind of support is what makes JS special and the JS ecosystem innovative.

User23

Learning XS (maybe with Swig?) was a great way to actually understand Perl.

Cloudef

The zig community cares about compilation speed. Unrestricted comptime would be quite disastrous for that.

ephaeton

I feel that's such a red herring.

You can set @setEvalBranchQuota essentially as high as you want, @embedFile an XML file, parse it at comptime and generate types based on that (BTDT). You can already slow down compilation as much as you want. Unrestricting the expressiveness of comptime has about as much to do with compile times as the current restricted version does, or as the perceived entanglement of zig build and build.zig does.
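For reference, a minimal sketch of that pattern, assuming a hypothetical `fields.txt` sitting next to the source file: the file is embedded and inspected entirely at compile time.

    const std = @import("std");

    const raw = @embedFile("fields.txt");

    comptime {
        @setEvalBranchQuota(100_000); // scanning a large file costs back-branches
        const line_count = std.mem.count(u8, raw, "\n");
        if (line_count == 0) @compileError("fields.txt appears to be empty");
    }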

The knife about unrestricted / restricted comptime cuts both ways. Have you considered no longer using comptime and instead generating strings for cacheable consumption of portable zig code, for all the currently supported comptime use-cases, right now? Why wouldn't you? What is it that you feel is more apt to be done at comptime? Can you accept that others see other use-cases that don't align with andrewrk's (current) vision? If I need to update a slow generation step at 'project build time', your 'compilation speed' argument tanks as well. It's the problem space that dictates the minimal/optimal solution, not the programming language designer's headspace.

pjmlp

It does share a lot of that with other communities like Odin, Go, Jai, ...

Don't really get it; it's a "let's go back to the old days because it is cool" kind of vibe.

Ironically none of this matters in the long term, as eventually LLMs will be producing binaries directly.

hiccuphippo

The quote in Spanish about a Norse god is from a story by Jorge Luis Borges, here's an English translation: https://biblioklept.org/2019/04/02/the-disk-a-very-short-sto...

kruuuder

If you have read the story and, like me, are still wondering which part of the story is the quote at the top of the post:

"It's Odin's Disc. It has only one side. Nothing else on Earth has only one side."

tines

A mobius strip does!

bibanez

A mobius strip made out of paper has 2 sides, the usual one and the edge.

_emacsomancer_

And in Spanish here: https://www.poeticous.com/borges/el-disco?locale=es

(Not having much Spanish, I at first thought "Odin's disco(teque)" and then "no, that doesn't make sense about sides", but then, surely primed by English "disco", thought "it must mean Odin's record/lp/album".)

wiml

Odin's records have no B-sides, because everything Odin writes is fire!

tialaramex

Back when things really had A and B sides, it was moderately common for big artists to release a "Double A" in which both titles were heavily promoted, e.g. Nirvana's "All Apologies" and "Rape Me" are a double A, the Beatles "Penny Lane" and "Strawberry Fields Forever" likewise.

Validark

The story is indeed very short, but hits hard. Odin reveals himself and his mystical disc that he states makes him king as long as he holds it. The Christian hermit (by circumstance) who had previously received him told him he didn't worship Him, that he worshiped Christ instead, and then murdered him for the disc in the hopes he could sell it for a bunch of money. He dumped Odin's body in the river and never found the disc. The man hated Odin to this day for not just handing over the disc to him.

I wonder if there's some message in here. As a modern American reader, if I believed the story was contemporary, I'd think it's making a point about Christianity substituting honor for destructive greed. That a descendant of the wolves of Odin would worship a Hebrew instead and kill him for a bit of money is quite sad, but I don't think it an inaccurate characterization. There's also the element of resentment towards Odin for not just handing over monetary blessings. That's sad to me as well. Part of me hopes that one day Odin isn't held in such contempt.

forrestthewoods

> When you execute code at compile time, on which machine does it execute? The natural answer is “on your machine”, but it is wrong!

I don’t understand this.

If I am cross-compiling a program is it not true that comptime code literally executes on my local host machine? Like, isn’t that literally the definition of “compile-time”?

If there is an endian architecture change I could see Zig choosing to emulate the target machine on the host machine.

This feels so wrong to me. HostPlatform and TargetPlatform can be different. That’s fine! Hiding the host platform seems wrong. Can someone explain why you want to hide this seemingly critical fact?

Don’t get me wrong, I’m 100% on board the cross-compile train. And Zig does it literally better than any other compiled language that I know. So what am I missing?

Or wait. I guess the key is that, unlike Jai, comptime Zig code does NOT run at compile time. It merely refers to things that are KNOWN at compile time? Wait that’s not right either. I’m confused.

int_19h

The point is that something like sizeof(pointer) should have the same value in comptime code that it has at runtime for a given app. Which, yes, means that the comptime interpreter emulates the target machine.

The reason is fairly simple: you want comptime code to be able to compute correct values for use at runtime. At the same time, there's zero benefit to exposing the host platform in comptime, because, well, what use case is there for knowing e.g. the size of a pointer on the arch the compiler happens to be running on?
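A tiny sketch of what that emulation means in practice (illustrative only): the value below is the target's pointer size, so cross-compiling from a 64-bit host to a 32-bit target makes it 4, not 8.

    const std = @import("std");

    const ptr_bytes = @sizeOf(usize); // evaluated against the *target*

    comptime {
        std.debug.assert(ptr_bytes == @sizeOf(*u8));
    }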

forrestthewoods

> Which, yes, means that the comptime interpreter emulates the target machine.

Reasonable if that’s how it works. I had absolutely no idea that Zig comptime worked this way!

> there's zero benefit to not hiding the host platform in comptime

I don’t think this is clear. It is possibly good to hide host platform given Zig’s more limited comptime capabilities.

However in my $DayJob an extremely common and painful source of issues is trying to hide host platform when it can not in fact be hidden.

int_19h

Can you give an example of a use case where you wouldn't want comptime behavior to match runtime, but instead expose host/target differences?

ww520

This is a very educational blog post. I knew ‘comptime for’ and ‘inline for’ were comptime related, but didn’t know the difference. The post explains the inline version only knows the length at comptime. I guess it’s for loop unrolling.

hansvm

The normal use case for `inline for` is when you have to close over something only known at compile time (like when iterating over the fields of a struct), but when your behavior depends on runtime information (like conditionally assigning data to those fields).
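A small illustrative sketch of that first use case (the `Flags` struct is made up): the field names are only known at comptime, hence `inline for`, while the field values are runtime data.

    const std = @import("std");

    fn countTrue(flags: anytype) usize {
        var n: usize = 0;
        // Field names are comptime-only; field values are runtime data.
        inline for (std.meta.fields(@TypeOf(flags))) |field| {
            if (@field(flags, field.name)) n += 1;
        }
        return n;
    }

    test "count set flags" {
        const Flags = struct { a: bool, b: bool, c: bool };
        try std.testing.expectEqual(@as(usize, 2), countTrue(Flags{ .a = true, .b = false, .c = true }));
    }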

Unrolling as a performance optimization is usually slightly different, typically working in batches rather than unrolling the entire thing, even when the length is known at compile time.

The docs suggest not using `inline` for performance without evidence it helps in your specific usage, largely because the bloated binary is likely to be slower unless you have a good reason to believe your case is special, and also because `inline` _removes_ optimization potential from the compiler rather than adding it (its inlining passes are very, very good, and despite having an extremely good grasp on which things should be inlined I rarely outperform the compiler -- I'm never worse, but the ability to not have to even think about it unless/until I get to the microoptimization phase of a project is liberating).

no_wizard

I like the Zig language and tooling. I do wish there was a safety mode that gives the same guarantees as Rust, but it’s a huge step above C/C++. I am also extremely impressed with the Zig compiler.

Perhaps the safety is the tradeoff with the comparative ease of using the language compared to Rust, but I’d love the best of both worlds if it were possible

ksec

> but I’d love the best of both worlds if it were possible

I am just going to quote what pcwalton said the other day, which perhaps answers your question.

>> I’d be much more excited about that promise [memory safety in Rust] if the compiler provided that safety, rather than asking the programmer to do an extraordinary amount of extra work to conform to syntactically enforced safety rules. Put the complexity in the compiler, dudes.

> That exists; it's called garbage collection.

> If you don't want the performance characteristics of garbage collection, something has to give. Either you sacrifice memory safety or you accept a more restrictive paradigm than GC'd languages give you. For some reason, programming language enthusiasts think that if you think really hard, every issue has some solution out there without any drawbacks at all just waiting to be found. But in fact, creating a system that has zero runtime overhead and unlimited aliasing with a mutable heap is as impossible as finding two even numbers whose sum is odd.

[1] https://news.ycombinator.com/item?id=43726315

the__alchemist

Maybe this is a bad place to ask, but: Those experienced in manual-memory langs: What in particular do you find cumbersome about the borrow system? I've hit some annoyances like when splitting up struct fields into params where more than one is mutable, but that's the only friction point that comes to mind.

I ask because I am obvious blind to other cases - that's what I'm curious about! I generally find the &s to be a net help even without mem safety ... They make it easier to reason about structure, and when things mutate.

dzaima

I imagine a large part is just how one is used to doing stuff. Not being forced to be explicit about mutability and lifetimes allows a bunch of neat stuff that does not translate well to Rust, even if the desired thing in question might not be hard to do in another way. (but that other way might involve more copies / indirections, which users of manually-memory langs would (perhaps rightfully, perhaps pointlessly) desire to avoid if possible, but Rust users might just be comfortable with)

This separation is also why it is basically impossible to make apples-to-apples comparisons between languages.

Messy things I've hit (from ~5KLoC of Rust; I'm a Rust beginner, I primarily do C) are: cyclical references; a large structure that needs efficient single-threaded mutation while referenced from multiple places (i.e. must use some form of cell) at first, but needs to be sharable multithreaded after all the mutating is done; self-referential structures are roughly impossible to move around (namely, an object holding &-s to objects allocated by a bump allocator, movable around as a pair, but that's not a thing (without libraries that I couldn't figure out at least)); and refactoring mutability/lifetimes is also rather messy.

sgeisenh

Lifetime annotations can be burdensome when trying to avoid extraneous copies and they feel contagious (when you add a lifetime annotation to a frequently used type, it bubbles out to anything that uses that type unless you're willing to use unsafe to extend lifetimes). The solutions to this problem (tracking indices instead of references) lose a lot of benefits that the borrow checker provides.

The aliasing rules in Rust are also pretty strict. There are plenty of single-threaded programs where I want to be able to occasionally read a piece of information through an immutable reference, but that information can be modified by a different piece of code. This usually indicates a design issue in your program but sometimes you just want to throw together some code to solve an immediate problem. The extra friction from the borrow checker makes it less attractive to use Rust for these kinds of programs.

rc00

> What in particular do you find cumbersome about the borrow system?

The refusal to accept code that the developer knows is correct, simply because it does not fit how the borrow checker wants to see it implemented. That kind of heavy-handed and opinionated supervision is overhead to productivity. (In recent times, others have taken to saying that Rust is less "fun.")

When the purpose of writing code is to solve a problem and not engage in some pedantic or academic exercise, there are much better tools for the job. There are also times when memory safety is not a paramount concern. That makes the overhead of Rust not only unnecessary but also unwelcome.

Starlevel004

Lifetimes add an impending sense of doom to writing any sort of deeply nested code. You get this deep without writing a lifetime... uh oh, this struct needs a reference, and now you need to add a generic parameter to everything everywhere you've ever written and it feels miserable. Doubly so when you've accidentally omitted a lifetime generic somewhere and it compiles now but then you do some refactoring and it won't work anymore and you need to go back and re-add the generic parameter everywhere.

skybrian

Yes, but I’m not hoping for that. I’m hoping for something like a scripting language with simpler lifetime annotations. Is Rust going to be the last popular language to be invented that explores that space? I hope not.

hyperbrainer

I was quite impressed with Austral[0], which uses linear types and avoids the whole Rust-like implementation in favour of a more easily understandable system, albeit a slightly more verbose one.

[0]https://borretti.me/article/introducing-austral

Ygg2

> Is Rust going to be the last popular language to be invented that explores that space? I hope not.

Seeing how most people hate the lifetime annotations, yes. For the foreseeable future.

People want unlimited freedom. Unlimited freedom rhymes with unlimited footguns.

Philpax

You may be interested in https://dada-lang.org/, which is not ready for public consumption, but is a language by one of Rust's designers that aims to be higher-level while still keeping much of the goodness from Rust.

spullara

With Java ZGC the performance aspect has been fixed (<1ms pause times and real world throughput improvement). Memory usage though will always be strictly worse with no obvious way to improve it without sacrificing the performance gained.

estebank

IMO the best chance Java has to close the gap on memory utilisation is Project Valhalla[1] which brings value types to the JVM, but the specifics will matter. If it requires backwards incompatible opt-in ceremony, the adoption in the Java ecosystem is going to be an uphill battle, so the wins will remain theoretical and be unrealised. If it is transparent, then it might reduce the memory pressure of Java applications overnight. Last I heard was that the project was ongoing, but production readiness remained far in the future. I hope they pull it off.

1: https://openjdk.org/projects/valhalla/

no_wizard

I have zero issue with needing runtime GC or equivalent like ARC.

My issue is with ergonomics and performance. In my experience with a range of languages, the most performant way of writing the code is not the way you would idiomatically write it. They make good performance more complicated than it should be.

This holds true to me for my work with Java, Python, C# and JavaScript.

What I suppose I’m looking for is a better compromise between having some form of managed runtime vs non managed

And yes, I’ve also tried Go, and its DX is its own type of pain for me. I should try it again now that it has generics.

neonsunset

Using spans, structs, object and array pools is considered fairly idiomatic C# if you care about performance (and many methods now default to just spans even outside that).

What kind of idiomatic or unidiomatic C# do you have in mind?

I’d say if you are okay with GC side effects, achieving good performance targets is way easier than if you care about P99/999.

xedrac

I like Zig as a replacement for C, but not C++, due to its lack of RAII. Rust on the other hand is a great replacement for C++. I see Zig as filling a small niche where handling allocation failure is paramount (very constrained embedded devices, etc.). Otherwise, I think you just get a lot more with Rust.

rastignack

Compile times and painful codebase refactoring are Rust’s main drawbacks for me, though.

It’s totally subjective but I find the language boring to use. For side projects I like having fun thus I picked zig.

To each his own of course.

nicce

> refactor codebase are rust’s main drawbacks

Hard disagree about refactoring. Rust is one of the few languages where you can actually do refactoring rather safely without having tons of tests that just exist to catch issues if code changes.

xmorse

Even better than RAII would be linear types, but it would require a borrow checker to track the lifetimes of objects. Then you would get a compiler error if you forget to call a .destroy() method

throwawaymaths

no you just need analysis with a dependent type system (which linear types are a subset of). it doesn't have to be in the compiler. there was a proof of concept here a few months ago:

https://news.ycombinator.com/item?id=42923829

https://news.ycombinator.com/item?id=43199265

throwawaymaths

in principle it should be doable, possibly not in the language/compiler itself, there was this POC a few months ago:

https://github.com/ityonemo/clr

hermanradtke

I wish for “strict” mode as well. My current thinking:

TypeScript is to JavaScript

as

Zig is to C

I am a huge TS fan.

rc00

Is Zig aiming to extend C or extinguish it? The embrace story is well-established at this point but the remainder is often unclear in the messaging from the community.

PaulRobinson

It's improved C.

C interop is very important, and very valuable. However, by removing undefined behaviours, replacing macros that do weird things with well thought-through comptime, and making sure that the zig compiler is also a c compiler, you get a nice balance across lots of factors.

It's a great language, I encourage people to dig into it.

dooglius

Zig is open source, so the analogy to Microsoft's EEE [0] seems misplaced.

[0] https://en.m.wikipedia.org/wiki/Embrace,_extend,_and_extingu...

yellowapple

The goal rather explicitly seems to be to extinguish it - the idea being that if you've got Zig, there should be no reason to need to write new code in C, because literally anything possible in C should be possible (and ideally done better) in Zig.

Whether that ends up happening is obviously yet to be seen; as it stands there are plenty of Zig codebases with C in the mix. The idea, though, is that there shouldn't be anything stopping a programmer from replacing that C with Zig, and the two languages only coexist for the purpose of allowing that replacement to be gradual.

pjmlp

Most of Zig's safety was already available in 1978's Modula-2, but apparently languages have to come in curly brackets for adoption.

chongli

> languages have to come in curly brackets for adoption

Python and Ruby are two very popular counterexamples.

pjmlp

Not really, Ruby has plenty of curly brackets, e.g. 5.times { puts "hello!" }.

In both cases, while it wasn't curly brackets that drove their adoption, it was unavoidable frameworks.

Most people only use Ruby when they have Rails projects, and what made Python originally interesting was Zope CMS.

And nowadays it's the AI/ML frameworks, which are actually written in C, C++ and Fortran, that keep Python relevant, because scientists decided on Python for their library bindings; it could just as well have been Tcl, as choices go.

So yeah, maybe not always curly brackets, but definitely something that makes the language unavoidable; sadly Modula-2 lacked that, an OS vendor pushing it no matter what, FAANG style.

karmakaze

> Zig’s comptime feature is most famous for what it can do: generics!, conditional compilation!, subtyping!, serialization!, ORM! That’s fascinating, but, to be fair, there’s a bunch of languages with quite powerful compile time evaluation capabilities that can do equivalent things.

I'm curious which other languages can do these things? I read HN regularly but don't recall them. Or maybe that's counting things like Java's annotation processing, which is so clunky that I wouldn't classify it as equivalent.

foobazgt

Yeah, I'm not a big fan of annotation processing either. It's simultaneously heavyweight and unwieldy, and yet doesn't do enough. You get all the annoyance of working with a full-blown AST, and none of the power that comes with being able to manipulate an AST.

Annotations themselves are pretty great, and AFAIK, they are most widely used with reflection or bytecode rewriting instead. I get that the maintainers dislike macro-like capabilities, but the reality is that many of the nice libraries/facilities Java has (e.g. transparent spans), just aren't possible without AST-like modifications. So, the maintainers don't provide 1st class support for rewriting, and they hold their noses as popular libraries do it.

Closely related, I'm pretty excited to muck with the new class file API that just went GA in 24 (https://openjdk.org/jeps/484). I don't have experience with it yet, but I have high hopes.

pron

Java's annotation processing is intentionally limited so that compiling with them cannot change the semantics of the Java language as defined by the Java Language Specification (JLS).

Note that more intrusive changes -- including not only bytecode-rewriting agents, but also the use of those AST-modifying "libraries" (really, languages) -- require command-line flags that tell you that the semantics of code may be impacted by some other code that is identified in those flags. This is part of "integrity by default": https://openjdk.org/jeps/8305968

foobazgt

Just because something mucks with a program's AST doesn't mean that it's introducing a new "language". You wouldn't call using reflection, "creating a new language", either, and many of these libraries can be implemented either way. (Usually a choice between adding an additional build step, runtime overhead, and ease of implementation). It just really depends upon the details of the transform.

The integrity by default JEPs are really about trying to reduce developers depending upon JDK/JRE implementation details, for example, sun.misc.Unsafe. From the JEP:

"In short: The use of JDK-internal APIs caused serious migration issues, there was no practical mechanism that enabled robust security in the current landscape, and new requirements could not be met. Despite the value that the unsafe APIs offer to libraries, frameworks, and tools, the ongoing lack of integrity is untenable. Strong encapsulation and the restriction of the unsafe APIs — by default — are the solution."

If you're dependent on something like ClassFileTransformer, -javaagent, or setAccessible, you'll just set a command-line flag. If you're not, it's because you're already doing this through other means like a custom ClassLoader or a build step.

awestroke

Rust, D, Nim, Crystal, Julia

elcritch

Definitely, you can do most of those things in Nim without macros using templates and compile time stuff. It’s preferable to macros when possible. Julia has fantastic compile time abilities as well.

It’s beautiful to implement an incredibly fast serde in like 10 lines without requiring other devs to annotate their packages.

I wouldn’t include Rust on that list if we’re speaking of compile time and compile time type abilities.

Last time I tried it Rust’s const expression system is pretty limited. Rust’s macro system likewise is also very weak.

Primarily you can only get type info by directly passing the type definition to a macro, which is how derive and all work.

tialaramex

Rust has two macro systems, the proc macros are allowed to do absolutely whatever they please because they're actually executing in the compiler.

Now, should they do anything they please? Definitely not, but they can. That's why there's a (serious) macro which runs your Python code, and a (joke, in the sense that you should never use it, not that it wouldn't work) macro which replaces your running compiler with a different one so that code which is otherwise invalid will compile anyway...

int_19h

> Rust’s macro system likewise is also very weak.

How so? Rust procedural macros operate on token stream level while being able to tap into the parser, so I struggle to think of what they can't do, aside from limitations on the syntax of the macro.


rurban

Perl BEGIN blocks

tmtvl

PPR + keyword::declare (shame that Damien didn't actually call it keyword::keyword).

ephaeton

well, the lisp family of languages surely can do all of that, and more. Check out, for example, clojure's version of zig's dropped 'async'. It's a macro.

paldepind2

This is honestly really cool! I've heard praises about Zig's comptime without really understanding what makes it tick. It initially sounds like Rust's constant evaluation which is not particularly capable. The ability to have types represented as values at compilation time, and _only_ at compile time, is clearly very powerful. It approximates dynamic languages or run-time reflection without any of the run-time overhead and without opening the Pandora's box that is full blown macros as in Lisp or Rust's procedural macros.
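A small sketch of that last point (the types here are hypothetical): the check below is duck typing, but it is resolved entirely at compile time, so no reflection machinery survives into the binary.

    const std = @import("std");

    fn describe(comptime T: type) []const u8 {
        if (@hasField(T, "id")) return "has an id field";
        return "no id field";
    }

    test "comptime duck typing" {
        const User = struct { id: u64, name: []const u8 };
        try std.testing.expectEqualStrings("has an id field", describe(User));
        try std.testing.expectEqualStrings("no id field", describe(struct { name: []const u8 }));
    }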
