
Resizable structs in Zig

50 comments

·July 26, 2025

mananaysiempre

> Zig does not, and will not, have VLAs in the language spec. Instead, you can allocate a slice on the heap. If you want to have the data on the stack, use an array as a bounded backing store, and work with a slice into it[.]

Too bad, aligned byte-typed VLAs (and a license to retype them as a struct) are what you need to get stack allocation across ABI boundaries the way Swift does it. (A long long time ago, SOM, IBM’s answer to Microsoft’s COM, did this in C with alloca instead of VLAs, but that’s the same thing.) I guess I’ll have to use something else instead.
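
For reference, the bounded-backing-store pattern the quoted guidance describes looks roughly like this (a minimal sketch; the size and names are illustrative, and real code would fall back to the heap or error out instead of clamping):

  fn demo(n: usize) void {
    // Fixed-size backing array on the stack: the compiler always reserves
    // buf.len bytes, so worst-case stack usage stays statically known.
    var buf: [256]u8 = undefined;
    // Work through a runtime-sized slice into it instead of a VLA.
    const used = buf[0..@min(n, buf.len)];
    @memset(used, 0);
  }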

AndyKelley

Note that not having runtime-known stack allocations is a key piece of the puzzle in Zig's upcoming async I/O strategy because it allows the compiler to calculate upper bound stack usage for a given function call.

At a fundamental level, runtime-known stack allocation harms code reusability.

Edit: commenters identified 2 more puzzle pieces below, but there's still one that didn't get asked about yet :P

travisgriggs

> Note that not having runtime-known stack allocations is a key piece of the puzzle in Zig's upcoming async I/O strategy because it allows the compiler to calculate upper bound stack usage for a given function call.

Sigh. So I have to give up something I think might be useful in exchange for something that too many languages have already soiled themselves with. Here's hoping Zig has a better solution, but I'm not optimistic.

Our stack compels me to work in Swift, Kotlin, Elixir, and Python. I use the async feature of Swift and Kotlin when some library forces me to. I actually preferred working with GCD before Swift had to join the async crowd. Elixir of course just has this problem solved already.

I frequently ask others who work in these languages how often they themselves reach for the async abilities of their languages, and the best I ever get from the more adventurous type is “I did a play thing to experiment with what I could do with it”.

dnautics

Re: Elixir, I have a feeling that Zig's I/O strategy will enable me to bring back the zig-async-dependent yielding NIFs in zigler. I'm really hopeful the io interface will have a yield() function; that would be even better!

https://www.youtube.com/watch?v=lDfjdGva3NE&t=1819s

AshamedCaptain

How does this work given... recursion?

Even in languages without VLAs, one can implement a simulacrum of them with recursion.

AndyKelley

All Zig code is in one compilation unit, so the compiler has access to the entire function call graph. Cycles in the graph (recursion) cause an error. To break cycles in the graph, one must use a language builtin to call a function using a different stack (probably obtained via heap allocation).

do_not_redeem

A comptime_int-bounded alloca would achieve those goals, plus would be more space-efficient on average than the current strategy of always pessimistically allocating for the worst case scenario.

  @alloca(T: type, count: usize, upper_bound_count: comptime_int)
with the added bonus that if `count` is small, you can avoid splitting the stack around a big chunk of unused bytes. Don't underestimate the importance of memory locality on modern CPUs.
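
To illustrate the idea (purely hypothetical syntax; no such builtin exists in Zig today), the compiler could budget worst-case stack usage from `upper_bound_count` while only carving out `count` elements at runtime:

  fn handle(n: usize) void {
    // Hypothetical: worst-case stack usage is bounded by 4096 bytes, but
    // only `n` bytes are actually reserved, so later locals stay adjacent
    // to the data that is actually used.
    const scratch = @alloca(u8, n, 4096);
    @memset(scratch, 0);
  }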

AndyKelley

2017: https://github.com/ziglang/zig/issues/225

When I had only been thinking about Zig for 2 years, I thought the same.

omnicognate

> there's still one that didn't get asked about yet :P

C libraries?

NobodyNada

Or function pointers (especially given that Zig's been moving towards encouraging vtables over static dispatch)?

mananaysiempre

> Note that not having runtime-known stack allocations is a key piece of the puzzle in Zig's upcoming async I/O strategy because it allows the compiler to calculate upper bound stack usage for a given function call.

That’s a genuinely interesting point. I don’t think known sizes for locals are a hard requirement here, though threading this needle in a lower-level fashion than Swift would need some subtle language design.

Fundamentally, what you want to do is construct an (inevitably) runtime-sized type (the coroutine) out of (by problem statement) runtime-sized pieces (the activation frames, themselves composed of individual, possibly runtime-sized locals). It's true that you can't then allow the activations to perform arbitrary allocas. You can, however, allow them to do allocas whose sizes (and alignments) are known at the time the coroutine is constructed, with some bookkeeping burden morally equivalent to maintaining a frame pointer, which seems fair. (In Swift terms, you can construct a generic type if you know what type arguments are passed to it.) And that's enough to have a local of a type of unknown size pulled in from a dynamic library, for example.

Again, I’m not sure how a language could express this constraint on allocas without being Swift (and hiding the whole thing from the user completely) or C (and forcing the user to maintain the frames by hand), so thank you for drawing my attention to this question. But I’m not ready to give up on it just yet.

> At a fundamental level, runtime-known stack allocation harms code reusability.

This is an assertion, not an argument, so it doesn’t really have any points I could respond to. I guess my view is this: there are programs that can be written with alloca and can’t be written without (unless you introduce a fully general allocator, which brings fragmentation problems, or a parallel stack, which is silly but was in fact used to implement alloca historically). One other example I can give in addition to locals of dynamically-linked types is a bytecode interpreter that allocates virtual frames on the host stack. So I guess that’s the other side of being opinionated—those whose opinions don’t match are turned away.

Frankly, I don’t even know why I’m defending alloca this hard. I’m not actually happy with the status quo of just yoloing a hopefully maybe sufficiently large stack. I guess the sticking point is that you seem to think alloca is obviously the wrong thing, when it’s not even close to obvious to me what the right thing is.

bobthebuilders

Alloca is a fundamentally insecure way of doing allocations. Languages that promote alloca will find themselves stuck in a morass of security messes and buffer overflows. If Zig were to adopt alloca, it would repeat the catastrophic mistake that plagued C for decades and introduce permanently unfixable security issues for another generation of programming languages.

Conscat

Does anything stop a user from doing this with inline assembly?

rvrb

I think there may be room to expand this implementation to support such a use case. Right now it enforces an `.auto` layout of the struct provided in order to ensure alignment, but it's easy to imagine supporting an `extern struct` with a defined layout.

Conceivably, an implementation of this `ResizableStruct` that uses an array buffer as backing rather than a heap allocation, and supports the defined layout of an extern struct, could be used to work across the ABI boundary.

throwawaymaths

You can certainly allocate on the stack (like alloca); you just have to overallocate a compile-time-known size and have some sort of fallback mechanism, or fail, if the requested size exceeds the amount reserved.

Moreover, since the stack allocator is just an allocator, you can use it with any std (or user) data structure that takes an allocator.
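
A minimal sketch of that pattern, using std.heap.FixedBufferAllocator over a stack buffer (the backing size here is an arbitrary choice):

  const std = @import("std");

  fn useScratch(n: usize) !void {
    // Compile-time-known upper bound lives on the stack.
    var backing: [4096]u8 = undefined;
    var fba = std.heap.FixedBufferAllocator.init(&backing);
    const a = fba.allocator();

    // Runtime-known size, bounded by the backing store: this returns
    // error.OutOfMemory instead of overflowing the stack.
    const scratch = try a.alloc(u8, n);
    defer a.free(scratch);
    @memset(scratch, 0);
  }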

h4ck_th3_pl4n3t

I wonder whether this is more of an unclearly defined memory-ownership problem than a problem of which types you have to use to interact with C ABIs or FFI calls.

I mean you could also just abstract the allocation away and handle it after the function pointer to your bridge, right?

atmikemikeb

I thought about dynamically sized types (DSTs) in Zig recently and was considering writing about it. I came to a different conclusion: why not use Zig's opaque?

It's pretty clean for this, imo: less metaprogramming, but I think it's nicer to use in some cases.

  const std = @import("std");

  const Connection = opaque {
    pub const Header = struct {
      host_len: usize,
      // add more static fields or dynamic field lengths here
      //buff_len: usize,
    };

    pub fn init(a: std.mem.Allocator, args: struct { host: []const u8 }) !*@This() {
      // Allocate header + payload in one block, aligned for the header.
      const bytes = try a.allocWithOptions(u8, @sizeOf(Header) + args.host.len, @alignOf(Header), null);
      const header: *Header = @ptrCast(bytes.ptr);
      header.* = .{ .host_len = args.host.len };
      @memcpy(bytes[@sizeOf(Header)..], args.host);
      return @ptrCast(bytes.ptr);
    }

    pub fn host(self: *const @This()) []const u8 {
      const bytes: [*]const u8 = @ptrCast(self);
      const header: *const Header = @ptrCast(@alignCast(self));
      return bytes[@sizeOf(Header) .. @sizeOf(Header) + header.host_len];
    }
  };
Going off memory here, so it may still need tweaks to actually compile, but I've definitely done something like this before.

rvrb

I would describe this approach as 'intrusive' - you're storing the lengths of the arrays behind the pointer, enforcing a certain layout of the memory being allocated.

Because the solution outlined in the article stores the lengths alongside the pointer, instead of behind it, there is room for it to work across an ABI (though it currently does not). It's more like a slice in this way.

You could in theory implement your opaque approach using this as a utility to avoid the headache of alignment calculations. For this reason, I think the approach outlined in the article is more suitable as a candidate for inclusion in the standard library.

atmikemikeb

Yeah, I think mine is more about being able to provide a `host()` helper function instead of a `.get(.host)` meta function. It is somewhat boilerplate-y; I think it's really a matter of taste haha. Yours would likely be useful regardless, if this is done a lot, since it abstracts some of it away, if one wants that.

rvrb

I've entertained further expanding this API to expose a comptime generated struct of pointers. From the Connection use-case detailed in the article, it would look something like this:

  pub fn getPtrs(self: Self) struct {
      client: *Client,
      host: []u8,
      read_buffer: []u8,
      write_buffer: []u8,
  } {
      return .{
          .client = self.get(.client),
          .host = self.get(.host),
          .read_buffer = self.get(.read_buffer),
          .write_buffer = self.get(.write_buffer),
      };
  }
I haven't done this because I'm not yet sure it's worth the added complexity.
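
For what it's worth, a call site would then read something like this (hypothetical usage, assuming the Connection example from the article):

  // `conn` is assumed to be the article's resizable Connection struct.
  const ptrs = conn.getPtrs();
  ptrs.read_buffer[0] = 0;
  ptrs.write_buffer[0] = ptrs.host[0];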

rvrb

I am the author of this post, let me know if you have any questions or feedback :)

azemetre

I sincerely mean this when I write this: please make more Girls on the Beach albums. I really love Splif Tape, reminds me of Julie Ruin during that timeframe as well (2015ish).

rvrb

oh, man! you made my day with this comment. those days are long behind me. we were young, dumb, and inebriated. if you're looking for the bands we were trying to sound like, there was a resurgence of this surf/girl group sound in the early 2010s. look for bands like Shannon and the Clams, Hunx and His Punx, Nobunny, Harlem, Ty Segall, Thee Oh Sees.. basically anyone on the now defunct Burger Records

konstantinua00

One thing I never understood about VLAs: discussion about them always hits "can't put it on the stack safely" and gets halted, forever.

Why not make them a heap-only type? It seems like such a useful addition to the type system; why ignore it because of one use case?

Out_of_Characte

Because arrays simply do not deal with fragmentation. Yes, you could probably get decent performance on a modern system with a memory-overcommit strategy, where you could allocate sparse address ranges and would probably never run out of pointers unless you actually wrote to your variable array.

But it's just kind of mediocre, and you're better off actually dealing with the stack if you can deal with certain fixed sizes.

konstantinua00

...what are you talking about?

Array-like storage with dynamic size has existed forever: it's a vector. Over- or under-committing is a solved problem.

VLAs are a way to bring that into the type system, so that such storage can be its own variable or struct member, with the compiler auto-magic-ing the size reads needed to access members placed after it.

uecker

You can also put them safely on the stack. The VLA discussion is just irrational.

ori_b

Those effectively exist. They're called slices.
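
In Zig terms, a heap-only "VLA" is just a runtime-sized allocation handed back as a slice; the length travels alongside the pointer (a minimal sketch):

  const std = @import("std");

  fn makeRow(gpa: std.mem.Allocator, n: usize) ![]u32 {
    // Runtime-sized, heap-backed storage; the caller frees it with gpa.free(row).
    const row = try gpa.alloc(u32, n);
    @memset(row, 0);
    return row;
  }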
