Indices, not Pointers

57 comments · September 2, 2025

10000truths

There are a couple other advantages that are unstated in the article, yet very important from a software design perspective:

1) Indices are a lot more portable across different environments than pointers. They can be serialized to disk and/or sent over the network, along with the data structure they refer to. Pointers can't even be shared between different processes, since they're local to an address space by design.

2) Indices enable relocation, but pointers restrict it. A struct that stores a pointer to itself cannot be trivially moved/copied, but a struct containing an integer offset into itself can. A live pointer to an object pool element prevents the object pool from being safely moved around in memory, but an index into the object pool does not impose such restriction.
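A minimal Rust sketch of these two points (the types and field names are illustrative, not from the article): because the links are pool-relative integers rather than addresses, the backing storage can be relocated, copied, or serialized without fixing anything up.

```rust
// A node pool linked by u32 indices instead of pointers. Because the links
// are pool-relative integers, the whole Vec can be reallocated, cloned,
// written to disk, or handed to another process without any fix-ups.
#[derive(Clone, Copy, Debug)]
struct Node {
    value: i32,
    next: Option<u32>, // index of the next node in `nodes`, not an address
}

fn main() {
    let mut nodes: Vec<Node> = Vec::new();
    nodes.push(Node { value: 1, next: Some(1) });
    nodes.push(Node { value: 2, next: None });

    // Relocation: growing or copying the backing storage moves every node,
    // but the index-based links remain valid as-is.
    let relocated = nodes.clone();
    assert_eq!(relocated[relocated[0].next.unwrap() as usize].value, 2);
}
```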

hinkley

Someone regaled me with a story of the distributed computing system on the NeXT machines that used 64-bit pointers, where the upper bytes were the machine address and the lower bytes the memory address on that machine.

vanderZwan

The second point is implicitly present in the example given at the end of the "Less Allocation Overhead" section. Copying all nodes from one backing arraylist to a larger one requires the possibility of relocation.

cma

Data memory-dependent prefetchers, like those in Apple's newer chips, I think only work with full pointers and not offsets, so it could be a decent perf hit.

whstl

Still depends. If the indices are pointing to a dense, flat, contiguous array, it will still be faster than following pointers into scattered heap allocations, with or without prefetching, because of how CPU caching works.

astrange

Indices can be much smaller than pointers (which are 8 bytes), so they have plenty of cache advantages of their own which can make up for that.

o11c

Some minor disadvantages:

* Indices are likely to increase register pressure slightly, as unoptimized code must keep around the base as well (and you can't assume optimization will happen). In many cases the base is stored in a struct so you'll also have to pay for an extra load stall.

* With indices, you're likely to give up on type safety unless your language supports zero-overhead types and you bother to define and use appropriate wrappers. Note in particular that "difference between two indices" should be a different type than "index", just like for pointers.
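For illustration, a minimal Rust sketch of such zero-overhead wrappers (all names are made up for this example), including a separate type for the difference between two indices:

```rust
// Zero-cost typed indices: a NodeIndex can only index the node pool, and
// subtracting two NodeIndex values yields a NodeOffset, mirroring the
// pointer / ptrdiff_t distinction.
#[derive(Clone, Copy, PartialEq, Eq, Debug)]
struct NodeIndex(u32);

#[derive(Clone, Copy, PartialEq, Eq, Debug)]
struct NodeOffset(i64);

impl std::ops::Sub for NodeIndex {
    type Output = NodeOffset;
    fn sub(self, rhs: NodeIndex) -> NodeOffset {
        NodeOffset(i64::from(self.0) - i64::from(rhs.0))
    }
}

struct NodePool<T> {
    items: Vec<T>,
}

impl<T> std::ops::Index<NodeIndex> for NodePool<T> {
    type Output = T;
    fn index(&self, idx: NodeIndex) -> &T {
        &self.items[idx.0 as usize]
    }
}
```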

IshKebab

On the latter point, I always use this in Rust: https://github.com/zheland/typed-index-collections

skulk

This is a very tempting and commonly used strategy in Rust to bypass the borrow checker. I've used it to implement tries/DFAs with great success (though I can't find the code anymore)
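As an illustration of the pattern (not the commenter's actual code), a minimal Rust sketch of a trie whose nodes live in a flat arena and link to each other by index:

```rust
// A tiny trie over lowercase ASCII stored as a flat Vec of nodes linked by
// u32 indices. Nodes never hold references into the arena, only integers,
// so the borrow checker has nothing to object to.
struct TrieNode {
    children: [Option<u32>; 26],
    is_word: bool,
}

struct Trie {
    nodes: Vec<TrieNode>, // nodes[0] is the root
}

impl Trie {
    fn new() -> Self {
        Trie { nodes: vec![TrieNode { children: [None; 26], is_word: false }] }
    }

    fn insert(&mut self, word: &str) {
        let mut cur = 0usize;
        for b in word.bytes() {
            let c = (b - b'a') as usize;
            let existing = self.nodes[cur].children[c]; // copy out the Option<u32>
            cur = match existing {
                Some(next) => next as usize,
                None => {
                    // Allocate a new node at the end of the arena.
                    let next = self.nodes.len() as u32;
                    self.nodes.push(TrieNode { children: [None; 26], is_word: false });
                    self.nodes[cur].children[c] = Some(next);
                    next as usize
                }
            };
        }
        self.nodes[cur].is_word = true;
    }

    fn contains(&self, word: &str) -> bool {
        let mut cur = 0usize;
        for b in word.bytes() {
            match self.nodes[cur].children[(b - b'a') as usize] {
                Some(next) => cur = next as usize,
                None => return false,
            }
        }
        self.nodes[cur].is_word
    }
}
```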

Animats

The trouble is, you've just replicated the problems of raw pointers. You can have dangling indices if the underlying object is reused or cleared. You can have aliasing, with two indices to the same object.

It's a problem in practice. Of the three times I've ever had to use a debugger on Rust code, two came from code someone had written to do their own index allocation. They'd created a race condition that Rust would ordinarily prevent.

IshKebab

You don't replicate all the problems of raw pointers. You can't have type confusion or undefined behaviour. It's totally memory safe. That's a pretty huge difference.

But I agree, it does give up some of the benefits of using native references.

skeezyboy

> This is a very tempting and commonly used strategy in Rust to bypass the borrow checker.

Are you even allowed to publicly disparage the borrow checker like that?

bombela

You don't bypass the borrow checker. Instead you use it the way it wants to be used.

lmm

Often you are bypassing it. E.g. if you rebalance a tree then references into that tree may now be invalid, so the borrow checker will prevent you from doing that while you have live references. But you may also invalidate indices into the tree, and the borrow checker can't help you with that.

skeezyboy

> and the borrow checker can't help you with that.

good thing you've got a brain in your nut then

account42

The title is "Indices, not Pointers" but the main advantages (except size) actually come from using an arena allocator, which is implied by using indices but can also be used without them if for some reason you need/want pointers.

pjmlp

This is an old-school technique; anyone who has been coding since the days when writing business applications in Assembly was common knows this kind of stuff.

Great that in the days of Electron garbage, this kind of stuff gets rediscovered.

kitd

It's also the basis for efficient memory usage with the ECS pattern used extensively by game engines.

skeezyboy

I've seen ECS systems based on pointers only.

zahlman

> There is a pattern I’ve learned while using Zig which I’ve never seen used in any other language.

I've done this in small applications in C (where nodes were already being statically allocated) and/or assembly (hacking on an existing binary).

No idea about the effect on speed in general; I was trying to save a few bytes of storage in a place where that mattered.

adrian_b

While the author has seen this pattern in Zig, this pattern was the normal way of writing programs in FORTRAN, decades before the appearance of the C language.

The early versions of FORTRAN did not have dynamic memory allocation. Therefore the main program pre-allocated one or more work arrays, which were either known globally or they were passed as arguments to all procedures.

Then wherever a C program might use malloc, an item would be allocated in a work array and the references between data structures would use the indices of the allocated items. Items could be freed as described in TFA, by putting them in a free list.
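Translated into a modern language, the work-array-plus-free-list pattern looks roughly like this minimal Rust sketch (the names are illustrative):

```rust
// A pool that hands out u32 indices instead of pointers. Freed slots are
// threaded into a free list by storing the index of the next free slot.
enum Slot<T> {
    Occupied(T),
    Free { next_free: Option<u32> },
}

struct Pool<T> {
    slots: Vec<Slot<T>>,
    free_head: Option<u32>, // head of the free list, if any slot is free
}

impl<T> Pool<T> {
    fn new() -> Self {
        Pool { slots: Vec::new(), free_head: None }
    }

    fn alloc(&mut self, value: T) -> u32 {
        if let Some(i) = self.free_head {
            // Reuse a previously freed slot and pop it off the free list.
            if let Slot::Free { next_free } = &self.slots[i as usize] {
                self.free_head = *next_free;
            }
            self.slots[i as usize] = Slot::Occupied(value);
            i
        } else {
            self.slots.push(Slot::Occupied(value));
            (self.slots.len() - 1) as u32
        }
    }

    fn free(&mut self, i: u32) {
        // Push the slot onto the free list; its index stays stable.
        self.slots[i as usize] = Slot::Free { next_free: self.free_head };
        self.free_head = Some(i);
    }
}
```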

The use of the data items allocated in work arrays in FORTRAN was made easier by the fact that the language allowed the aliasing of any chunk of memory to a variable of any type, either a scalar or an array of any rank and dimensions.

So this suggestion just recommends the return to the old ways. Despite its limitations, when maximum speed is desired, FORTRAN remains unbeatable by any of its successors.

versteegen

> the language allowed the aliasing of any chunk of memory to a variable of any type, either a scalar or an array of any rank and dimensions.

Wait a minute, I've seen it stated many times that a primary reason FORTRAN can be better optimised than C is that it doesn't allow aliasing memory as easily as C does (what that means, maybe you can say), and that's why 'restrict' was added to C. On the other hand, C's "strict aliasing rule" allows compilers to assume that pointers of different types don't alias the same memory, which allows optimisations.

adrian_b

FORTRAN does not allow implicit aliasing between distinct function/procedure arguments, which helps optimization.

It allows explicit aliasing using the EQUIVALENCE statement, which declares that 2 variables of arbitrary types and names are allocated at the same memory address.

The C language has taken the keyword "union" from the language ALGOL 68, but instead of implementing true unions like in the original language (i.e. with tags handled by the compiler and with type safety) it has implemented a version of the FORTRAN EQUIVALENCE, which however is also weaker and less convenient than the FORTRAN declaration (unlike C's union, FORTRAN's EQUIVALENCE also worked for aliasing parts of bigger data structures, e.g. sub-arrays).

shakow

Not the same aliasing.

GP is using aliasing as a synonym for casting; the aliasing you're thinking of is the one where, in C, pointer function arguments can refer to identical or overlapping memory spans.

physicsguy

I just came here to say exactly the same. I've also seen it used in C/C++ for Fast Multipole Method / Barnes-Hut codes; I don't think this is a forgotten trick at all.

throwawaymaths

> There is a pattern I’ve learned while using Zig which I’ve never seen used in any other language.

yeah, i feel like it's low key ECS (minus object/slot polymorphism)

anonymousiam

But in C, there's not really any difference between pointers and indices. You can access array elements using either method, and they're both equally efficient.

kazinator

There are some differences.

- You can check indices that are out of bounds without running into formal undefined behavior. ISO C does not require pointers to distinct objects to be comparable via inequality, only exact equality. (In practice it works fine in any flat-address-space implementation and may be regarded as a common extension.)

- Indices are implicitly scaled. If you have done a range check that an index is valid, then it refers to an entry in your array. At worst it is some unoccupied/free entry that the caller shouldn't be using. If you have checked that a pointer points into the array, you don't know that it's valid; you also have to check that its displacement from the base of the array is a multiple of the element size, i.e. that it is aligned.

cyber_kinetist

You do have to take care of the ABA problem - if you access memory using an index that was invalidated earlier and whose slot another object is now using, you will get weird, hard-to-debug logic errors (worse than use-after-free, since even Valgrind can't save you). To prevent this you need a generational counter stored alongside your id (either incremented on every reuse or assigned a random hash).
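A minimal Rust sketch of the generational-counter idea (names are illustrative): the handle carries both the slot index and the generation it was issued with, and a lookup fails once the slot has been reused.

```rust
// Generational handles: each slot carries a generation counter that is bumped
// on every free, so a stale handle is detected instead of silently aliasing
// whatever object happened to reuse the slot.
#[derive(Clone, Copy, PartialEq, Eq, Debug)]
struct Handle {
    index: u32,
    generation: u32,
}

struct GenPool<T> {
    values: Vec<Option<T>>,
    generations: Vec<u32>, // parallel array: current generation of each slot
}

impl<T> GenPool<T> {
    fn alloc(&mut self, value: T) -> Handle {
        // Simplified: always append; a real pool would also reuse freed slots.
        self.values.push(Some(value));
        self.generations.push(0);
        Handle { index: (self.values.len() - 1) as u32, generation: 0 }
    }

    fn get(&self, h: Handle) -> Option<&T> {
        // A handle is only valid while its generation matches the slot's.
        if self.generations.get(h.index as usize) == Some(&h.generation) {
            self.values[h.index as usize].as_ref()
        } else {
            None
        }
    }

    fn free(&mut self, h: Handle) {
        if self.generations[h.index as usize] == h.generation {
            self.values[h.index as usize] = None;
            self.generations[h.index as usize] += 1; // invalidate outstanding handles
        }
    }
}
```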

LegionMammal978

Depending on how many elements you have, you can save some space using 32-bit or even 16-bit indices in place of 64-bit pointers. (Just make sure there isn't any route to overflow the index type.)
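One way to keep that overflow guard explicit, as a hedged Rust sketch (the helper function is hypothetical):

```rust
// Fail loudly if the pool outgrows what a u32 index can address,
// rather than letting a narrowing cast silently wrap.
fn push_indexed<T>(pool: &mut Vec<T>, value: T) -> u32 {
    let index = u32::try_from(pool.len()).expect("pool exceeds u32 index range");
    pool.push(value);
    index
}
```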

smadge

The distinction in the article really is between calling malloc for every added node in your data structure (“pointer”) or using the pre-allocated memory in the next element of an array (“index”).

adrian_b

This is only one of the advantages discussed in TFA. The others are those due to using indices instead of pointers (like smaller size, cache locality, range checking, possibility of data exchange between systems with distinct address spaces).

adrian_b

After being translated literally into machine language, they are not equally efficient.

However, a C compiler may choose to use in machine language whichever of indices or pointers is more efficient on the target machine, regardless of whether the source program uses indices or pointers.

anonnon

> No idea about the effect on speed in general; I was trying to save a few bytes of storage in a place where that mattered.

I had a decent sized C library that I could conditionally compile (via macros and ifdefs) to use pointers (64-bit) or indexes (32-bit), and I saw no performance improvement, at least for static allocation.

octoberfranklin

It also is/was common in Java when you need to reduce the pressure on the JVM garbage collector.

adonovan

One other benefit of indices is that they are ordered, whereas pointers in many languages (e.g. Go, but not C) are unordered. So you can binary search over an array of indices, for example, or use the relative sign of two indices to indicate a push or a pop operation in a balanced parenthesis tree.

quantified

Reinventing malloc to some degree.

femto

malloc with NEAR and FAR pointers (as was used in MSDOS on processors with segmented memory).

nielsbot

Wonder how this compares to combining pointers with indices (effectively):

Allocate your nodes in contiguous memory, but use pointers to refer to them instead of indices. This would remove an indirect reference when resolving node references: a plain dereference vs. (storage_base_address + element_size * index). Resizing your storage does become potentially painful: you have to repoint all your inter-node pointers. But maybe an alternative there is to just add another contiguous (memory-page-sized?) region for more nodes.

Lots of trade offs to consider :)

munch117

You have just reinvented the slab allocator.

nielsbot

Sure. But I was specifically thinking in the context of this article.

high_na_euv

>in a contiguous buffer is that it makes it harder to free an individual node as removing a single element from an arraylist would involve shifting over all the elements after it

After it? What about before?

userbinator

I've done this before for easily relocatable (realloc'able) blocks but the code can get bigger and slower due to the additional address arithmetic. If you know the allocation pattern then allocating in large blocks is useful. It's always a tradeoff, and there is no right or best answer, so keep this in mind as another technique to consider.

nayuki

Managing your own buffer of object slots, allocating from them, and using integer indexes instead of typed object pointers - all of these are a small amount of extra work compared to using native language features.

I'd like to point out that most of the benefits explained in the article are already given to you by default on the Java virtual machine, even if you designed tree object classes the straightforward way:

> Smaller Nodes: A pointer costs 8 bytes to store on a modern 64-bit system, but unless you're planning on storing over 4 billion nodes in memory, an index can be stored in just 4 bytes.

You can use the compressed OOPs (ordinary object pointers) JVM option, which on 64-bit JVMs drops the size of a pointer from 8 bytes to 4 bytes.

> Faster Access: [...] nodes are stored contiguously in memory, the data structure will fit into fewer memory pages and more nodes will fit in the cpu’s cache line, which generally improves access times significantly

If you are using a copying garbage collector (as opposed to reference counting or mark-and-sweep), then memory allocation is basically incrementing a pointer, and consecutively allocated nodes in time are consecutive in memory as well.

> Less Allocation Overhead: [...] make a separate allocation for each individual node, one at a time. This is a very naive way of allocating memory, however, as each memory allocation comes with a small but significant overhead

Also not true for a garbage-collected memory system with bump allocation. The memory allocator only needs to keep a single pointer to keep track of where the next allocation needs to be. The memory system doesn't need to keep track of which blocks are in use or keep free lists - because those are implied by tracing all objects from the known roots. What I'm saying is, the amount of bookkeeping for a C-style malloc()+free() system is completely different than a copying garbage collector.

> Instant Frees: [...] entire structure has to be traversed to find and individually free each node [...] freeing the structure becomes just a single free call

This is very much the key benefit of copying garbage collectors: Unreachable objects require zero effort to free. If you null out the pointer to the root of the tree, then the whole tree is unreachable and no work is needed to traverse or free each individual object.

Now, am I claiming that copying garbage collection is the solution to all problems? No, not at all. But I am pointing out that as evidenced by the article, this style of memory allocation and deallocation is a common pattern, and it fits extremely well with copying garbage collection. I am more than willing to admit that GCs are more complicated to design, less suitable for hard real-time requirements, etc. So, a small number of incredibly smart people design the general GC systems instead of a larger number of ordinary programmers coming up with the tricks described in the article.

unnah

All good points. On the other side of the balance, by using pointers everywhere you give the tracing garbage collector a lot more work. If it were possible to keep all your data in a single array of value types, the garbage collector would not need to do basically any work at all. Maybe Project Valhalla will one day allow that. At the moment the closest you can get is a structure-of-arrays setup.