
Why GADTs matter for performance (2015)

rbjorklin

Does anyone have some hard numbers on the expected performance uplift when using GADTs? Couldn't see any mentioned in the article.

ackfoobar

The example here is basically an 8-fold memory saving, going from `long[]` to `byte[]`, while still retaining polymorphism (whereas in Java the two are unrelated types).

Hard to say exactly how much performance one would get, as that depends on access patterns.
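For a feel of where the saving comes from, here is a minimal OCaml sketch in the spirit of the article (the names are mine, not the article's): a GADT ties the element type to a compact backing store, so a `char` container packs one byte per element while the interface stays polymorphic.

  type _ arr =
    | Packed : Bytes.t -> char arr   (* 1 byte per element *)
    | Boxed : 'a array -> 'a arr     (* 1 word per element *)

  (* Matching on the constructor refines the element type. *)
  let get : type a. a arr -> int -> a = fun t i ->
    match t with
    | Packed b -> Bytes.get b i
    | Boxed a -> a.(i)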

misja111

The reason a byte array is in reality laid out as a (mostly empty) long array in Java is actually performance. Computers tend to align their memory at 8-byte intervals, and accessing an aligned address is faster than accessing one that sits at an offset from an 8-byte boundary.

Of course it depends on your use case; in some cases a compact byte array performs better anyway, for instance because it now fits in your CPU cache.

ackfoobar

> a byte array is in reality layed out as a (mostly empty) long array in Java

Are you saying each byte takes up a word? That is the case for `char array` in OCaml, but not for Java's `byte[]`. AFAIK the size of a byte array is rounded up to whole words: byte arrays of length 1-8 all have the same size on a 64-bit machine, then lengths 9-16 take up one more word.

https://shipilev.net/jvm/objects-inside-out/
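The OCaml side of this is easy to check from the toplevel. A small sketch, assuming a 64-bit runtime, where `Obj.size` reports a block's payload size in words:

  let () =
    let boxed : char array = Array.make 16 'x' in
    let packed = Bytes.make 16 'x' in
    (* char array: one word per element -> prints 16 *)
    Printf.printf "array: %d words\n" (Obj.size (Obj.repr boxed));
    (* bytes: packed, then padded up to whole words -> prints 3 *)
    Printf.printf "bytes: %d words\n" (Obj.size (Obj.repr packed))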

john-h-k

But you can load any byte by loading the enclosing 8 bytes and shifting (very cheap).
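In OCaml terms, a sketch of that trick, assuming the enclosing 8-byte word has already been fetched little-endian (e.g. with `Bytes.get_int64_le`):

  (* Extract byte i (0..7) of a 64-bit word: shift right, then mask. *)
  let byte_at (w : int64) (i : int) : int =
    Int64.to_int (Int64.logand (Int64.shift_right_logical w (8 * i)) 0xFFL)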

cosmic_quanta

Interesting, thanks for posting.

I share the author's frustration with the lack of non-compiler-related examples of GADT uses. It seems like such a powerful idea, but I haven't been able to get a feel for when to reach for GADTs in Haskell.

wyager

I often find them handy for locking down admissible states at compile time. Maybe ~10 years ago in a processor design class, I wrote some CPUs in Haskell/Clash for FPGA usage. A nice thing I could do was write a single top-level instruction set, but then lock down the instructions based on what stages of the processor they could exist at.

For example, something like this (not an actual example from my code, just the concept; I may be misremembering details):

  data Stage = Fetch | Decode | Execute | Writeback

  data Instruction (stages :: [Stage]) where
    MovLit :: Word64 -> Register -> Instruction '[Fetch, Decode, Execute, Writeback]
    -- MovReg instruction gets rewritten to MovLit in the Execute stage
    MovReg :: Register -> Register -> Instruction '[Fetch, Decode, Execute]
    ...
And then my CPU's writeback handler block could be something like:

  -- assumes a Member type family computing a membership Constraint
  writeback :: (Writeback `Member` stages)
            => Instruction stages -> WritebackState -> WritebackState
  writeback (MovLit v reg) = ...
  -- Compiler knows a (MovReg _ _) case is not required here
So you can use the type parameters to impose constraints on the allowed values, and the compiler is smart enough to use this information during exhaustiveness checks (cf. "GADTs Meet Their Match").

goldchainposse

I know Jane Street loves OCaml, but you have to wonder how much it has cost them in velocity and maintenance. This is a quant firm blogging about a programming language they're the most famous user of.

pjmlp

It is thanks to companies like Jane Street, which believe there is something beyond C, that we can have nice toys.

Remember, if OCaml hadn't been a mature programming language, maybe Rust would not have happened in the first place.

kryptiskt

Why do you assume it's a drag for them and not a competitive advantage? I don't know that it's such a terrible thing to use a slightly out-of-mainstream language when the standard in the business is to accumulate tens of millions of lines of C++.

ackfoobar

Agreed. Indeed, I believe they have mentioned that OCaml lets them ship faster because they are more confident in the correctness of their changes.

But being outside the mainstream may mean you occasionally need to debug more esoteric issues: https://gallium.inria.fr/blog/intel-skylake-bug/ I'm sure Jane Street can afford to do that, but I'm not so sure a small team can.