
Show HN: Luminal – Open-source, search-based GPU compiler

23 comments · August 20, 2025

Hi HN, I’m Joe. My friends Matthew, Jake and I are building Luminal (https://luminalai.com/), a GPU compiler for automatically generating fast GPU kernels for AI models. It uses search-based compilation to achieve high performance.

We take high-level model code, like you'd have in PyTorch, and generate very fast GPU code. We do that without using LLMs or AI - rather, we pose it as a search problem. Our compiler builds a search space of millions of possible kernels and then searches through it to minimize runtime.
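
To make that concrete, here's a minimal sketch of the idea - purely illustrative, with made-up names rather than our actual API: compile each candidate kernel, time it on the real device, keep the fastest.

    // Illustrative sketch only (made-up names, not Luminal's actual API).
    use std::time::Instant;

    fn benchmark(kernel_src: &str) -> f64 {
        // Stand-in for: compile `kernel_src` to Metal/CUDA, launch it on the GPU, time it.
        let start = Instant::now();
        let _ = kernel_src;
        start.elapsed().as_secs_f64()
    }

    fn pick_fastest(candidates: &[String]) -> (&str, f64) {
        // Measured runtime is the only objective; no heuristics.
        candidates
            .iter()
            .map(|src| (src.as_str(), benchmark(src)))
            .min_by(|a, b| a.1.partial_cmp(&b.1).unwrap())
            .expect("at least one candidate kernel")
    }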

You can try out a demo in `demos/matmul` on Mac to see how Luminal takes a naive operation, represented in our IR of 12 simple operations, and compiles it to an optimized, tensor-core-enabled Metal kernel. Here's a video showing how: https://youtu.be/P2oNR8zxSAA

Our approach differs significantly from traditional ML libraries in that we compile everything ahead of time, generate a large search space of logically-equivalent kernels, and search through it to find the fastest ones. This lets us leverage the Bitter Lesson to discover complex optimizations like Flash Attention entirely automatically, without manual heuristics. The best rule is no rule, the best heuristic is no heuristic, just search everything.
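
We build the space of logically-equivalent kernels with e-graphs (egg/egglog). For a flavor of how that space gets generated and searched, here's the standard toy example from the egg crate - a toy arithmetic language, not our kernel IR, and with AST size as the cost function where a kernel search would plug in measured runtime:

    use egg::{rewrite as rw, *};

    define_language! {
        enum SimpleLanguage {
            Num(i32),
            "+" = Add([Id; 2]),
            "*" = Mul([Id; 2]),
            Symbol(Symbol),
        }
    }

    fn make_rules() -> Vec<Rewrite<SimpleLanguage, ()>> {
        vec![
            rw!("commute-add"; "(+ ?a ?b)" => "(+ ?b ?a)"),
            rw!("commute-mul"; "(* ?a ?b)" => "(* ?b ?a)"),
            rw!("add-0"; "(+ ?a 0)" => "?a"),
            rw!("mul-1"; "(* ?a 1)" => "?a"),
        ]
    }

    fn simplify(s: &str) -> String {
        // Saturate the e-graph with every equivalent rewrite, then extract the
        // cheapest equivalent expression under the chosen cost function.
        let expr: RecExpr<SimpleLanguage> = s.parse().unwrap();
        let runner = Runner::default().with_expr(&expr).run(&make_rules());
        let extractor = Extractor::new(&runner.egraph, AstSize);
        let (best_cost, best) = extractor.find_best(runner.roots[0]);
        println!("simplified {} to {} with cost {}", expr, best, best_cost);
        best.to_string()
    }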

We’re working on bringing CUDA support up to parity with Metal, adding more flexibility to the search space, adding full-model examples (like Llama), and adding very exotic hardware backends.

We aim to radically simplify the ML ecosystem while improving performance and hardware utilization. Please check out our repo: https://github.com/luminal-ai/luminal and I’d love to hear your thoughts!

sakras

I see you guys are using Egg/Egglog! I've been mildly interested in egraphs for quite a while, glad to see they're gaining traction!

aleinin

Cool project! How do you think about targeting hardware-specific ISAs directly? There’s an interesting paper from Citadel (https://arxiv.org/pdf/1804.06826) that highlights inefficiencies in nvcc for the Volta architecture. Do you see Luminal’s search-based paradigm eventually extending beyond outperforming handwritten kernels, towards actually competing with NVIDIA’s compiler optimizations at the PTX level?

jafioti

Yep! Currently we're emitting CUDA / Metal, but once the search is better I want to directly emit PTX / low-level asm on other hardware.

Alifatisk

So wait, am I understanding this correctly?

Instead of applying just predetermined optimization rules or patterns, the compiler formulates the problem as searching through many possible configurations or versions of the code. Each possible version can have different arrangements, tiling sizes, thread block configurations, memory access patterns, and instruction sequences, right?

And from my understanding, the “search space” is just a collection of all potential versions of the code (kernels) that the compiler can generate from the original input. So, for example, the space might include:

- Different ways to partition workloads among GPU threads and blocks

- Varying memory access strategies (using shared memory, global memory)

- Various instruction-level optimizations or reordering

- Alternative loop unroll factors or vectorization strategies

The compiler then programmatically produces a large number of candidate kernels by combining different optimizations and configurations. Among these millions of candidates, the compiler tries to find the one that performs best.

In that case, can the compiler print out which GPU configuration works best for that computer? And will that configuration be applicable to all computers with the same setup?

This is such an interesting technique.

jakestevens2

Your description is exactly right. We create a search space of all possible kernels and find the best ones based on runtime. The best heuristic is no heuristic.

This obviously creates a combinatorial problem that we mitigate with smarter search.
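
To put very rough, made-up numbers on that (illustrative knob values, not our actual search space):

    // Even a handful of per-kernel knobs multiplies out fast, and the per-op
    // choices multiply again across the ops in a model.
    fn main() {
        let tile_sizes = [16usize, 32, 64, 128];     // workload partitioning per block
        let threadgroup_sizes = [64usize, 128, 256]; // threads per block / threadgroup
        let memory_strategies = 2usize;              // shared- vs. global-memory staging
        let unroll_factors = [1usize, 2, 4, 8];      // unroll / vectorization width

        let per_op = tile_sizes.len() * tile_sizes.len() // tile M x tile N
            * threadgroup_sizes.len()
            * memory_strategies
            * unroll_factors.len();
        println!("candidates per op: {per_op}"); // 4 * 4 * 3 * 2 * 4 = 384

        // Considering just 5 fusible ops jointly already gives ~8.4e12 combinations,
        // which is why exhaustive enumeration is out and smarter search is needed.
        println!("joint space for 5 ops: {:e}", (per_op as f64).powi(5));
    }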

The kernels are run on the computer the compiler is running on. Since runtime is our gold standard, it will search for the best configuration for your hardware target. As long as the setup is mostly the same, the optimizations should carry over, yes.

UncleOxidant

How long does this typically take? It sounds time consuming. Also, it seems like this could be similar to doing a GA (genetic algorithm)?

jakestevens2

That depends on the model architecture and how it was written since that informs the size of the search space.

Think of it as a second training run. It won't be fast but you only have to do it once and then those optimizations are set for every forward pass.

jakestevens2

You can also set a time budget for how long you'd like the search to run, to avoid wasting time on diminishing returns.

diggan

> Luminal can run Q8 Llama 3 8B on M-series Macbooks at 15-25 tokens per second. The goal is to become the fastest ML framework for any model on any device.

Great that some numbers are provided, but in isolation I'm not sure what they tell us. It would be helpful to also share the tok/s you'd get with llama.cpp or something else on the same hardware, so we can actually understand whether it's faster or not :) Including prompt-processing speed would be a bonus!

jafioti

A lot of the search is still being optimized, so we don't yet match super hand-optimized kernels like llama.cpp's, and we definitely don't match their tok/s yet. I want to make a perf-tracking page to see improvements over time and prevent regressions.

AkashKarnatak

Very cool project. Tinygrad used to have ~25 ops but has now grown to 86, I believe primarily to support hardware features like tensor cores and TMA. I don't think Luminal supports tensor cores as of now - how do you think the ops will evolve as the library matures?

jafioti

We do support tensor cores, but the ops are only part of the search space, so there's virtually no overhead for them. The frontend and main IR is only 12 ops; we can add hardware-specific ops into the search space and only need a bit of code in the codegen pass to support them.
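
Roughly the shape of it (illustrative op names only, not the exact 12-op IR):

    // A tiny core IR plus optional hardware-specific ops that only the search
    // space and codegen pass know about. Names here are made up for the sketch.
    enum CoreOp {
        Exp2,
        Log2,
        Recip,
        Add,
        Mul,
        SumReduce { axis: usize },
        MaxReduce { axis: usize },
    }

    enum HardwareOp {
        TensorCoreMatmul { m: usize, n: usize, k: usize },
        AsyncCopyToShared { bytes: usize },
    }

    // The search works over both: candidates may or may not use the hardware ops,
    // and measured runtime decides which win.
    enum SearchOp {
        Core(CoreOp),
        Hardware(HardwareOp),
    }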

UncleOxidant

When you say (in the video) that you can target more exotic hardware, what about things like FPGA accelerators (maybe taking advantage of TVM's FPGA backend)?

Also, what about CUDA alternatives like ROCm?

efnx

Cool! How is this project different from the tuning process in TVM?

dvdplm

This is very cool. Do you have any advice on papers to read to understand the details of search based compilation a bit more?

jafioti

A lot of the ideas Luminal is built on are here: https://arxiv.org/abs/2304.04332
