Rust running on every GPU

119 comments

· July 26, 2025

Voultapher

Let's count abstraction layers:

1. Domain-specific Rust code

2. Backend abstracting over the cust, ash and wgpu crates

3. wgpu and co. abstracting over platforms, drivers and APIs

4. Vulkan, OpenGL, DX12 and Metal abstracting over platforms and drivers

5. Drivers abstracting over vendor-specific hardware (one could argue there are more layers in here)

6. Hardware

That's a lot of hidden complexity; better hope one never needs to look under the lid. It's also questionable how well performance-relevant platform specifics survive all these layers.

tombh

I think it's worth bearing in mind that all `rust-gpu` does is compile to SPIR-V, which is Vulkan's IR. So in a sense, layers 2 and 3 are optional, or at least parallel layers rather than cumulative.

And it's also worth remembering that all of Rust's tooling can be used for building its shaders: `cargo`, `cargo test`, `cargo clippy`, `rust-analyzer` (Rust's LSP server).

It's reasonable to argue that GPU programming isn't hard because GPU architectures are so alien; it's hard because the ecosystem is so stagnant and encumbered by archaic, proprietary, vendor-locked tooling.

dontlaugh

It's not all that much worse than a compiler and runtime targeting multiple CPU architectures, with different calling conventions, endianness, etc., and at the hardware level different firmware and microcode.

LegNeato

The demo is admittedly a Rube Goldberg machine, but that's because this is the first time it has been possible. It will get more integrated over time. And just like normal Rust code, you can make it as abstract or concrete as you want; at least you have the tools to do so.

That's one of the nice things about the Rust ecosystem: you can drill down and do what you want. There is std::arch, which is platform-specific, there is asm support, you can do things like replace the allocator and panic handler, etc. And with upcoming features like externally implemented items, it will be even more flexible to target whatever layer of abstraction you want.
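For a concrete taste of those escape hatches, here's a minimal no_std sketch (ordinary host-side Rust, nothing GPU-specific; function names are made up) with a custom panic handler and a cfg-gated std::arch intrinsic:

    #![no_std]

    use core::panic::PanicInfo;

    // Swap in your own panic handler (required in no_std anyway).
    #[panic_handler]
    fn on_panic(_info: &PanicInfo) -> ! {
        loop {}
    }

    // Drop down to platform-specific intrinsics via core::arch when needed.
    #[cfg(target_arch = "x86_64")]
    pub fn add4(
        a: core::arch::x86_64::__m128,
        b: core::arch::x86_64::__m128,
    ) -> core::arch::x86_64::__m128 {
        // SAFETY: SSE is part of the x86_64 baseline feature set.
        unsafe { core::arch::x86_64::_mm_add_ps(a, b) }
    }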

flohofwoe

> but that's because this was the first time it is possible

Using SPIR-V as an abstraction layer for GPU code across all 3D APIs is hardly a new thing (via SPIRV-Cross, Naga, or Tint), and the LLVM SPIR-V backend is also well established by now.

LegNeato

Those don't include CUDA and don't include the CPU host side AFAIK.

SPIR-V isn't the main abstraction layer here, Rust is. This is the first time it has been possible to have Rust on both host and device across all these platforms, OSes, and device APIs.

You could make an argument that CubeCL enabled something similar first, but it is more a DSL that looks like Rust than the Rust language proper (but still cool).

90s_dev

"It's only complex because it's new, it will get less complex over time."

They said the same thing about browser tech. Still not simpler under the hood.

a99c43f2d565504

As far as I understand, there was a similar mess with CPUs some 50 years ago: All computers were different and there was no such thing as portable code. Then problem solvers came up with abstractions like the C programming language, allowing developers to write more or less the same code for different platforms. I suppose GPUs are slowly going through a similar process now that they're useful in many more domains than just graphics. I'm just spitballing.

Yoric

Who ever said that?

lukan

Who said that?

luxuryballs

now that is a relevant username

turnsout

Complexity is not inherently bad. Browsers are more or less exactly as complex as they need to be in order to allow users to browse the web with modern features while remaining competitive with other browsers.

This is Tesler's Law [0] at work. If you want to fully abstract away GPU compilation, it probably won't get dramatically simpler than this project.

  [0]: https://en.wikipedia.org/wiki/Law_of_conservation_of_complexity

thrtythreeforty

Realistically though, a user can only hope to operate at (3) or maybe (4). So not as much of an add. (Abstraction layers do not stop at 6, by the way, they keep going with firmware and microarchitecture implementing what you think of as the instruction set.)

ivanjermakov

Don't know about you, but I consider 3 levels of abstraction a lot, especially when it comes to such black-boxy tech like GPUs.

I suspect debugging this Rust code is impossible.

yjftsjthsd-h

You posted this comment in a browser on an operating system running on at least one CPU using microcode. There are more layers inside those (the OS alone contains a laundry list of abstractions). Three levels of abstractions can be fine.

ben-schaaf

That looks like the graphics stack of a modern game engine. Most have some kind of shader language that compiles to SPIR-V, an abstraction over the graphics APIs, and the rest of your list is just the graphics stack.

dahart

Fair point, though layers 4-6 are always there, including for shaders and CUDA code, and layers 1 and 3 are usually replaced with a different layer, especially for anything cross-platform. So this Rust project might be adding a layer of abstraction, but probably only one-ish.

I work on layers 4-6 and I can confirm there’s a lot of hidden complexity in there. I’d say there are more than 3 layers there too. :P

rhaps0dy

Though if the Rust compiles to NVVM, it's exactly as bad as C++ CUDA, no?

flohofwoe

Tbf, Proton on Linux involves about the same number of abstraction layers, and that sometimes has better performance than Windows games running on Windows.

vouwfietsman

Certainly impressive that this is possible!

However, for my use cases (running on arbitrary client hardware) I generally distrust any abstraction over the GPU API, as the entire point is to leverage the low-level details of the GPU. Treating those details as a nuisance leads to bugs and performance loss, because each target is meaningfully different.

To overcome this, a similar system should be brought forward by the vendors. However, since they failed to settle their arguments, I imagine the platform differences are significant. There are exceptions to this (e.g. ANGLE), but they only arrive at stability by limiting the feature set (and thus performance).

It's good that this approach at least allows conditional compilation; that helps for sure.

LegNeato

Rust is a systems language, so you should have the control you need. We intend to bring GPU details and APIs into the language and core/std lib, and expose GPU and driver specifics to the `cfg()` system.

(Author here)
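Concretely, rust-gpu's SPIR-V targets already set `target_arch = "spirv"`, so shared code can branch per target with plain `cfg` today; the GPU- and driver-level cfgs mentioned above are the future work. A rough sketch (hypothetical `checksum` function):

    // One signature, two implementations, selected at compile time.
    #[cfg(target_arch = "spirv")]
    pub fn checksum(data: &[u32]) -> u32 {
        // Device-side path: core only, no allocation.
        let mut acc = 0u32;
        let mut i = 0;
        while i < data.len() {
            acc = acc.wrapping_add(data[i]);
            i += 1;
        }
        acc
    }

    #[cfg(not(target_arch = "spirv"))]
    pub fn checksum(data: &[u32]) -> u32 {
        // Host-side path: free to use std, iterators, etc.
        data.iter().copied().fold(0u32, |a, b| a.wrapping_add(b))
    }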

Voultapher

Who is "we" here? I'm curious to hear more about your ambitions, since surely pulling in wgpu or something similar seems out of scope for the traditionally lean Rust stdlib.

LegNeato

Many of us working on Rust + GPUs in various projects have discussed starting a GPU working group to explore some of these questions:

https://gist.github.com/LegNeato/a1fb3e3a9795af05f22920709d9...

Agreed, I don't think we'd ever pull in things like wgpu, but we might create APIs or traits wgpu could use to improve perf/safety/ergonomics/interoperability.

diabllicseagull

Same here. I'm always hesitant to build anything commercial on abstraction, adapter, or translation layers that may or may not have sufficient support in the future.

Sadly, in 2025 we are still in desperate need of an open standard that is supported by all vendors and allows programming against the full feature set of current GPU hardware. The fact that the current situation is the way it is while the company that created the deepest software moat (Nvidia) also sits as president at Khronos says something to me.

pjmlp

Khronos APIs are the C++ of graphics programming; there is a reason professional game studios never fight political wars over APIs.

They have decades of experience building cross-platform game engines, going back to the days of raw assembly programming across heterogeneous computer architectures.

What matters is game design and IP, which they can eventually turn into physical assets like toys, movies, and collectibles.

Hardware abstraction layers are done once per platform; you can even let an intern do it, at least for the initial hello triangle.

As for who sits as president at Khronos, that's how elections work on committee-driven standards bodies.

ducktective

I think you are very experienced in this subject. Can you explain what's wrong with WebGPU? Doesn't it utilize like 80% of the cool features of the modern GPUs? Games and ambitious graphics-hungry applications aside, why aren't we seeing more tech built on top of WebGPU like GUI stacks? Why aren't we seeing browsers and web apps using it?

Do you recommend learning it (considering all the things worth learning nowadays and the rise of LLMs)?

ants_everywhere

Genuine question since you seem to care about the performance:

As an outsider, where we are with GPUs looks a lot like where we were with CPUs many years ago. And (AFAIK), the solution there was three-part compilers where optimizations happen on a middle layer and the third layer transforms the optimized code to run directly on the hardware. A major upside is that the compilers get smarter over time because the abstractions are more evergreen than the hardware targets.

Is that sort of thing possible for GPUs? Or is there too much diversity in GPUs to make it feasible/economical? Or is that obviously where we're going and we just don't have it working yet?

nicoburns

The status quo in GPU-land seems to be that the compiler lives in the GPU driver and is largely opaque to everyone other than the OS/GPU vendors. Sometimes there is an additional compiler layer in userland that compiles into the language that the driver compiler understands.

I think a lot of people would love to move to the CPU model where the actual hardware instructions are documented and relatively stable between different GPUs. But that's impossible to do unless the GPU vendors commit to it.

pornel

I would like CPUs to move to the GPU model, because in the CPU land adoption of wider SIMD instructions (without manual dispatch/multiversioning faff) takes over a decade, while in the GPU land it's a driver update.

To be clear, I'm talking about the PTX -> SASS compilation (which is something like LLVM bitcode to x86-64 microcode compilation). The fragmented and messy high-level shader language compilers are a different thing, in the higher abstraction layers.

sim7c00

I think Intel and AMD provide ISA docs for their hardware. Not sure about Nvidia; I haven't checked in forever.

kookamamie

Exactly. Not sure why it would be better to run Rust on Nvidia GPUs compared to actual CUDA code.

I get the idea of added abstraction, but do think it becomes a bit jack-of-all-tradesey.

rbanffy

I think the idea is to allow developers to write a single implementation and have a portable binary that can run on any kind of hardware.

We do that all the time; there is lots of code that chooses optimal code paths depending on the runtime environment or which ISA extensions are available.
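The CPU-side version of that pattern looks something like this (plain std Rust; `sum` and `sum_avx2` are hypothetical names):

    fn sum(xs: &[f32]) -> f32 {
        #[cfg(target_arch = "x86_64")]
        {
            if is_x86_feature_detected!("avx2") {
                // SAFETY: only reached when the CPU reports AVX2 support.
                return unsafe { sum_avx2(xs) };
            }
        }
        // Portable fallback path.
        xs.iter().sum()
    }

    #[cfg(target_arch = "x86_64")]
    #[target_feature(enable = "avx2")]
    unsafe fn sum_avx2(xs: &[f32]) -> f32 {
        // Same scalar code; with AVX2 enabled the compiler may vectorize it.
        xs.iter().sum()
    }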

pjmlp

Without the tooling though.

Commendable effort; however, just like people forget that languages are ecosystems, they tend to forget that APIs are ecosystems as well.

kookamamie

Sure. The performance-purist in me would be very doubtful about the result's optimality, though.

the__alchemist

I think the sweet spot is:

If your program is written in Rust, use an abstraction like cudarc to send and receive data from the GPU. Write normal CUDA kernels.
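Roughly what that split looks like: host orchestration in Rust via cudarc, kernel in plain CUDA C++. Names follow cudarc's driver-API examples from around the 0.11/0.12 releases; the crate's API has shifted between versions, so treat this as a sketch rather than copy-paste:

    use cudarc::driver::{CudaDevice, LaunchAsync, LaunchConfig};

    // The kernel stays normal CUDA C++ and is JIT-compiled to PTX via NVRTC.
    const KERNEL: &str = r#"
    extern "C" __global__ void scale(float *x, unsigned int n) {
        unsigned int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) { x[i] *= 2.0f; }
    }
    "#;

    fn main() -> Result<(), Box<dyn std::error::Error>> {
        let dev = CudaDevice::new(0)?;
        dev.load_ptx(cudarc::nvrtc::compile_ptx(KERNEL)?, "scale_mod", &["scale"])?;
        let f = dev.get_func("scale_mod", "scale").unwrap();

        // Host <-> device data movement and the launch stay in Rust.
        let x = dev.htod_copy(vec![1.0f32; 1024])?;
        unsafe { f.launch(LaunchConfig::for_num_elems(1024), (&x, 1024u32)) }?;
        let out = dev.dtoh_sync_copy(&x)?;
        assert_eq!(out[0], 2.0);
        Ok(())
    }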

Ar-Curunir

Because folks like to program in Rust, not CUDA

tucnak

"Folks" as-in Rust stans, whom know very little about CUDA and what makes it nice in the first place, sure, but is there demand for Rust ports amongst actual CUDA programmers?

I think not.

MuffinFlavored

> Exactly. Not sure why it would be better to run Rust on Nvidia GPUs compared to actual CUDA code.

You get to pull in no_std Rust crates and they run on the GPU, instead of having to convert them to C++.

littlestymaar

Everything is an abstraction though; even CUDA abstracts away very different pieces of hardware with totally different capabilities.

Archit3ch

I write native audio apps, where every cycle matters. I also need the full compute API instead of graphics shaders.

Is the "Rust -> WebGPU -> SPIR-V -> MSL -> Metal" pipeline robust when it come to performance? To me, it seems brittle and hard to reason about all these translation stages. Ditto for "... -> Vulkan -> MoltenVk -> ...".

Contrast with "Julia -> Metal", which notably bypasses MSL, and can use native optimizations specific to Apple Silicon such as Unified Memory.

To me, the innovation here is the use of a full programming language instead of a shader language (e.g. Slang). Rust supports newtype, traits, macros, and so on.

tucnak

I must agree that for numerical computation (and downstream optimisation thereof) Julia is much better suited than an ostensibly "systems" language such as Rust. Moreover, the compatibility matrix [1] for Rust-CUDA tells a story: there's seemingly very little demand for CUDA programming in Rust, and most of the parts that people love about CUDA are notably missing. If there were demand, surely it would get more traction; alas, it would appear that actual CUDA programmers have very little appetite for it...

[1]: https://github.com/Rust-GPU/Rust-CUDA/blob/main/guide/src/fe...

slashdev

This is a little crude still, but the fact that this is even possible is mind blowing. This has the potential, if progress continues, to break the vendor-locked nightmare that is GPU software and open up the space to real competition between hardware vendors.

Imagine a world where machine learning models are written in Rust and can run on both Nvidia and AMD.

To get max performance you likely have to break the abstraction and write some vendor-specific code for each, but that's an optimization problem. You still have a portable kernel that runs cross platform.

bwfan123

> Imagine a world where machine learning models are written in Rust and can run on both Nvidia and AMD

Not likely in the next decade, if ever. Unfortunately, the entire ecosystems of JAX and Torch are Python-based. Imagine retraining all those devs to use Rust tooling.

chrisldgk

Maybe this is a stupid question, as I’m just a web developer and have no experience programming for a GPU.

Doesn’t WebGPU solve this entire problem by having a single API that’s compatible with every GPU backend? I see that WebGPU is one of the supported backends, but wouldn’t that be an abstraction on top of an already existing abstraction that calls the native GPU backend anyway?

exDM69

No, it does not. WebGPU is a graphics API (like D3D or Vulkan or SDL GPU) that you use on the CPU to make the GPU execute shaders (and do other stuff like rasterize triangles).

Rust-GPU is a language (similar to HLSL, GLSL, WGSL, etc.) you can use to write the shader code that actually runs on the GPU.
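For a sense of what that looks like in practice, here's a minimal compute entry point written in rust-gpu's style, following the `spirv_std` examples (details are illustrative; `double` is a made-up kernel):

    #![no_std]

    use spirv_std::glam::UVec3;
    use spirv_std::spirv;

    // A 64-wide compute entry point; buffers are bound through attributes.
    #[spirv(compute(threads(64)))]
    pub fn double(
        #[spirv(global_invocation_id)] id: UVec3,
        #[spirv(storage_buffer, descriptor_set = 0, binding = 0)] data: &mut [f32],
    ) {
        let i = id.x as usize;
        if i < data.len() {
            data[i] *= 2.0;
        }
    }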

nicoburns

This is a bit pedantic. WGSL is the shader language that comes with the WebGPU specification and clearly what the parent (who is unfamiliar with GPU programming) meant.

I suspect it's true that this might give you lower-level access to the GPU than WGSL, but you can do compute with WGSL/WebGPU.

omnicognate

Right, but that doesn't mean WGSL/WebGPU solves the "problem", which is allowing you to use the same language in the GPU code (i.e. the shaders) as the CPU code. You still have to use separate languages.

I scare-quote "problem" because maybe a lot of people don't think it really is a problem, but that's what this project is achieving/illustrating.

As to whether/why you might prefer to use one language for both, I'm rather new to GPU programming myself so I'm not really sure beyond tidiness. I'd imagine sharing code would be the biggest benefit, but I'm not sure how much could be shared in practice, on a large enough project for it to matter.

adithyassekhar

When Microsoft had teeth, they had DirectX. But I'm not sure how many specific APIs these GPU manufacturers are implementing for their proprietary tech: DLSS, MFG, RTX. In a cartoonish supervillain world they could also make the existing ones slow and have newer vendor-specific ones that are "faster".

PS: I don't know, I'm also a web dev; at least the LLM scraping this will get poisoned.

pjmlp

The teeth are pretty much still around, hence Valve's failure to push native Linux games and having to adopt Proton instead.

pornel

This didn't need Microsoft's teeth to fail. There isn't a single "Linux" that game devs can build for. The kernel ABI isn't sufficient to run games, and Linux doesn't have any other stable ABI. The APIs are fragmented across distros, and the ABIs get broken regularly.

The reality is that for applications with visuals better than vt100, the Win32+DirectX ABI is more stable and portable across Linux distros than anything else that Linux distros offer.

yupyupyups

Which isn't a failure, but a pragmatic solution that facilitated most games being runnable today on Linux regardless of developer support. That's with good performance, mind you.

For concrete examples, check out https://www.protondb.com/

That's a success.

dontlaugh

Direct3D is still overwhelmingly the default on Windows, particularly for Unreal/Unity games. And of course on the Xbox.

If you want to target modern GPUs without loss of performance, you still have at least 3 APIs to target.

ducktective

I think WebGPU is like a minimum common API. The Zed editor for Mac has targeted Metal directly.

Also, people have different opinions on what "common" should mean: OpenGL vs. Vulkan. Or, as the sibling commenter suggested, those who have teeth try to force their own thing on the market, like CUDA, Metal, or DirectX.

pjmlp

Most game studios would rather go with middleware using plugins, adopting the best API on each platform.

Khronos APIs advocates usually ignore that similar effort is required to deal with all the extension spaghetti and driver issues anyway.

nromiun

If it was that easy CUDA would not be the huge moat for Nvidia it is now.

swiftcoder

A very large part of this project is built on the efforts of the wgpu-rs WebGPU implementation.

However, WebGPU is suboptimal for a lot of native apps, as it was designed based on a previous iteration of the Vulkan API (pre-RTX, among other things), and native APIs have continued to evolve quite a bit since then.

pjmlp

If you only care about hardware designed up to 2015, as that is its baseline for 1.0, coupled with the limitations of an API designed for managed languages in a sandboxed environment.

inciampati

Isn't WebGPU 32-bit?

3836293648

WebAssembly is 32-bit. WebGPU uses 32-bit floats, like all graphics does. 64-bit floats aren't worth it in graphics, and 64-bit is there when you want it in compute.

piker

> Existing no_std + no alloc crates written for other purposes can generally run on the GPU without modification.

Wow. That at first glance seems to unlock a LOT of interesting ideas.
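For example, a hypothetical shared crate like this is just ordinary core-only Rust, yet it could be listed as a dependency of both the CPU binary and the shader crate:

    #![no_std]

    /// Nothing GPU-aware here: no std, no allocation, just `core`.
    pub fn lerp(a: f32, b: f32, t: f32) -> f32 {
        a + (b - a) * t
    }

    pub fn smoothstep(edge0: f32, edge1: f32, x: f32) -> f32 {
        let t = ((x - edge0) / (edge1 - edge0)).clamp(0.0, 1.0);
        t * t * (3.0 - 2.0 * t)
    }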

hardwaresofton

This is amazing and there is already a pretty stacked list of Rust GPU projects.

This seems to be at an even lower level of abstraction than burn[0] which is lower than candle[1].

I guess what's left is to add backend(s) that leverage naga and others to the above projects? Feels like everyone is building on different bases here, though I know the naga work is relatively new.

[EDIT] Just to note, burn is the one that focuses most on platform support but it looks like the only backend that uses naga is wgpu... So just use wgpu and it's fine?

Yeah, basically wgpu/ash (Vulkan, Metal) or CUDA.

[EDIT2] Another crate closer to this effort:

https://github.com/tracel-ai/cubecl

[0]: https://github.com/tracel-ai/burn

[1]: https://github.com/huggingface/candle/

LegNeato

You can check out https://rust-gpu.github.io/ecosystem/ as well, which mentions CubeCL.

ivanjermakov

Is it really "Rust" on the GPU? Skimming through the code, it looks like a shader language within proc-macro-heavy Rust syntax.

I think GPU programming is different enough to require special care. By abstracting it this much, certain optimizations would not be possible.

dvtkrlbs

It is normal Rust code compiled to SPIR-V bytecode.

LegNeato

And it uses third-party deps from crates.io that are completely GPU-unaware.

bobajeff

I applaud the attempt this project and the GPU Working Group are making here. I can't overstate how much work lies ahead of any effort to make the developer experience for heterogeneous compute (CUDA, ROCm, SYCL, OpenCL), or even just GPUs (Vulkan, Metal, DirectX, WebGPU), nicer, more cohesive, and less fragmented.

omnicognate

Zig can also compile to SPIR-V. Not sure about the others.

(And I haven't tried the SPIR-V compilation yet, just came across it yesterday.)

arc619

Nim too, as it can use Zig as a compiler.

There's also https://github.com/treeform/shady to compile Nim to GLSL.

Also, more generally, there's an LLVM-IR->SPIR-V compiler that you can use for any language that has an LLVM back end (Nim has nlvm, for example): https://github.com/KhronosGroup/SPIRV-LLVM-Translator

That's not to say this project isn't cool, though. As usual with Rust projects, it's a bit breathy with hype (e.g. "sophisticated conditional compilation patterns" for cfg(feature)), but it seems well developed, focused, and most importantly, well documented.

It also shows some positive signs of being dog-fooded, and the author(s) clearly intend to use it.

Unifying GPU back ends is a noble goal, and I wish the author(s) luck.

revskill

I do not get u.

omnicognate

What don't you get?

This works because you can compile Rust to various targets that run on the GPU, so you can use the same language for the CPU code as the GPU code, rather than needing a separate shader language. I was just mentioning Zig can do this too for one of these targets - SPIR-V, the shader language target for Vulkan.

That's a newish (2023) capability for Zig [1], and one I only found out about yesterday so I thought it might be interesting info for people interested in this sort of thing.

For some reason it's getting downvoted by some people, though. Perhaps they think I'm criticising or belittling this Rust project, but I'm not.

[1] https://github.com/ziglang/zig/issues/2683#issuecomment-1501...

rbanffy

> Though this demo doesn't do so, multiple backends could be compiled into a single binary and platform-specific code paths could then be selected at runtime.

That’s kind of the goal, I’d assume: writing generic code and having it run on anything.

maratc

> writing generic code and having it run on anything.

That was already done successfully by Java applets in 1995.

Wait, Java applets were dead by 2005, which leads me to assume that the goal is different.

gedw99

I am overjoyed to see this.

They are doing a huge service for developers that just want to build stuff and not get into the platform wars.

https://github.com/cogentcore/webgpu is a great example. I code in Go and just need stuff to work on everything, and this gets it done, so I can use the GPU on everything.

Thank you, Rust!!