No More Shading Languages: Compiling C++ to Vulkan Shaders [pdf]
9 comments · June 18, 2025 · submitted by Calavar
raincole
> CPU-centric languages.
What does a "GPU-centric language" look like?
The most commonly used languages for GPU programming:
- CUDA: C++-like
- OpenCL: C-like
- HLSL/GLSL: C-like
arjonagelhout
To add to this list, Apple has MSL (Metal Shading Language), which uses a subset of C++.
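For instance, a minimal MSL compute kernel reads almost like ordinary C++, plus address-space and attribute annotations (a sketch of my own, not from the paper):

    #include <metal_stdlib>
    using namespace metal;

    // Scales every element of a buffer by a constant factor.
    kernel void scale(device float* data      [[buffer(0)]],
                      constant float& factor  [[buffer(1)]],
                      uint id                 [[thread_position_in_grid]]) {
        data[id] *= factor;
    }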
jcelerier
It's very common to write C++ in a way that will work well for GPUs. Consider that CUDA, the most used GPU language, is just a set of extensions on top of C++. Likewise for Metal shaders, or high-level synthesis systems like Vitis.
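For example, a minimal CUDA kernel is ordinary C++ apart from a handful of extensions (the classic SAXPY sketch; nothing here is specific to the paper):

    // __global__ marks a function that runs on the GPU; blockIdx, blockDim,
    // and threadIdx are CUDA built-ins -- everything else is plain C++.
    __global__ void saxpy(int n, float a, const float* x, float* y) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) y[i] = a * x[i] + y[i];
    }

    // Host-side launch (error handling omitted):
    //   saxpy<<<(n + 255) / 256, 256>>>(n, a, d_x, d_y);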
arjonagelhout
What is the main difference between shading languages and general-purpose programming languages such as C++?
Metal Shading Language, for example, uses a subset of C++, and HLSL and GLSL are C-like languages.
In my view, it is nice to have the same syntax and language for both CPU and GPU code, even if you still want to keep GPU compute kernels and shaders simple.
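A sketch of the appeal, with GPU_FUNC as a hypothetical glue macro of my own (not an API from the paper): the same plain C++ function can be unit-tested on the CPU and reused in GPU code.

    // Hypothetical qualifier macro: expands to CUDA's __host__ __device__
    // when compiled by nvcc, and to nothing for an ordinary host build.
    #if defined(__CUDACC__)
      #define GPU_FUNC __host__ __device__
    #else
      #define GPU_FUNC
    #endif

    struct Vec3 { float x, y, z; };

    GPU_FUNC inline float dot(Vec3 a, Vec3 b) {
        return a.x * b.x + a.y * b.y + a.z * b.z;
    }

    // Reflect v about the unit normal n -- identical source for CPU unit
    // tests and GPU shading code.
    GPU_FUNC inline Vec3 reflect(Vec3 v, Vec3 n) {
        float d = 2.0f * dot(v, n);
        return { v.x - d * n.x, v.y - d * n.y, v.z - d * n.z };
    }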
raincole
Yeah, C++ is the peak language design that everyone loves...
reactordev
In game dev they definitely do.
arjonagelhout
I think this is indeed the advantage of the paper's choice of C++ as the language to compile to SPIR-V.
Game engines and other large codebases with graphics logic are commonly written in C++, and only having to learn and write a single language is great.
Right now, shaders are kind of annoying to work with unless you are using an off-the-shelf graphics abstraction. Cross-compiling to GLSL, HLSL, and Metal Shading Language is cumbersome. Almost all game engines create their own shading language and code-generate / compile it to the respective shading languages for specific platforms.
This situation could be improved if GPUs were more standardized and didn't have proprietary instruction sets, similar to how x86_64 and ARM64 dominate as CPU instruction sets.
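Concretely, a single C++ header shared between host and shader code would remove a whole class of synchronization bugs. A sketch (the alignment rules here are an illustrative assumption; the exact layout depends on the toolchain):

    // One definition, usable by Vulkan host code and, with a C++-to-SPIR-V
    // compiler, by the shader as well. alignas(16) mimics std140-style
    // alignment for the vec3 members -- an assumption for illustration.
    struct alignas(16) PointLight {
        float position[3];
        float intensity;   // fills the vec3's padding slot, std140-style
        float color[3];
        float radius;
    };

    // The hand-maintained GLSL mirror this would replace:
    //   struct PointLight { vec3 position; float intensity;
    //                       vec3 color;    float radius; };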
rgbforge
The section discussing Slang is interesting; I didn't know that function pointers were only available for CUDA targets.
I've seen a few projects along the lines of shader programming in C++, shader programming in Rust, etc., but I'm not sure that I understand the point. There's a huge impedance mismatch between CPU and GPU, and if you port CPU-centric code to the GPU naively, it's easy to get code that is slower than the CPU version thanks to the leaky abstraction. And I'm not sure you can argue the Pareto principle: if you had a scenario where 80% of the code is not performance sensitive, why would you port it to the GPU in the first place?
Anyway, there's a good chance that I'm missing something here, because there seems to be a lot of interest in writing shaders in CPU-centric languages.
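A toy example of that mismatch (my own sketch, unrelated to the paper): CPU-idiomatic early-out branches can make GPU code slower, because lanes of a warp that take different branches execute serially.

    #include <cmath>

    // CPU-idiomatic: early outs let a scalar core skip work, but on a GPU
    // they cause warp divergence -- both paths run for the whole warp.
    float shade_branchy(float n_dot_l, float shadow) {
        if (n_dot_l <= 0.0f) return 0.0f;
        if (shadow < 0.01f)  return 0.0f;
        return n_dot_l * shadow;
    }

    // GPU-friendly: the same result computed uniformly, so every lane in
    // the warp does identical work regardless of its data.
    float shade_branchless(float n_dot_l, float shadow) {
        float mask = (shadow >= 0.01f) ? 1.0f : 0.0f;  // compiles to a select
        return std::fmax(n_dot_l, 0.0f) * shadow * mask;
    }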