WebMonkeys: parallel GPU programming in JavaScript (2016)
19 comments · May 4, 2025
kreetx
Unfortunately this has not been maintained since 2017: https://github.com/VictorTaelin/WebMonkeys/issues/26
Are there other projects doing something similar on current browsers?
kaoD
Still a draft, experimental and not widely used[0], but WebGPU[1] will bring support for actual compute shaders[2] to the web.
It's much more low-level than these "web monkeys", but I'd say that if you really need GPU performance, rather than toy examples like squaring a list of numbers, you have to go low-level and understand how GPU threads and work batching actually work (rough sketch below).
[0] https://developer.mozilla.org/en-US/docs/Web/API/WebGPU_API
[1] https://en.m.wikipedia.org/wiki/WebGPU
[2] https://webgpufundamentals.org/webgpu/lessons/webgpu-compute...
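For a flavour of what that means, here is a minimal, untested sketch of the same "square a list of numbers" toy in raw WebGPU (assumes a browser with WebGPU enabled and an async context; the API is still settling, so details may shift):

    const adapter = await navigator.gpu.requestAdapter();
    const device = await adapter.requestDevice();

    // WGSL compute shader: one invocation squares one element
    const shader = device.createShaderModule({
      code: `
        @group(0) @binding(0) var<storage, read_write> nums : array<f32>;

        @compute @workgroup_size(64)
        fn main(@builtin(global_invocation_id) id : vec3<u32>) {
          if (id.x < arrayLength(&nums)) {
            nums[id.x] = nums[id.x] * nums[id.x];
          }
        }`,
    });

    const pipeline = device.createComputePipeline({
      layout: "auto",
      compute: { module: shader, entryPoint: "main" },
    });

    // Upload the data into a storage buffer the shader can write to directly
    const input = new Float32Array([1, 2, 3, 4]);
    const buffer = device.createBuffer({
      size: input.byteLength,
      usage: GPUBufferUsage.STORAGE | GPUBufferUsage.COPY_SRC,
    });
    device.queue.writeBuffer(buffer, 0, input);

    const bindGroup = device.createBindGroup({
      layout: pipeline.getBindGroupLayout(0),
      entries: [{ binding: 0, resource: { buffer } }],
    });

    // Encode and submit one dispatch covering the whole array
    const encoder = device.createCommandEncoder();
    const pass = encoder.beginComputePass();
    pass.setPipeline(pipeline);
    pass.setBindGroup(0, bindGroup);
    pass.dispatchWorkgroups(Math.ceil(input.length / 64));
    pass.end();
    device.queue.submit([encoder.finish()]);

    // Reading the result back needs an extra copy into a MAP_READ buffer (omitted here).

Even this trivial example shows the boilerplate you take on in exchange for control over threads and workgroups.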
butokai
By coincidence I was just having a look at the work by the same author on languages based on Interaction Nets. Incredibly cool work, although the main repos seem to have been silent in the last couple of months? This work, however, is much older and doesn't seem to follow the same approach.
mattdesl
The author is working on a program synthesizer using interaction nets/calculus, which should be released soon. It sounds quite interesting.
FjordWarden
WebMonkeys feels a bit like array programming: you create buffers and then have a simple language to perform operations on those buffers.
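For reference, the basic flow looks roughly like this, adapted from the project's README (not run here, so treat it as a sketch):

    // Create a WebMonkeys instance (Node.js-style require; a browser build exists too)
    const monkeys = require("WebMonkeys")();

    // Upload a buffer of numbers to the GPU
    monkeys.set("nums", [1, 2, 3, 4]);

    // Spawn 4 parallel "monkeys"; each one squares the element at its index i
    monkeys.work(4, "nums(i) = nums(i) * nums(i);");

    // Read the buffer back: [1, 4, 9, 16]
    console.log(monkeys.get("nums"));

The kernel string is GLSL-flavoured rather than JavaScript, which is also what the discussion further down touches on.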
HVM is one of the most interesting developments in programming languages that I know of. I just don't know if it will prove to be relevant for the problem space it is trying to address. It is a very difficult technology that is trying to solve another very complex problem (AI) by seemingly sight stepping the issues. Like, you have to know linear algebra and statistics to do ML, and they are saying: yes, and you have to know category theory too.
foobarbecue
FYI, just in case you didn't know, it's "side-stepping," not "sight-stepping."
Thanks for introducing me to the concept of higher-order virtual machines.
Anduia
The title should say 2016
qoez
Awesome stuff. Btw, re: "For one, the only way to upload data is as 2D textures of pixels. Even worse, your shaders (programs) can't write directly to them": with WebGPU you have atomics, so you can actually write to them.
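In WGSL that looks roughly like the snippet below (an untested sketch of a shader string you would hand to device.createShaderModule): a read_write storage buffer the threads write into directly, plus an atomic counter they can all bump without races.

    const wgsl = `
      @group(0) @binding(0) var<storage, read_write> out  : array<f32>;
      @group(0) @binding(1) var<storage, read_write> hits : atomic<u32>;

      @compute @workgroup_size(64)
      fn main(@builtin(global_invocation_id) id : vec3<u32>) {
        if (id.x < arrayLength(&out)) {
          out[id.x] = f32(id.x) * 2.0; // direct write, no render-to-texture tricks
          atomicAdd(&hits, 1u);        // concurrent increment, no data race
        }
      }`;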
sylware
Maybe the guys here know:
Is there a little 3D/GFX/game engine (written in plain and simple C) strapped to a JavaScript interpreter (like QuickJS), without dragging in Apple's or Google's gigantic and ultra-complex web engines?
Basically, a set of JavaScript APIs with a runtime for Wayland/Vulkan 3D, FreeType2, and input devices.
gr4vityWall
You can use Node.js or Bun with bindings for stuff like raylib or SDL.
Examples:
https://github.com/RobLoach/node-raylib
https://github.com/kmamal/node-sdl
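For the raylib route the bindings mirror the C API pretty much one-to-one, so a minimal window looks roughly like this (a sketch from memory of the node-raylib README, not run here):

    const r = require("raylib");

    // Standard raylib "basic window" loop, driven from Node
    r.InitWindow(800, 450, "raylib via node-raylib");
    r.SetTargetFPS(60);

    while (!r.WindowShouldClose()) {
      r.BeginDrawing();
      r.ClearBackground(r.RAYWHITE);
      r.DrawText("hello from JavaScript", 190, 200, 20, r.LIGHTGRAY);
      r.EndDrawing();
    }

    r.CloseWindow();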
afavour
I assume OP mentioned QuickJS specifically because they're looking for a tiny runtime. Node and Bun aren't that.
FjordWarden
You can access the GPU without a browser using Deno[1] (and probably Node too if you search for it).
Not to be patronising here, but if you are looking for something that makes 3D/GFX/game programming easier without all the paralysing complexity, you should recalibrate how hard this is going to be.
jkcxn
You can quite easily make bindings for raylib/sokol-gpu/bgfx from Bun.
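Something in this direction, using bun:ffi's dlopen against a prebuilt raylib shared library. This is a sketch under assumptions: the library path is guessed, and struct-by-value arguments (e.g. raylib's Color) are left out because they need extra care over FFI.

    import { dlopen, FFIType, ptr, suffix } from "bun:ffi";

    // Open the shared library and declare only the C symbols we need.
    // Struct-by-value arguments are omitted in this sketch.
    const { symbols: rl } = dlopen(`libraylib.${suffix}`, {
      InitWindow:        { args: [FFIType.i32, FFIType.i32, FFIType.ptr], returns: FFIType.void },
      SetTargetFPS:      { args: [FFIType.i32],                           returns: FFIType.void },
      WindowShouldClose: { args: [],                                      returns: FFIType.bool },
      BeginDrawing:      { args: [],                                      returns: FFIType.void },
      EndDrawing:        { args: [],                                      returns: FFIType.void },
      CloseWindow:       { args: [],                                      returns: FFIType.void },
    });

    // C strings must be passed as NUL-terminated byte buffers
    const title = ptr(Buffer.from("bun + raylib\0", "utf8"));

    rl.InitWindow(800, 450, title);
    rl.SetTargetFPS(60);
    while (!rl.WindowShouldClose()) {
      rl.BeginDrawing();
      rl.EndDrawing();
    }
    rl.CloseWindow();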
chirsz
You could use Deno with WebGPU.
punkpeye
So what are the practical use cases for this?
This is cool but doesn't actually do any heavy lifting, because it runs GLSL 1.0 code directly instead of transpiling JavaScript to GLSL internally.
Does anyone know of a JavaScript-to-GLSL transpiler?
My interest in this is that the world abandoned true multicore processing 30 years ago, around 1995, when 3D video cards went mainstream. Had it not done that, we could have continued with Moore's law and had roughly 100-1000 CPU cores per billion transistors, along with local memories and data-driven processing using hash trees and copy-on-write, provided invisibly by the runtime or even in microcode, so that we wouldn't have to worry about caching. Apple's M series is the only mainstream CPU I know of that is attempting anything close to this, albeit poorly, by still having GPU and AI cores instead of emulating single-instruction-multiple-data (SIMD) with multicore.
So I've given up on the world ever offering a 1000+ core CPU for under $1000, even though it would be straightforward to design and build today. The closest approximation would be some kind of multiple-instruction-multiple-data (MIMD) transpiler that converts ordinary C-style code to something like GLSL without intrinsics, pragmas, compiler-hints, annotations, etc.
In practice, that would look like simple for-loops and other conditionals being statically analyzed to detect codepaths free of side effects, which would then be auto-parallelized for a GPU (illustrated below). We would never deal with SIMD or copying buffers to/from VRAM directly. The code would probably end up looking like GNU Octave, MATLAB or Julia, but we could also use stuff like scatter-gather arrays and higher-order methods like map/reduce, or even green threads. Vanilla fork/join code could potentially run thousands of times faster on GPU than CPU if implemented properly.
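To make the idea concrete: the goal would be to write the first loop below and have a transpiler emit something like the second form automatically. The second form is WebMonkeys-flavoured pseudocode, purely for illustration.

    // What you'd like to write: a plain, side-effect-free loop over an array
    for (let i = 0; i < nums.length; i++) {
      nums[i] = nums[i] * nums[i];
    }

    // What a MIMD-style transpiler would have to emit under the hood:
    // an explicit GPU kernel plus the buffer plumbing (illustrative only)
    monkeys.set("nums", nums);
    monkeys.work(nums.length, "nums(i) = nums(i) * nums(i);");
    nums = monkeys.get("nums");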
The other reason I'm so interested in this is that GPUs can't easily do genetic programming with thousands of agents acting and evolving independently in a virtual world. So we're missing out on the dozen or so other approaches to AI which are getting overshadowed by LLMs. I would compare the current situation to using React without knowing how simple the HTML form submit model was in the 1990s, which used declarative programming and idempotent operations to avoid build processes and the imperative hell we've found ourselves in. We're all doing it the hard way with our bare hands and I don't understand why.