Show HN: I built a synthesizer based on 3D physics
128 comments · May 2, 2025
AaronAPU
Glad I’m not the only audio developer around here.
The landing page needs an immediate audio-visual demo: not an embedded YouTube video, but video.js or similar. Low friction, so visitors get what it sounds and feels like immediately.
My 2 cents
kookamamie
Exactly. Had to scroll for ages to find anything to do with demo audio. A good demo song/track should be the first thing on the page, I think.
senbrow
1000% - I struggled to find anything listenable
jahnu
> Glad I’m not the only audio developer around here
There are a few of us :)
This synth is very cool. Highly original. Kudos.
deng
This looks incredible! But to be honest, it also looks incredibly daunting.
As a programmer and former physicist, I'm fascinated. As a musician, I'm not sure. At the moment, my feeling is that your landing page primarily addresses me as a programmer/physicist, and I'll definitely try it. But if you also want to sell this to musicians, what is really missing are more complex sound examples, like a tour of the existing presets and how you can manipulate them.

There is your introduction video, but to be perfectly honest, the sounds you feature there do not really impress me. From what I can hear, it sounds very much like the already existing physical modeling plugins, for instance AAS Chromaphone, and I already have plenty of those and they are much easier to use (also, their product page is a good example of how to sell a product to musicians). I can see, of course, that your VST allows me to dive much deeper into the weeds, and as a programmer/physicist I'm interested, but the musician in me is doubtful whether the invested work will be worth it.
Again, this looks awesome, and I really hope you can make this into a business, so please see my critique above as encouragement.
deng
OK, I've played around with the demo and insta-bought it, if just to support you. This is incredible work.
polotics
Same here, and it is excellent. I am getting a few buffer-drop clicks on an M3 MBP; reducing the polyphony solved it. But just in case, to the author: how much more efficiency do you think you can still squeeze out of this amazing plugin?
humbledrone
This is a long story, which is still ongoing. The GPU code is very, very heavily optimized (though I do still have some ideas on how to go further). The main problem we're having on Mac hardware is that the OS heuristics for when to turn the clock rate up on the GPU work really poorly for the audio use case. If you want gory details, I've written about it:
https://anukari.com/blog/devlog/waste-makes-haste
If anyone can put me in touch directly with an OS/Metal person at Apple it would be EXTREMELY helpful. I've had limited success so far.
nayuki
This reminds me of the reverse, where music drives 3D animations. I remember Animusic from the early 2000s.
https://en.wikipedia.org/wiki/Animusic, https://www.animusic.com/, https://www.youtube.com/results?search_query=animusic, https://www.youtube.com/@julianlachniet9036/videos
humbledrone
I'm a huge fan of Animusic. I remember seeing it for the first time in some big fancy mall in LA, where they had it projected on a wall, and I was blown away. It was absolutely an inspiration! Animusic-type ideas are a big part of why I made the 3D graphics fully user-customizable, for anyone who wants to go deep down that rabbit hole.
omneity
This rings such a vague and distant bell...
I'm several videos in and totally hooked, thank you for sharing. This would be an amazing interactive music app in VR, both to perform and to record trippy music videos.
mjcohen
I have the first two Animusic reels (VHS and DVD) and thought they were great. Unfortunately, the creator scammed people by taking money for Animusic 3 and then not making anything.
Most of them are on YouTube.
tarentel
Not sure I'll ever use this as it seems like a lot of work but wanted to say thank you for allowing me to download a demo without giving an email.
Also, even though I said I wouldn't use it, something that would be nice is a master volume (maybe I missed it). I often use VSTs standalone, and being able to change the volume without messing with the preset would make it a bit easier to use.
Definitely the most interesting synth I've ever seen.
humbledrone
Thanks, yeah, it really should have a master volume -- you didn't miss it, it's just not there yet!
airstrike
Really cool stuff! I would suggest putting a 60-second video at the very top of the page that stitches together short clips of the many ways it is awesome.
humbledrone
For anyone seeing this post a bit late: I need a bit of help from someone inside Apple who works on Metal. If you know someone, it would be great if you could connect me with them.
florilegiumson
Really cool to see GPUs applied to sound synthesis. Didn't realize that all one needed to do to keep up with the audio thread was to batch computations at the size of the audio buffer. I'm fascinated by the idea of doing the same kind of thing for continua in the manner of Stefan Bilbao: https://www.amazon.com/Numerical-Sound-Synthesis-Difference-...
Although I wonder if mathematically it’s the same thing …
sunray2
Thank you for this, it looks very cool!
Reminds me of Korg's Berlin branch with their Phase8 instrument: https://korg.berlin/ . Life imitates art imitates life :)
I highly support and encourage this. Is there a way I could contribute to Anukari at all (I'm a physicist by day)? These kinds of advancements are the stuff I would live for! However I should stay rooted in what's possible or helpful: I'm not sure if this is open-source for example. As long as I could help, I'm game.
humbledrone
For the foreseeable future I'm just going to be working on stability/performance, but eventually I will get back to adding more cool physics stuff. It's not open-source, but certainly I'd enjoy talking to a real physicist (I'm something a couple notches below armchair-level). Hit me up at evan@anukari.com sometime if you like!
sunray2
Thanks, will hit you up later!
I was using the demo just now: the sounds you get out of this are actually better than I expected! And I see what you meant in the videos about intuitive editing, rather than abstract.
Although, I was often hitting 100% CPU with some presets, with the sound glitching accordingly, so I could only experiment in part. I'm on an M1 Pro; initially I set a 128-sample buffer size in Ableton, but most presets were glitching. Setting it to 2048 did help, but that seems a bit high. Maybe my audio settings are incorrect? I can give more info later if it helps you.
humbledrone
Yeah performance at low buffer sizes is a big challenge, generally I recommend 512 or higher, which I know is not great but right now it's the most practical thing. The issue is that the computation is all done on the GPU, and there's a round-trip latency that has to be amortized. One day I'd like to convince Apple to work on the kernel scheduling latency...
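The amortization point can be made concrete with a toy back-of-envelope model. All constants below (dispatch overhead, per-sample cost) are made-up illustrative numbers, not measurements from Anukari:

```python
# Toy model: each GPU dispatch pays a fixed scheduling/round-trip cost,
# so the overhead per audio sample shrinks as the buffer grows.
SAMPLE_RATE = 48_000           # Hz
DISPATCH_OVERHEAD_US = 1000.0  # assumed fixed cost per GPU round trip (us)
PER_SAMPLE_US = 0.5            # assumed simulation cost per sample (us)

def budget_used(buffer_size):
    """Fraction of the real-time budget one dispatch consumes."""
    budget_us = buffer_size / SAMPLE_RATE * 1e6  # wall time the buffer spans
    cost_us = DISPATCH_OVERHEAD_US + PER_SAMPLE_US * buffer_size
    return cost_us / budget_us

for n in (128, 512, 2048):
    print(f"{n:5d} samples -> {budget_used(n):.1%} of budget")
```

With these assumed numbers, the fixed overhead eats roughly 40% of a 128-sample buffer's real-time budget but under 5% at 2048, which matches the qualitative pattern of larger buffers glitching less.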
sitkack
I would love to watch (and listen to) a discussion between you and Noah from Audiocube, https://news.ycombinator.com/item?id=42877399 https://main.audiocube.app/ a 3D spatial DAW.
humbledrone
I have been peripherally aware of Audiocube for a while, and it seems ridiculous that he and I have not interacted in any way. Maybe I'll bug him sometime. :)
akomtu
At a glance, this looks like a bunch of coupled oscillators. A natural extension of this idea is strings: a 1D array of oscillators modelling a wave equation. For example, a piano sound can be modelled by attaching a basic oscillator to one end of a string and a mic to the other end. The string and the oscillator push each other, creating the piano tone. Real pianos use 3 such strings with different properties.
Another idea. What if you make a circular string and attach 1 or more oscillators at random points? Same idea as above, but more symmetric. This "sound ring" instrument may produce unreal sounds.
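The string-of-oscillators idea can be sketched as a minimal explicit finite-difference simulation. This is a generic illustration, not Anukari's actual engine; the chain length, coupling constant, and damping are made-up values:

```python
# A string as a 1D chain of coupled oscillators: an explicit
# finite-difference scheme for the wave equation. A driving oscillator
# pushes one end; a "mic" reads a point near the other (fixed) end.
import math

N = 200            # number of masses in the chain
c2 = 0.25          # (c*dt/dx)^2; must be <= 1 for stability (CFL condition)
damping = 0.9995   # per-step energy loss

def simulate(steps, drive_freq=0.05):
    prev = [0.0] * N
    curr = [0.0] * N
    mic = []
    for t in range(steps):
        nxt = [0.0] * N
        for i in range(1, N - 1):
            # Leapfrog update: each mass is pulled toward its neighbors.
            nxt[i] = damping * (2.0 * curr[i] - prev[i]
                     + c2 * (curr[i - 1] - 2.0 * curr[i] + curr[i + 1]))
        nxt[0] = math.sin(2.0 * math.pi * drive_freq * t)  # driven end
        nxt[N - 1] = 0.0                                   # fixed end
        mic.append(nxt[N - 2])                             # mic near far end
        prev, curr = curr, nxt
    return mic

samples = simulate(2000)
```

The "sound ring" variant would just wrap the indices (cell 0 couples to cell N-1) instead of pinning the ends, with drive points scattered around the loop.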
humbledrone
> What if you make a circular string and attach 1 or more oscillators at random points?
If your computer meets the system requirements, you could always install the free demo and build this sound ring instrument to find out! Building these kinds of weird ideas and seeing what happens is my favorite thing to do with it.
akomtu
A piano string would have to be made of about 1000 basic oscillators.
humbledrone
My RTX 2080 Ti (admittedly not a cheap card) supports 768 physics objects (masses) for each of 16 voices, and I think that beefier cards can do 1024. There are other limitations on performance depending on how the system is constructed, and I certainly don't want to claim that I know it can simulate a piano, but...
imhoguy
This is so cool and has unlimited potential: you could model real instruments, e.g. a guitar, to experiment with resonant chamber shapes, materials, etc. Can't upvote enough for the good old perpetual licensing model!
ssfrr
I’m very curious about your experience doing audio on the GPU. What kind of worst-case latency are you able to get? Does it tend to be pretty deterministic or do you need to keep a lot of headroom for occasional latency spikes? Is the latency substantially different between integrated vs discrete GPUs?
humbledrone
Short answer: it has been a big pain in the butt. The GPU hardware is mostly really great, but the drivers/APIs were not designed for such a low-latency use case. There's (for audio) a large overhead latency in kernel execution scheduling. I've had to do a lot of fun optimization in terms of just reducing the runtime of the kernel itself, and a lot of less-fun evil dark magic optimization to e.g. trick macOS into raising the GPU clock speed.
Long answer: I've written a fair bit about this on my devlog. You might check out these tags:
https://anukari.com/blog/devlog/tags/gpu https://anukari.com/blog/devlog/tags/optimization
ssfrr
Thanks for the extra info, I read through some of your entries on GPU optimization and it definitely seems like it's been a journey! Thanks for blazing the trail.
I've been working on the Anukari 3D Physics Synthesizer for a little over two years now. It's one of the earliest virtual instruments to rely on the GPU for audio processing, which has been incredibly challenging and fun. In the end, predictably, the GUI for manipulating the 3D system actually ended up being a lot more work than the physics simulation.
So far I am only selling it direct on my website, which seems to be working well. I hope to turn it into a sustainable business, and ideally I'd have enough revenue to hire folks to help with it. So far it's been 99% a solo project, with (awesome) contractors brought in for some of the stuff that I'm bad at, like the 3D models and making instrument presets/videos.
The official launch announcement video is here: https://www.youtube.com/watch?v=NYX_eeNVIEU
But if you REALLY want to see what it can do, check out what Mick Gordon did with it on the first day: https://x.com/Mick_Gordon/status/1918146487948919222
I've kept a fairly detailed developer log about my progress on the project since October 2023, which might be of interest to the hardcore technical folks here: https://anukari.com/blog/devlog
I also gave a talk at Audio Developer Conference 2023 (ADC23) that goes deep into a couple of the problems I solved for Anukari: https://www.youtube.com/watch?v=lb8b1SYy73Q