Bolt3D: Generating 3D Scenes in Seconds
25 comments
· March 19, 2025
diggan
jsheard
They usually don't show the material channels either, which I assume is because there aren't any, and instead the lighting is statically baked into the asset. That works for a demo where you just wiggle the camera in a circle, but it'll immediately fall apart if the lighting environment changes or anything in the scene moves.
Legend2440
Think of it more like a 3D picture than an animation model.
There are no material channels or wireframe. It’s a volumetric 3D representation, like a picture made up of color blobs.
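To make the “color blobs” idea concrete, here is a minimal sketch (my own illustration, not code from Bolt3D) of a single anisotropic 3D Gaussian: a position, per-axis scales, a rotation, and an opacity, whose contribution to any point in space falls off smoothly from its centre.

```python
import numpy as np

def quat_to_rot(q):
    """Convert a unit quaternion (w, x, y, z) to a 3x3 rotation matrix."""
    w, x, y, z = q / np.linalg.norm(q)
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])

def splat_weight(point, mean, scales, quat, opacity):
    """Unnormalized contribution of one Gaussian 'blob' at `point`."""
    R = quat_to_rot(np.asarray(quat, dtype=float))
    S = np.diag(scales)
    cov = R @ S @ S.T @ R.T  # covariance = rotated, per-axis-scaled ellipsoid
    d = np.asarray(point, dtype=float) - np.asarray(mean, dtype=float)
    return opacity * np.exp(-0.5 * d @ np.linalg.inv(cov) @ d)

# Densest at the centre, falling off smoothly away from it:
w_centre = splat_weight([0, 0, 0],   [0, 0, 0], [0.1, 0.1, 0.3], [1, 0, 0, 0], 0.8)
w_edge   = splat_weight([0.2, 0, 0], [0, 0, 0], [0.1, 0.1, 0.3], [1, 0, 0, 0], 0.8)
```

A scene is just millions of these blobs blended together during rendering — nothing in the representation resembles a polygon or a UV-mapped material.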
kookamamie
And thus unusable for most things 3D models and scenes would be used for today.
ajwin
My understanding is that it is not a mesh, it’s Gaussian splatting. There are tools to convert splats into meshes, though.
diggan
Yeah, but isn't the expected outcome still to end up with actual 3D objects, not point clouds? Or did people start integrating point clouds into their 3D workflows already? Aside from stuff like volumes and the like, I think most of us are still stuck working with polygons for 3D.
tracerbulletx
No geometry in the conventional sense. I did a demo of rendering a Gaussian splat in React Three Fiber here, you can open the linked splat file (its hosted on hugging face) if you want to see the data format. https://codesandbox.io/p/sandbox/3d-gaussian-splat-in-react-... I also have this youtube video about creating that demo https://www.youtube.com/watch?v=6tVcCTazmzo
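If you do open the linked splat file, the layout below is one common `.splat` binary format used by web viewers (an assumption about this particular file — verify against the actual download): 32 bytes per Gaussian, with no faces, edges, or UVs anywhere.

```python
import struct

# Assumed ".splat" record layout (32 bytes per Gaussian):
#   3 x float32  position (x, y, z)
#   3 x float32  scale    (sx, sy, sz)
#   4 x uint8    colour   (r, g, b, a) -- alpha doubles as opacity
#   4 x uint8    rotation quaternion, each component mapped into [0, 255]
RECORD = struct.Struct("<3f3f4B4B")

def read_splats(data: bytes):
    """Yield (position, scale, rgba, quaternion) tuples from raw .splat bytes."""
    for off in range(0, len(data) - RECORD.size + 1, RECORD.size):
        v = RECORD.unpack_from(data, off)
        pos, scale = v[0:3], v[3:6]
        rgba = v[6:10]
        quat = tuple((c - 128) / 128 for c in v[10:14])  # back to roughly [-1, 1]
        yield pos, scale, rgba, quat

# Round-trip one synthetic record to show the shape of the data:
raw = RECORD.pack(1.0, 2.0, 3.0, 0.1, 0.1, 0.3, 255, 0, 0, 200, 255, 128, 128, 128)
pos, scale, rgba, quat = next(read_splats(raw))
```

Point being: the entire "geometry" is this flat list of Gaussians, which is why there is no wireframe to show.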
lmpdev
I use point clouds all the time in Rhino/LAStools/MeshLab
I much prefer point clouds and NURBS over meshes
Not everything is gamedev
Legend2440
You can convert splats into meshes using a simple marching cubes algorithm.
But the meshes produced are not easy to edit.
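A rough outline of that conversion (my own sketch, not any particular converter's pipeline, and restricted to isotropic splats for brevity): accumulate each splat's Gaussian density onto a voxel grid, then extract an iso-surface from the grid with marching cubes.

```python
import numpy as np

def splats_to_density(centers, sigmas, opacities, res=32, lo=-1.0, hi=1.0):
    """Accumulate isotropic Gaussian splats onto a res^3 voxel grid.

    A real converter would use each splat's full anisotropic covariance;
    a single sigma per splat keeps the sketch short.
    """
    axis = np.linspace(lo, hi, res)
    X, Y, Z = np.meshgrid(axis, axis, axis, indexing="ij")
    grid = np.zeros((res, res, res))
    for c, s, a in zip(centers, sigmas, opacities):
        d2 = (X - c[0])**2 + (Y - c[1])**2 + (Z - c[2])**2
        grid += a * np.exp(-0.5 * d2 / (s * s))
    return grid

grid = splats_to_density([(0.0, 0.0, 0.0)], [0.3], [1.0])
# Mesh extraction is then one call to an off-the-shelf implementation, e.g.:
#   verts, faces, _, _ = skimage.measure.marching_cubes(grid, level=0.5)
# (left commented out so the sketch only depends on numpy)
```

The resulting triangles follow a density iso-surface rather than any artist-authored topology, which is why the meshes are hard to edit.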
text0404
splats augment 3D scenes, they don't replace them. i've seen them used for AR/VR, photogrammetry, and high-performance 3D. going from splats to a 3D model would be a downgrade in terms of performance.
text0404
splats don't have wireframes, and the linked page has an embedded webgpu viewer.
slowtrek
Is anything like this available locally yet?
emmelaich
Here's the repo: https://github.com/szymanowiczs/splatter-image
Apparently you can clone and run the demo locally. But it wasn't clear at a glance how much runs locally and what hardware is required.
echelon
Your link above (Splatter Image) is not the same code / paper / research as Bolt3D.
This is earlier work by the lead author, from a year before they interned at Google Research and produced Bolt3D.
Bolt3D appears to be his intern research project done in conjunction with a bunch of other Google and DeepMind researchers.
I doubt there will ever be publicly available code for this.
ashikns
Isn't it generating in the browser using webgpu?
gessha
I assume that’s for interactive viewing only, not for generation.
> Our method takes 6.25 seconds to reconstruct one scene on a single H100 NVIDIA GPU or 15 seconds on an A100.
dvrp
I mean, it's the same author, but it seems like the co-authors are different.
How do you know it's the actual implementation?
Show. Us. The. Wireframes!
Every single time a new "Generate 3D" thing appears, they never show the wireframes of the objects/scenes up front; you always need to download and inspect things yourself. How is this not standard practice already?
Not displaying the wireframes at all, and not even offering sample files so we could at least see for ourselves, just makes it look like you already know the generated results are unusable...