Hunyuan3D-2-Turbo: fast high-quality shape generation in ~1s on a 4090
79 comments
March 20, 2025
fixprix
I recently got into creating avatars for VR and have used AI to learn Unity/Blender so ridiculously fast, like just a couple weeks I've been at it now. All the major models can answer basically any question. I can paste in screenshots of what I'm working on and questions and it will tell me step by step what to do. I'll ask it what particular settings mean, there are so many settings in 3d programs; it'll explain them all and suggest defaults. You can literally give Gemini UV maps and it'll generate textures for you, or this for 3d models. It feels like the jump before/after stack overflow.
The game Myst is all about this magical writing script that allowed people to write entire worlds in books. That's where it feels like this is all going. Unity/Blender/Photoshop/etc.. is ripe for putting a LLM over the entire UI and exposing the APIs to it.
ForTheKidz
> The game Myst is all about this magical writing script that allowed people to write entire worlds in books. That's where it feels like this is all going. Unity/Blender/Photoshop/etc.. is ripe for putting a LLM over the entire UI and exposing the APIs to it.
This is probably the first pitch for using AI as leverage that's actually connected with me. I don't want to write my own movie (sounds fucking miserable), but I do want to watch yours!
iaw
I have this system 80% done for novels on my machine at home.
It is terrifyingly good at writing. I expected freshman college level, but it's actually close to professional in terms of prose.
The plan is maybe to transition into children's books, then children's shows made with AI catered to a particular child at a particular phase of development (Bluey talks to your kid about making sure to pick up their toys).
thisisnotauser
I think there's a big question in there about AI that breaks a lot of my preexisting worldviews about how economics works: if anyone can do this at home, who are you going to sell it to?
Maybe today only a few people can do this, but five years from now? Ten? What sucker would pay for any TV shows or books or video games or anything if there's a ComfyUI workflow or whatever I can download for free to make my own?
YurgenJurgensen
The ‘professional level’ prose to which you refer: “ABSOLUTE PRIORITY: TOTAL, COMPLETE, AND ABSOLUTE QUANTUM TOTAL ULTIMATE BEYOND INFINITY QUANTUM SUPREME LEGAL AND FINANCIAL NUCLEAR ACCOUNTABILITY”
Even if AI prose weren't shockingly dull, these models all go completely insane long before they reach novel length. Anthropic are doing a good job embarrassing themselves at an easy bug-catching game for barely-literate 8-year-olds as we speak, and the model's grip on reality is basically gone at this point, even with a second LLM trying to keep it on track. And even before they get to the 'insanity' stage, their writing inevitably regresses towards the average of all writing styles regardless of the prompt, so there's not much 'prompt engineering' you can do to fix this.
Ancalagon
This has not been my experience. Which models are you using? The AIs all seem to lose the plot eventually.
yfw
The value of art is that it's a human creation and a product of human expression. The movie you generate from AI is at best content.
sinzin91
You should check out Blender MCP, which allows you to connect Claude Desktop/Cursor/etc to Blender as a tool. Still early days from my experiments but shows where it could go https://github.com/ahujasid/blender-mcp
dr_kiszonka
This looks great! Do you think you might add an option to use the model linked here instead of Hyper3D?
mclau156
I have never seen knowledge be the limiting factor in success in the 3D world; it's usually lots of dedicated time to model, rig, and animate.
iamjackg
It's often the limiting factor to getting started, though. Idiosyncratic interfaces and control methods make it really tedious to start learning from scratch.
spookie
I don't think they're idiosyncratic. They're built for purpose; one simply doesn't know what to look for yet. Same for programming, really.
I also think that using AI would only lengthen the learning period. It will get you some kind of result faster, though.
spookie
If you need time dedicated to it, knowledge is the limiting factor.
anonzzzies
You tried sharing your screen with Gemini instead of screenshots? I found it sometimes is really brilliant and sometimes terrible. It's mostly a win really.
fixprix
I just tried it for the first time and it was a pretty cool experience. Will definitely be using this more. Thanks for the tip!
baq
Look up blender and unity MCP videos. It’s working today.
fixprix
Watching a video on it now, thanks!
tempaccount420
> Unity/Blender/Photoshop/etc.. is ripe for putting a LLM over the entire UI and exposing the APIs to it.
This is what Windows Copilot should have been!
sruc
Nice model, but strange license. You are not allowed to use it in the EU, UK, or South Korea.
“Territory” shall mean the worldwide territory, excluding the territory of the European Union, United Kingdom and South Korea.
You agree not to use Tencent Hunyuan 3D 2.0 or Model Derivatives: 1. Outside the Territory;
johaugum
Meta’s Llama models (and likely many others') have similar restrictions.
Since they don’t fully comply with EU AI regulations, Meta preemptively disallows their use in those regions to avoid legal complications:
“With respect to any multimodal models included in Llama 3.2, the rights granted under Section 1(a) of the Llama 3.2 Community License Agreement are not being granted to you if you are an individual domiciled in, or a company with a principal place of business in, the European Union. This restriction does not apply to end users of a product or service that incorporates any such multimodal models”
https://github.com/meta-llama/llama-models/blob/main/models/...
littlestymaar
This is merely a “we don't take responsibility if this somehow violates EU rules around AI”, it's not something they can enforce in any way.
But even as a strategy, I don't think it would hold up if the Commission decided the release violated the regulation and chose to fine Tencent.
IMHO it's just the lawyers doing something to please the boss who asked them to “solve the problem” (which they can't, really).
ForTheKidz
Probably for domestic protection more than face value. Western licenses certainly have similar clauses to protect against liability for sanction violations. It's not like they can actually do much to prevent the EU from gaining from it.
North Korea? Maybe. UK? Who gives a shit.
manjunaths
I tried it on my Radeon 7900 GRE 16GB on Windows 11 WSL Ubuntu 24.04 with torch 2.4.0 and rocm 6.3.4, from here, https://repo.radeon.com/rocm/manylinux/rocm-rel-6.3.4/.
I am impressed; it runs very fast, far faster than the non-turbo version. But most of the time is being spent on the texture generation, not the model generation, and as far as I can tell this speeds up the model generation rather than the texture generation. But impressive nonetheless.
I also took a headshot of my kid, ran it through https://www.adobe.com/express/feature/ai/image/remove-backgr..., cropped the image and resized it to 1024x1024, and it spat out a textured 3D model of my kid. There are still some small artifacts, but I am impressed. It works very well with the assets/example_images. Very usable.
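(For anyone wanting to script that workflow end to end, a minimal sketch of the image-prep plus shape-generation step, assuming the hy3dgen pipeline API shown in the repo README; the class names, return types, and file paths here are assumptions and may differ between releases.)
```python
# Rough sketch of the same flow: background-removed headshot -> 1024x1024 -> mesh.
# The hy3dgen classes below follow the repo README; exact names/returns may vary.
from PIL import Image
from hy3dgen.shapegen import Hunyuan3DDiTFlowMatchingPipeline

# Already cropped, background already removed (e.g. via Adobe's tool).
image = Image.open("kid_headshot_nobg.png").resize((1024, 1024))

pipeline = Hunyuan3DDiTFlowMatchingPipeline.from_pretrained("tencent/Hunyuan3D-2")
mesh = pipeline(image=image)[0]   # untextured shape; texturing is a separate pipeline
mesh.export("kid_headshot.glb")
```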
Good work Hunyuan!
Y_Y
How are they extracting value here? Is this just space-race-4-turbo propagandising?
I see plenty of GitHub sites that are barely more than advertising, where some company tries to foss-wash their crapware, or tries to build a little text-colouring library that burrows into big projects as a sleeper dependency. But this isn't that.
What's the long game for these companies?
yowlingcat
There's an old Joel Spolsky post that's evergreen about this strategy -- "commoditize your complement" [1]. I think it's done for the same reason Meta has made llama reasonably open -- making it open ensures that a proprietary monopoly over AI doesn't threaten your business model, which is noteworthy when your business model might include aggregating tons of UGC and monetizing engagement over it. True, you may not be able to run the only "walled garden" around it anymore, but at least someone else can't raid your walled garden to make a new one that you can't resell anymore. That's the simplest strategic rationale I could give for it, but I can imagine deeper layers going beyond that.
https://www.joelonsoftware.com/2002/06/12/strategy-letter-v/
awongh
What's the best img2mesh model out there right now, regardless of processing requirements?
Are any of them better or worse with mesh cleanliness? Thinking in terms of 3d printing....
MITSardine
From what I could tell of the Git repo (2min skimming), their model is generating a point cloud, and they're then applying non-ML meshing methods on that (marching cubes) to generate a surface mesh. So you could plug any point-cloud-to-surface-mesh software in there.
I wondered initially how they managed to produce valid meshes robustly, but the answer is not to produce a mesh, which I think is wise!
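(To make that concrete, here is a generic stand-in for the non-ML step: extracting a surface mesh from a sampled scalar field with scikit-image's marching cubes, then exporting with trimesh. The sample_sdf helper is a hypothetical placeholder, not the repo's code.)
```python
# Generic illustration of the meshing step: marching cubes over a sampled
# signed-distance volume, then export with trimesh. Not the repo's actual code.
import numpy as np
import trimesh
from skimage import measure

def sample_sdf(resolution=128):
    # Hypothetical stand-in for querying the model's implicit field:
    # here, just the SDF of a sphere with radius 0.4.
    ax = np.linspace(-1.0, 1.0, resolution)
    x, y, z = np.meshgrid(ax, ax, ax, indexing="ij")
    return np.sqrt(x**2 + y**2 + z**2) - 0.4

volume = sample_sdf()
verts, faces, normals, _ = measure.marching_cubes(volume, level=0.0)
trimesh.Trimesh(vertices=verts, faces=faces, vertex_normals=normals).export("surface.obj")
```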
quitit
Running my usual img2mesh tests on this.
1. It does a pretty good job, definitely a steady improvement
2. The demos are quite generous versus my own testing; however, this type of cherry-picking isn't unusual.
3. The mesh is reasonably clean. There are still some areas of total mayhem (but these are easy to fix in clay modelling software).
leshokunin
Can we see meshes, exports in common apps as examples?
This looks better than the other one on the front page rn
llm_nerd
Generate some of your own meshes and drop them in Blender.
https://huggingface.co/spaces/tencent/Hunyuan3D-2
The meshes are very face-rich, and unfortunately do not reduce well in any current tool [1]. A skilled Blender user can quickly generate better meshes with a small fraction of the vertices. However if you don't care about that, or if you're just using it for brainstorming starter models it can be super useful.
[1] A massive improvement in the space will be AI or algorithmic tools which can decimate models better than the current crop. Often thousands of vertices can be reduced to a fraction with no appreciable impact in quality, but current tools can't do this.
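(For context, this is the kind of off-the-shelf decimation pass being described: a quadric-error simplification with Open3D. File names are placeholders, and as the footnote says, this sort of tool tends to damage these dense generated meshes rather than fix them.)
```python
# Typical off-the-shelf decimation (quadric error metric) via Open3D.
# Shown to illustrate the step the comment is criticizing, not as a fix.
import open3d as o3d

mesh = o3d.io.read_triangle_mesh("hunyuan_output.obj")
target = max(len(mesh.triangles) // 10, 1000)            # keep roughly 10% of the faces
simplified = mesh.simplify_quadric_decimation(target_number_of_triangles=target)
simplified.remove_degenerate_triangles()                  # clean up slivers left behind
o3d.io.write_triangle_mesh("hunyuan_output_decimated.obj", simplified)
```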
kilpikaarna
There's Quad Remesher (integrated in C4D and ZBrush as ZRemesher). It's proprietary but quite affordable ($109 for a perpetual commercial license or $15/month -- no, not affiliated).
No AI, just clever algorithms. I'm sure there are people trying to train a model to do the same thing but jankier and more unpredictable, though.
llm_nerd
It's an interesting project and seems to work superbly on human-created topologies, but in some testing with outputs of Hunyuan3D v2, it is definitely a miss. It is massively destructive to the model and seems to miss extremely obvious optimizations of the mesh while destroying the fidelity of the model even at very low reduction settings.
Something about the way this project generates models does not mesh, har har, with the algorithms of Quad Remesher.
dvrp
Agree. That's why I posted it; I was surprised people were sleeping on this. But it's because they posted something yesterday and so the link dedup logic ignored this. This is why I linked to the commit instead.
There are mesh examples on the GitHub repo. I'll toy around with it.
dvrp
See also: https://github.com/Tencent/FlashVDM
boppo1
Can it run on a 4080 but slower, or is the vram a limitation?
llm_nerd
It can run on a 4080 if you divide and conquer. I just ran a set on my 3060 (12 GB), although I have my own script which does each step separately, as each stage uses 6-12 GB of VRAM. The script:
- loads the diffusion model to go from text to an image and generates a varied series of images based on my text. One of the most powerful features of this tool, in my opinion, is text to mesh; to do this it uses a variant of Stable Diffusion to create 2D images as a starting point, then feeds them into the image-to-mesh pipeline. If you already have an image this part obviously isn't necessary.
- frees the diffusion model from memory.
Then for each image I:
- load the image-to-mesh model, which takes approximately 12 GB of VRAM, and generate a mesh
- free the image-to-mesh model
- load the mesh + image to textured-mesh model and texture the mesh
- free the mesh + image to textured-mesh model
It adds a lot of I/O between each stage, but with super fast SSDs it just isn't a big problem.
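(A minimal sketch of that stage-by-stage load/free pattern. The pipeline class names follow the repo README and are assumptions; the point is simply releasing each model before loading the next.)
```python
# Divide-and-conquer sketch: load one stage, run it, free VRAM, load the next.
# Pipeline names follow the Hunyuan3D-2 README and may differ between releases.
import gc
import torch
from hy3dgen.shapegen import Hunyuan3DDiTFlowMatchingPipeline
from hy3dgen.texgen import Hunyuan3DPaintPipeline

def free_vram():
    gc.collect()
    torch.cuda.empty_cache()

# Stage: image -> untextured mesh (roughly 12 GB on its own)
shape_pipe = Hunyuan3DDiTFlowMatchingPipeline.from_pretrained("tencent/Hunyuan3D-2")
mesh = shape_pipe(image="input.png")[0]
mesh.export("mesh_untextured.glb")   # persist between stages; cheap with a fast SSD
del shape_pipe
free_vram()

# Stage: mesh + image -> textured mesh
paint_pipe = Hunyuan3DPaintPipeline.from_pretrained("tencent/Hunyuan3D-2")
textured = paint_pipe(mesh, image="input.png")
textured.export("mesh_textured.glb")
del paint_pipe
free_vram()
```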
llm_nerd
Just as one humorous aside: if you use the text-to-mesh pipeline, as mentioned, the first stage is simply a call to a presumably fine-tuned variant of Stable Diffusion with your text and the following prompts (translated from Simplified Chinese):
Positive: "White background, 3D style, best quality"
Negative: "text, closeup, cropped, out of frame, worst quality, low quality, JPEG artifacts, PGLY, duplicate, morbid, mutilated, extra fingers, mutated hands, bad hands, bad face, mutation, deformed, blurry, dehydrated, bad anatomy, bad proportions, extra limbs, cloned face, disfigured, gross proportions, malformed limbs, missing arms, missing legs, extra arms, extra legs, fused fingers, too many fingers, long neck"
Thought that was funny.
dvrp
They don't mention that and I don't have one — can you try for yourself and let us know? I think you can get it from Huggingface or GH @ https://github.com/Tencent/Hunyuan3D-2
fancyfredbot
They mention "It takes 6 GB VRAM for shape generation and 24.5 GB for shape and texture generation in total."
So based on this your 4080 can do shape but not texture generation.
boppo1
Nice, that's all I needed anyway.
thot_experiment
Almost certainly. I haven't tried the most recent models, but I have used hy3d2 and hy3d2-fast a lot and they're quite light to inference. You're gonna spend more time decoding the latent than you will on inference. Takes about 6 GB of VRAM on my machine; I can't imagine these will be heavier.
lwansbrough
How long before we start getting these rigged using AI too? I’ve seen a few of these 3D models so far but none that do rigging.
halkony
This is what I'm looking forward to the most; there's a lot of potential for virtual reality with these models.
debbiedowner
Has anyone tried it on a 3090?