
LL3M: Large Language 3D Modelers

120 comments · August 17, 2025

Etherlord87

As someone who has been using Blender for ~7 years, with over 1000 answers on Blender Stack Exchange and a total score of 48,000:

This tool is maybe useful if you want to learn Python, in particular the basics of the Blender Python API; I don't really see any other use for it. All the examples given are extremely simple to do. Please don't use a tool like this, because it takes your prompt and generates the blandest possible version of it. It really takes only about a day to go through some tutorials and learn how to make models like these in Blender, with solid colors or some basic textures. The other thousands of days are what you would spend on creating correct topology, making an armature, animating, making more advanced shaders, creating parametric geometry nodes setups... But simple models like these you can create effortlessly, and those will be YOUR models, the way (roughly, of course) you imagined them. After a few weeks you're probably going to model them faster than the time it takes for prompt engineering. By then your imagination, your skill in Blender, and your understanding of 3D technicalities will have improved, and they will keep improving from there. And what will you learn using this AI?

I think meshy.ai is much more promising, but still I think I'd only consider using it if I wanted to convert photo/render into a mesh with a texture properly positioned onto it, to then refine the mesh by sculpting - and sculpting is one of my weakest skills in Blender. BTW I made a test showcasing how meshy.ai works: https://blender.stackexchange.com/a/319797/60486

elif

As someone who has tried to go through Blender tutorials for multiple days, I can tell you there is no chance I could get close to any of these examples.

I think you might be projecting your abilities a bit too much.

As someone who wants to make and use 3d models, not someone who wants to be a 3d model artist, this tech is insanely useful.

Etherlord87

Wrong tutorials. A lot of these models consist of just taking a primitive like a sphere, scaling it, and then creating another primitive, scaling it, moving it, so you have overlapping hulls ("bad" topology). Then in shading you just create a default material and set its color.

There are models in the examples that require e.g. extrusion (which is literally: select faces, press E, drag mouse).

Some shapes are smoothed/subdivided with the Catmull-Clark Subdivision Surface modifier, which you can add simply by pressing CTRL+2 in Object Mode (the digit is the number of subdivisions; 1 to 3 is usually enough, though you may set more for renders).
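To make that concrete, here's a minimal bpy sketch of that primitives-plus-subdivision workflow (the specific shapes and color are made-up placeholders):

    import bpy

    # Two overlapping sphere primitives - "bad" topology, but fast
    bpy.ops.mesh.primitive_uv_sphere_add(location=(0, 0, 0))
    body = bpy.context.active_object
    body.scale = (1.0, 1.0, 1.4)

    bpy.ops.mesh.primitive_uv_sphere_add(location=(0, 0, 1.6))
    head = bpy.context.active_object
    head.scale = (0.6, 0.6, 0.6)

    # Catmull-Clark smoothing - the scripted equivalent of pressing CTRL+2
    subdiv = body.modifiers.new("Subdiv", type='SUBSURF')
    subdiv.levels = 2

    # Default material with a solid color
    mat = bpy.data.materials.new("SolidColor")
    mat.use_nodes = True
    bsdf = mat.node_tree.nodes["Principled BSDF"]
    bsdf.inputs["Base Color"].default_value = (0.8, 0.2, 0.2, 1.0)
    body.data.materials.append(mat)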

Here's a good, albeit old tutorial: https://www.youtube.com/watch?v=1jHUY3qoBu8

Yes, I made some assumptions when estimating that it takes about a day to learn to make models like these: you have a free day to spend entirely on learning, and as a Hacker News user your IQ is above average and you're technically savvy. And the last assumption: you learn the required skills evenly, rather than going deep into the rabbit hole of e.g. correct topology; if you go through something like Andrew Price's doughnut tutorial, it may take more than a day, especially if you play around with the various functions of Blender rather than strictly following the videos - but you will end up making significantly better models than the examples presented, e.g. you will know to inset a cylinder's ngons to avoid the Catmull-Clark subdiv artifacts you can see in the 2nd column of hats.

> this tech is insanely useful.

No, it isn't, but you don't see it, because you don't have enough experience to see it (Dunning-Kruger effect) - this is why I mentioned my experience, not to flex but to point out I have the experience required to estimate the value of this tool.

xtracto

It's amazing how little understanding some people with "a gift" for certain skills have of those without it.

I play guitar; it's easy and I enjoy it a lot. I've taught some friends to play, and some of them just... don't have it in them.

Similarly, I've always liked drawing/painting and 3D modeling. But for some reason, that part of my brain is just not there. I can't do visualization. I've even tried award-winning books (Drawing on the Right Side of the Brain) without success.

Way back in the day I tried 3D modeling with AW Maya, 3D Studio Max, and then Blender. I WANTED to turn a sphere into a nice warrior; I was dying to make 3D games: I had the C/C++ part covered, as well as the OpenGL one. But I couldn't model a trash can, even after following all the tutorials and books.

This technology solves that for those of us who don't have that gift. I understand that for people who can "draw the rest of the fking owl" it won't look like much, but darn, it opens up a world for me.

cthlee

Most of these 3D asset generation tools oversimplify things down to stacking primitives and call it modeling, which skips fundamentals like extrusion, subdivision, and proper topology. If they wanted to make a tool that's actually worthwhile, what do you think the core features should be? It would be great if it enforced clean topology and streamlined subdivision workflows, but given your experience I'm curious what you'd consider essential.

echelon

0.0001% of the population can sculpt 3D and leverage complex 3D toolchains. The rest of us (80% or whatever - the market will be big) don't want to touch those systems. We don't have the time, patience, or energy for it, yet we'd love to have custom 3D games and content quickly and easily. For all sorts of use cases.

But that misses the fact that this is only the beginning. These models will soon generate entire worlds. They will eventually surpass human modeller capabilities and they'll deliver stunning results in 1/100,000th the time. From an idea, photo, or video. And easy to mold, like clay. With just a few words, a click, or a tap.

Blender is long in the tooth.

I'm short on Blender, Houdini, Unreal Engine, Godot, and the like. That entire industry is going to be reinvented from scratch and look nothing like what exists today.

That said, companies like CSM, Tripo, and Meshy are probably not the right solutions. They feel like steam-powered horses.

Something like Genie, but not from Google.

jayd16

I think you'll come to realize that the gap between people willing to learn Blender today and people who want to generate models but won't learn how is razor thin.

What's the use case of generating a model if all modelling and game engines are gone?

catlifeonmars

> These models will soon generate entire worlds. They will eventually surpass human modeller capabilities and they'll deliver stunning results in 1/100,000th the time. From an idea, photo, or video. And easy to mold, like clay. With just a few words, a click, or a tap.

This is a pretty sweeping and unqualified claim. Are you sure you’re not just trying to sell snake oil?

numpad0

100% of the population has all the tools needed, plus ChatGPT for free, to write a novel. Only 0.0001% are able to complete even a short story - most can't hold a complete and consistent plot in their head.

"AI allows those excluded from the guild" is total BS.

Gut figures: ~85% of creativity comes from skill itself, and ~10% or so comes from prior art. And it's all multiplied by a willingness factor in [0, 1], for which >99.9999% of the population has a value << 0.0001. Tools just don't change that; they only weigh down on the creativity part.

mxmilkiib

Blender will just add AI creation/editing

fwip

> These models will soon generate entire worlds.

They may. It's hard to take that as given when we already see LLMs plateauing at their current abilities. Nothing you've said is certain.

weregiraffe

> That entire industry is going to be reinvented from scratch

Hey, I heard that one before! The entire financial industry was supposed to have been reinvented from scratch by crypto.

srid

This reminds me of Elon Musk's recent claims on the future of gaming:

    This – but in real-time – is the future of gaming and all media
https://x.com/elonmusk/status/1954486538630476111

Etherlord87

The only sculpting example I see is the very first hat. Do you want to tell me you wouldn't be able to sculpt that?

I perfectly understand the time/patience/energy argument and my bias here. But even the Spore (video game) editor, with all its limitations, gives you a result similar to the examples provided, and at least there you are the one giving shape to your work, which gives you more control and your art more soul, and moreover puts you on a creative path where the results keep getting better.

Will AI soon surpass human modellers? I don't know... I hear so much hype for AI, and I have fallen victim to it myself: I spent quite some time trying to use AI for some serious work, and guess what - it works as a search engine. It will give me an ffmpeg command that I could duckduckgo anyway; it will give me an AutoHotkey script that I could figure out myself after a quick search; etc. The LLM fails even at the tasks that seem optimal for it. I have tried multiple times to translate movie subtitles with it, and while the translation was better than conventional machine translation, at some point the AI goes crazy and decides to change the order of scenes in the movie - something I couldn't detect until I watched the movie with friends, so it was a critical failure. I described a word, and the AI failed to give me the word I couldn't remember, while a simple thesaurus search succeeded. I described what I remembered of a quote, and the AI failed to give me the quote, but my google-fu was enough to find it.

You probably know how to code, and would cringe if someone suggested you just ask the AI to write the code for a video game without knowing how to code yourself, at least enough to supervise and correct it; and yet you think 3D modelling will be good enough without the intervention of a 3D artist. Maybe, but as someone experienced in 3D I just don't see it, just like I don't see AI making Hollywood movies, even though a lot of people claim it's a matter of years before that becomes reality.

Instead what I see is AI slop everywhere, and I'm sure video games will be filled with AI crap, just like a lot of places were filled with machine-learning translations because Google seriously suggested at its conferences that the translations were good enough (and if someone speaks only English, the Dunning-Kruger effect kicks in).

Sure, eventually we might have AGI and humanity will be obsolete. But I'm not a fan of extrapolating hyperbolic data; one YouTuber once extrapolated that in a couple of decades Earth would be visited by aliens, because there wouldn't be enough Earthlings to sustain his channel's viewership stats.

tarr11

One of my hobbies is Houdini, which is like Blender. While I agree with you that you can build a nice parameterised model in a few days - if you want to make an entire scene or a short film, you will need hundreds if not thousands of models, all textured and retopologized, and many of them rigged, animated, or even simulated.

What this means is that making even a 2 minute short animation is out of reach for a solo artist. Your only option today is to go buy an asset pack and do your best. But then of course your art will look like the asset pack.

AI Tools like this reduce one of the 20+ stages down to something reachable by someone working solo.

thwarted

> What this means is that making even a 2 minute short animation is out of reach for a solo artist.

Is it truly the duration of the result that consumes effort and the number of people required? What is the threshold for a solo artist? Is it expected that a 2 minute short takes half as much effort/people as a 4 minute short? Does the effort/people scale linearly, geometrically, or exponentially with the duration? Does a 2 minute short of a two entity dialog take the same as a 4 minute short of a monologue?

> Your only option today is to go buy an asset pack and do your best. But then of course your art will look like the asset pack.

What's more valuable? That you can create a 2 minute short solo or that all the assets don't look like they came from an asset pack? The examples shown in TFA look like they were procedurally generated, and customizations beyond the simple "add more vertexes" are going to take time to get a truly unique style.

> AI Tools like this reduce one of the 20+ stages down to something reachable by someone working solo.

To what end? Who's the audience for the 2 minute short by a solo developer? Is it meant to show friends? Post to social media as a meme? Add to a portfolio to get a job? Does something created by skipping a large portion of the 20+ steps truly demonstrate the person's ability, skill, or experience?

latexr

> Your only option today is to go buy an asset pack and do your best.

There is a real possibility the assets generated by these tools will look equally or even more generic, the same way generated images today are full of tells.

> What this means is that making even a 2 minute short animation is out of reach for a solo artist.

Flatland was animated and edited by a single person. In 2007. It’s a good movie. Granted, the characters are geometric shapes, but still it’s a 90 minute 3D movie.

https://en.wikipedia.org/wiki/Flatland_(2007_Ehlinger_film)

Puparia is a gorgeous 2D animated film done by a single person in 2020.

https://en.wikipedia.org/wiki/Puparia

These are exceptional cases (by definition, as there aren’t that many of them), but do not underestimate solo artists and the power of passion and resilience.

Ey7NFZ3P0nzAe

There are always exceptions. I think the parent is referring to the many solo artists who would almost be able to make such great movies if not for time constraints or life events, etc. I'm sure there are countless solo artists who made 75% of a great movie and then ran out of time for unforeseeable reasons. Making creation a bit easier allows many more solo artists to create!

oblio

Puparia is a 3 minute short film that took a veteran artist 3 years to make. I think you're making OP's point.

jimmis

As a designer/dev working on AI for customer service tools, who constantly has to remind stakeholders that LLMs aren't creative, aren't good at steering conversations, etc., I wish there were more focus on integrating AI into tools in ways that make work faster, rather than trying to do it all. There's still so much low-hanging fruit out there.

Other than the obvious (IDEs), I wish there were more tools like Fusion 360's AI auto-constraints. They save so much time on something that is mostly tedious and uncreative. I could see similar integrations for Blender (honestly, the most interesting part of what OP posted is changing the materials... it could save a lot of time spent connecting noodles).

sbuk

Tedious tasks like retopologising, UV unwrapping, and rigging would be great examples of where AI in tools like Maya and Blender could be really useful.

dash2

What if I don't want to spend a few weeks learning Blender? What if I just want to spend a couple of hours and get something that's good enough?

ghurtado

If you think the results on that page are "good enough", then I assure you, as a heavy Blender and gen-AI user, that it would take you much less time to get this good at Blender (about Logo-turtle level) than it would to figure out how to run this model yourself locally, with all the headaches attached.

Without question.

spiralcoaster

Sounds like you're looking for something like an asset store, or open source models.

raincole

The best thing you can do is to just make money. I am serious.

The current 3D GenAI isn't that good. And when it eventually becomes good enough, it won't be cheap to run locally, at least for quite a while. You just need to wait and hoard spare cash. Learning how to use the current models is like trying to get GPT-1 to write code for you.

numpad0

Then you don't get any positive recognition for the product anyway.

exasperaited

... is like...

What if I don't want to learn guitar? What if I just want to spend a couple of hours and get something that sounds like guitar?

I tend to say in this situation: you can do that. Nobody's stopping you. But you shouldn't expect wider culture to treat you like you've done the work. So what new creative work are you seeking to do with the time you've saved?

aledalgrande

I don't know a lot about 3D modeling, but I can see that the objects created by this AI are way too high-poly, which would be bad for performance if used e.g. in a game. But it still looks like a great prototyping tool to me, especially if you want to express an idea in your head to an actual 3D designer, in the same way UX designers can now show a prototype to developers with Claude Code, instead of trying to reproduce an idea in Figma.
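One common way to knock a generated mesh down to game-friendly density in Blender is the Decimate modifier; a minimal bpy sketch, with the ratio as an arbitrary placeholder:

    import bpy

    # Collapse the active object's mesh down to ~10% of its faces
    obj = bpy.context.active_object
    dec = obj.modifiers.new("Decimate", type='DECIMATE')
    dec.ratio = 0.1  # placeholder - tune per asset
    bpy.ops.object.modifier_apply(modifier=dec.name)
    print(len(obj.data.polygons), "faces after decimation")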

sbarre

Yeah but remember this tool is, today, the worst it will ever be.

This kind of work will only improve, and we're in early days for these kinds of applications of LLM tech.

latexr

I wish that “it will get better” wasn’t the response every time someone shares actionable advice and thoughtful specific criticism about the state of the art.

You don’t know if it will get better. Even if it does, you don’t know by how much or the time frame. You don’t know if it will ever improve enough to overcome the current limitations. You don’t know if it will take years.

In the meantime, while someone is sitting on their ass for years waiting for the uncertain future of the tool getting better, someone else is getting their hands dirty, learning the craft, improving, having fun, collaborating, creating.

There is plenty of garbage out there where we were promised “it will only get better”, “in five years (eternally five years away) it will take over the world”, and now they’re dead. Where’s the metaverse NFT web3 future? Thrown into a trash can and lit on fire, replaced by the next embarrassment of chatting with porn versions of your step mom.

https://old.reddit.com/r/singularity/comments/1mrygl4/this_i...

sbarre

> You don’t know if it will get better. Even if it does, you don’t know by how much or the time frame. You don’t know if it will ever improve enough to overcome the current limitations. You don’t know if it will take years.

You are _technically_ correct but if I base my assumptions on the fact that almost all worthwhile software and technology has gotten better over the years, I feel pretty confident in standing behind that assumption.

> In the meantime, while someone is sitting on their ass for years waiting for the uncertain future of the tool getting better, someone else is getting their hands dirty, learning the craft, improving, having fun, collaborating, creating.

This is a pretty cynical take. We all decide where we prioritize our efforts and spend our time in life, and very few of us have the luxury to freely choose where we want to focus our learning.

While I wait for technologies I enjoy but haven't mastered to get better, I am certainly not "sitting on my ass". I am dedicating my time to other necessary things, like making a living or supporting my family.

In this specific case, I wish I could spend hours and hours getting good at Blender and 3D modelling or animation. Dog knows I tried when I was younger. But it wasn't in the cards.

I'm allowed to be excited at the prospect that technology advancements will make this more accessible and interesting for me to explore and enjoy with less time investment. I also want to "get my hands dirty, learn, improve, have fun, create" but on my own terms and in my own time.

Any objection to that is shitty gatekeeping.

soulofmischief

On the other hand, as both an artist and a machine learning practitioner, I think most artists are only seeing the surface layer here and have little insight into its derivative: the algorithms and state of research that are advancing the state of the art on a weekly basis. It'll never be obvious that we're at the critical moment, because critical phase changes happen all at once, suddenly, out of nowhere.

There is an insane amount of low-hanging fruit right now, and potentially decades or centuries of very important math to be worked out around optimal learning strategies, but it's clear that we do have a very strong likelihood of our ways of life being fundamentally altered by these technologies.

I mean, already, artists are suddenly having to grapple with all sorts of new and forgotten questions around artistic identity and integrity, what qualifies as art, who qualifies as an artist... Generative technology has already made artists begin to question and radically redefine the nature of art, and if it's good enough to do that, I think it's already worth serious consideration. These technologies, even in their current form, were considered science fiction or literal magic until very recently.

jappgar

LLMs aren't suited to this, just like they aren't suited to generating images (different models do the hard work, even when you're using an LLM interface).

I agree with the parent comment. This might be neat to learn the basics of blender scripting, but it's an incredibly inefficient and clumsy way of making anything worthwhile.

sbarre

That's fair, and perhaps a different kind of multi-modal model will emerge that is better at learning and interacting with UIs.

Or maybe applications will develop new interfaces to meet LLMs in the middle, sort of like how MCP servers are a very primitive version of that for APIs.

Future improvements don't just have to be a better version of exactly what it is today, it can certainly mean changing or combining approaches.

Leaving AI/LLM aside, 3D modeling and animation tech has drastically evolved over the years, removing the need for lots of manual and complicated work by automating or simplifying the workflow for achieving better results.

ghurtado

Right.

This is like training an AI to be an Excel expert and then asking it to make Doom for you: you're going to get some result, and it will be impressive given the constraints. It's also going to be pure dog shit that will never see the light of day other than as a meme.

ghurtado

> Yeah but remember this tool is, today, the worst it will ever be.

True. And if it stops being a path forward because a better approach is found (more likely than not), then this is also the best it will ever be.

voxleone

It's a neat tool. But I suspect it can only ever be as good as Blender. Of course I could be wrong.

parineum

Hammers have existed for thousands of years and still can't do my laundry.

blakcod

Let’s be honest: most are just looking to get acquired. Then enshittified.

btown

I think there’s a really interesting point here: even if a model is capable of planning and reasoning, part of a creator’s skillset is asking for the right thing, which requires an understanding of how that model creates its artifacts.

And in 3D, you won’t be able to do that without understanding, as an operator, what you’d want to ask for. Do you want this specific part to be made parametrically in a specific way for future flexibility, animation, or rendering? When? Why? And do these understandings of techniques give you creative ideas?

A model trained solely on the visual outcome won’t add these constraints unless you know to ask for them.

Even if future iterations of this technology become more advanced and can generate complex models, you need to develop a skillset to be able to gauge and plan around how they fit into your larger vision. And that skillset requires fundamentals.

myhf

The article is about language models. Language models are not capable of planning or reasoning.

overfeed

> I don't really see other usage of this

My hot take: this is the future of high-fidelity prompt-based image generation, not diffusion models. Cycles (or any other physically based renderer) is superior to diffusion models because it is not probabilistic, so scene generation via LLM before handing off to such a tool leads to superior results, IMO - at least for "realistic" outputs.
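In that setup the LLM only has to emit a scene description; the pixels then come from the renderer rather than from a diffusion process. A minimal bpy sketch of the hand-off, with sample count and output path as arbitrary placeholders:

    import bpy

    # Render whatever scene a prior script has built, with a physically based engine
    scene = bpy.context.scene
    scene.render.engine = 'CYCLES'
    scene.cycles.samples = 128                 # arbitrary quality setting
    scene.render.filepath = "/tmp/render.png"  # arbitrary output path
    bpy.ops.render.render(write_still=True)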

raincole

Of course no one knows the future, but I think it's very plausible that the future of films/games (especially games) tech resembles something like this:

1. Generate something that looks good in 2D latent space

2. Generate a 3D representation from the 2D output

3. Next time the same scene is shown on the screen, reuse information from step 2 to guide step 1

overfeed

That's an interesting idea! I'm thinking step 2 might be inserting 3D foreground/hero objects in front of a 2D background / inside a 2D worldbox.

ghurtado

> My hot take: this is the future of high-fidelity prompt-based image generation, not diffusion models

Why are those two the only options?

overfeed

> Why are those two the only options?

I made no such claim. The only thing I declared is my belief in the superiority of PBR over diffusion models for a specific subset of image-generation tasks.

I also clearly framed this as my opinion, you are free to have yours.

nickparker

I've had surprising success with meshy.ai as part of a workflow to go from images my friends want to good 3D models. The workflow is

1. Have GPT-5, or really any image model (Midjourney's retexture is also good), convert the original image to something closer to a matte rendered mesh, i.e. remove extraneous detail and any transparency or other confusing volumetric effects.

2. Throw it into meshy.ai's image-to-3D mode and select the best result, or maybe return to step 1 with a different simplified image style if I don't like the results.

3. Pull it into Blender and make whatever mods I want in mesh editing mode, e.g. specific fits and sizing to assemble with other stuff, adding some asymmetry to an almost-symmetric thing (because the model has strong symmetry priors, and turning them off in the UI doesn't really turn them off), or modeling on top of the AI'd mesh to get a cleaner one for further processing.

The meshes are fairly OK structure wise, clearly some sort of marching cubes or perhaps dual contouring approach on top of a NeRF-ish generator.

I'm an extremely fast mechanical CAD user and a mediocre Blender artist, so getting an AI starting point to block out the overall shape and let me just do edits is quite handy. E.g. a friend wanted to recreate a particular statue of a human; tweaking some T-posed generic human model into the right pose and proportions would have taken "more hours than I'm willing to give him for this", i.e. I wouldn't have done it. But with this workflow it was 5 minutes of AI and then an hour of fussing in Blender to go from the solid model to the curvilinear wireframe style of the original statue.

QuantumNomad_

> 1. […] convert the original image to something closer to a matte rendered mesh […]

Sounds interesting. Do you have any example images like that you could share? I understand the part about making transparent surfaces opaque, but I'm not sure what the whole image looks like after this step.

Also, would you be willing to share the prompt you type to achieve this?

menzoic

GPT-5 is a text-only model. ChatGPT still uses 4o for images.

mattnewton

The naming is very confusing. I thought the underlying model was gpt-image-1 in the API, but transparently presented as part of the same chat model in the UI?

emporas

Very encouraging results. The spatial intelligence of LLMs was very bad a year back. I spent quite some time trying to make them write stories in which objects are placed up and down, left and right, front and back; they always got hopelessly confused.

I asked GPT which is the most scriptable CAD software; its answer was FreeCAD. Blender is not CAD software as far as I understand: the user cannot make measurements like in FreeCAD.

Unfortunately, FreeCAD's API is a little scattered and not well organized, so GPT has trouble remembering, searching for, and retrieving the relevant functions. Blender is a lot more popular, with more code on the internet, and it performs much better.
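For contrast, a minimal FreeCAD scripting sketch (the part and its sizes are made up): every dimension is an explicit, queryable parameter, which is what makes it CAD:

    import FreeCAD as App

    # A parametric box; all dimensions are explicit, editable properties
    doc = App.newDocument("Demo")
    box = doc.addObject("Part::Box", "Box")
    box.Length = 100.0  # mm, along x
    box.Width = 60.0    # mm, along y
    box.Height = 40.0   # mm, along z
    doc.recompute()

    # Exact extents, usable for dimensioned drawings
    print(box.Shape.BoundBox)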

ThomPete

Would it be possible to write a script for CAD that could do measurements?

emporas

Measurements are usually printed and given to construction workers, usually along some axis. People who lay the bricks take a top view of a house with the dimensions along the y axis. People who build the doors and windows take a side view with the dimensions along the x axis. And so on.

Blender cannot do that as far as I understand.

Something like that for example: [1]

[1] https://all3dp.com/2/freecad-2d-tutorial/

ThomPete

Looks like it can.
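Blender's Python API does expose at least basic measurements; a minimal bpy sketch that reads the active object's extents along each axis:

    import bpy

    # World-space size of the active object along x, y, z (scale applied)
    x, y, z = bpy.context.active_object.dimensions
    print(f"width (x): {x:.2f}, depth (y): {y:.2f}, height (z): {z:.2f}")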

nrjames

I was playing with Aseprite (pixel editor) the other day. You can script it with Lua, so I asked Claude to help me write scripts that would create different procedurally-generated characters each time they were run. They were reproducible with seeds and kind of resembled people, but very far from what I would consider to be high quality. It was a fun little project and easily accessible, though.

- https://www.aseprite.org

kleiba

I've been looking for a good pixel-art AI for a while. Most things I tried look okay, but not stunning. If anyone has had good experience with an AI tool for that, I'd be grateful for a link.

notimpotent

If you're interested in that, check out the guys over at pixellab.ai

They have an Aseprite plugin that generates pretty nice-looking sprites from your prompts.

reactordev

Before you trash the 3D model quality, just think about the dancing baby and early Pixar animations. This is incredible. I can't wait to be able to prompt my LLM to generate a near-ready 3D model that all I have to do is tweak, texture, bake, and export.

jappgar

LLMs are language models. Meshes aren't language. Yes, this can generate Python to create simple objects, but that's not how anyone actually creates beautiful 3D art, just like no one is hand-writing SVG files to create vector art.

LLMs alone will never make visual art. They can provide you an interface to other models, but that's not what this is.

rozab

This is of course true, but have you ever seen Inigo Quilez's SDF renderings? It's certainly not scalable, but it sure is interesting

https://www.youtube.com/watch?v=8--5LwHRhjk

margalabargala

That's fine. I'm happy to define "visual art" as things LLMs can't do, and use LLMs only for the 3d modelling tasks that are not "visual art".

Such tasks can be "not making visual art", but that doesn't mean they aren't useful.

reactordev

I know that, I was making a statement about how you can.

Not exactly sure what your point is. If an LLM can take an idea and spit out words, it can spit out instructions (just like we can with code) to generate meshes, or boids, or point clouds or whatever. Secondary stages would refine that into something usable and the artist would come in to refine, texture, bake, possibly animate, and export.

In fact, this paper is exactly that: words as input, code to use with Blender as output. We really just need headless Blender to spit it out as a glTF and it's good to go to the second stage.
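Headless Blender already exists, for what it's worth; a minimal sketch, assuming a prior script has built the scene and with the output path as a placeholder:

    # run with: blender --background --python export_gltf.py
    import bpy

    # Export the scripted scene as a binary glTF
    bpy.ops.export_scene.gltf(filepath="/tmp/model.glb", export_format='GLB')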

therouwboat

If you have an artist, can't you just talk to her about what you want, and then she makes the model and all the rest of it? I don't really understand what you gain if you pay for an LLM, make a model with it, and then give it to an artist.

nperez

I'm not a modeler but I've tried it a few times. For me, modeling is a pain that I need to deal with to solo-dev a 3d game project. I would think about using something like this for small indie projects to output super low-poly base models, which I could then essentially use as a scaffold for my own finer adjustments. Saving time is better than generating high-poly masterpieces, for me at least.

ranahanocka

Author here. AMA!

ThouYS

large token models are coming for everything, because everything can be made a token.

the detour via language here is not needed, these models can speak geometry more and more fluently

numpad0

This seems like a great observation; a lot of negative reactions to AI-generated data seem to come from the limitations of using language, which denies users good creative input.

Stevvo

Right, like word2vec blew everyone's mind back in the day, but 3D models have always existed in a "vector space".

DrBenCarson

https://zoo.dev/design-studio is better in every conceivable way

king_terry

This is amazing. Solo game dev will actually become solo.

nativeit

I’ve been saying for a long time, “gaming is just too pro-social and well adjusted, we need more isolation and introversion in gaming!”

bluefirebrand

I hope we adopt some kind of "This product uses AI generated content" label for games so I can avoid them forever :)

Workaccount2

That's when people just lie about using AI. The thing about AI is that it is only obvious when it is obvious.

hhh

Steam has this

jellybaby2

Haaa you sound jealous :-) Pity you’re not smart enough to benefit

keyle

This is at "cute" level of useful I feel. A few more iterations though and this will get interesting.

swinglock

Looks like a fun toy. That can already be useful. I'm thinking of games that don't even have to leave the prototype stage, e.g. Roblox, or just better prototyping in general. Even if it can't produce anything sufficiently good yet (that depends on the game and audience - look at Minecraft), if it's fun to tinker with, that's enough to be useful. If it improves, that will certainly be more exciting, but it already looks useful.

throwmeaway222

What I always wanted in video games where you can craft weapons was the idea that you could combine duct tape with wood with fishing lures and create something the designer didn't think of :)

manc_lad

Zelda Breath of the Wild attempts to approach this with an interesting interface.

xnx

Tears of the Kingdom?