
Teaching LLMs how to solid model

66 comments · April 23, 2025

alnwlsn

The future: "and I want a 3mm hole in one side of the plate. No the other side. No, not like that, at the bottom. Now make it 10mm from the other hole. No the other hole. No, up not sideways. Wait, which way is up? Never mind, I'll do it myself."

I'm having trouble understanding why you would want to do this. A good interface between what I want and the model I will make is to draw a picture, not write an essay. This is already (more or less) how Solidworks operates. AI might be able to turn my napkin sketch into a model, but I would still need to draw something, and I'm not good at drawing.

The bottleneck continues to be having a good enough description to make what you want. I have serious doubts that even a skilled person will be able to do it efficiently with text alone. Some combo of drawing and point+click would be much better.

This would be useful for short enough tasks like "change all the #6-32 threads to M3" though. To do so without breaking the feature tree would be quite impressive.

seveibar

Most likely you won’t be asking for specific things like “3mm hole 3in from the side”, you’ll say things like “Create a plastic enclosure sized to go under a desk, ok add a usb receptacle opening, ok add flanges with standard screw holes”

In the text to CAD ecosystem we talk about matching our language/framework to “design intent” a lot. The ideal interface is usually higher level than people expect it to be.

mediaman

The problem is that this isn't very useful except for the very earliest ideation stages of industrial design, which hardly need CAD anyway.

Most parts need to fit with something else, usually some set of components. Then there are considerations around draft, moldability, size of core pins, sliders, direction of ejection, wall thickness, coring out, radii, ribs for stiffness, tolerances...

LLMs seem far off from being the right answer here. There is, however, lots to make more efficient. Maybe you could tokenize breps in some useful way and see if transformers could become competent speaking in brep tokens? It's hand-wavy but maybe there's something there.

Mechanical engineers do not try to explain models to each other in English. They gather around Solidworks or send pictures to each other. It is incredibly hard to explain a model in English, and I don't see how a traditional LLM would be any better.

abe_m

I think this is along the lines of the AI horseless carriage[1] topic that is also on the front page right now. You seem to be describing the current method as operated through an AI intermediary. I think the power in AI for CAD will be at a higher level than lines, faces, and holes. It will be more along the lines of "make a bracket between these two parts", "make this part bolt to that other part", "attach this pump to this gear train" (where the AI determines the pump uses an SAE 4-bolt flange of a particular size and a splined connection, then adds the required features to the housing and shafts).

I think it will operate on higher structures than current CAD typically works with, and I don't think it will be history-tree and sketch based like Solidworks or Inventor. I suspect it will be more of a direct modelling approach.

I also think integrating FEA to let the AI check its work will be part of it. When you tell it to make a bracket between two parts, it can check the weight of the two parts and some environmental specification from a project definition, then auto-configure FEA to verify the correct number of bolts, material thickness, etc. If it made the bracket from folded sheet steel, you could then tell it you want a cast aluminum bracket, and it could redo the work.

[1]https://news.ycombinator.com/item?id=43773813

alnwlsn

You're right, but I think we have a long way to go. Even our best CAD packages today don't work nearly as well as advertised. I dread to think what Dassault or Autodesk would charge per seat for something that could do the above!

abe_m

I agree. I think a major hindrance to the current pro CAD systems is being stuck to the feature history tree, and to rather low-level features. A considerable amount of requirements data is just added to a drawing free-form, without semantic, machine-readable meaning. Lots of tolerancing, fits, GD&T, datums, etc. are just lines in a PDF. There is the move to MBD/PMI and the NIST-driven STEP digital thread, but the state of CAD is a long way from that being common. I think we need to get to the data being embedded in the model a la MBD/PMI, but then go beyond it. The definitions of threads, gear or spline teeth, ORB and other hydraulic ports don't fit comfortably into the current system. There needs to be a higher-level machine-readable capture, and I think that is where the LLMs may be able to step in.

I suspect the next step will be such a departure that it won't be Siemens, Dassault, or Autodesk that do it.

coderenegade

I think this is correct, especially the part about how we actually do modelling. The topological naming problem is really born from the fact that we want to do operations on features that may no longer exist if we alter the tree at an earlier point. An AI model might find it easier to work directly with boolean operations or meshes, at which point, there is no topological naming problem.

eurekin

I have come across a significant number of non-engineers wanting to do what ultimately amounts to some basic CAD modelling. Some stall on such tasks for years (home renovation) or just don't do them at all. After some brief research, the main cause is not wanting to sink over 30 hours into learning the basics of a CAD package of choice.

For some reason they imagine it as a daunting, complicated, impenetrable task with insurmountable pitfalls, be it the interface, the general idea of how it operates, or fear of unknown details (tolerances, clearances).

It's easy to underestimate the knowledge required to use CAD productively.

One piece of anecdata near me: high schools that buy 3D printers and assume pupils will naturally want to print models. After the initial days of fascination, the printers stop being used at all. I've heard from a person close to education that it's a country-wide phenomenon.

Back to the point though - maybe there's a group of users who want to create but just can't do CAD at all, and such text descriptions seem perfect for them.

Animats

There's a mindset change needed to use a feature tree based constructive solid geometry system. The order in which you do things is implicit in the feature tree. Once you get this, it's not too hard. But figuring out where to start can be tough.

I miss the TechShop days, from when the CEO of Autodesk liked the maker movement and supplied TechShop with full Autodesk Inventor. I learned to use it and liked it. You can still get Fusion 360, but it's not as good.

The problem with free CAD systems is that they suffer from the classic open source disease - a terrible user interface. Often this is patched by making the interface scriptable or programmable or themeable, which doesn't help. 3D UI is really, really hard. You need to be able to do things such as change the viewpoint and zoom without losing the current selection set, using nothing but a mouse.

(Inventor is overkill for most people. You get warnings such as "The two gears do not have a relatively prime number of teeth, which may cause uneven wear.")

phkahler

>> I have come across a significant number of non engineers wanting to do, what ultimately involves some basic CAD modelling.

I very much want Solvespace to be the tool for those people. It's very easy to learn and do the basics. But some of the bugs still need to get fixed (failures tend to be big problems for new users, because without experience it's hard to explain what's going wrong or find a workaround), and we need a darn chamfer and fillet tool.

Animats

> I very much want Solvespace to be the tool for those people.

Probably not. "Copyright 2008-2022 SolveSpace contributors. Most recent update June 2 2022."

itissid

> and I want a 3mm hole in one side of the plate. No the other side. No, not like that, at the bottom. Now make it 10mm from the other hole. No the other hole. No, up not sideways.

One interesting thing here is that you can read faster than TTS can speak when absorbing info, but you can speak much faster than you can type. So is all that typing the problem, or is it just an interface problem? And in your example, you could also just draw with your hand (wrist sensor) + talk.

I've been using agents to code this way. It's way faster.

alnwlsn

Feels a bit like being on a call with someone at the hardware store, about something that you both don't know the name for. Maybe the person on the other end is confused, or maybe you aren't describing it all that well. Isn't it easier to take a picture of the thing or just take the thing itself and show it to someone who works there? Harder again to do that when the thing you want isn't sold at the store, which is probably why you're modeling it in the first place.

Most of the mechanical people I've met are good at talking with their hands. "take this thing like this, turn it like that, mount it like this, drill a hole here, look down there" and so on. We still don't have a good analog for this in computers. VR is the closest we have and it's still leagues behind the Human Hand mk. 1. Video is good too, but you have to put in a bit more attention to camerawork and lighting than taking a selfie.

michaelt

> I'm having trouble understanding why you would want to do this.

You would be amazed at how much time CAD users spend using Proprietary CAD Package A to redraw things from PDFs generated by Proprietary CAD Package B.

plorg

I spend a lot of time using proprietary CAD package A to redraw things from Who Knows What, but that's mostly because the proprietary CAD data my vendor would send me is trapped behind an NDA for which getting legal approval would take more time and effort than just modeling the things in front of me with minimum viable detail. Or else my vendor is two businesses removed from the person with the CAD data I need (which may require a different NDA that we can't sign without convincing our vendor to do the same). Anyone I've ever been able to request CAD data from will just send me STEP or Parasolid files, and they work well enough for me to do my job. Often I spend more time removing model features so my computer will run a little faster.

oofbaroomf

If you used LLM-assisted CAD for real industrial design, you would end up specifying exactly where everything has to go and what size it has to be. But if you are doing that, you may as well make an automated program to convert those specific requirements into a 3D model.

Oh wait, that's CAD.

Cynical take aside, I think this could be quite useful for normal people making simple stuff, and could really help consumer 3D printing have a much larger impact.

whatshisface

Here's how it might work, by analogy to the workflow for image generation:

"An aerodynamically curved plastic enclosure for a form-over-function guitar amp."

Then you get something with the basic shapes and bevels in place, and adjust it in CAD to fit your actual design goals. Then,

"Given this shape, make it easy to injection mold."

Then it would smooth out some things a little too much, and you'd fix it in CAD. Then, finally,

"Making only very small changes and no changes at all to the surfaces I've marked as mounting-related in CAD, unify my additions visually with the overall design of the curved shell."

Then you'd have to fix a couple other things, and you'd be finished.

tylergetsay

In your example, what about mounting the electronics, or specifying that the control knobs need to fit within certain dimensions? I guess it's easy if those objects are available as models, but that's not always the case. 3D scanner, maybe?

whatshisface

You'd get control knobs of a reasonable size, and mounting holes in an arbitrary rectangle, then correct them with the true dimensions outside of generation.

ssl-3

So maybe the future is to draw a picture, and go from there?

For instance: My modelling abilities are limited. I can draw what I want, with measurements, but I am not a draftsman. I can also explain the concept, in conversational English, to a person who uses CAD regularly and they can hammer out a model in no time. This is a thing that I've done successfully in the past.

Could I just do it myself? Sure, eventually! But my modelling needs are very few and far between. It isn't something I need to do every day, or even every year. It would take me longer to learn the workflow and toolsets of [insert CAD system here] than to just earn some money doing something that I'm already good at and pay someone else to do the CAD work.

Except that in the future, perhaps I will be able to use the bot to help bridge the gap between a napkin sketch of a widget and a digital model of that same widget. (Maybe like Scotty tried to do with the mouse in Star Trek IV.)

(And before anyone says it: I'm not really particularly interested in becoming proficient at CAD. I know I can learn it, but I just don't want to. It has never been my goal to become proficient at every trade under the sun and there are other skills that I'd rather focus on learning and maintaining instead. And that's OK -- there's lots of other things in life that I will probably also never seek to be proficient at, too.)

spmcl

I did this a few months ago to make a Christmas ornament. There are some rough edges in the process, but for hobby 3D printing, current LLMs with OpenSCAD are a game-changer. I hadn't touched my 3D printer for years until this project.

https://seanmcloughl.in/3d-modeling-with-llms-as-a-cad-luddi...

0_____0

As a MCAD user this makes me feel more confident that my skills are safe for a bit longer. The geometry you were trying to generate (minus bayonet lock, which is actually a tricky thing to make because it relies on elastic properties of the material) takes maybe a few minutes to build in Solidworks or any modern CAD package.

dgacmu

This matches my experience having Claude 3.5 and Gemini 2.0 Flash generate OpenSCAD, but I would call it interesting rather than a game-changer.

It gets pretty confused about the rotation of some things and generally needs manual fixing. But it kind of gets the big picture sort of right. It mmmmayybe saved me time the last time I used it but I'm not sure. Fun experiment though.

adamweld

A recent Ezra Klein Interview[0] mentioned some "AI-Enabled" CAD tools used in China. Does anyone know what tools they might be talking about? I haven't been able to find any open-source tools with similar claims.

>I went with my colleague Keith Bradsher to Zeekr, one of China’s new car companies. We went into the design lab and watched the designer doing a 3D model of one of their new cars, putting it in different contexts — desert, rainforest, beach, different weather conditions.

>And we asked him what software he was using. We thought it was just some traditional CAD design. He said: It’s an open-source A.I. 3D design tool. He said what used to take him three months he now does in three hours.

[0] https://www.nytimes.com/2025/04/15/opinion/ezra-klein-podcas...

sota_pop

Sounds like he could have been using an implementation of Stable Diffusion + ControlNet. I've used Automatic1111, but I understand ComfyUI and somethingsomethingforge are more modern versions.

throwaway314155

Happy to be corrected but this sounds like the kind of bullshit that crops up from time to time confusing "old" AI with generative AI.

Not that I don't believe it's possible. I just think the alternative (that it's bullshit) is more likely.

ariwilson

I'm a great user for this problem, as I just got a 3D printer and I'm no good at modeling. I'm doing tutorials and printing a few things with TinkerCAD now, but historically my visualization sense is not great. I used SketchUp when I had a working Oculus Quest, which was very cool, but I'm not sure how practical it is.

Unfortunately I tried to generate OpenSCAD a few times to make more complex things and it hasn't been a great experience. I just tried o3 with the prompt "create a cool case for a Pixel 6 Pro in openscad" and, even after a few attempts at fixing, still had a bunch of non-working parts with e.g. the USB-C port in the wrong place, missing or incorrect speaker holes, a design motif for the case not connected to the case, etc.
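For what it's worth, the kind of output that would at least be easy to fix by hand is one where the cutouts are driven by named dimensions rather than baked-in coordinates. A minimal sketch of what I mean (placeholder numbers, not real Pixel 6 Pro measurements):

  // Sketch only: phone and port sizes are placeholders, not Pixel 6 Pro specs.
  phone   = [76, 164, 9];        // width, height, depth of the phone body
  wall    = 1.6;                 // case wall thickness
  usb_cut = [14, wall + 2, 5];   // opening for the USB-C plug

  difference() {
      cube([phone[0] + 2*wall, phone[1] + 2*wall, phone[2] + wall]);  // outer shell, open top
      translate([wall, wall, wall]) cube(phone);                      // cavity the phone sits in
      // the port cutout, centered on the bottom edge and positioned from the
      // same named dimensions as everything else, so it can be nudged by hand
      translate([(phone[0] + 2*wall - usb_cut[0]) / 2, -1,
                 wall + (phone[2] - usb_cut[2]) / 2])
          cube(usb_cut);
  }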

It reminds me of ChatGPT in late 2022, when it could generate code that worked for simple cases but would randomly mess up anything mildly subtle. Maybe someone needs to finetune one of the more advanced models on some data / screenshots from Thingiverse or MakerWorld?

geor9e

I get that CAD interfaces are terrible - but if I imagine the technological utopia of the future - using the English language as the interface sounds terrible no matter how well you do it. Unless you are paraplegic and speaking is your only means of manipulating the world.

I much prefer the direction of sculpting with my hands in VR, pulling the dimensions out with a pinch, snapping things parallel with my fine motor control. Or sketching on an iPad, just dragging a sketch to extrude it along its normal, etc. These UIs could be vastly improved.

I get that LLMs are amazing lately, but perhaps keep them somewhere under the hood where I never need to speak to them. My hands are bored and capable of a very high bandwidth of precise communication.

klysm

I’m not sure that CAD interfaces are terrible, it’s just hard work

_mattb

Really cool, I'd love to try something like this for quick and simple enclosures. Right now I have some prototype electronics hot glued to a piece of plywood. It would be awesome to give a GenCAD workflow the existing part STLs (if they exist) and have it roughly arrange everything and then create the 3D model for a case.

Maybe there could be a mating/assembly eval in the future that would work towards that?

alexose

As a huge OpenSCAD fan and everyday Cursor user, it seems obvious to me that there's a huge opportunity _if_ we can improve the baseline OpenSCAD code quality.

If the model could plan ahead well, set up good functions, pull from standard libraries, etc., it would be instantly better than most humans.
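Something like this rough sketch is what I mean by planning ahead and setting up good functions - named parameters up front and one module per feature (all names and dimensions here are made up for illustration):

  // Illustrative only: a small parametric enclosure written the way you'd want
  // an LLM to write it - parameters at the top, reusable modules, no magic numbers.
  wall   = 2.5;            // wall thickness, mm
  outer  = [80, 50, 30];   // overall enclosure size
  hole_d = 3.2;            // clearance diameter for M3 screws

  module shell(size, t) {
      // open-top box: outer volume minus a cavity inset by the wall thickness
      difference() {
          cube(size);
          translate([t, t, t])
              cube([size[0] - 2*t, size[1] - 2*t, size[2]]);
      }
  }

  module corner_holes(size, d, inset = 6) {
      // screw holes near the four corners of the base
      for (x = [inset, size[0] - inset], y = [inset, size[1] - inset])
          translate([x, y, -1])
              cylinder(h = 12, d = d, $fn = 32);
  }

  difference() {
      shell(outer, wall);
      corner_holes(outer, hole_d);
  }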

If it had a sense of real-world applications, physics, etc., well, it would be superhuman.

Is anyone working on this right now? If so I'd love to contribute.

switchbak

OpenSCAD has some fundamental issues that folks are well aware of. Build123d is a Python alternative that shows promise and seems more capable, and there are others around.

Hard to beat the mindshare of OpenSCAD at the moment though.

dave1010uk

I 3D printed a replacement screw cap for something that GPT-4o designed for me with OpenSCAD a few months ago. It worked very well and the resulting code was easy to tweak.

Good to hear that newer models are getting better at this. With evals and RL feedback loops, I suspect it's the kind of thing that LLMs will get very good at.

Vision language models can also improve their 3D model generation if you give them renders of the output: "Generating CAD Code with Vision-Language Models for 3D Designs" https://arxiv.org/html/2410.05340v2

OpenSCAD is primitive. There are many libraries that may give LLMs a boost. https://openscad.org/libraries.html
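As a rough sketch of the boost (assuming BOSL2, one of the libraries on that list, is installed - cuboid() and cyl() are its rounded primitives), the kind of rounding that takes hand-rolled minkowski() or hull() work in plain OpenSCAD becomes a single argument:

  include <BOSL2/std.scad>   // assumes the BOSL2 library is installed

  difference() {
      cuboid([60, 40, 12], rounding = 3);   // block with rounded edges, centered at the origin
      cyl(h = 20, d = 6, $fn = 48);         // through-hole down the middle
  }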

mertleee

Really curious how you got these tools to talk so elegantly to each other. Is this an MCP implementation, or something else?

conorbergin

Your prompts are very long for how simple the models are; using a CAD package would be far more productive.

I can see AI being used to generate geometry, but not a text-based one; it would have to be able to reason about 3D forms and do differential geometry.

You might be able to get somewhere by training an LLM to make models with a DSL for Open Cascade, or any other sufficiently powerful modelling kernel. Then you could train the AI to make query-based commands, such as:

  // places a threaded hole at every corner of the top surface (maybe this is an enclosure)
  CUT hole(10mm,m3,threaded) LOCATIONS surfaces().parallel(Z).first().inset(10).outside_corners()
This has a better chance of being robust as the LLM would just have to remember common patterns, rather than manually placing holes in 3d space, which is much harder.
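For contrast, the "manually placing holes in 3d space" version of roughly that same query, sketched in plain OpenSCAD (plate dimensions invented): every coordinate has to be computed and kept consistent by the generator itself, and nothing complains if one corner ends up off the surface.

  // Plain-CSG equivalent of the query above, with made-up plate dimensions.
  plate  = [100, 80, 10];   // the "top surface" being drilled
  inset  = 10;
  hole_d = 2.5;             // M3 tap-drill size; threads cut or heat-set afterwards

  difference() {
      cube(plate);
      for (p = [[inset, inset],
                [plate[0] - inset, inset],
                [inset, plate[1] - inset],
                [plate[0] - inset, plate[1] - inset]])
          translate([p[0], p[1], -1])
              cylinder(h = plate[2] + 2, d = hole_d, $fn = 32);
  }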

wgpatrick

I definitely agree with your point about the long prompts.

The long prompts are primarily an artifact of trying to make an eval where there is a "correct" STL.

I think your broader point, that text input is bad for CAD, is also correct. Some combo of voice/text input + using a cursor to click on geometry makes sense. For example, clicking on the surface in question and then asking for "m6 threaded holes at the corners". I think a drawing input also makes sense, as it's quite quick to do.

eMPee584

Actually, XR is great for this: with a good 3D interface, two-handed manipulation of objects felt surprisingly useful when I last tried an app called GravitySketch on my Pico 4.

Legend2440

There are diffusion models for 3D generation. They make pretty good decorative or ornamental models, like figurines. They are less good for CAD.

howon92

> To my surprise, Zoo’s API didn’t perform particularly well in comparison to LLMs generating STLs by creating OpenSCAD

This is interesting. As foundational models get better and better, does having proprietary data lose its defensibility more?

jessfraz

Zoo co-founder here. Our product is still pre-v1, but it's getting to v1 very soon. We actually built a whole new CAD kernel from the ground up. I say this because we can't train on models the CAD engine does not yet support. Just two weeks ago we shipped CSG boolean operations to the CAD engine, which unlocked new training data that uses those operations. So it's fair to say that at the time he used our model we were using about 2% of the data we actually have. As we can use more and more of it, the model will only get better.

jmcpheron

It's so cool to see this post, and so many other commenters with similar projects.

I had the same thought recently and designed a flexible bracelet for Pi Day using OpenSCAD and a mix of some of the major AI providers. It's cool to see other people doing similar projects. I'm surprised how well I can do basic shapes in OpenSCAD with these AI assistants.

https://github.com/jmcpheron/counted-out-pi