
Show HN: Torch Lens Maker – Differentiable Geometric Optics in PyTorch

44 comments

·March 21, 2025

Hello HN! For the past 6 months I've been working on an open-source Python library that implements differentiable geometric optics in PyTorch. It's still very experimental, but the eventual goal is to use it to design optical systems with a state-of-the-art optimization framework and a beautiful code-based API. Think OpenSCAD, but for optical systems.

Not only is PyTorch's autograd an amazing general-purpose optimizer, but torch.nn (the neural network building blocks) can be used pretty much out of the box to model an optical system. This is because there is a strong analogy between the layers of a neural network and the optical elements of a so-called sequential optical system. So the magic is that we can stack lenses as if we were stacking Conv2D and ReLU layers and everything works out. Instead of Conv2D you have ray-surface collision detection; instead of ReLU you have the law of refraction. Designing lenses is surprisingly like training a neural network.
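
The layers-as-optical-elements analogy can be sketched in plain PyTorch. This is a hypothetical toy, not the torchlensmaker API: each element is an nn.Module whose forward() maps a bundle of paraxial rays (height, angle) to a new bundle, so elements stack with nn.Sequential exactly like Conv2D/ReLU layers would.

```python
import torch
import torch.nn as nn

class Refraction(nn.Module):
    """Toy paraxial refraction at a flat interface: n1*u1 = n2*u2."""
    def __init__(self, n1, n2):
        super().__init__()
        self.ratio = n1 / n2

    def forward(self, rays):
        # rays: (N, 2) tensor of (height, angle)
        h, u = rays[:, 0], rays[:, 1]
        return torch.stack([h, u * self.ratio], dim=1)

class Gap(nn.Module):
    """Free-space propagation over a (learnable) distance d."""
    def __init__(self, d):
        super().__init__()
        self.d = nn.Parameter(torch.tensor(float(d)))

    def forward(self, rays):
        h, u = rays[:, 0], rays[:, 1]
        return torch.stack([h + self.d * u, u], dim=1)

# Stack optical elements exactly like neural network layers:
system = nn.Sequential(Refraction(1.0, 1.5), Gap(10.0), Refraction(1.5, 1.0))
rays = torch.tensor([[1.0, 0.1], [2.0, -0.05]])
out = system(rays)
```

Because Gap.d is an nn.Parameter, the whole stack is immediately trainable with any torch.optim optimizer, just like a network.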

Check out the docs for examples of using the API. My favorite one is the rainbow :) https://victorpoughon.github.io/torchlensmaker/examples/rain...

You should be able to `pip install torchlensmaker` to try it out, but I just set it up so let me know if there's any trouble.

I was part of the Winter 1'24 batch at the Recurse Center (https://www.recurse.com/) working on this project pretty much full time. I'm happy to talk about that experience too!

etik

Great work! Here's some prior art in the (torch) space: https://github.com/vccimaging/DiffOptics

A few notes: though paraxial approximations are "dumb", they are very useful tools for lens designers and for understanding/constraining the design space; calculating the F/#, aperture stop, and principal planes is critical in some approaches. This pushes what autodiff tools are capable of because you need to get Hessians of your surface. There's also a rich history in objective function definition and quadrature integration techniques thereof which you can work to implement, and you may like to have users be able to specify explicit parametric constraints.
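
The need for second derivatives of the surface is something autograd handles directly via double backward. A minimal sketch, assuming a hypothetical spherical sag function z(r) with curvature c (not any particular library's API):

```python
import torch

def sag(r, c=torch.tensor(0.02)):
    # Spherical surface sag z(r) with curvature c (conic constant k = 0)
    return c * r**2 / (1 + torch.sqrt(1 - c**2 * r**2))

r = torch.tensor(1.0, requires_grad=True)
# First derivative dz/dr, keeping the graph for a second pass
dz = torch.autograd.grad(sag(r), r, create_graph=True)[0]
# Second derivative d2z/dr2 via double backward
d2z = torch.autograd.grad(dz, r)[0]
# Near the axis, d2z approaches the curvature c = 0.02
```

For vector-valued surface parameters, torch.autograd.functional.hessian does the same in one call.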

fouronnes3

Yes, that DiffOptics paper was one of the main inspirations for this project. It's a very cool paper.

> There's also a rich history in objective function definition and quadrature integration techniques thereof which you can work to implement, and you may like to have users be able to specify explicit parametric constraints.

Yes, this is definitely the direction I want to take the project in. If you have any reference material to share I'd be interested!

etik

Gaussian quadrature integration for rms spot size or wavefront error:

> Forbes, G. W. (1989). Optical system assessment for design: numerical ray tracing in the Gaussian pupil. Journal of the Optical Society of America A, 6(8), 1123. https://doi.org/10.1364/josaa.6.001123

In general, you'll want to look at MTF calculation (look at Zemax's manual for explanation/how-to). There is also a technique to target optimization at particular spatial frequencies:

> K. E. Moore, E. Elliott, et al., "Digital Contrast Optimization - A faster and better method for optimizing system MTF," in Optical Design and Fabrication 2017 (Freeform, IODC, OFT), OSA Technical Digest (online) (Optical Society of America, 2017), paper IW1A.3

skwb

I'm an avid (hobbyist) photographer and I've noticed a TON of genuinely good 3rd party lenses (primarily Sigma and Tamron) and even 'fine' lenses at rock bottom prices (Viltrox, 7Artisans, TTArtisans, etc) for like $250. The conventional wisdom I've heard is that computer-aided design has totally revolutionized this field.

I can only hope that projects like these help build better lenses for the future.

cbarrick

Neat!

I've been working off and on on a similar hobby project, working through the book _Computational Fourier Optics: A MATLAB Tutorial_, and implementing it in Jax.

My main interest is adaptive optics, but I'm only a hobbyist (limited physics background) and honestly haven't had much time to put into it.

fouronnes3

Would love to chat with you about your project! I'm very interested in jax also. You can find my email on my website if you wanna get in touch :)

barrenko

If you could be bothered to write a blog post on it, I'd be interested in reading it.

mhalle

It's really awesome that you've taken a widely available tool like PyTorch and used it out of domain to provide a library like this, especially one focused on exact solutions and not approximations.

Any plans to include diffractive optics as well? (A totally self-serving question, given that refractive optics is much more common.) In a past life I taught holography and wrote interactive programs to visualize the image forming properties of holograms.

aaclark

This is very cool and crosses paths with a few projects I've been working on recently:

- implementing a ReLU network in Blender, mostly for visualization
- applying the Riemann-Schwarz mapping theorem to discrete radiance fields
- solving a spherical-elliptical optics dilemma in perspective projection

Your project dovetails spectacularly with this, yet you've tackled the core chain of geometry problems "in the opposite direction". It seems I'll have to pick a different thesis topic, but I'd love to pick your brain about it.

fouronnes3

Feel free to contact me! Love to chat :) My contact info is on my website.

Scipio_Afri

Very cool. This is a somewhat naive question considering I actually have an EE background, and I think I know the answer, but given their shared EM theory, do you see any parallels of this thinking tangentially applicable to radio frequency system design?

fouronnes3

I know absolutely nothing about radio so I can't really answer, sorry! But there's really something to be said about using PyTorch (or any other ML framework for that matter) as a general purpose optimizer. The modeling capabilities of torch.nn are quite extraordinary, and the fully dynamic nature of the PyTorch graph (something that wasn't really possible with previous frameworks like TensorFlow) is something that hasn't been talked about enough, in my opinion. It's basically differentiable programming: you can write any "normal" Python function and get an *exact* derivative of it. There are some caveats, but it's very, very powerful.
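
To illustrate the "normal Python function, exact derivative" point, a tiny example with an arbitrary function containing a loop (nothing special about this function, it's just ordinary code):

```python
import torch

def f(x):
    # An ordinary Python function: loops, control flow, anything goes
    y = x
    for _ in range(3):
        y = torch.sin(y) + x * y
    return y

x = torch.tensor(2.0, requires_grad=True)
f(x).backward()
# x.grad now holds the exact derivative df/dx at x = 2.0,
# computed by autograd through the dynamically-built graph
```

No symbolic math, no manual differentiation of the loop body: autograd records the operations as they execute and differentiates the recorded graph.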

num3ric

Potential similarities with Mitsuba's inverse rendering functionality? https://mitsuba.readthedocs.io/en/stable/src/inverse_renderi...

bee_rider

I will ask a dumb question as someone who knows nothing about this stuff (since you already have good questions by smart people):

How close is something like this to being competitive with ray-tracing (as featured in video game engines, or as featured in something like Blender)? I guess, since it is using Torch it should be… surprisingly performant, right? You get some hardware acceleration at least.

fouronnes3

Both this project (and optical design in general) and rendering engines (like video games or any 3D rendering) implement ray tracing, and so are related. But the application is different and therefore they are not really competing. The underlying math is similar, but implementations will be quite different.

Ray tracing for rendering typically needs to figure out which surface a ray is hitting as part of collision detection. This is typically done with something called Bounding Volume Hierarchies. Optical design (at least in sequential mode) sidesteps that issue completely because the order of surface collisions is known in advance.

Another big difference is that ray tracing for optical design needs to be differentiable. This is why I made this project in PyTorch, so that the entire collision detection code and physics implementation (refraction, reflection) can be differentiated with respect to parameters that describe the shape of surfaces. Then you can gradient descent the entire optical system to find optimal parameters.
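
A minimal sketch of that optimization loop, using an idealized paraxial thin lens instead of real surface geometry (hypothetical example, not the torchlensmaker API): the focal length is the differentiable parameter, and gradient descent drives the RMS spot size at a target plane to zero.

```python
import torch

# Lens parameter to optimize: focal length of an ideal thin lens
f = torch.tensor(50.0, requires_grad=True)
heights = torch.linspace(-5.0, 5.0, 11)  # incoming parallel ray bundle
target = 80.0                            # desired focus distance

opt = torch.optim.Adam([f], lr=1.0)
for _ in range(500):
    opt.zero_grad()
    u = -heights / f               # thin-lens equation: angle after the lens
    spot = heights + target * u    # trace: ray heights at the target plane
    loss = (spot ** 2).mean()      # mean square spot size -> drive to zero
    loss.backward()                # differentiate through the ray trace
    opt.step()
# f converges toward 80.0, i.e. the lens focuses at the target plane
```

In the real library the parameters would describe surface shapes and the trace would include refraction at each surface, but the structure (trace, loss, backward, step) is the same.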

Finally, ray tracing for rendering typically implements a lot of realism features like diffuse or partial reflection, which actually makes the code more complex in some ways. Optical design, on the other hand, cares more about things like precise modeling of dispersion, which is not a huge focus for rendering. And there can be real-time performance constraints if you're making a video game; here the implementation doesn't care about real time at all.

makizar

Could you ELI5 what the applications would be? Could a render engine be built on top of this and hooked up to a DCC like Blender? Or is this a way to do computational photography, say, correct the depth of field of an image or "denoise" it?

fouronnes3

The main application is designing optical systems. Say you want to build a camera lens. Modern camera lenses are made of multiple individual lenses, sometimes up to 12 or more pieces stacked together. Everything from the shape of the lens surfaces to the exact materials and gaps between the pieces has to be precisely calculated so that light ends up where you want it to!

pixelpoet

Surprised no one has mentioned Mitsuba renderer, in particular the caustic design demo: https://www.youtube.com/watch?v=eTHL3W2NUn0&list=PLI9y-85z_P...

qoez

As an expert in this: what's your opinion on using optics like this as actual neural networks? Any big drawbacks or big real benefits?

isgb

Is there any way to simulate (maybe even interactively) things like focus and zoom? It would be cool to have some way to shift lenses (or lens groups) along the optical axis and visualize how light rays get projected onto the image plane.

fouronnes3

That would be cool indeed! It's not really a focus of this project, and kinda complex because it's all in Python. Only the rendering widget is in JS, and it only passively displays the input data it gets as JSON.

Check out this project[1] which kinda does that, although it's 2D only as far as I know. But it's fully interactive, which is super neat.

[1] https://phydemo.app/ray-optics/