Distributed Ray-Tracing
10 comments
October 16, 2025 · amelius
maybewhenthesun
I don't know PBRT, but in other raytracers the trick is to create a single surface between the two materials and set the ior of that surface's material to the quotient of the iors of the materials on both sides.
That way the light will refract on the internal boundary as if it moves from the one material to the other.
Prerequisite is that you need to be able to create non-manifold objects...
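Roughly, refraction at that shared internal surface then works like this. A toy Python sketch, not any particular renderer's API; note the numerator/denominator convention for the relative ior varies between renderers, and here I assume transmitted-side over incident-side:

```python
import math

def refract(cos_i, n_from, n_to):
    """Refract across an internal boundary whose material ior is set to
    the relative ior eta = n_to / n_from (light travels from the medium
    with ior n_from into the medium with ior n_to). Returns the cosine
    of the transmitted angle, or None on total internal reflection."""
    eta = n_to / n_from            # relative ior stored on the boundary surface
    sin_i = math.sqrt(max(0.0, 1.0 - cos_i * cos_i))
    sin_t = sin_i / eta            # Snell's law: n_from*sin_i = n_to*sin_t
    if sin_t >= 1.0:
        return None                # total internal reflection
    return math.sqrt(1.0 - sin_t * sin_t)

# Crown glass (n ~ 1.52) into flint glass (n ~ 1.62): relative ior ~ 1.066
print(refract(math.cos(math.radians(30)), 1.52, 1.62))  # ~0.883
```

With the relative ior close to 1, the bend at the cemented interface is small, which matches what a real doublet does.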
amelius
Ok, thanks, that sounds like an important hint.
When you say quotient, which material's ior is in the numerator and which in the denominator?
tylermw
There is a way to model this type of situation for watertight dielectrics with interface tracking: you assign each material a priority value, and a transition between materials occurs when entering that material only if it has a higher priority than your current material. Yining Karl Li has a great article about it:
https://blog.yiningkarlli.com/2019/05/nested-dielectrics.htm...
that inspired me to add the feature to my renderer (rayrender.net).
The downside to priority tracking (and possibly why PBRT does not include it) is that it introduces a lot of overhead to ray traversal, since each ray needs to carry a priority list. Modern raytracers use packets of rays for GPU/SIMD operations, and thus minimizing the ray size is extremely important to maximize throughput and minimize cache misses.
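A minimal sketch of the bookkeeping (names and structure are mine, not PBRT's or rayrender's): each ray carries a stack of the media it is currently inside, and a boundary only "counts" when the medium in question outranks everything else on the stack.

```python
from dataclasses import dataclass

@dataclass
class Medium:
    name: str
    ior: float
    priority: int  # higher priority wins where media overlap

def enter_medium(stack, m):
    """Ray enters m. True if this is a real optical boundary, i.e. m
    outranks every medium the ray is already inside."""
    top = max((x.priority for x in stack), default=-1)
    stack.append(m)
    return m.priority > top

def exit_medium(stack, m):
    """Ray exits m. True only if m was the highest-priority medium,
    i.e. it actually controlled the ior along this ray segment."""
    top = max((x.priority for x in stack), default=-1)
    stack.remove(m)
    return m.priority >= top

# Glass sphere (priority 2) half-submerged in water (priority 1):
water, glass = Medium("water", 1.33, 1), Medium("glass", 1.5, 2)
inside = []
enter_medium(inside, water)        # True: air -> water, real boundary
enter_medium(inside, glass)        # True: glass outranks water
print(exit_medium(inside, water))  # False: water surface hidden inside the glass
```

The false hit on the water surface inside the glass is simply skipped, which is exactly what makes overlapping watertight geometry workable.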
amelius
Wow, the problem is more involved than I (a simple user) realized ...
Maybe I have to broaden my search for a raytracer. What would be my best bet for correctly simulating multi-material lenses (so with physical correctness), in Linux (open source), preferably with GPU support?
(By the way, as a user I'd be happy to give up even a factor of 10 of performance if the resulting rendering was 100% physically accurate)
knorker
> Conventional ray-tracing is estimating illumination using a single sample across the entire domain, which constitutes a particularly crude approximation.
Straw man.
> Shadows have a hard edge, as only infinitesimally small point light sources of zero volume can be simulated
Uh, no. Raytracing can definitely have emitting surfaces and volumes.
> Reflection / Refraction can only simulate a limited set of light paths, for perfect mirror surfaces, or perfectly homogeneous transparent media.
You sure about that?
> More complex effects like depth of field are not supported.
https://www.povray.org/documentation/view/3.60/248/
Also, the title should get a "2019" tag.
user____name
This is implicitly about Whitted Raytracing, which was synonymous with cost effective "raytracing" for a time.
The simplified history is usually presented as Whitted Raytracing -> Distributed Raytracing -> Path Tracing.
The gist is that in Whitted for each surface hit a single shadow ray per light, a reflection ray and a refraction ray are traced. Shadows and reflections are perfectly hard. Distributed raytracing takes all those single rays and shoots N randomized rays instead, which gives soft reflections and shadows. Neither of these orthodox algorithms imply indirect lighting, which is what Path Tracing added into the mix.
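The shadow-ray difference can be shown in a few lines of toy Python (the `occluded` scene query here is a stand-in I made up, not a real raytracer call):

```python
import random

def shadow_fraction_whitted(point, light_center, occluded):
    """Whitted-style: one shadow ray to a point light, so visibility is
    all-or-nothing and shadow edges are perfectly hard."""
    return 0.0 if occluded(point, light_center) else 1.0

def shadow_fraction_distributed(point, sample_light, occluded, n=16):
    """Distributed ray tracing: average n shadow rays toward random
    points on an area light, giving fractional visibility and soft
    penumbrae."""
    visible = sum(not occluded(point, sample_light()) for _ in range(n))
    return visible / n

# Toy scene: a strip light along x in [0, 1) at height 10, with an
# occluder hiding the half of the light where x < 0.5.
random.seed(0)
sample = lambda: (random.random(), 10.0)
blocked = lambda p, q: q[0] < 0.5
print(shadow_fraction_distributed((0.0, 0.0), sample, blocked, n=1000))  # ~0.5
```

The same "replace one ray with n randomized rays" move applied to reflection and lens rays is what gives glossy reflection and depth of field.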
This is not considering other light transport algorithms such as radiosity or photon mapping, which were popular ways of doing more cost effective global illumination in the nineties and noughties.
wang_li
In '92 I was writing ray tracers, and shooting multiple rays per pixel randomly distributed around the pixel center was already well understood. It's just a kind of anti-aliasing.
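That kind of jittered pixel sampling is just this (a stratified variant, sketched in Python):

```python
import random

def pixel_samples(px, py, n=4, rng=random):
    """Stratified jitter: split pixel (px, py) into an n x n grid and
    place one randomly jittered sample in each cell, instead of firing
    a single ray through the pixel center."""
    return [(px + (i + rng.random()) / n, py + (j + rng.random()) / n)
            for i in range(n) for j in range(n)]

samples = pixel_samples(3, 7, n=4)
print(len(samples))  # 16 sample positions, all inside pixel (3, 7)
```

Averaging the radiance over those samples is the anti-aliasing; distributed ray tracing generalizes the same averaging to lights, lenses, and BRDFs.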
ginko
I believe the article refers specifically to Whitted ray tracing when referring to 'ray-tracing' as opposed to distribution ray tracing or path tracing.
POV-ray supports all kind of ray tracing and path tracing techniques, not just Whitted RT.
rana763
[dead]
Perhaps someone can help me with this. I was doing some experimentation with lenses and PBRT v4. This was going great: I was able to model the projection of an object through a lens onto a surface quite well. However, now I want to simulate doublets: lenses which consist of two parts, and therefore two materials. I don't know how to model this in PBRT. It seems that it is not possible to have a shape (lens) touch more than one other material.
> PBRT's MediumInterface system can only represent a single "inside" medium and a single "outside" medium per shape. If a shape physically touches multiple different media (for example, a glass sphere sitting at the interface between water and air), PBRT cannot directly represent this configuration.
I think this is kind of odd for a renderer which is otherwise quite capable. Can anyone explain why this is the case, and how I can work around this limitation?