
For optimised ray tracing you don't beam the light from the camera, since a ray from the camera has the same chance of reaching the sun through indirect illumination as a ray from the sun has of reaching the camera.

What they are saying here is wrong, or rather extremely simplified for a younger audience.



No, that's wrong. You do start tracing paths at the camera. The lens usually is extremely small and the rays that do contribute are highly directional: the probability of hitting the lens is extremely low, and the probability of hitting the lens from a direction that actually contributes to the final image is even lower. On the other hand, there usually are many lights in a scene, with a total area that's much larger than the area of the lens, and most lights don't have a directional profile. So you're much more likely to hit a light than you are to hit the lens, hence we start at the camera and not at a light.

They do simplify things a bit though. Normally we don't trace one path at a time; we trace many. Each time we intersect an object we not only create another ray to continue the path, but we also sample a point on a light and connect the two points with a ray to finish the path. This process is called Next Event Estimation, and we can combine both 'accidental' paths and 'connected' paths by using a technique called Multiple Importance Sampling (MIS).
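To make the structure concrete, here is a minimal toy sketch of a path loop with next event estimation. Everything scene-related is a made-up stand-in: a constant albedo plays the BRDF, a fixed probability plays the shadow-ray visibility test, and MIS is omitted entirely.

```python
import random

LIGHT_EMISSION = 10.0   # radiance emitted by the single (hypothetical) light
ALBEDO = 0.5            # diffuse reflectance of every surface in this toy

def trace_path(max_bounces=4):
    """One camera path, connecting to the light at every bounce (NEE)."""
    radiance = 0.0
    throughput = 1.0
    for _ in range(max_bounces):
        # Next event estimation: sample a point on the light and test
        # visibility (here a shadow ray "succeeds" 80% of the time).
        if random.random() < 0.8:
            radiance += throughput * ALBEDO * LIGHT_EMISSION
        # Continue the path with a BRDF sample...
        throughput *= ALBEDO
        # ...and use Russian roulette to terminate long paths without bias.
        survive_p = min(throughput, 1.0)
        if random.random() > survive_p:
            break
        throughput /= survive_p
    return radiance
```

Averaging `trace_path()` over many samples would give the estimate for one pixel; a real renderer replaces the constants with actual intersection, BRDF, and light-sampling code.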


One more thing that makes this possible: you actually have some room for "choice" in which way the ray goes, so at the 'last step' you can just choose that it goes towards the sun. What do I mean? Well, most objects are largely diffuse, meaning that light is reflected in a random direction. When you hit a diffuse object, you can choose to sample the light that 'randomly' bounces off in the direction of the sun, since if it bounces off away from the sun it doesn't contribute light.

Lots of caveats here, of course. You do need to also sample light going in other directions, since the sun isn't the only light source (other objects reflect). You can only do this on diffuse surfaces, so you need to keep going until you hit a diffuse surface. Most surfaces are partly diffuse, partly specular, etc., so you'll actually want to sample both straight towards the light source and off at other angles.

But what does the most simple path tracer look like? You shoot rays from the camera. If a ray hits a diffuse surface, bounce a ray toward each light, adding that light if the ray isn't obstructed. If it hits a specular surface, bounce off based on the surface and ray orientations, and recurse when you hit another surface. You see how we cheat? If everything is diffuse, then we only ever make one bounce, straight from us to the sun. But that's a great first-order approximation, since sunlight is so much brighter than reflected light. The same approach works for more bounces; just end each path by trying to hit the sun.
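The recursion above can be sketched in a few lines. The "scene" here is invented for illustration: each surface is just a tuple saying whether it is diffuse or specular, what its albedo is, what a reflected ray would hit next, and whether the light is visible from it.

```python
SUN = 100.0  # intensity of the single light in this toy scene

def shade(surface, depth=0):
    """The simple tracer described above: shadow rays on diffuse hits,
    recursion on specular hits."""
    if surface is None or depth > 4:
        return 0.0
    kind, albedo, next_hit, light_visible = surface
    if kind == "diffuse":
        # One shadow ray toward the light; add it only if unobstructed.
        return albedo * SUN if light_visible else 0.0
    # "specular": follow the mirror direction and recurse.
    return albedo * shade(next_hit, depth + 1)

# A mirror that reflects onto a sunlit diffuse wall:
wall = ("diffuse", 0.5, None, True)
mirror = ("specular", 0.9, wall, False)
print(shade(mirror))  # 0.9 * 0.5 * 100 = 45.0
```

Note how the diffuse case terminates immediately with a single shadow ray, exactly the "one bounce straight to the sun" cheat described above.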


What you describe is next event estimation.


Thanks for giving the name; I wanted to describe why connecting two paths works. Other answers seemed to skirt around that without spelling it out.


I was wondering that. I don't have anything to do with 3D graphics, but that had occurred to me. What method do they use to ensure only the rays that have the camera and sun as end points are rendered?


That's very simple: if we don't end up hitting a light the path won't carry any radiance, so it won't contribute to the final image. If we start on a light and we don't end up hitting the camera the path won't carry any importance and so it won't contribute to the final image either.

Keep in mind that there is a solid mathematical foundation to all this; we're not just tracing paths for the fun of it. Basically what we want to solve is a path integral (an integral over all paths). We do this using a technique called Monte Carlo integration (which basically means we use randomness): we first sample a path (using path tracing), then calculate the contribution of that path (which basically is the amount of radiance it carries divided by the probability of sampling that path), and then add that contribution to the right pixel.
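The "value divided by sampling probability" recipe is ordinary Monte Carlo integration, and a one-dimensional toy shows it in isolation. Here the integrand sin(x) stands in for "radiance along a path" and the uniform density 1/pi stands in for the path-sampling pdf.

```python
import math
import random

def mc_estimate(n=100_000):
    """Estimate the integral of sin(x) over [0, pi] (exact value: 2)."""
    total = 0.0
    for _ in range(n):
        x = random.uniform(0.0, math.pi)        # sample ~ pdf p(x) = 1/pi
        total += math.sin(x) / (1.0 / math.pi)  # contribution = f(x) / p(x)
    return total / n

random.seed(1)
print(mc_estimate())  # converges to 2.0 as n grows
```

A path tracer does exactly this, except the domain is the space of all light paths and the pdf comes from how the path was sampled (BRDF sampling, light sampling, and so on).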


Bidirectional path tracing. You send rays from the sun, rays from the camera, and try to connect them. The ones that connect are the ones that get computed for illumination.


Bidirectional path tracing is one way of sampling paths (actually it combines many techniques for sampling paths and weights them using something called Multiple Importance Sampling). It's not the only way of doing it. Disney most likely uses path tracing with next event estimation. This means that they start a path as explained in the video and end it by sampling a point on a light, then connecting the path to that point to form a full path. This is one of the techniques used by bidirectional path tracing (BDPT): it uses n vertices on the camera path and 1 vertex on the light path, but BDPT also uses techniques with s vertices on the camera path and t vertices on the light path. This means that there are multiple ways to sample the same path, so these techniques need to be weighted using Multiple Importance Sampling.
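One standard MIS weighting, the balance heuristic, is simple enough to show directly. The pdf values below are invented for illustration; the point is that each technique's contribution is weighted by its pdf relative to the sum of all techniques' pdfs for the same path, and the weights sum to one, keeping the combined estimator unbiased.

```python
def balance_weight(pdf_this, pdfs_all):
    """Balance heuristic: weight one sampling technique against all
    techniques that could have produced the same path."""
    return pdf_this / sum(pdfs_all)

# Hypothetical example: a path samplable by BRDF sampling (pdf 0.2)
# or by next event estimation (pdf 1.5). NEE dominates the weight.
w_brdf = balance_weight(0.2, [0.2, 1.5])
w_nee = balance_weight(1.5, [0.2, 1.5])
# w_brdf + w_nee sums to 1 (up to float rounding)
```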


How is that even possible? The beam incidence angular calculation (and multi path dispersion) creates insane complexity that must be computed on the fly to even make sense.

For instance: a beam hits some material and needs to reflect or worse, pass through via transparency. Another issue: If we are calculating on a per pixel basis, that means bundling multiple paths together to figure out what the weighted return will look like. How can this all be computed with any kind of efficiency without cheating?


Bidirectional path tracing doesn't do anything clever to "try to connect" paths from light sources and cameras. The approach is just

- Trace random paths from light sources until they terminate (usually decided with Russian roulette).

- Trace random paths from the camera (usually N per pixel, or you can use more paths in noisy areas) until they terminate.

- Try to connect each point in a camera path with each point in a light path, using a simple line test. If it succeeds, that color is added to the pixel from which the camera path originated.

At least that's my understanding; I've only implemented simpler algorithms and read a bit about bidirectional path tracing.
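A sketch of that connection step, with the same caveat: this omits the MIS weights and throughput bookkeeping a real BDPT implementation needs. The subpaths are just lists of per-vertex throughput values, and `visible` is a stand-in for the line (shadow-ray) test.

```python
import random

def connect(camera_path, light_path, visible):
    """Try to connect every camera vertex with every light vertex;
    each successful (visible) connection contributes to the pixel."""
    total = 0.0
    for c_throughput in camera_path:
        for l_throughput in light_path:
            if visible():
                total += c_throughput * l_throughput
    return total

random.seed(2)
camera_path = [1.0, 0.5]   # throughput carried to each camera vertex
light_path = [10.0, 4.0]   # radiance carried to each light vertex
estimate = connect(camera_path, light_path,
                   lambda: random.random() < 0.7)  # fake 70% visibility
```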

> If we are calculating on a per pixel basis, that means bundling multiple paths together to figure out what the weighted return will look like. How can this all be computed with any kind of efficiency without cheating?

Right, we still need to consider many paths per pixel to get a high quality image. But it converges faster than most other Monte Carlo techniques.


Shoot a ray from the camera to an object. Then 'connect' it to all light sources by measuring the change in angle and the properties of the surface, and add it all up.

If the surface is refractive/reflective, recursively shoot one more ray in the right direction, with correctly diminished intensity, and follow the same process.
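The "measuring the change in angle" part is, for a diffuse (Lambertian) surface, just the cosine between the surface normal and the direction to each light. A small sketch with a made-up scene of two point lights:

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def normalize(v):
    n = math.sqrt(dot(v, v))
    return tuple(x / n for x in v)

def direct_light(point, normal, albedo, lights):
    """Connect a shaded point to every light, weighting each by the
    cosine of the angle between the normal and the light direction."""
    total = 0.0
    for light_pos, intensity in lights:
        to_light = normalize(tuple(l - p for l, p in zip(light_pos, point)))
        cos_theta = max(0.0, dot(normal, to_light))  # the "change in angle"
        total += albedo * intensity * cos_theta
    return total

# One light directly overhead, one at a 45-degree angle:
lights = [((0.0, 1.0, 0.0), 1.0), ((1.0, 1.0, 0.0), 1.0)]
print(direct_light((0.0, 0.0, 0.0), (0.0, 1.0, 0.0), 0.8, lights))
# = 0.8 * (1 + 1/sqrt(2)), about 1.366
```

A full renderer also needs a visibility (shadow) test per light and distance falloff for point lights; both are left out here for brevity.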


It's funny that you would say that, since what you are saying is completely wrong. Different techniques would have different probabilities, but none would behave the way you are describing.



