MP2 Depth of Field


Augment the ray tracer you wrote for MP1 to support depth of field using distributed ray tracing. Your ray tracer's input should include a focal distance, which indicates the distance from the viewpoint to the focal plane (the plane where everything is in focus), and an aperture, which is the diameter of the disk from which rays are cast into the scene. Both parameters should be specified in world-coordinate units.

These units correspond to a real camera's depth of field. In a real camera, the diameter of the aperture is given by the focal length of the lens divided by the f-stop. For example, if you have a focal length of 35mm and your f-stop is set to 2, then your aperture (the diameter of the disk of ray origins) would be 17.5mm. This setting usually exaggerates the depth of field effect, especially for objects near the camera. If you instead set your f-stop to 16, then your aperture would be only about 2mm, which would yield a barely perceptible depth of field effect for typical scene distances from the camera. (When photographing, the f-stop also affects the necessary exposure, which controls the amount of light that reaches the film. Graphics liberates us from the confines of physical film media, even though we as a field seem to enjoy simulating this medium as closely as possible. In any case, the aperture of your distributed ray tracer can control depth of field without worrying about exposure.)
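As a quick sanity check of the arithmetic above, here is a minimal Python sketch (the function name is illustrative; your ray tracer is not required to expose this interface):

```python
def aperture_diameter(focal_length_mm, f_stop):
    """Aperture diameter = focal length / f-stop."""
    return focal_length_mm / f_stop

print(aperture_diameter(35.0, 2.0))   # -> 17.5 mm: pronounced depth of field
print(aperture_diameter(35.0, 16.0))  # -> 2.1875 mm: nearly a pinhole camera
```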

To disguise aliasing artifacts as noise, you should jitter your samples in the lens plane. An easy mistake in computer graphics is to accidentally bias statistical samples. In this case, if one used polar coordinates with a uniformly random radius and angle, the resulting samples would be biased toward the center of the disk. A better technique for uniformly sampling points in a disk is to pick random points in its bounding square and discard any that fall outside the disk.
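The rejection method described above might look like the following Python sketch (names are illustrative):

```python
import random

def sample_disk(radius):
    """Uniformly sample a point on a disk of the given radius by rejection:
    draw points from the bounding square and retry until one lands inside
    the disk (about a 78.5% acceptance rate per draw)."""
    while True:
        x = random.uniform(-radius, radius)
        y = random.uniform(-radius, radius)
        if x * x + y * y <= radius * radius:
            return (x, y)
```

Because accepted points are uniform over the square restricted to the disk, they are uniform over the disk itself, avoiding the center-clustering bias of the naive polar method.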

You should also jitter your screen samples using the uniform jitter technique, which randomly places one sample in each cell of a subdivided grid on each pixel. Note that this does not mean that you need 16^2 = 256 samples per pixel. Only 16 samples are needed, because each sample can be simultaneously jittered in the image plane for antialiasing and in the lens plane for depth of field.
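A jittered 4x4 grid over a pixel could be generated as in this Python sketch (illustrative names; coordinates are in the unit square of one pixel). Pairing each of these 16 pixel samples with one of 16 lens samples is what keeps the total at 16 rays rather than 256:

```python
import random

def jittered_samples(n_side):
    """One random sample per cell of an n_side x n_side grid over the
    unit pixel; returns n_side**2 points in [0, 1)^2."""
    inv = 1.0 / n_side
    samples = []
    for i in range(n_side):
        for j in range(n_side):
            # Jitter within cell (i, j) of the grid.
            samples.append(((i + random.random()) * inv,
                            (j + random.random()) * inv))
    return samples
```

Shuffling one of the two sample lists before pairing avoids correlating the image-plane and lens-plane jitter patterns.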

You should use at least 16 samples per pixel, though I will allow fewer samples if an adaptive sampling scheme is implemented. Adaptive sampling is not required for this assignment. Depending on the severity of the depth of field effect, you will likely need more than 16 samples per pixel to avoid graininess in the resulting picture.

Create a scene that highlights the use of depth of field for a dramatic effect. Your program should compile and run automatically, out of the box, on either the PC or Unix workstations in one of the labs used for this class. The program is due:

10:00am, Monday, 25 Sep 2006.

The late penalty is 10% per day with a maximum of 5 days late.



Grading for Spring 2006
10 - documentation
55 - base points: at least one effect (depth of field or antialiasing)
10 - having both depth of field and antialiasing
5 - having an additional effect (glossy reflection, soft shadows, motion blur, etc.)
10 - efficiency and correctness
    5 - efficiency (adaptive sampling or other acceleration)
    5 - unbiased sampling (for example, using a rejection method to sample a disk)
10 - picture (make it impressive!). Also, make sure all your features that WORK are SHOWN. If I see a feature in your code but not in one of your scenes, you aren't going to get full credit for it.
total: 100 points

Some have asked for clarification on how to set up depth of field. The figures used in the notes assumed the image plane (the framebuffer of pixels in world coordinates) could be placed in the focal plane. This is not necessary, though. The focal plane and the image plane can be in different locations, though they must be parallel. For each pixel in the image plane, we cast an eye ray through its center. This eye ray intersects the focal plane at a point that can be determined either with a ray-plane intersection or with simple similar triangles. We then cast 16 rays from various points on the lens plane (a plane passing through the eyepoint parallel to the image plane) that all pass through that same point in the focal plane.
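The similar-triangles setup above can be sketched as follows in Python. This assumes a simplified camera with the eye at the origin looking down -z, the image plane at z = -image_dist, and the focal plane at z = -focal_dist; the function and parameter names are illustrative, not a required interface:

```python
def dof_ray(pixel_point, image_dist, focal_dist, lens_sample):
    """Build one depth-of-field ray.
    pixel_point: (x, y) of the pixel center on the image plane.
    lens_sample: (x, y) jittered point on the lens disk (lens plane z = 0).
    Returns (origin, direction)."""
    px, py = pixel_point
    # Similar triangles: scale the pixel position out to the focal plane.
    scale = focal_dist / image_dist
    focus = (px * scale, py * scale, -focal_dist)
    lx, ly = lens_sample
    origin = (lx, ly, 0.0)
    d = (focus[0] - lx, focus[1] - ly, focus[2])
    # Normalize the direction.
    length = (d[0] ** 2 + d[1] ** 2 + d[2] ** 2) ** 0.5
    direction = (d[0] / length, d[1] / length, d[2] / length)
    return origin, direction
```

Every ray generated this way for a given pixel passes through the same point on the focal plane, which is what makes geometry at the focal distance sharp while everything nearer or farther blurs.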

Others have suggested that the focal plane should actually be a focal sphere around the eyepoint. This might be an interesting effect but is not realistic. The reason we use a plane is that a real camera uses its lens to focus light onto film. Since the film is flat (planar), its focal image in the world is also planar. Note that some compound fisheye lenses can produce a "barrel" distortion, but setting up such a lens system is beyond the scope of this class.