Assignment 2: Simulating Realistic Cameras
Due: Thursday, Mar 13 2008 at 11:59pm
Most computer graphics systems assume a pinhole camera model, which images the entire scene in focus. In contrast, real cameras use multi-lens systems with finite apertures that produce interesting effects such as depth of field (DOF), non-perspective projection, and vignetting. In this assignment, you will extend pbrt with support for a more realistic camera model that accurately simulates these effects. We will provide you with specifications of real lens systems with various focal lengths, each composed of multiple lens elements. You will build a camera class that simulates the traversal of light through the lens system and generates camera rays for the ray tracer. With this simulator, you can experiment with settings available on a real camera, such as changing the focus, aperture size, and focal length.
Step 1: Download the starter code and scenes
Click the link below to download a zip file containing the starter code, several
scene files / textures, and the camera specification files:
You should unzip the archive and copy the source code to the pbrt source folder, the project files to the corresponding folder for your platform, and the scene and camera files to your scene directory.
Understanding the starter code
Look through the files and make sure you understand the basic code structure and the interface functions. The parameters passed into the realistic camera class include the standard camera class members and some additional ones:
focaldistance: The distance at which the camera is focused. For example, setting this to
4000 means the camera is focused 4 meters from the origin of the lens system.
focallength: The focal length of the lens system. This is only used to set the proper aperture size of the lens system.
fstop: Relative aperture size (typically varies between 1.4 and 32). The effective aperture size is
focallength / fstop.
filmdiagonal: Film size (typically 35mm).
specfile: Camera specification file.
Note that all units are in millimeters
(mm). You can find examples of these parameters in the provided scene files.
Understanding the camera specification file
You should read the Kolb paper to understand the format of a camera specification file. Again, all units are in millimeters. File parsing is provided in the starter code through the corresponding interface function. The initial data read from the file are stored in a structure for each lens element.
In pbrt, a camera's viewing direction is assumed to be along the positive z-direction in camera space. Therefore, you should assume that the lens elements are placed along the z-axis in camera space, and your camera should face directly down the z-axis. For convenience you may assume otherwise, but remember to apply the corresponding conversion when transforming a camera ray to world space.
The scene files
There are 8 scenes in total provided to you. The first set of four scenes demonstrates four different lens assemblies. Below are sample images produced for these four scenes:
[Sample images: 50mm double gauss | 22mm wide angle | 10mm fisheye | 250mm telephoto]
The second set of scenes uses the same camera lens (50mm double gauss) but varies the aperture size and focal distance, so you can experiment with camera focusing and depth of field effects. Note: for 491K students, as you are not required to implement camera focusing, the three focusing scenes will end up looking the same; however, you should observe the change in depth of field between the f/2.8 and f/16 scenes.
[Sample images: f/2.8 (less DOF) | f/16 (more DOF) | f/2.8 (near focus) | f/2.8 (far focus)]
Step 2: Implement the realistic camera simulator
You only need to make changes in the three provided source files. These files build into a pbrt camera plugin (a shared library on Linux). Your main task is to implement the GenerateRay function, which takes a sample position on the film plane and returns a stochastic ray from the camera out into the scene. To the rest of pbrt, your camera appears just like any other camera; therefore, if you have any question about how it works, examine any of the existing camera classes in the cameras folder. In addition, you may need to read part of Chapter 6 in the textbook to get familiar with how the camera class works.
Here are the specific steps you should take (all computation should happen in camera space):
- Compute the position of the sample on the film plane. To do so, you need to make use of the sample's image coordinates, such as
sample.ImageY (these vary from 0 to the film resolution), and the film's physical size (given by filmdiagonal).
- The color of a pixel is proportional to the irradiance received at that pixel. This value is an estimate of all light reaching the pixel from the world through the lens system. The easiest way to sample light paths is to fire a random ray at the back (rightmost) lens element and trace the ray through the lenses until it exits the front element of the lens system. To pick a stochastic sample on the lens element, you should use the
ConcentricSampleDisk() function, which returns a uniformly distributed random sample on a unit disk. You should pass in
sample.lensU and sample.lensV as the first two parameters, and the disk sample will be returned in the last two parameters of that function. The code for tracing a ray through the lens system should be implemented in the designated function in the starter code.
- Certainly some rays will terminate while passing through the lenses, either because they fail to intersect a lens element or because they hit the aperture stop. If successful, the generated ray (i.e. its origin and direction) should be returned in the
Ray* ray parameter that is passed in to the
GenerateRay function. Note that you must call
CameraToWorld to transform this ray to world space before returning. Also remember to set
ray->maxt properly. You can follow the example of any other camera class.
- GenerateRay also needs to return a weight proportional to the geometric factor in the irradiance estimate. Initially you can return 1.0 for all rays that successfully pass through the lens system. However, doing so will produce a biased rendering result. The proper weight should be assigned according to the projected solid angle of each sampled ray. For rays that fail to pass through the lens system, you should return a weight of 0.0.
- Render each of the provided scenes using your realistic camera simulator. Example images are given below. Note that in order to get a noise-free image, you need to use many samples, hence the rendering could take quite a while to finish.
- For debugging, use a smaller image resolution (such as 64x48) and a small number of samples (such as 4 to 16). For final quality rendering, use the default resolution and at least 256 samples.
- If pbrt reports 'Not-a-number radiance value returned', that probably means the camera ray you generated is invalid. Check the origin and direction of the camera ray to make sure they are valid.
- While tracing a ray through the lenses, you need to test the intersection of the ray with each lens surface. You also need to determine how a ray refracts when it intersects a lens surface. The refraction is computed based on Snell's law; a review of the basic formula can be found here.
- Use special rays for checking your lens tracing function. For example, a ray along the z-axis (i.e. starts at the film center and goes toward the lens center) is guaranteed to pass through the lens system. If that fails, something is wrong with your tracing code.
- Very Important: The final quality rendering can take a long time to complete, so make sure you leave enough time instead of rushing at the last moment!
- [Update] A drawlens program is provided for you to visually debug your ray-lens tracing code. The program is provided as is. You should read the README.txt to understand how to use it: Download drawlens.zip
Step 3: (691MM only) Focusing
So far the film plane has been placed at a fixed location specified by the camera definition file. Your task here is to implement the 'focusing' capability for your camera class so it can be focused at any distance in the scene. In other words, you need to adjust the camera's film plane position based on the
focaldistance
parameter provided in the scene file. There are two ways to achieve this goal, and you can implement either one:
- The first method is to fire a paraxial ray that starts at the
focaldistance along the z-axis and travels toward the lens: use your existing lens tracing code to find out where the ray intersects the z-axis on the other (right) side of the lenses, then simply place the film plane at the intersection point. Note that you should keep this ray close to the z-axis, or it may fail to pass through the lenses. This method essentially uses inverse tracing to figure out the film plane position.
- The second method is to adjust the distance based on Section 3.3 of the Kolb paper. To do so, you need to 1) find the two principal planes and the focal lengths by firing parallel rays from the left and from the right and computing their intersections with the z-axis; and 2) solve a quadratic equation to compute the distance to adjust.
If successful, your code should produce different results for the three focusing scenes: the same camera is used in all of them but is focused at a different distance.
For this assignment, you should pack the following files in a single archive and email it to email@example.com
- Your modified source files.
- One image (in EXR, JPG, or PNG format) for each of the 8 scenes provided (you must render them with at least 256 samples).
- Submit any other images you generated and let us know what other cool things you did.
This assignment will be graded on a 0-4 scale. Partial grades (such as 3.5) will be given for partially complete implementations.
0: Little or no work was done.
1: Significant effort was put into the assignment, but the code does not work or fails to produce any meaningful image.
2: The code produces images, but they have significant artifacts, such as a wrong field of view, a large amount of noise / black spots, significant distortion, missing geometry, etc.
3: The code produces images that are almost correct, but have slight artifacts such as noise, lack of vignetting, incorrect depth of field etc.
4: All requirements met. All images are generated correctly without obvious artifacts. For 691MM students, the camera focus must work correctly in order to get 4 points.