CS 315 Homework 7 - Ray Tracer

Due Tues Nov 25 at 11:59pm

Overview

In this assignment, you will write a simple ray tracer. Like rasterization, ray tracing is an algorithm for rendering 3D scenes into 2D images. However, ray tracing can produce some amazing visual effects that rasterization has difficulty with, such as accurate shadows, mirror reflections, and refraction/transparency. When you've finished with this assignment, you'll have produced a piece of software (almost entirely on your own) that can generate images such as those shown below.

In the end, your ray tracer will be able to:

  1. Render spheres and triangles (and by extension, meshes made of triangles!), including transformed surfaces
  2. Perform basic shading using the Phong illumination model
  3. Include lighting from ambient, point, and directional light sources
  4. Capture shadows cast by objects that occlude light sources
  5. Calculate mirror reflections

If you wish, you may also include further effects: texture mapping, depth of field, soft shadows, etc. These are also available options for the final project (see Homework 8).

(The last of these, the Cornell Box, is a classic way of demonstrating a ray tracer.)

This assignment should be completed individually. You are encouraged to discuss high-level issues with other students, but your code must be your own.

Objectives

Necessary Files

You will want a copy of the cs315-hwk7.zip file which contains the starter code for this assignment. The starter code contains a basic canvas setup, as well as implemented helper methods for letting you load scene files and color the individual pixels of the canvas.

Particular items of note:

Finally, as always, the zip file also contains a README.txt file that you will need to fill out.

Assignment Details

This assignment is both more straightforward (no fighting with OpenGL shaders!) and more open-ended (no explicit development steps!). In effect, you're writing a complete graphics program utilizing the concepts we've discussed so far. However, I've included some hints and suggestions below.

Using the Scene Files

Your program will render scenes that are defined in a scene file, specified in JSON format:

      {
        "camera" : {...}, //object with details about the camera
        "bounce_depth" : 0, //number of times light should "bounce" when reflecting
        "shadow_bias" : 0.0001, //shadow bias for shadowing
        "lights" : [...], //array of light sources for your scene
        "materials" : [...], //array of materials (reflectance values) for your scene 
        "surfaces" : [...] //array of surfaces (spheres or triangles) that are in your scene
      }
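As a rough sketch of pulling these fields apart after parsing (a tiny inline scene stands in for a real scene file here; in your program the JSON text comes from the starter code's scene loader):

```javascript
// A tiny inline scene stands in for a real scene file; in your program
// the JSON text comes from the provided scene-file loader.
var sceneText = JSON.stringify({
  camera: {}, bounce_depth: 2, shadow_bias: 0.0001,
  lights: [], materials: [], surfaces: []
});

var scene = JSON.parse(sceneText);
var bounceDepth = scene.bounce_depth; // recursion limit for mirror bounces
var shadowBias = scene.shadow_bias;   // offset to avoid self-shadowing "acne"
scene.surfaces.forEach(function(surface) {
  // construct a Sphere or Triangle object for each entry...
});
```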

Caveat: I have manually adapted the scene files into JSON format (from a different text-based format). While I've tested most of the components, it's possible there may be an error in translation (like a minus sign getting removed). If you come across a problem with a particular scene file, let me know.

Object Oriented... JavaScript?!

While ray tracing is a relatively simple algorithm conceptually, it requires a non-trivial amount of code to implement (because you need to do all the work yourself). If organized poorly, the program can get out of hand in a hurry. Fortunately, the ray tracing algorithm is a good target for object-oriented software design!

JavaScript does support a style of Object-Oriented programming called prototype-based programming. The basic idea behind JavaScript's OOP is that functions are themselves objects, and can carry around their (scoped) variables with them. Thus we can define a function

  var Camera = function(eye, at, up, fovy, aspect){
    this.eye = eye;
    ...
  };  

And then later instantiate a new instance of this function:

  var cam = new Camera(eyeValue, atValue ...);

And each of the assigned values will continue to be in scope, allowing us to access them like instance variables!

Additionally, we can define additional functions on our "Camera" object, and then each instance of that function object will have access to the inner function. However, because we don't want to use excess memory by redefining the function for each object, we instead define the function on the "prototype" of the class:

  Camera.prototype.castRay = function(x, y){
    ...
  };

This ensures that every Camera object we create has the castRay method defined.

It is also possible to use inheritance in JavaScript (to avoid duplicating methods), though it's a bit tricky and probably isn't necessary for this assignment. See here for an example.
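If you do want to try it, one common pattern chains prototypes with Object.create. This is only an illustrative sketch: the Surface/Sphere names and fields here are made up for the example, not part of the starter code.

```javascript
// Parent "class": anything with a material index.
var Surface = function(material) { this.material = material; };
Surface.prototype.describe = function() {
  return "surface with material " + this.material;
};

// Child "class": a sphere is a surface with a center and radius.
var Sphere = function(material, center, radius) {
  Surface.call(this, material); // run the parent constructor on `this`
  this.center = center;
  this.radius = radius;
};
Sphere.prototype = Object.create(Surface.prototype); // inherit methods
Sphere.prototype.constructor = Sphere;

var s = new Sphere(0, [0, 0, -5], 1);
// s.describe() is found on Surface.prototype via the prototype chain
```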

HOWEVER, because JavaScript is loosely typed, we can gain the polymorphism benefit of OOP by using duck-typing. The basic idea is that we can call a method on an object, and as long as the object has that method things will work--no matter what type the object actually is ("if it quacks like a duck..."). For example:

  var Sphere = function(..){}; //sphere
  Sphere.prototype.intersects = function(ray){...};
  var Triangle = function(..){}; //triangle
  Triangle.prototype.intersects = function(ray){...};

  var surfaces = [];
  surfaces.push(new Sphere()); //add both types to our array
  surfaces.push(new Triangle());

  for(var i=0; i<surfaces.length; i++){
    surfaces[i].intersects(ray); //call the appropriate method on the object, whatever type it is
  }

For more details about Object-Oriented programming in JavaScript, I recommend this Mozilla tutorial.

There are a number of classes you may consider for your ray tracer:

Not all classes are necessary. JavaScript objects are robust and quick to work with: if a class only stores values (as may be the case with Rays or Intersections), you could just use a plain object. However, if you want your object to be able to perform methods, making a class is a useful abstraction.
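For example, a Ray could be built either way (the pointAt helper below is just an illustration of why a class might earn its keep, not a required design):

```javascript
// If a Ray only needs to carry data, a plain object is enough:
var ray = { origin: [0, 0, 0], direction: [0, 0, -1] };

// But if it should also *do* things (e.g. evaluate origin + t*direction),
// a small class keeps that logic in one place:
var Ray = function(origin, direction) {
  this.origin = origin;
  this.direction = direction;
};
Ray.prototype.pointAt = function(t) {
  return [
    this.origin[0] + t * this.direction[0],
    this.origin[1] + t * this.direction[1],
    this.origin[2] + t * this.direction[2]
  ];
};
```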

Development Plan

There are a number of components to this assignment. However, the order in which you complete these components is fairly open-ended. I have provided a suggested ordering (and some implementation notes) below. Whatever you do: make sure one piece of functionality works before you add in the next! Otherwise you may run into bugs from half-completed code!

  1. No matter what, the first thing you'll need to set up is the Camera so you can cast rays through pixels! Details about how to determine the origin and direction of a ray are in the Shirley reading (pg. 74). Basically, you're going to calculate the component vectors of the viewMatrix using the "eye", "at", and "up" values (like we discussed in class), and then use these to take a linear interpolation of the pixel's location in screen coordinates.
    • Because we're specifying a view volume in terms of the fovy and aspect ratio rather than left-right-top-bottom bounds, you'll need to modify your calculation of the (u,v) coordinates of the image plane. Some substitution math (with an assumption that the focal length is 1) gives:
        h = 2*Math.tan(rad(fovy/2.0));
        w = h*aspect;
      
        u = (w * i/(canvas.width - 1)) - (w/2.0);
        v = (-h * j/(canvas.height - 1)) + (h/2.0);
    • Pro tip! For values that are the same for all rays (e.g., h and w), calculate them out ahead of time to speed up your program!
    • Note that all your scenes should use perspective views.
    • You can test your ray casting math with the following values (with a camera from the SphereTest scene):
        width = 512; height = 512
        ray = camera.castRay(0,0);
        //=> o: 0,0,0; d: -0.41421356237309503,0.41421356237309503,-1
        ray = camera.castRay(width,height);
        //=> o: 0,0,0; d: 0.4158347504841443,-0.4158347504841443,-1
        ray = camera.castRay(0,height);
        //=> o: 0,0,0; d: -0.41421356237309503,-0.4158347504841443,-1
        ray = camera.castRay(width,0);
        //=> o: 0,0,0; d: 0.4158347504841443,0.41421356237309503,-1
        ray = camera.castRay(width/2,height/2);
        //=> o: 0,0,0; d: 0.0008105940555246383,-0.0008105940555246383,-1
    • (Lots of help for this one because it's so fundamental to getting anything else working!)
  2. The next easiest step is to add Sphere Intersection. We will go over the intersection algorithm in class, and it is also in the Shirley text. Start by setting the pixel to be a constant color (e.g., white) if there is an intersection, and a different color (e.g., black) if there is not. This should let you produce an image like the SphereTest example.
    • Note: there is some additional work needed to grab all of the shapes from the scene files! But once you have that in, you can test this with multiple different scenes and see the spheres showing up.
    • After you have this intersection working (and Materials specified), you can begin adding in Sphere Shading
  3. You should also add in Triangle Intersection early on. Again, we will go over the intersection algorithm in class, and it is also in the Shirley text. Start by setting the pixel to be a constant color (e.g., white) if there is an intersection, and a different color (e.g., black) if there is not. This should let you produce an image like the TriangleTest example.
    • Remember that triangle vertices need not be specified in counter-clockwise order!
    • After you have this intersection working (and Materials specified), you can begin adding in Triangle Shading
  4. In order to shade objects, you'll need to specify the Light Sources and Materials. Read the sources from the scene file, and add functionality to compute the reflected light from a material at a given hit location (fragment) with a given normal. You will need to "port" the logic for your Phong reflectance model from the previous homeworks.
    • Tip: a "shininess" value of 0 should mean there is no specular reflection; do not raise a value to the 0th power!
  5. Once you have lighting and shading implemented, you can add Sphere Shading. Calculate the intersection point and normal for a sphere, then return the color of the sphere based on those values, and set the pixel to that color. If there is no intersection, set the pixel to be black.
    • This should allow you to produce images like SphereShadingTest1 (for Point lights) and SphereShadingTest2 (for Directional lights).
  6. You can use a similar process for implementing Triangle Shading, which you can test with TriangleShadingTest.
  7. Next you should add in Transformations. We'll go over this idea in class, and it's in Chapter 13 of the Shirley reading. The basic idea is that we want to calculate an intersection with a transformed object. However, rather than trying to do that math directly, we'll transform the ray into the object's coordinate frame!
    • If M is the object's transformation matrix, then we want to transform the cast ray (both its origin and its direction) by M⁻¹, and use that ray to check for intersection with the untransformed sphere. Note that you can easily invert a mat4 with the glMatrix library, and apply that matrix to the ray!
      • Remember that the ray's origin is a point (so it should have a homogeneous coordinate of 1), and the ray's direction is a vector (so it should have a homogeneous coordinate of 0).
    • Remember that in order to calculate the intersection point, you'll need to convert it back into world coordinates by multiplying by M. Similarly, you'll need to transform the calculated normal by multiplying it by the normal matrix (M⁻¹)ᵀ.
    • You can test this on the TransformationTest and the FullTest
  8. Once you've got all your models and lighting working, you can start adding extra features, such as Shadows. Whenever you intersect a shape, you'll need to cast a new shadow ray towards the light (or in the opposite direction for a Directional light). If you intersect an object, then you are in shadow and so should not include the contribution of that light in your reflectance model!
    • Note that this casting will likely be started from the same method where you compute the material color of a point
    • You can test this on the ShadowTest1 scene (for a Point light) and ShadowTest2 (for a Directional light). The CornellBox and FullTest also include shadows.
  9. Finally, you can add in Mirror Reflections. This will involve recursive ray tracing: when you find an intersection, recurse and cast a ray in the reflected direction (you'll need to calculate the direction of this vector yourself, as there isn't a built-in method in glMatrix. Notes on the math are in the lecture slides). If the reflected ray hits a surface, calculate the color at that spot, and add it into your shaded color (weighted by the material's mirror reflectance).
    • Remember to keep track of the number of "bounces", and stop when you hit the scene's limit!
    • You can test this on the RecursiveTest; you may wish to adjust the bounce value for testing! The FullTest and CornellBox also include reflections.
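To tie the plan together, here is a minimal sketch of the step-1 camera math. It is only a sketch: plain 3-element arrays stand in for glMatrix vec3s so it is self-contained, and the camera values used below (eye at the origin, looking down -z, up = +y, fovy = 45, aspect = 1) are assumptions chosen to be consistent with the SphereTest test values listed in step 1.

```javascript
// Sketch of the Camera from step 1 (plain arrays stand in for glMatrix vec3s).
function rad(deg) { return deg * Math.PI / 180; }
function sub(a, b) { return [a[0]-b[0], a[1]-b[1], a[2]-b[2]]; }
function cross(a, b) {
  return [a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0]];
}
function normalize(a) {
  var len = Math.sqrt(a[0]*a[0] + a[1]*a[1] + a[2]*a[2]);
  return [a[0]/len, a[1]/len, a[2]/len];
}

var Camera = function(eye, at, up, fovy, aspect, width, height) {
  this.eye = eye;
  // Camera basis vectors: wAxis points *away* from the view direction.
  this.wAxis = normalize(sub(eye, at));
  this.uAxis = normalize(cross(up, this.wAxis));
  this.vAxis = cross(this.wAxis, this.uAxis);
  // Image-plane extents from fovy/aspect (focal length assumed to be 1).
  this.h = 2 * Math.tan(rad(fovy / 2));
  this.w = this.h * aspect;
  this.width = width;
  this.height = height;
};

Camera.prototype.castRay = function(i, j) {
  // Linearly interpolate the pixel position across the image plane.
  var u = (this.w * i / (this.width - 1)) - this.w / 2;
  var v = (-this.h * j / (this.height - 1)) + this.h / 2;
  // direction = u*U + v*V - 1*W
  var d = [
    u * this.uAxis[0] + v * this.vAxis[0] - this.wAxis[0],
    u * this.uAxis[1] + v * this.vAxis[1] - this.wAxis[1],
    u * this.uAxis[2] + v * this.vAxis[2] - this.wAxis[2]
  ];
  return { origin: this.eye, direction: d };
};
```

With camera = new Camera([0,0,0], [0,0,-1], [0,1,0], 45, 1, 512, 512), camera.castRay(0, 0) yields a direction of roughly (-0.414, 0.414, -1), matching the step-1 test values.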

With all those done, you should have a working ray tracer! As a bonus step: write a new scene file (with at least 3 objects) that shows off your raytracer and produces a cool render to show off :)
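For step 9, there isn't a built-in reflection method in glMatrix, but the formula is short. A sketch, using plain arrays for vectors and assuming n is already unit-length:

```javascript
// Reflect a direction d about a unit normal n:  r = d - 2*(d·n)*n
function reflect(d, n) {
  var k = 2 * (d[0]*n[0] + d[1]*n[1] + d[2]*n[2]);
  return [d[0] - k*n[0], d[1] - k*n[1], d[2] - k*n[2]];
}
```

For instance, reflecting the direction [1, -1, 0] off a floor with normal [0, 1, 0] gives [1, 1, 0]: the ray bounces up while keeping its horizontal motion.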

Testing and Debugging
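A good habit is to unit-test individual pieces in isolation before wiring them into the full tracer. For example, the ray-sphere intersection from step 2 can be checked against a hand-computed case. (This is a sketch: it assumes a ray represented as {origin, direction} with plain-array vectors, as elsewhere on this page.)

```javascript
// Solve |o + t*d - c|^2 = r^2 for t (a quadratic in t); return the
// nearest non-negative root, or null if the ray misses the sphere.
function dot(a, b) { return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]; }

function intersectSphere(ray, center, radius) {
  var oc = [ray.origin[0] - center[0],
            ray.origin[1] - center[1],
            ray.origin[2] - center[2]];
  var a = dot(ray.direction, ray.direction);
  var b = 2 * dot(oc, ray.direction);
  var c = dot(oc, oc) - radius * radius;
  var disc = b*b - 4*a*c;
  if (disc < 0) return null;                       // no real roots: a miss
  var t = (-b - Math.sqrt(disc)) / (2 * a);        // nearer root first
  if (t < 0) t = (-b + Math.sqrt(disc)) / (2 * a); // origin inside sphere?
  return t < 0 ? null : t;                         // both roots behind ray
}

// Hand-checkable case: unit sphere at z = -5, ray straight down -z.
var t = intersectSphere({origin: [0,0,0], direction: [0,0,-1]}, [0,0,-5], 1);
// t === 4 (the ray first touches the sphere at z = -4)
```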

Extensions

This is another somewhat open-ended assignment, so it has lots of possible extensions. You can earn up to 5 points of extra credit for these extensions. Note that many of these (extending your ray tracer) are fair game for the Final Project as well (but the same work cannot count as both an extension and the final project).

Submitting

BEFORE YOU SUBMIT: make sure your code is fully functional! Grading will be based primarily on functionality. I cannot promise that I can easily find any errors in your code that keep it from running, so make sure it works.

Upload your entire project (including the assets/ and lib/ folders) to the Hwk7 submission folder on vhedwig (see the instructions if you need help). Also include a filled-in copy of the README.txt file in your project directory!

The homework is due at 11:59pm on Tue Nov 25.

Grading

This assignment will be graded out of 28 points: