CS 315 Homework 7 - Ray Tracer
Due Tues Nov 25 at 11:59pm
Overview
In this assignment, you will write a simple ray tracer. Like rasterization, ray tracing is an algorithm for rendering 3D scenes into 2D images. However, ray tracing can produce some amazing visual effects that rasterization has difficulty with, such as accurate shadows, mirror reflections, and refraction/transparency. When you've finished with this assignment, you'll have produced a piece of software (almost entirely on your own) that can generate images such as those shown below.
In the end, your ray tracer will be able to:
- Render spheres and triangles (and by extension, meshes made of triangles!), including transformed surfaces
- Perform basic shading using the Phong illumination model
- Include lighting from ambient, point, and directional light sources
- Capture shadows cast by objects that occlude light sources
- Calculate mirror reflections
If you wish, you may also include further effects: texture mapping, depth of field, soft shadows, etc. These are also available options for the final project (see Homework 8).
(The last of these images, the Cornell Box, is a classic way of demonstrating your ray tracer.)
This assignment should be completed individually. You are encouraged to discuss high-level issues with other students, but your code must be your own.
Objectives
- Be able to implement the mathematics and algorithms of ray tracing (including light, shadows, reflections, and transformations)
- Practice applying previous concepts from the course, such as viewing, transformations, and lighting
- Apply object-oriented programming principles to graphics
Necessary Files
You will want a copy of the cs315-hwk7.zip file, which contains the starter code for this assignment. The starter code contains a basic canvas setup, as well as implemented helper methods for loading scene files and coloring the individual pixels of the canvas.
Particular items of note:
- The assets/ folder contains a number of scene files you can use for testing. More details about these files can be found below. You will need to produce your own scene file for this assignment as well. Additionally, the examples/ folder contains example renderings of these scene files, so you can know what they are supposed to look like!
- Since ray tracing involves its own graphical pipeline, we won't be using OpenGL any more. Thus, while I've included the OpenGL libraries from the previous assignment, they are not loaded into the HTML by default. However, we are loading the glMatrix library, which you can use to do transformation/vector math!
- Finally, as always, the zip file also contains a README.txt file that you will need to fill out.
Assignment Details
This assignment is both more straightforward (no fighting with OpenGL shaders!) and more open-ended (no explicit development steps!). In effect, you're writing a complete graphics program utilizing the concepts we've discussed so far. However, I've included some hints and suggestions below.
Using the Scene Files
Your program will render scenes that are defined in a scene file, specified in JSON format:
{ "camera" : {...}, //object with details about the camera "bounce_depth" : 0, //number of times light should "bounce" when reflecting "shadow_bias" : 0.0001, //shadow bias for shadowing "lights" : [...], //array of light sources for your scene "materials" : [...], //array of materials (reflectance values) for your scene "surfaces" : [...] //array of surfaces (spheres or triangles) that are in your scene }
- The "camera" hash specifies an "eye", an "up" vector, a look-at vector ("at"), the vertical field of view ("fovy"), and the "aspect" ratio. The meaning of these should be self-explanatory.
- Elements in the "lights" array are objects that each represent a different light with the following format:
  - The "source" field specifies the type of light as a String: either "Ambient", "Point", or "Directional".
  - No matter the type, each light has a "color" field that specifies the intensity of the emitted light. Note that for simplicity, lights do not have separate intensities for the ambient, diffuse, and specular components.
  - In addition, "Point" lights have a "position" field (specifying a location as an array) and "Directional" lights have a "direction" field (specifying the direction vector as an array).
- Elements in the "materials" array are objects that each represent a different material. These materials specify the ambient reflectance color ("ka"), the diffuse reflectance color ("kd"), the specular reflectance color ("ks"), and the "shininess" value. Additionally, materials specify a mirror reflectance ("kr"), which is the intensity of mirrored light to return. E.g., a "kr" value of [0.5, 0.5, 0.5] would reflect half of the light mirrored from other objects, while a value of [0,0,0] will not have any mirror reflection.
- Elements in the "surfaces" array are objects that each represent a different surface with the following format:
  - The "shape" field specifies the type of surface, either "Sphere" or "Triangle". Depending on the shape, the object has additional fields:
    - Spheres have a "center" (specifying the location as an array) and a "radius".
    - Triangles have three points: "p1", "p2", and "p3". Important: these points may not be in counter-clockwise order!
  - All elements specify a "material" that is the 0-based index into the "materials" array for which material to apply to the surface.
  - Optionally, each element can specify a "transforms" array. This array contains a number of additional arrays representing transformations to apply to the object.
    - The first entry in each sub-array is the type of transformation (either "Translate", "Scale", or "Rotate").
    - The second entry in each sub-array is another array of details about that transformation. "Translate" transformations include the vector to translate by, and "Scale" transformations include the vector to scale by. "Rotate" transformations include an array of Euler rotations: first an angle (in degrees) to rotate around the X-axis, then an angle to rotate around the Y-axis, then an angle to rotate around the Z-axis.
    - Note that transformations are listed in coordinate-frame order--that is, you can just apply them in order in the code!
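For example, here is a minimal sketch of how a surface's "transforms" array might be collapsed into a single model matrix using glMatrix (this assumes the glMatrix 2.x mat4 API is loaded globally, as in the starter HTML; buildTransform is just an illustrative name):
```
// A sketch of collapsing a "transforms" array such as
//   [["Translate", [0, 1, 0]], ["Rotate", [0, 45, 0]], ["Scale", [2, 2, 2]]]
// into one model matrix. buildTransform is an illustrative name, not starter code.
function buildTransform(transforms) {
  var M = mat4.create(); // identity
  if (!transforms) return M;
  for (var i = 0; i < transforms.length; i++) {
    var type = transforms[i][0];
    var args = transforms[i][1];
    if (type === "Translate") {
      mat4.translate(M, M, args);
    } else if (type === "Scale") {
      mat4.scale(M, M, args);
    } else if (type === "Rotate") {
      // args holds Euler angles in degrees: [aboutX, aboutY, aboutZ]
      mat4.rotateX(M, M, args[0] * Math.PI / 180);
      mat4.rotateY(M, M, args[1] * Math.PI / 180);
      mat4.rotateZ(M, M, args[2] * Math.PI / 180);
    }
  }
  return M;
}
```
Because the list is in coordinate-frame order, applying each entry to the running matrix in sequence produces the surface's full transformation.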
Caveat: I have manually adapted the scene files into JSON format (from a different text-based format). While I've tested most of the components, it's possible there may be an error in translation (like a minus sign getting removed). If you come across a problem with a particular scene file, let me know.
Object Oriented... JavaScript?!
While ray tracing is a relatively simple algorithm conceptually, it requires a non-trivial amount of code to implement (because you need to do all the work yourself). If organized poorly, the program can get out of hand in a hurry. Fortunately, the ray tracing algorithm is a good target for object-oriented software design!
JavaScript does support a style of Object-Oriented programming called prototype-based programming. The basic idea behind JavaScript's OOP is that functions are themselves objects, and can carry around their (scoped) variables with them. Thus we can define a function
```
var Camera = function(eye, at, up, fovy, aspect){
  this.eye = eye;
  ...
};
```
And then later instantiate a new instance of this function:
```
var cam = new Camera(eyeValue, atValue, ...);
```
And each of the assigned values will continue to be in scope, allowing us to access them like instance variables!
Additionally, we can define additional functions inside our "Camera" object, and then each instance of that function object will have access to the inner function. However, because we don't want to use excess memory by redefining the function for each object, we instead define the function on the class's "prototype":
```
Camera.prototype.castRay = function(x, y){ ... };
```
This ensures that each time we create a new Camera object, that object has the castRay method defined.
It is also possible to use inheritance in JavaScript (to avoid duplicating methods), though it's a bit tricky and probably isn't necessary for this assignment. See here for an example.
HOWEVER, because JavaScript is loosely-typed, we can gain the polymorphism benefit of OOP by using duck-typing. The basic idea is that we can call a method on an object, and as long as the object has that method things will work--no matter what type the object actually is ("if it quacks like a duck..."). For example:
```
var Sphere = function(..){};   //sphere
Sphere.prototype.intersects = function(ray){...};

var Triangle = function(..){}; //triangle
Triangle.prototype.intersects = function(ray){...};

var surfaces = [];
surfaces.push(new Sphere());   //add both types to our array
surfaces.push(new Triangle());

for(var i=0; i<surfaces.length; i++){
  surfaces[i].intersects(ray); //call the appropriate method on the object, whatever type it is
}
```
For more details about Object-Oriented programming in JavaScript, I recommend this Mozilla tutorial.
There are a number of classes you may consider for your ray tracer:
- Camera: A class representing the virtual scene camera. It tracks the location and facing of the camera, and may be responsible for generating viewing rays. It can store variables to quickly generate new rays without having to recompute appropriate vectors.
- Sphere: A class representing a sphere. Spheres are easy to intersect with!
- Triangle: A class representing a triangle. Triangles are slightly more complicated to intersect with (but refer to the lecture notes for pseudocode).
- Material: A class that represents the material reflectance of a surface. You might have this class do shading computations: given an intersection point, a normal, a viewing direction, and a list of lights in the scene, what color is the material?
- AmbientLight, PointLight, DirectionalLight: Classes that represent light sources. Each should at least be able to provide the direction of the emitted light to the surface point (or vice versa).
- Ray: A class representing a ray blasting through space. Remember rays are defined by the equation a + d*t. Rays might also have information about a minimum and maximum t value for which intersections are considered valid.
- Intersection: A class that represents an intersection between a ray and a surface. This could store information about that intersection, such as the t value it occurred at, the intersection point, the normal value, etc.
Not all classes are necessary. JavaScript objects are robust and quick to work with: if a class is only storing values (as may be the case with Rays or Intersections), you could possibly just use a regular hash. However, if you want your object to be able to perform methods, making a class is a useful abstraction.
- Your goal is clean, readable, well-organized code, both so that I can follow it and so that you can easily debug it! I recommend using OOP as much as you can!
Development Plan
There are a number of components to this assignment. However, the order in which you complete these components is fairly open-ended. I have provided a suggested ordering (and some implementation notes) below. Whatever you do: make sure one piece of functionality works before you add in the next! Otherwise you may run into bugs from half-completed code!
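Before diving into the individual steps, it may help to picture the skeleton you're building toward; here is a minimal sketch of the per-pixel loop (the setPixel and traceRay names are placeholders of mine, not starter-code functions):
```
// A sketch of the overall per-pixel loop. setPixel and traceRay are placeholder
// names of mine, not functions from the starter code.
function render(scene, canvas) {
  var camera = new Camera(scene.camera.eye, scene.camera.at, scene.camera.up,
                          scene.camera.fovy, scene.camera.aspect);
  for (var j = 0; j < canvas.height; j++) {
    for (var i = 0; i < canvas.width; i++) {
      var ray = camera.castRay(i, j);      // viewing ray through pixel (i, j)
      var color = traceRay(ray, scene, 0); // nearest hit -> shaded color (bounce depth 0)
      setPixel(i, j, color);               // the starter code provides a pixel-coloring helper
    }
  }
}
```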
- No matter what, the first thing you'll need to set up is the Camera so you can cast rays through pixels! Details about how to determine the origin and direction of a ray are in the Shirley reading (pg. 74). Basically, you're going to calculate the component vectors of the view matrix using the "eye", "at", and "up" values (like we discussed in class), and then use these to take a linear interpolation of the pixel's location in screen coordinates.
  - Because we're specifying a view volume in terms of the fovy and aspect ratio rather than a left-right-top-bottom, you'll need to modify your calculation of the (u,v) coordinates of the image plane. Some substitution math (with an assumption that the focal length is 1) gives:
```
h = 2*Math.tan(rad(fovy/2.0));
w = h*aspect;
u = (w * i/(canvas.width - 1)) - (w/2.0);
v = (-h * j/(canvas.height - 1)) + (h/2.0);
```
  - Pro tip! For values that are the same for all rays (e.g., h and w), calculate them out ahead of time to speed up your program!
  - Note that all your scenes should use perspective views.
  - You can test your ray casting math with the following values (with a camera from the SphereTest scene):
```
width = 512; height = 512
ray = camera.castRay(0,0);              //=> o: 0,0,0; d: -0.41421356237309503,0.41421356237309503,-1
ray = camera.castRay(width,height);     //=> o: 0,0,0; d: 0.4158347504841443,-0.4158347504841443,-1
ray = camera.castRay(0,height);         //=> o: 0,0,0; d: -0.41421356237309503,-0.4158347504841443,-1
ray = camera.castRay(width,0);          //=> o: 0,0,0; d: 0.4158347504841443,0.41421356237309503,-1
ray = camera.castRay(width/2,height/2); //=> o: 0,0,0; d: 0.0008105940555246383,-0.0008105940555246383,-1
```
  - (Lots of help for this one because it's so fundamental to getting anything else working!)
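For reference, here is one minimal way to structure the Camera around the formulas and test values above; it is only a sketch, and assumes glMatrix's vec3 plus the starter code's global canvas:
```
// A sketch of a Camera built around the formulas above, assuming glMatrix's vec3
// is loaded globally and canvas is the starter code's canvas element.
var Camera = function(eye, at, up, fovy, aspect) {
  this.eye = eye;
  // camera frame: w points *backward*, u points right, v points up
  this.w = vec3.normalize(vec3.create(), vec3.subtract(vec3.create(), eye, at));
  this.u = vec3.normalize(vec3.create(), vec3.cross(vec3.create(), up, this.w));
  this.v = vec3.cross(vec3.create(), this.w, this.u);
  // image-plane height and width for a focal length of 1 (precomputed once)
  this.planeH = 2 * Math.tan(fovy * Math.PI / 360.0); // rad(fovy/2)
  this.planeW = this.planeH * aspect;
};

Camera.prototype.castRay = function(i, j) {
  var u = (this.planeW * i / (canvas.width - 1)) - (this.planeW / 2.0);
  var v = (-this.planeH * j / (canvas.height - 1)) + (this.planeH / 2.0);
  // direction = u*U + v*V - 1*W  (focal length 1)
  var d = vec3.create();
  vec3.scaleAndAdd(d, d, this.u, u);
  vec3.scaleAndAdd(d, d, this.v, v);
  vec3.scaleAndAdd(d, d, this.w, -1);
  return { origin: this.eye, direction: d }; // or wrap these in a Ray object
};
```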
- The next easiest step is to add Sphere Intersection. We will go over the intersection algorithm in class, and it is also in the Shirley text. Start by setting the pixel to be a constant color (e.g., white) if there is an intersection, and a different color (e.g., black) if there is not. This should let you produce an image like the SphereTest example.
  - Note: there is some additional work needed to grab all of the shapes from the scene files! But once you have that in, you can test this with multiple different scenes and see the spheres showing up.
  - After you have this intersection working (and Materials specified), you can begin adding in Sphere Shading.
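As one possible shape for this code, here is a sketch of the discriminant-based ray-sphere test (using glMatrix's vec3, and a plain object in place of a full Intersection class):
```
// A sketch of the discriminant-based ray-sphere intersection test.
Sphere.prototype.intersects = function(ray) {
  var oc = vec3.subtract(vec3.create(), ray.origin, this.center); // o - c
  var a = vec3.dot(ray.direction, ray.direction);
  var b = 2.0 * vec3.dot(oc, ray.direction);
  var c = vec3.dot(oc, oc) - this.radius * this.radius;
  var disc = b * b - 4 * a * c;
  if (disc < 0) return null; // the ray misses the sphere entirely

  // take the nearest intersection in front of the ray's origin
  var t = (-b - Math.sqrt(disc)) / (2 * a);
  if (t < 0) t = (-b + Math.sqrt(disc)) / (2 * a);
  if (t < 0) return null;

  var point = vec3.scaleAndAdd(vec3.create(), ray.origin, ray.direction, t);
  var normal = vec3.normalize(vec3.create(),
      vec3.subtract(vec3.create(), point, this.center));
  return { t: t, point: point, normal: normal }; // or an Intersection object
};
```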
- You should also add in Triangle Intersection early on. Again, we will go over the intersection algorithm in class, and it is also in the Shirley text. Start by setting the pixel to be a constant color (e.g., white) if there is an intersection, and a different color (e.g., black) if there is not. This should let you produce an image like the TriangleTest example.
  - Remember that triangle vertices need not be specified in counter-clockwise order!
  - After you have this intersection working (and Materials specified), you can begin adding in Triangle Shading.
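Here is a sketch of one common formulation, the Möller-Trumbore algorithm; the Cramer's-rule version from the Shirley text works just as well, and this again assumes glMatrix's vec3 and the plain-object intersection record from above:
```
// A sketch of ray-triangle intersection using the Möller-Trumbore algorithm.
Triangle.prototype.intersects = function(ray) {
  var e1 = vec3.subtract(vec3.create(), this.p2, this.p1);
  var e2 = vec3.subtract(vec3.create(), this.p3, this.p1);
  var p = vec3.cross(vec3.create(), ray.direction, e2);
  var det = vec3.dot(e1, p);
  if (Math.abs(det) < 1e-8) return null; // ray is parallel to the triangle

  var invDet = 1.0 / det;
  var s = vec3.subtract(vec3.create(), ray.origin, this.p1);
  var u = vec3.dot(s, p) * invDet;
  if (u < 0 || u > 1) return null;

  var q = vec3.cross(vec3.create(), s, e1);
  var v = vec3.dot(ray.direction, q) * invDet;
  if (v < 0 || u + v > 1) return null;

  var t = vec3.dot(e2, q) * invDet;
  if (t < 0) return null;

  var point = vec3.scaleAndAdd(vec3.create(), ray.origin, ray.direction, t);
  var normal = vec3.normalize(vec3.create(), vec3.cross(vec3.create(), e1, e2));
  // vertices may not be counter-clockwise, so flip the normal to face the viewer
  if (vec3.dot(normal, ray.direction) > 0) vec3.negate(normal, normal);
  return { t: t, point: point, normal: normal };
};
```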
- In order to shade objects, you'll need to specify the Light Sources and Materials. Read the sources from the scene file, and add functionality to compute the reflected light from a material at a given hit location (fragment) with a given normal. You will need to "port" the logic for your Phong reflectance model from the previous homeworks.
  - Tip: a "shininess" value of 0 should mean there is no specular reflection; do not raise a value to the 0th power!
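A shading method on your Material class might look roughly like the following sketch; the light.getDirection helper and the exact parameter list are assumptions of mine, not a required interface:
```
// A sketch of Phong shading at a single hit point. The light.getDirection(point)
// helper (returning a unit vector from the point toward the light) is an assumed
// interface, not something provided by the starter code.
Material.prototype.shade = function(point, normal, viewDir, lights) {
  var color = vec3.create();
  for (var i = 0; i < lights.length; i++) {
    var light = lights[i];
    if (light.source === "Ambient") {
      // ambient term: ka * light color
      vec3.add(color, color, vec3.multiply(vec3.create(), this.ka, light.color));
      continue;
    }
    var L = light.getDirection(point); // unit vector from the hit point toward the light
    var diff = Math.max(0, vec3.dot(normal, L));
    // diffuse term: kd * light color * max(0, N.L)
    vec3.scaleAndAdd(color, color,
        vec3.multiply(vec3.create(), this.kd, light.color), diff);
    // specular term: ks * light color * max(0, R.V)^shininess
    // (skip it entirely when shininess is 0 -- see the tip above)
    if (this.shininess > 0 && diff > 0) {
      var R = vec3.negate(vec3.create(), L);                   // R = -L ...
      vec3.scaleAndAdd(R, R, normal, 2 * vec3.dot(normal, L)); // ... + 2(N.L)N
      var spec = Math.pow(Math.max(0, vec3.dot(R, viewDir)), this.shininess);
      vec3.scaleAndAdd(color, color,
          vec3.multiply(vec3.create(), this.ks, light.color), spec);
    }
  }
  return color;
};
```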
- Once you have lighting and shading implemented, you can add Sphere Shading. Calculate the intersection point and normal for a sphere, then return the color of the sphere based on those values, and set the pixel to that color. If there is no intersection, set the pixel to be black.
  - This should allow you to produce images like SphereShadingTest1 (for Point lights) and SphereShadingTest2 (for Directional lights).
- You can use a similar process for implementing Triangle Shading, which you can test with TriangleShadingTest.
- Next you should add in Transformations. We'll go over this idea in class, and it's in Chapter 13 of the Shirley reading. The basic idea is that we want to calculate an intersection with a transformed object. However, rather than trying to do that math directly, we'll transform the ray into the object's coordinate frame!
  - If M is the object's transformation matrix, then we want to transform the cast ray (both its origin and its direction) by M⁻¹, and use that ray to check for intersection with the untransformed sphere. Note that you can easily invert a mat4 with the glMatrix library, and apply that matrix to the ray!
    - Remember that the ray's origin is a point (so should have a homogeneous coordinate of 1), and the ray's direction is a vector (so should have a homogeneous coordinate of 0).
  - Remember that in order to calculate the intersection point, you'll need to convert it back into world coordinates by multiplying by M. Similarly, you'll need to transform the calculated normal by multiplying it by the normal matrix (M⁻¹)ᵀ.
  - You can test this on the TransformationTest and the FullTest scenes.
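Here is a sketch of that process for a sphere, assuming glMatrix 2.x (whose mat3.normalFromMat4 computes the inverse-transpose of a matrix's upper 3x3); the this.M field and the method name are my own conventions:
```
// A sketch of intersecting a transformed sphere by moving the ray into the
// object's local coordinate frame. this.M (the surface's model matrix) and the
// method name are my own conventions; glMatrix 2.x is assumed.
Sphere.prototype.intersectsTransformed = function(ray) {
  var Minv = mat4.invert(mat4.create(), this.M);

  // origin is a point (w = 1); direction is a vector (w = 0), so drop the translation
  var localOrigin = vec3.transformMat4(vec3.create(), ray.origin, Minv);
  var rot = mat3.fromMat4(mat3.create(), Minv);
  var localDir = vec3.transformMat3(vec3.create(), ray.direction, rot);

  // intersect the *untransformed* sphere with the local-space ray; leave localDir
  // unnormalized so that the returned t value is also valid in world space
  var hit = this.intersects({ origin: localOrigin, direction: localDir });
  if (!hit) return null;

  // bring the hit point and normal back into world coordinates
  hit.point = vec3.transformMat4(vec3.create(), hit.point, this.M);
  var normalMat = mat3.normalFromMat4(mat3.create(), this.M); // (M^-1)^T
  hit.normal = vec3.normalize(vec3.create(),
      vec3.transformMat3(vec3.create(), hit.normal, normalMat));
  return hit;
};
```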
- Once you've got all your models and lighting working, you can start adding extra features, such as Shadows. Whenever you intersect a shape, you'll need to cast a new shadow ray towards the light (or in the opposite direction for a Directional light). If you intersect an object, then you are in shadow and so should not include the contribution of that light in your reflectance model!
  - Note that this casting will likely be started from the same method where you compute the material color of a point.
  - You can test this on the ShadowTest1 scene (for a Point light) and ShadowTest2 (for a Directional light). The CornellBox and FullTest also include shadows.
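In code, the shadow test for a single Point or Directional light might look like this sketch (findNearestIntersection is a placeholder helper; the light and shadow_bias fields come from the scene file):
```
// A sketch of the shadow test for a single Point or Directional light
// (Ambient lights never need one). findNearestIntersection is the same
// placeholder helper used earlier; the light fields come from the scene file.
function isInShadow(point, light, scene) {
  var toLight, maxT;
  if (light.source === "Point") {
    toLight = vec3.subtract(vec3.create(), light.position, point);
    maxT = 1.0; // with this unnormalized direction, the light itself sits at t = 1
  } else { // "Directional": cast opposite the light's travel direction
    toLight = vec3.negate(vec3.create(), light.direction);
    maxT = Infinity;
  }
  // offset the origin by the scene's shadow_bias to avoid self-shadowing "acne"
  var origin = vec3.scaleAndAdd(vec3.create(), point,
      vec3.normalize(vec3.create(), toLight), scene.shadow_bias);
  var hit = findNearestIntersection({ origin: origin, direction: toLight }, scene);
  return hit !== null && hit.t < maxT; // something blocks the light
}
```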
- Finally, you can add in Mirror Reflections. This will involve recursive ray tracing: when you find an intersection, recurse and cast a ray in the reflected direction (you'll need to calculate the direction of this vector yourself, as there isn't a built-in method in glMatrix; notes on the math are in the lecture slides). If the reflected ray hits a surface, calculate the color at that spot, and add it into your shaded color (weighted by the material's mirror reflectance).
  - Remember to keep track of the number of "bounces", and stop when you hit the scene's limit!
  - You can test this on the RecursiveTest; you may wish to adjust the bounce value for testing! The FullTest and CornellBox also include reflections.
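Putting the pieces together, the recursive step might look like the following sketch, reusing the placeholder names from earlier:
```
// A sketch of the recursive reflection step. traceRay and findNearestIntersection
// are the same placeholder names used earlier; findNearestIntersection is assumed
// to attach the hit surface's material to the returned intersection record.
function traceRay(ray, scene, depth) {
  var hit = findNearestIntersection(ray, scene);
  if (!hit) return vec3.fromValues(0, 0, 0); // background color

  var d = vec3.normalize(vec3.create(), ray.direction);
  var viewDir = vec3.negate(vec3.create(), d); // from the hit point back toward the eye
  var color = hit.material.shade(hit.point, hit.normal, viewDir, scene.lights);

  // recurse only while bounces remain (bounce_depth comes from the scene file)
  if (depth < scene.bounce_depth) {
    // reflected direction: r = d - 2(d.n)n
    var r = vec3.scaleAndAdd(vec3.create(), d, hit.normal, -2 * vec3.dot(d, hit.normal));
    // reuse the shadow bias to nudge the origin off the surface we just hit
    var origin = vec3.scaleAndAdd(vec3.create(), hit.point, r, scene.shadow_bias);
    var reflected = traceRay({ origin: origin, direction: r }, scene, depth + 1);
    // weight the bounced color by the material's mirror reflectance kr
    vec3.add(color, color, vec3.multiply(vec3.create(), reflected, hit.material.kr));
  }
  return color;
}
```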
With all those done, you should have a working ray tracer! As a bonus step: write a new scene file (with at least 3 objects) that shows off your ray tracer and produces a cool render :)
Testing and Debugging
- Ray tracing does a lot of calculations--and because we're doing it in JavaScript (rather than on the graphics card), this means that rendering a scene (especially with lots of objects or recursion) can take a long time. Plan your tests carefully and be patient! You can easily speed up the rendering by reducing the size of your canvas (e.g., to 300x300) for testing.
- Since ray tracing computes over thousands of pixels, it can be very challenging to debug problems. This is almost as bad as the shader code from the previous assignment.
  - I have provided a sentinel DEBUG variable that you can use to restrict printing. I have also provided some code that will allow you to cast a ray at a certain (clicked) pixel, with DEBUG set to true. Thus you can do something like:
```
if(DEBUG){ console.log(ray); }
```
Extensions
This is another somewhat open-ended assignment, so it has lots of possible extensions. You can earn up to 5 points of extra credit for these extensions. Note that many of these (extending your ray tracer) are fair game for the Final Project as well (but the same work cannot count as both an extension and the final project).
- Refraction: You might add in refraction as well as reflection; you'll need to include additional parameters in your materials.
- Additional light types: You might try implementing spot lights, area lights, or lighting from an environment map.
- Additional primitive types: It's easy to extend a ray tracer by adding different geometric primitives, such as triangle meshes (but watch out for speed!), bezier or b-spline patches, or implicit surfaces. All you need to do is figure out how you know when you've intersected the surface, and what the normal is at that intersection point!
  - If you do this, specify the "shape" field of your surface, e.g., "mesh", "bezier", etc.
- Distribution ray tracing effects: Generate multiple viewing rays per pixel for anti-aliasing, soft shadows, motion blur, and depth-of-field defocus blur.
- Acceleration structures: Add in bounding volume hierarchies (such as axis-aligned bounding boxes) or space partitioning trees to speed up your program. This will help you to efficiently render scenes with a large number of primitives.
- Non-linear perspective: Your ray tracer generates images by sending viewing rays through a planar imaging surface, but the imaging surface doesn't have to be a plane. If you replace it with a hemisphere, for instance, you can get a cool fish-eye lens effect. There are other nifty imaging surfaces you can play with, such as cylinders.
Submitting
BEFORE YOU SUBMIT: make sure your code is fully functional! Grading will be based primarily on functionality. I cannot promise that I can easily find any errors in your code that keep it from running, so make sure it works.
Upload your entire project (including the assets/ and lib/ folders) to the Hwk7 submission folder on vhedwig (see the instructions if you need help).
Also include a filled-in copy of the README.txt file in your project directory!
The homework is due at midnight on Tue Nov 25.
Grading
This assignment will be graded out of 28 points:
- [1pt] Your program uses ray tracing to calculate the color of each pixel
- [2pt] Your program is able to cast rays based on the designated camera
- [2pt] Your program calculates the intersection between rays and Spheres
- [2pt] Your program calculates the intersection between rays and Triangles
- [3pt] Your program uses the given materials and Phong reflection to shade objects
- [3pt] Your program handles each of the different forms of light (Ambient, Point, Directional)
- [2pt] Your program calculates intersections with transformed surfaces
- [1pt] Your program calculates correct hit location and normals of transformed surfaces
- [3pt] Objects in your scene cast shadows
- [1pt] Objects cast shadows from both point and directional light sources
- [3pt] Objects include recursive mirror reflections
- [1pt] Your program utilizes the given max bounce depth
- [2pt] Your code is organized and readable (e.g., makes effective use of OOP)
- [1pt] Your code is well-documented (particularly parameters for methods)
- [1pt] README is complete
- [+2pt] You have included a new scene file that demonstrates your raytracer's functionality