diff --git a/hw3/Images/Task2/sp24-raytracer-task2-beast.png b/hw3/Images/Task2/sp24-raytracer-task2-beast.png new file mode 100644 index 0000000..3493612 Binary files /dev/null and b/hw3/Images/Task2/sp24-raytracer-task2-beast.png differ diff --git a/hw3/Images/Task2/sp24-raytracer-task2-beetle.png b/hw3/Images/Task2/sp24-raytracer-task2-beetle.png new file mode 100644 index 0000000..398994c Binary files /dev/null and b/hw3/Images/Task2/sp24-raytracer-task2-beetle.png differ diff --git a/hw3/Images/Task2/sp24-raytracer-task2-cow.png b/hw3/Images/Task2/sp24-raytracer-task2-cow.png new file mode 100644 index 0000000..9d14408 Binary files /dev/null and b/hw3/Images/Task2/sp24-raytracer-task2-cow.png differ diff --git a/hw3/Images/Task2/sp24-raytracer-task2-maxplanck.png b/hw3/Images/Task2/sp24-raytracer-task2-maxplanck.png new file mode 100644 index 0000000..3b2063f Binary files /dev/null and b/hw3/Images/Task2/sp24-raytracer-task2-maxplanck.png differ diff --git a/hw3/index.html b/hw3/index.html index aa38870..ebe8bd7 100644 --- a/hw3/index.html +++ b/hw3/index.html @@ -108,7 +108,7 @@

CS 184: Computer Graphics and Imaging, Spring 2024

Homework 3: PathTracer

-

Ian Dong

+

Ian Dong and Colin Steidtmann



@@ -117,11 +117,13 @@

Ian Dong

Overview

- In this homework, I implemented a path tracing renderer. First, I worked on generating camera rays from image space - to sensor in camera space and their intersection with triangles and spheres. Then, I built a bounding volume - hierarchy to accelerate ray intersection tests and speed up the path tracers rendering. Afterwards, I explored - direct illumination to simulate light sources and render images with realistic shadowing. Then, I implemented global - illumination to simulate indirect lighting and reflections using diffuse BSDF. Finally, I implemented adaptive + In this homework, we implemented a path tracing renderer. First, we worked on generating camera rays by transforming image-space coordinates to the sensor plane in camera space, and on intersecting those rays with triangles and spheres. Then, we built a bounding volume hierarchy to accelerate ray intersection tests and speed up the path tracer's rendering. Afterwards, we explored direct illumination to simulate light sources and render images with realistic shadowing. Then, we implemented global illumination to simulate indirect lighting and reflections using diffuse BSDFs. Finally, we implemented adaptive sampling to reduce noise in the rendered images.

@@ -150,22 +152,22 @@

- For the ray generation portion of the rendering pipeline, I first made sure to find the boundaries of the + For the ray generation portion of the rendering pipeline, we first made sure to find the boundaries of the camera space by calculating \(\tan(\frac{\text{hFov}}{2})\) and \(\tan(\frac{\text{vFov}}{2})\), since the bottom left corner is defined as \((-\tan(\frac{\text{hFov}}{2}), -\tan(\frac{\text{vFov}}{2}), -1)\) and the top right corner is defined as \((\tan(\frac{\text{hFov}}{2}), \tan(\frac{\text{vFov}}{2}), -1)\). Then, we used the instance variables hFov and vFov, which are in degrees, to calculate the width and height of the sensor plane before using linear interpolation to find the camera image coordinates. - Afterwards, I used this->c2w to convert my camera image coordinates into - world space coordinates and also normalized the direction vector. Finally, I constructed the ray with this vector + Afterwards, we used this->c2w to convert our camera image coordinates into + world space coordinates and also normalized the direction vector. Finally, we constructed the ray with this vector and defined the min_t and max_t.
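+ As a concrete illustration, here is a minimal sketch of this mapping. The simplified Vec3/Ray structs stand in for the homework's Vector3D and Ray types, and the column-vector treatment of c2w and the nClip/fClip parameters are assumptions of this sketch rather than a verbatim copy of our code.
<pre><code>
#include &lt;cmath&gt;

struct Vec3 {
    double x, y, z;
    Vec3 operator+(const Vec3 &v) const { return {x + v.x, y + v.y, z + v.z}; }
    Vec3 operator-(const Vec3 &v) const { return {x - v.x, y - v.y, z - v.z}; }
    Vec3 operator*(double s) const { return {x * s, y * s, z * s}; }
    void normalize() {
        double n = std::sqrt(x * x + y * y + z * z);
        x /= n; y /= n; z /= n;
    }
};

struct Ray { Vec3 o, d; double min_t, max_t; };

// Map normalized image coordinates (x, y) in [0, 1]^2 to a world-space ray.
// hFov and vFov are in degrees; c2w holds the camera-to-world rotation as
// three column vectors, and pos is the camera position in world space.
Ray generate_ray(double x, double y, double hFov, double vFov,
                 const Vec3 c2w[3], Vec3 pos, double nClip, double fClip) {
    const double DEG = 3.14159265358979323846 / 180.0;
    // Sensor plane half-extents at z = -1 in camera space.
    double w = std::tan(0.5 * hFov * DEG);
    double h = std::tan(0.5 * vFov * DEG);
    // Lerp from the bottom-left corner (-w, -h, -1) to the top-right (w, h, -1).
    Vec3 dir_cam{(2.0 * x - 1.0) * w, (2.0 * y - 1.0) * h, -1.0};
    // Rotate into world space and normalize the direction.
    Vec3 d = c2w[0] * dir_cam.x + c2w[1] * dir_cam.y + c2w[2] * dir_cam.z;
    d.normalize();
    return Ray{pos, d, nClip, fClip};
}
</code></pre>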

- For the primitive intersection portion of the rendering pipeline, I generated num_samples using this->gridSampler->get_sample(). I made sure to normalize the coordinates before calling on the previously implemented method to generate the ray. + For the primitive intersection portion of the rendering pipeline, we generated num_samples samples, each offset by this->gridSampler->get_sample(). We made sure to normalize the coordinates before calling the previously implemented method to generate the ray. Finally, we called this->est_radiance_global_illumination() to get the sample radiance and averaged the radiance to update the pixel in the buffer (see the sketch below).
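+ A sketch of the corresponding per-pixel loop, reusing the types above; sample_unit_square(), est_radiance(), and camera_ray() are hypothetical stand-ins for this->gridSampler->get_sample(), this->est_radiance_global_illumination(), and the ray generation sketched earlier.
<pre><code>
Vec3 sample_unit_square();           // assumed: random offset in [0, 1)^2
Vec3 est_radiance(const Ray &r);     // assumed: full radiance estimator
Ray camera_ray(double x, double y);  // assumed: wraps generate_ray() above

// Average ns_aa radiance samples for the pixel at (px, py).
Vec3 raytrace_pixel(int px, int py, int ns_aa, int width, int height) {
    Vec3 total{0, 0, 0};
    for (int i = 0; i < ns_aa; ++i) {
        Vec3 s = sample_unit_square();   // jitter within the pixel
        double x = (px + s.x) / width;   // normalize to [0, 1]
        double y = (py + s.y) / height;
        total = total + est_radiance(camera_ray(x, y));
    }
    return total * (1.0 / ns_aa);        // Monte Carlo average
}
</code></pre>
@@ -178,7 +180,17 @@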

Explain the triangle intersection algorithm you implemented in your own words.

- For the ray-triangle intersection, I implemented Moeller-Trumbore algorithm. + For the ray-triangle intersection, we implemented the Moller-Trumbore algorithm. This algorithm takes in a ray with origin \(\vec{O}\) and direction \(\vec{D}\), as well as a triangle with vertices \(\vec{p}_0\), \(\vec{p}_1\), and \(\vec{p}_2\), and solves the following equation: $$\vec{O} + t\vec{D} = (1 - b_1 - b_2)\,\vec{p}_0 + b_1\,\vec{p}_1 + b_2\,\vec{p}_2.$$ We followed the algorithm by defining each of the intermediate variables and solving for \(t\), \(b_1\), and \(b_2\). If the determinant in the solve was near zero, the ray was parallel to the triangle's plane and could not intersect it. If \(t\) was not within the range of the minimum and maximum ray times, the intersection fell outside the valid portion of the ray, so we reported no hit. Otherwise, the ray intersected the triangle's plane. However, we still needed to make sure the hit was inside the triangle, so we checked the barycentric coordinates to ensure \(b_1 \geq 0\), \(b_2 \geq 0\), and \(b_1 + b_2 \leq 1\) (so that \(1 - b_1 - b_2\) also lies in \([0, 1]\)). If they were valid, we updated the intersection struct.
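+ A sketch of the Moller-Trumbore solve described above, reusing the Vec3/Ray types (and &lt;cmath&gt;) from the earlier sketch; the epsilon used for the parallel test is an assumption of this sketch.
<pre><code>
double dot(const Vec3 &a, const Vec3 &b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
Vec3 cross(const Vec3 &a, const Vec3 &b) {
    return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
}

// Fills t, b1, b2 and returns true when the ray hits triangle (p0, p1, p2)
// within [r.min_t, r.max_t].
bool intersect_triangle(const Ray &r, Vec3 p0, Vec3 p1, Vec3 p2,
                        double &t, double &b1, double &b2) {
    Vec3 e1 = p1 - p0, e2 = p2 - p0;          // triangle edges
    Vec3 s  = r.o - p0;                       // ray origin relative to p0
    Vec3 s1 = cross(r.d, e2), s2 = cross(s, e1);
    double det = dot(s1, e1);
    if (std::fabs(det) < 1e-12) return false; // ray parallel to the plane
    double inv = 1.0 / det;
    t  = dot(s2, e2) * inv;
    b1 = dot(s1, s)  * inv;
    b2 = dot(s2, r.d) * inv;
    if (t < r.min_t || t > r.max_t) return false;       // outside the ray's range
    if (b1 < 0 || b2 < 0 || b1 + b2 > 1) return false;  // outside the triangle
    return true;
}
</code></pre>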


@@ -217,58 +229,129 @@


-

+

Section II: Bounding Volume Hierarchy (20 Points)

- -

- Walk through your BVH construction algorithm. Explain the heuristic you chose for picking the splitting point. -

-

- YOUR RESPONSE GOES HERE -

- -

- Show images with normal shading for a few large .dae files that you can only render with BVH acceleration. -

- -
- - - - - - - - - -
- -
example1.dae
-
- -
example2.dae
-
- -
example3.dae
-
- -
example4.dae
-
+
+

+ Walk through your BVH construction algorithm. Explain the heuristic you chose for picking the splitting point. +

+

+ We implemented a recursive BVH construction algorithm. These were the formal steps and cases. +

    +
  1. + Base Case: If the number of primitives is less than or equal to max_leaf_size, then we created a leaf node, assigned its start and end to the passed-in start and end iterators, and returned this leaf node. +
  2. + Recursive Case: Otherwise, we needed to find the best split point to create the left and right BVH nodes. First, we iterated through all three dimensions and used a helper function to find the median of the primitives along the current dimension. We temporarily split the primitives into two groups about this median. The heuristic we used was the sum of the surface areas of the two resulting bounding boxes; we chose the axis that minimized this sum. Afterwards, we split the primitives into the two groups, updated the iterators to connect them, and found the midpoint before passing the new start and end iterators into the recursive BVH construction algorithm (a condensed sketch follows this list). If at any point a split placed all of the primitives in one node, we simply fell back to the base-case logic and assigned the start and end to the node. Finally, we returned the node. +
+
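+ A condensed sketch of this construction, referenced from the list above. BBox, Prim, and the nth_element-based median split are simplified stand-ins for the homework's bounding box and primitive iterators, and the heuristic shown is the simple "sum of the two child surface areas at the median split" we describe, not a full SAH sweep.
<pre><code>
#include &lt;algorithm&gt;
#include &lt;vector&gt;

struct BBox {
    Vec3 lo{1e30, 1e30, 1e30}, hi{-1e30, -1e30, -1e30};
    void expand(const BBox &b) {
        lo = {std::min(lo.x, b.lo.x), std::min(lo.y, b.lo.y), std::min(lo.z, b.lo.z)};
        hi = {std::max(hi.x, b.hi.x), std::max(hi.y, b.hi.y), std::max(hi.z, b.hi.z)};
    }
    Vec3 centroid() const { return (lo + hi) * 0.5; }
    double surface_area() const {
        double dx = hi.x - lo.x, dy = hi.y - lo.y, dz = hi.z - lo.z;
        return 2.0 * (dx * dy + dy * dz + dz * dx);
    }
};

struct Prim { BBox bb; };
struct BVHNode {
    BBox bb;
    BVHNode *l = nullptr, *r = nullptr;
    size_t start = 0, end = 0;   // primitive range for leaves
};

double axis_of(const Vec3 &v, int a) { return a == 0 ? v.x : (a == 1 ? v.y : v.z); }

BVHNode *build(std::vector&lt;Prim&gt; &prims, size_t start, size_t end, size_t max_leaf) {
    BVHNode *node = new BVHNode();
    for (size_t i = start; i &lt; end; ++i) node-&gt;bb.expand(prims[i].bb);
    // Base case: few enough primitives to store directly in a leaf.
    if (end - start &lt;= max_leaf) { node-&gt;start = start; node-&gt;end = end; return node; }

    size_t mid = start + (end - start) / 2;
    int best_axis = 0;
    double best_cost = 1e30;
    for (int axis = 0; axis &lt; 3; ++axis) {
        // Partition about the median centroid along this axis.
        std::nth_element(prims.begin() + start, prims.begin() + mid, prims.begin() + end,
            [axis](const Prim &a, const Prim &b) {
                return axis_of(a.bb.centroid(), axis) &lt; axis_of(b.bb.centroid(), axis);
            });
        BBox lb, rb;
        for (size_t i = start; i &lt; mid; ++i) lb.expand(prims[i].bb);
        for (size_t i = mid; i &lt; end; ++i)   rb.expand(prims[i].bb);
        double cost = lb.surface_area() + rb.surface_area();  // our split heuristic
        if (cost &lt; best_cost) { best_cost = cost; best_axis = axis; }
    }
    // Re-partition along the winning axis, then recurse on each half.
    std::nth_element(prims.begin() + start, prims.begin() + mid, prims.begin() + end,
        [best_axis](const Prim &a, const Prim &b) {
            return axis_of(a.bb.centroid(), best_axis) &lt; axis_of(b.bb.centroid(), best_axis);
        });
    node-&gt;l = build(prims, start, mid, max_leaf);
    node-&gt;r = build(prims, mid, end, max_leaf);
    return node;
}
</code></pre>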

+
+
+
+

+ Show images with normal shading for a few large .dae files that you can + only render with BVH acceleration. +

+ Here are some screenshots of the .dae files rendered with normal shading + using BVH acceleration: +

+ +
+ + + + + + + + + +
+ +
cow.dae
+
+ +
beast.dae
+
+ +
maxplanck.dae
+
+ +
beetle.dae
+
+

-

- Compare rendering times on a few scenes with moderately complex geometries with and without BVH acceleration. - Present your results in a one-paragraph analysis. -

-

- YOUR RESPONSE GOES HERE -

+
+

+ Compare rendering times on a few scenes with moderately complex geometries with and without BVH acceleration. + Present your results in a one-paragraph analysis. +

+

+ As shown in the table below, we found significant speedups in rendering times when using BVH acceleration. We used three .dae scenes with differing numbers of primitives. The rendering time appears to be proportional to the average number of intersection tests per ray. Without BVH acceleration, every ray must be tested against every primitive, so the cost scales linearly with the number of primitives. With BVH acceleration, the primitives are recursively split into two nodes, so a ray only descends into the bounding boxes it actually hits; the number of candidates shrinks roughly logarithmically (on the order of \(\log_2(133796) \approx 17\) levels for CBlucy.dae rather than all 133,796 primitives), which is why the average number of intersection tests per ray stays nearly constant across scenes. The BVH data structure lets us quickly find the intersection of a ray with the scene and thus dramatically reduces render time. + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
.dae Scene | Number of Primitives | Render Time (no BVH) | Render Time (BVH) | Avg Intersection Tests per Ray (no BVH) | Avg Intersection Tests per Ray (BVH)
teapot.dae | 2464 | 43.25 s | 0.0752 s | 1006.19 | 2.89
peter.dae | 40018 | 697.63 s | 0.0837 s | 11753.02 | 2.41
CBlucy.dae | 133796 | 2640.11 s | 0.1127 s | 30217.62 | 3.03
+

+



@@ -278,12 +361,15 @@

Section III: Direct Illumination (20 Points)

Focus on one particular scene with at least one area light and compare the noise levels in soft shadows when rendering with 1, 4, 16, and 64 light rays (the -l flag) and with 1 sample per pixel (the -s flag) using light sampling, not uniform hemisphere sampling. Compare the results between uniform hemisphere sampling and lighting sampling in a one-paragraph analysis. --> -

- Walk through both implementations of the direct lighting function. -

-

- YOUR RESPONSE GOES HERE -

+
+

+ Walk through both implementations of the direct lighting function. +

+

+ Direct lighting is zero-bounce lighting, the light that comes directly from the light source, plus one-bounce lighting, the light that reaches the camera after reflecting off the scene exactly once. To estimate the one-bounce term we need to sample the light arriving at the hit point, which we implemented in two ways. In uniform hemisphere sampling, we sampled directions uniformly over the hemisphere at the hit point, cast a ray in each sampled direction, and, whenever that ray hit an emissive object, accumulated the emitted radiance weighted by the BSDF and the cosine of the incident angle, divided by the uniform pdf of \(\frac{1}{2\pi}\), before averaging over the samples. In importance (light) sampling, we instead looped over the scene's lights and sampled directions toward each light; for each sample we cast a shadow ray from the hit point toward the light and, if nothing blocked it, accumulated the light's radiance weighted by the BSDF and cosine term, divided by the sampling pdf. Light sampling converges much faster because every sample is aimed at an actual emitter instead of mostly missing the lights (a sketch of the light-sampled estimator follows below).

+
+
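+ A sketch of the light-sampled (importance) estimator described above, reusing the earlier Vec3/Ray types and dot() helper; sample_L, bsdf_f, and occluded are hypothetical stand-ins for the homework's SceneLight::sample_L, BSDF::f, and BVH shadow-ray test.
<pre><code>
Vec3 mul(const Vec3 &a, const Vec3 &b) { return {a.x * b.x, a.y * b.y, a.z * b.z}; }

// Assumed stand-ins: sample_L fills a world-space direction wi toward light li,
// the distance to the light, and the sampling pdf, returning emitted radiance.
Vec3 sample_L(int li, const Vec3 &hit_p, Vec3 &wi, double &dist, double &pdf);
Vec3 bsdf_f(const Vec3 &wo, const Vec3 &wi);  // BSDF value at the hit point
bool occluded(const Ray &shadow_ray);         // true if the light is blocked

Vec3 estimate_direct(const Vec3 &hit_p, const Vec3 &n, const Vec3 &wo,
                     int num_lights, int ns_area_light) {
    const double EPS = 1e-4;
    Vec3 L_out{0, 0, 0};
    for (int li = 0; li &lt; num_lights; ++li) {
        Vec3 acc{0, 0, 0};
        for (int s = 0; s &lt; ns_area_light; ++s) {
            Vec3 wi; double dist, pdf;
            Vec3 radiance = sample_L(li, hit_p, wi, dist, pdf);
            double cos_theta = dot(wi, n);
            if (cos_theta &lt;= 0 || pdf &lt;= 0) continue;  // light behind the surface
            // Shadow ray: offset the origin and stop just short of the light.
            Ray shadow{hit_p + wi * EPS, wi, 0.0, dist - EPS};
            if (occluded(shadow)) continue;
            acc = acc + mul(radiance, bsdf_f(wo, wi)) * (cos_theta / pdf);
        }
        L_out = L_out + acc * (1.0 / ns_area_light);
    }
    return L_out;
}
</code></pre>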

@@ -303,11 +389,13 @@

- +
CBspheres_lambertian.dae
- +
CBspheres_lambertian.dae
@@ -327,39 +415,39 @@


-

- Focus on one particular scene with at least one area light and compare the noise levels in soft shadows - when rendering with 1, 4, 16, and 64 light rays (the -l flag) and with 1 sample per pixel (the -s flag) using - light sampling, not uniform hemisphere sampling. -

- -
- - - - - - - - - -
- -
1 Light Ray (example1.dae)
-
- -
4 Light Rays (example1.dae)
-
- -
16 Light Rays (example1.dae)
-
- -
64 Light Rays (example1.dae)
-
-
-

- YOUR EXPLANATION GOES HERE -

+

+ Focus on one particular scene with at least one area light and compare the noise levels in soft shadows + when rendering with 1, 4, 16, and 64 light rays (the -l flag) and with 1 sample per pixel (the -s flag) using + light sampling, not uniform hemisphere sampling. +

+ +
+ + + + + + + + + +
+ +
1 Light Ray (example1.dae)
+
+ +
4 Light Rays (example1.dae)
+
+ +
16 Light Rays (example1.dae)
+
+ +
64 Light Rays (example1.dae)
+
+
+

+ With 1 light ray per pixel sample, the soft shadows are dominated by harsh noise: each pixel sees only a single point on the area light, so pixels in the penumbra come out either fully lit or fully dark. At 4 and 16 light rays the penumbrae begin to resolve into smooth gradients, though visible graininess remains. By 64 light rays the soft shadows are nearly converged. This matches the Monte Carlo variance falling off as \(\frac{1}{n}\) in the number of light samples: quadrupling the light rays roughly halves the standard deviation of the noise. +


@@ -386,7 +474,7 @@

Walk through your implementation of the indirect lighting function.

- YOUR RESPONSE GOES HERE +


@@ -670,7 +758,21 @@

Explain adaptive sampling. Walk through your implementation of the adaptive sampling.

- YOUR RESPONSE GOES HERE + Adaptive sampling improves the efficiency and quality of rendered images, especially when certain parts of the image require more samples to represent accurately than others. Instead of uniformly increasing the sample rate, and with it the rendering time, adaptive sampling concentrates samples in the difficult parts of the image, since some pixels converge much sooner than others. The idea is to allocate more samples to regions that need higher fidelity while reducing the number of samples in smoother areas where extra detail is not noticeable. We implemented adaptive sampling by updating the running sums \(s_1\) and \(s_2\) as defined in the spec. After every multiple of samplesPerBatch samples, we calculated the mean \(\mu\), the standard deviation \(\sigma\), and the confidence interval half-width \(I\). If \(I \leq \text{maxTolerance} \cdot \mu\), we stopped sampling the pixel and recorded the number of samples actually taken so the pixel's color would be averaged correctly. We used the following equations: $$s_1 = \sum_{i = 1}^{n} x_i$$ $$s_2 = \sum_{i = 1}^{n} x_i^2$$ $$\mu = \frac{s_1}{n}$$ $$\sigma^2 = \frac{1}{n - 1}\cdot \left(s_2 - \frac{s_1^2}{n}\right)$$ $$I = 1.96 \cdot \frac{\sigma}{\sqrt{n}}$$
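+ A sketch of how this check folds into the per-pixel sampling loop, reusing the earlier Vec3 type; illum() and trace_once() are hypothetical stand-ins for Vector3D::illum() and one full camera-ray radiance estimate, while samplesPerBatch and maxTolerance match the spec's parameters.
<pre><code>
#include &lt;cmath&gt;

double illum(const Vec3 &c);   // assumed: illuminance of one radiance sample
Vec3 trace_once();             // assumed: one camera-ray radiance estimate

// Sample a pixel until it converges or ns_aa samples are used; returns the
// number of samples actually taken and writes the averaged color to out.
int raytrace_pixel_adaptive(int ns_aa, int samplesPerBatch, double maxTolerance,
                            Vec3 &out) {
    double s1 = 0.0, s2 = 0.0;   // running sums of x_i and x_i^2
    Vec3 total{0, 0, 0};
    int n = 0;
    while (n &lt; ns_aa) {
        Vec3 sample = trace_once();
        double x = illum(sample);
        s1 += x; s2 += x * x;
        total = total + sample;
        ++n;
        if (n % samplesPerBatch == 0 && n &gt; 1) {
            double mu  = s1 / n;
            double var = (s2 - s1 * s1 / n) / (n - 1);
            double I   = 1.96 * std::sqrt(std::fmax(var, 0.0) / n);
            if (I &lt;= maxTolerance * mu) break;   // pixel has converged
        }
    }
    out = total * (1.0 / n);   // average over the samples actually taken
    return n;
}
</code></pre>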