diff --git a/hw3/Images/Task2/sp24-raytracer-task2-beast.png b/hw3/Images/Task2/sp24-raytracer-task2-beast.png
new file mode 100644
index 0000000..3493612
Binary files /dev/null and b/hw3/Images/Task2/sp24-raytracer-task2-beast.png differ
diff --git a/hw3/Images/Task2/sp24-raytracer-task2-beetle.png b/hw3/Images/Task2/sp24-raytracer-task2-beetle.png
new file mode 100644
index 0000000..398994c
Binary files /dev/null and b/hw3/Images/Task2/sp24-raytracer-task2-beetle.png differ
diff --git a/hw3/Images/Task2/sp24-raytracer-task2-cow.png b/hw3/Images/Task2/sp24-raytracer-task2-cow.png
new file mode 100644
index 0000000..9d14408
Binary files /dev/null and b/hw3/Images/Task2/sp24-raytracer-task2-cow.png differ
diff --git a/hw3/Images/Task2/sp24-raytracer-task2-maxplanck.png b/hw3/Images/Task2/sp24-raytracer-task2-maxplanck.png
new file mode 100644
index 0000000..3b2063f
Binary files /dev/null and b/hw3/Images/Task2/sp24-raytracer-task2-maxplanck.png differ
diff --git a/hw3/index.html b/hw3/index.html
index aa38870..ebe8bd7 100644
--- a/hw3/index.html
+++ b/hw3/index.html
@@ -108,7 +108,7 @@
- For the ray generation portion of the rendering pipeline, I first made sure to find the boundaries of the
+ For the ray generation portion of the rendering pipeline, we first made sure to find the boundaries of the
  camera space by calculating \(\tan(\frac{\text{hFov}}{2})\) and \(\tan(\frac{\text{vFov}}{2})\), since
  the bottom left corner is defined as \((-\tan(\frac{\text{hFov}}{2}), -\tan(\frac{\text{vFov}}{2}), -1)\)
  and the top right corner is defined as \((\tan(\frac{\text{hFov}}{2}), \tan(\frac{\text{vFov}}{2}), -1)\).
- Then, I used the instance variables hFov and vFov, which are in degrees, to
+ Then, we used the instance variables hFov and vFov, which are in degrees, to
  calculate the width and height of the image plane before using linear interpolation to find the camera space coordinates.
- Afterwards, I used this->c2w to convert my camera space coordinates into
- world space coordinates and also normalized the direction vector. Finally, I constructed the ray with this vector
+ Afterwards, we used this->c2w to convert our camera space coordinates into
+ world space coordinates and normalized the direction vector. Finally, we constructed the ray with this vector
  and defined the min_t and max_t.
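+ A minimal sketch of this ray generation step is shown below. It assumes the skeleton's Vector3D, Matrix3x3, and
+ Ray types, the Camera members hFov, vFov (in degrees), c2w, pos, nClip, and fClip, and a PI constant; the exact
+ names may differ from our code.
+ <pre><code>
+ // Sketch of Camera::generate_ray: (x, y) are normalized image coordinates in [0, 1]^2.
+ Ray Camera::generate_ray(double x, double y) const {
+   // Half-extents of the image plane located at z = -1 in camera space.
+   double w_half = tan(0.5 * hFov * PI / 180.0);
+   double h_half = tan(0.5 * vFov * PI / 180.0);
+
+   // Linearly interpolate between the bottom left corner (-w_half, -h_half, -1)
+   // and the top right corner (w_half, h_half, -1).
+   Vector3D dir_camera(-w_half + 2.0 * w_half * x,
+                       -h_half + 2.0 * h_half * y,
+                       -1.0);
+
+   // Rotate into world space with c2w and normalize the direction.
+   Vector3D dir_world = (c2w * dir_camera).unit();
+
+   Ray r(pos, dir_world);
+   r.min_t = nClip;  // near clipping plane
+   r.max_t = fClip;  // far clipping plane
+   return r;
+ }
+ </code></pre>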
- For the primitive intersection portion of the rendering pipeline, I generated num_samples
- using this->gridSampler->get_sample(). I made sure to normalize the coordinates
- before calling on the previously implemented method to generate the ray. Finally, I called
- this->gridSampler->get_sample() to get the sample radiance and averaged
- the radiance to update the pixel in the buffer.
+ For the primitive intersection portion of the rendering pipeline, we generated num_samples
+ sample positions using this->gridSampler->get_sample(). We made sure to normalize the coordinates
+ before calling the previously implemented ray generation method. Finally, we called
+ this->est_radiance_global_illumination() to get each sample's radiance and averaged
+ the radiance to update the pixel in the buffer.
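+ The corresponding pixel sampling loop might look like the following sketch, assuming the skeleton's gridSampler,
+ camera, sampleBuffer, and est_radiance_global_illumination() members; the exact update_pixel signature is an
+ assumption.
+ <pre><code>
+ // Sketch of PathTracer::raytrace_pixel for ns_aa samples per pixel.
+ void PathTracer::raytrace_pixel(size_t x, size_t y) {
+   int num_samples = ns_aa;
+   Vector3D radiance_sum(0, 0, 0);
+
+   for (int i = 0; i < num_samples; i++) {
+     // Random offset inside the pixel, normalized to [0, 1]^2 image coordinates.
+     Vector2D offset = gridSampler->get_sample();
+     double nx = (x + offset.x) / sampleBuffer.w;
+     double ny = (y + offset.y) / sampleBuffer.h;
+
+     // Generate the camera ray and accumulate its estimated radiance.
+     Ray r = camera->generate_ray(nx, ny);
+     radiance_sum += est_radiance_global_illumination(r);
+   }
+
+   // Average the samples and write the result into the frame buffer.
+   sampleBuffer.update_pixel(radiance_sum / (double)num_samples, x, y);
+ }
+ </code></pre>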
@@ -178,7 +180,17 @@
- For the ray-triangle intersection, I implemented Moeller-Trumbore algorithm.
+ For the ray-triangle intersection, we implemented the Möller-Trumbore algorithm. This algorithm takes in a ray with
+ origin \(\vec{O}\) and direction \(\vec{D}\), as well as a triangle with vertices \(\vec{p}_0\), \(\vec{p}_1\), and
+ \(\vec{p}_2\), and solves the following equation for \(t\), \(b_1\), and \(b_2\):
+ $$\vec{O} + t\vec{D} = (1 - b_1 - b_2)\vec{p}_0 + b_1 \vec{p}_1 + b_2 \vec{p}_2.$$
+ We followed the algorithm by defining each of the intermediate quantities and solving for \(t\), \(b_1\), and \(b_2\).
+ If the determinant in the solve was (near) zero, the ray was parallel to the triangle's plane and could not intersect
+ it. If \(t\) was not within the range \([\text{min\_t}, \text{max\_t}]\), the intersection fell outside the valid
+ portion of the ray, so we also reported no hit. Otherwise, the ray intersects the triangle's plane, but we still
+ needed to make sure the hit point was inside the triangle, so we checked that the barycentric coordinates satisfied
+ \(b_1 \geq 0\), \(b_2 \geq 0\), and \(b_1 + b_2 \leq 1\). If they did, we updated the intersection struct.
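+ A sketch of this test is below, assuming the skeleton's Vector3D, Ray, and Intersection types; the vertex and
+ normal names (p0, p1, p2, n0, n1, n2) follow the equation above and may not match the actual member names, and the
+ simplified non-const Ray parameter is also an assumption.
+ <pre><code>
+ // Sketch of the Möller-Trumbore ray-triangle test described above.
+ bool Triangle::intersect(Ray &r, Intersection *isect) const {
+   Vector3D e1 = p1 - p0;            // triangle edge vectors
+   Vector3D e2 = p2 - p0;
+   Vector3D s  = r.o - p0;           // ray origin relative to the first vertex
+   Vector3D s1 = cross(r.d, e2);
+   Vector3D s2 = cross(s, e1);
+
+   double det = dot(s1, e1);
+   if (fabs(det) < 1e-8) return false;   // ray is (nearly) parallel to the plane
+
+   double t  = dot(s2, e2) / det;
+   double b1 = dot(s1, s)  / det;
+   double b2 = dot(s2, r.d) / det;
+
+   // Reject hits outside the valid ray range or outside the triangle.
+   if (t < r.min_t || t > r.max_t) return false;
+   if (b1 < 0 || b2 < 0 || b1 + b2 > 1) return false;
+
+   r.max_t = t;                      // shrink the valid range to the nearest hit
+   isect->t = t;
+   isect->n = (1 - b1 - b2) * n0 + b1 * n1 + b2 * n2;  // interpolated normal
+   isect->primitive = this;
+   isect->bsdf = get_bsdf();
+   return true;
+ }
+ </code></pre>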
- YOUR RESPONSE GOES HERE -
+ We implemented a recursive BVH construction algorithm with two cases. In the base case, if the number of
+ primitives between the start and end iterators was at most max_leaf_size, then we created a leaf node, assigned
+ its start and end to the passed-in start and end iterators, and returned this leaf node. Otherwise, we split the
+ primitives into two groups, recursively constructed the left and right child nodes from each group, and returned
+ the resulting interior node (a sketch follows below).
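+ The sketch below shows one way the recursion could look. It assumes the skeleton's BVHNode, BBox, and Primitive
+ interfaces and, for the recursive case, a split at the mean centroid along the widest axis of the centroid
+ bounding box; the actual split heuristic we used may differ.
+ <pre><code>
+ // Sketch of recursive BVH construction (requires <algorithm> for std::partition).
+ BVHNode *BVHAccel::construct_bvh(std::vector<Primitive *>::iterator start,
+                                  std::vector<Primitive *>::iterator end,
+                                  size_t max_leaf_size) {
+   // Bounding box of all primitives, plus a box of their centroids for the split.
+   BBox bbox, centroid_box;
+   for (auto p = start; p != end; p++) {
+     bbox.expand((*p)->get_bbox());
+     centroid_box.expand((*p)->get_bbox().centroid());
+   }
+
+   BVHNode *node = new BVHNode(bbox);
+
+   // Base case: few enough primitives, so store [start, end) in a leaf.
+   if (end - start <= (long)max_leaf_size) {
+     node->start = start;
+     node->end = end;
+     return node;
+   }
+
+   // Recursive case: split along the widest centroid axis at the mean centroid.
+   Vector3D extent = centroid_box.extent;
+   int axis = (extent.y > extent.x) ? 1 : 0;
+   if (extent.z > extent[axis]) axis = 2;
+   double split = centroid_box.centroid()[axis];
+
+   auto mid = std::partition(start, end, [axis, split](Primitive *p) {
+     return p->get_bbox().centroid()[axis] < split;
+   });
+   // Avoid a degenerate split where every primitive lands on one side.
+   if (mid == start || mid == end) mid = start + (end - start) / 2;
+
+   node->l = construct_bvh(start, mid, max_leaf_size);
+   node->r = construct_bvh(mid, end, max_leaf_size);
+   return node;
+ }
+ </code></pre>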
+ Below are several large .dae files that we could only render in a
+ reasonable amount of time with BVH acceleration. Here are these .dae files rendered with normal shading
+ using BVH acceleration:
+ (Renders: cow.dae, beast.dae, maxplanck.dae, beetle.dae)
- YOUR RESPONSE GOES HERE -
+ As shown in the table below, we found significant speedups in rendering times when using BVH acceleration. We used
+ three .dae scenes with differing numbers of primitives. The rendering time appears to be proportional to the
+ average number of intersection tests per ray. Without BVH acceleration, every ray has to be tested against every
+ primitive, so the render time scales roughly linearly with the number of primitives. With BVH acceleration, the
+ primitives are recursively split between two child nodes, so a ray only descends into the subtrees whose bounding
+ boxes it hits. This reduces the number of candidate primitives roughly logarithmically, and the average number of
+ intersection tests per ray stays small and nearly constant. The BVH data structure therefore lets us quickly find
+ the intersection of a ray with the scene and significantly reduces the time it takes to render the scene.
+ | .dae Scene | Number of Primitives | Render Time (no BVH) | Render Time (BVH) | Avg Intersection Tests per Ray (no BVH) | Avg Intersection Tests per Ray (BVH) |
+ |---|---|---|---|---|---|
+ | teapot.dae | 2464 | 43.25 s | 0.0752 s | 1006.19 | 2.89 |
+ | peter.dae | 40018 | 697.63 s | 0.0837 s | 11753.02 | 2.41 |
+ | CBlucy.dae | 133796 | 2640.11 s | 0.1127 s | 30217.62 | 3.03 |
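+ The reason the per-ray test count stays nearly constant is visible in the traversal itself: a ray only descends
+ into subtrees whose bounding boxes it actually hits. A hedged sketch, assuming the skeleton's BVHNode fields
+ (bb, l, r, start, end, isLeaf()) and a BBox::intersect that reports an entry/exit interval:
+ <pre><code>
+ // Sketch of recursive BVH traversal used to answer ray-scene intersection queries.
+ bool BVHAccel::intersect(Ray &ray, Intersection *isect, BVHNode *node) const {
+   // Prune the whole subtree if the ray misses its bounding box.
+   double t0 = ray.min_t, t1 = ray.max_t;
+   if (!node->bb.intersect(ray, t0, t1)) return false;
+
+   // Leaf: test the handful of primitives stored in [start, end).
+   if (node->isLeaf()) {
+     bool hit = false;
+     for (auto p = node->start; p != node->end; p++)
+       hit = (*p)->intersect(ray, isect) || hit;   // primitive tests shrink ray.max_t
+     return hit;
+   }
+
+   // Interior node: recurse into both children; subtrees the ray misses are pruned above.
+   bool hit_left  = intersect(ray, isect, node->l);
+   bool hit_right = intersect(ray, isect, node->r);
+   return hit_left || hit_right;
+ }
+ </code></pre>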
- YOUR RESPONSE GOES HERE -
+ Direct lighting is zero bounce lighting, the light that comes directly from the light source to the camera, plus
+ one bounce lighting, the light that comes back to the camera after reflecting off the scene exactly once. To
+ estimate the one bounce term, we need to sample the light arriving at each intersection point and weight it by the
+ BSDF and the cosine of the incident angle before averaging over the samples.
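+ As an illustration, here is a hedged sketch of the standard one bounce estimator that samples the light sources
+ directly; it assumes the skeleton's SceneLight::sample_L, BSDF::f, make_coord_space, and bvh->intersect
+ interfaces, and the hemisphere sampling variant differs only in how the incoming directions are drawn. This is a
+ sketch of the general technique, not necessarily our exact code.
+ <pre><code>
+ // Sketch of one bounce (direct) lighting by sampling the lights.
+ Vector3D PathTracer::estimate_direct_lighting(const Ray &r, const Intersection &isect) {
+   // Build a local frame in which the surface normal is the z-axis.
+   Matrix3x3 o2w;
+   make_coord_space(o2w, isect.n);
+   Matrix3x3 w2o = o2w.T();
+
+   Vector3D hit_p = r.o + r.d * isect.t;   // shading point
+   Vector3D w_out = w2o * (-r.d);          // outgoing direction in object space
+   Vector3D L_out(0, 0, 0);
+   const double EPS = 1e-4;                // offset to avoid self-intersection
+
+   for (SceneLight *light : scene->lights) {
+     int ns = light->is_delta_light() ? 1 : ns_area_light;
+     Vector3D L_light(0, 0, 0);
+     for (int i = 0; i < ns; i++) {
+       Vector3D wi_world; double dist, pdf;
+       Vector3D radiance = light->sample_L(hit_p, &wi_world, &dist, &pdf);
+       Vector3D w_in = w2o * wi_world;
+       if (w_in.z <= 0 || pdf <= 0) continue;   // light lies behind the surface
+
+       // Shadow ray toward the light; skip the sample if it is blocked.
+       Ray shadow(hit_p + EPS * wi_world, wi_world);
+       shadow.max_t = dist - EPS;
+       Intersection blocked;
+       if (bvh->intersect(shadow, &blocked)) continue;
+
+       // Monte Carlo estimate: incoming radiance * BSDF * cos(theta) / pdf.
+       L_light += radiance * isect.bsdf->f(w_out, w_in) * w_in.z / pdf;
+     }
+     L_out += L_light / (double)ns;
+   }
+   return L_out;
+ }
+ </code></pre>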
+ (Renders: CBspheres_lambertian.dae)
- YOUR EXPLANATION GOES HERE -
+ YOUR EXPLANATION GOES HERE +
- YOUR RESPONSE GOES HERE
- YOUR RESPONSE GOES HERE
+ Adaptive sampling improves the efficiency and quality of rendered images, especially when certain parts of the
+ image require more computational resources to represent accurately than others. Instead of uniformly increasing
+ the sample rate, and thus the rendering time, adaptive sampling concentrates samples in the more difficult parts
+ of the image, since some pixels converge much more quickly than others. The idea is to allocate more samples to
+ regions that require higher fidelity, while reducing the number of samples in smoother areas where noise is less
+ noticeable. We implemented adaptive sampling by updating the running sums \(s_1\) and \(s_2\)
+ as defined in the spec. After every multiple of samplesPerBatch samples, we calculated
+ the mean, the standard deviation, and the convergence measure \(I\). If \(I \leq \text{maxTolerance} \cdot \mu\),
+ we stopped sampling the pixel and saved the number of samples actually taken so that the final color is averaged
+ correctly. We used the following equations, where \(x_i\) is the illuminance of the \(i\)-th sample:
+ $$s_1 = \sum_{i = 1}^{n} x_i$$
+ $$s_2 = \sum_{i = 1}^{n} x_i^2$$
+ $$\mu = \frac{s_1}{n}$$
+ $$\sigma^2 = \frac{1}{n - 1}\cdot \left(s_2 - \frac{s_1^2}{n}\right)$$
+ $$I = 1.96 \cdot \frac{\sigma}{\sqrt{n}}$$
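+ Putting this together, the pixel loop from Part 1 might be extended as in the sketch below, assuming the
+ skeleton's samplesPerBatch, maxTolerance, and sampleCountBuffer members and an illum() accessor for the
+ per-sample illuminance \(x_i\); the names and buffer indexing are assumptions.
+ <pre><code>
+ // Sketch of raytrace_pixel with adaptive sampling added.
+ void PathTracer::raytrace_pixel(size_t x, size_t y) {
+   Vector3D radiance_sum(0, 0, 0);
+   double s1 = 0, s2 = 0;
+   int n = 0;
+
+   for (int i = 0; i < ns_aa; i++) {
+     // Check convergence once per completed batch of samples.
+     if (n > 1 && n % samplesPerBatch == 0) {
+       double mu = s1 / n;
+       double var = (s2 - s1 * s1 / n) / (n - 1);
+       double I = 1.96 * sqrt(var / n);
+       if (I <= maxTolerance * mu) break;   // the pixel has converged
+     }
+
+     Vector2D offset = gridSampler->get_sample();
+     Ray r = camera->generate_ray((x + offset.x) / sampleBuffer.w,
+                                  (y + offset.y) / sampleBuffer.h);
+     Vector3D L = est_radiance_global_illumination(r);
+
+     radiance_sum += L;
+     double xi = L.illum();               // per-sample illuminance x_i
+     s1 += xi;
+     s2 += xi * xi;
+     n++;
+   }
+
+   // Average over the samples actually taken and record that count.
+   sampleBuffer.update_pixel(radiance_sum / (double)n, x, y);
+   sampleCountBuffer[x + y * sampleBuffer.w] = n;
+ }
+ </code></pre>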