diff --git a/assets/CS-184-Pathtracer-Writeup.pdf b/assets/CS-184-Pathtracer-Writeup.pdf
new file mode 100644
index 0000000..1f69420
Binary files /dev/null and b/assets/CS-184-Pathtracer-Writeup.pdf differ
diff --git a/assets/cloth-sim.png b/assets/cloth-sim.png
new file mode 100644
index 0000000..3b407b2
Binary files /dev/null and b/assets/cloth-sim.png differ
diff --git a/hw3/index.html b/hw3/index.html
index e1a03ac..29e149a 100644
--- a/hw3/index.html
+++ b/hw3/index.html
@@ -108,7 +108,7 @@
- For the ray generation portion of the rendering pipeline, we first made sure to find the boundaries of the
+ For the ray generation portion of the rendering pipeline, I first made sure to find the boundaries of the
  camera space by calculating \(\tan(\frac{\text{hFov}}{2})\) and \(\tan(\frac{\text{vFov}}{2})\), since the
  bottom left corner is defined as \((-\tan(\frac{\text{hFov}}{2}), -\tan(\frac{\text{vFov}}{2}), -1)\) and the
  top right corner is defined as \((\tan(\frac{\text{hFov}}{2}), \tan(\frac{\text{vFov}}{2}), -1)\).
- Then, we used the instance variables hFov and vFov, which are in degrees, to
- calculate the sensor height and width before using linear interpolation to find the camera image coordinates.
+ Then, I used the instance variables hFov and vFov, which are in degrees, to
+ calculate the sensor height and width before using linear interpolation to find the camera image coordinates.
- Afterwards, we used this->c2w to convert my camera image coordinates into
- world space coordinates and also normalized the direction vector. Finally, we constructed the ray with this vector
- and defined the min_t and max_t.
+ Afterwards, I used this->c2w to convert the camera image coordinates into
+ world space coordinates and also normalized the direction vector. Finally, I constructed the ray with this vector
+ and defined its min_t and max_t.
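To make this step concrete, here is a minimal, self-contained sketch of the ray generation logic. It is an illustration rather than the assignment's exact code: the Vec3/Ray structs, the hard-coded identity c2w matrix, and the default FOV and clipping values are all stand-in assumptions.

```cpp
#include <cmath>

struct Vec3 { double x, y, z; };
struct Ray  { Vec3 o, d; double min_t, max_t; };

struct Camera {
    double hFov = 50.0, vFov = 35.0;                 // degrees, per the writeup
    double c2w[3][3] = {{1,0,0},{0,1,0},{0,0,1}};    // camera-to-world rotation
    Vec3 pos{0, 0, 0};
    double nClip = 0.01, fClip = 100.0;              // become min_t / max_t

    Ray generate_ray(double s, double t) const {
        const double d2r = 3.14159265358979323846 / 180.0;
        // Sensor corners: bottom left (-w, -h, -1), top right (w, h, -1).
        double w = std::tan(0.5 * hFov * d2r);
        double h = std::tan(0.5 * vFov * d2r);
        // Linearly interpolate normalized image coords (s, t) across the sensor.
        Vec3 dc{(2.0 * s - 1.0) * w, (2.0 * t - 1.0) * h, -1.0};
        // Rotate into world space with c2w, then normalize the direction.
        Vec3 d{c2w[0][0]*dc.x + c2w[0][1]*dc.y + c2w[0][2]*dc.z,
               c2w[1][0]*dc.x + c2w[1][1]*dc.y + c2w[1][2]*dc.z,
               c2w[2][0]*dc.x + c2w[2][1]*dc.y + c2w[2][2]*dc.z};
        double len = std::sqrt(d.x*d.x + d.y*d.y + d.z*d.z);
        return Ray{pos, {d.x/len, d.y/len, d.z/len}, nClip, fClip};
    }
};
```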
- For the primitive intersection portion of the rendering pipeline, we generated num_samples
- using this->gridSampler->get_sample(). We made sure to normalize the coordinates
- before calling on the previously implemented method to generate the ray. Finally, we called this->gridSampler->get_sample()
+ For the primitive intersection portion of the rendering pipeline, I generated num_samples samples
+ using this->gridSampler->get_sample(). I made sure to normalize the coordinates
+ before calling the previously implemented method to generate the ray. Finally, I called this->est_radiance_global_illumination()
  to get the sample radiance and averaged the radiance to update the pixel in the buffer.
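Below is a matching sketch of that averaging loop. The names generate_ray and est_radiance_global_illumination mirror the project's but are left as declarations here, and the RNG stands in for this->gridSampler->get_sample().

```cpp
#include <cstddef>
#include <random>

struct Vec3 { double x = 0, y = 0, z = 0; };
struct Ray  { /* origin, direction, min_t, max_t as in the previous sketch */ };

Ray  generate_ray(double s, double t);                 // Part 1
Vec3 est_radiance_global_illumination(const Ray& r);   // Parts 3 and 4

Vec3 raytrace_pixel(size_t x, size_t y, size_t w, size_t h, int ns_aa) {
    static std::mt19937 rng{184};
    std::uniform_real_distribution<double> u(0.0, 1.0);
    Vec3 total;
    for (int i = 0; i < ns_aa; ++i) {
        // Jitter within the pixel, then normalize to [0, 1]^2 image coords.
        Ray r = generate_ray((x + u(rng)) / w, (y + u(rng)) / h);
        Vec3 L = est_radiance_global_illumination(r);
        total.x += L.x; total.y += L.y; total.z += L.z;
    }
    // Average the sampled radiance before writing it into the pixel buffer.
    total.x /= ns_aa; total.y /= ns_aa; total.z /= ns_aa;
    return total;
}
```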
@@ -180,24 +180,24 @@
- For the ray-triangle intersection, we implemented the Moller-Trumbore formula. This algorithm takes in a ray with
+ For the ray-triangle intersection, I implemented the Moller-Trumbore formula. This algorithm takes in a ray with
  origin, \(o\), and direction, \(d\), as well as a triangle with vertices, \(p_0\), \(p_1\), and \(p_2\), and solves
  the following equation:
  $$\vec{O} + t\vec{D} = (1 - b_1 - b_2)\,\vec{p}_0 + b_1 \vec{p}_1 + b_2 \vec{p}_2.$$
- We followed the algorithm by defining each of the variables and solving for \(t\), \(b_1\), and \(b_2\). If \(t\)
- was not within the range of the minimum and maximum time range, the ray would be parallel to the triangle and thus
- would not intersect with the triangle given this time range.
+ I followed the algorithm by defining each of the variables and solving for \(t\), \(b_1\), and \(b_2\). If \(t\)
+ was not within the minimum and maximum time range, the hit would fall outside the valid segment of the ray, so
+ there is no intersection for this time range. (A zero determinant instead indicates a ray parallel to the triangle's plane.)
- Otherwise, the ray would intersect within the triangle's plane. However, we needed to make sure it was within the
- triangle so we checked the barycentric coordinates to ensure they were both within [0, 1]. If they were, we
- updated the intersection struct.
+ Otherwise, the ray intersects the triangle's plane. However, I needed to make sure the hit point was within the
+ triangle, so I checked the barycentric coordinates to ensure they were all within [0, 1]. If they were, I
+ updated the intersection struct.
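For reference, here is a compact sketch of the Moller-Trumbore test as described, with small stand-in vector helpers in place of the project's Vector3D and intersection struct.

```cpp
#include <cmath>

struct Vec3 { double x, y, z; };
static Vec3 sub(Vec3 a, Vec3 b)   { return {a.x-b.x, a.y-b.y, a.z-b.z}; }
static Vec3 cross(Vec3 a, Vec3 b) { return {a.y*b.z-a.z*b.y, a.z*b.x-a.x*b.z, a.x*b.y-a.y*b.x}; }
static double dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Returns true and fills t, b1, b2 when ray o + t*d hits triangle (p0, p1, p2)
// within [min_t, max_t].
bool intersect_triangle(Vec3 o, Vec3 d, double min_t, double max_t,
                        Vec3 p0, Vec3 p1, Vec3 p2,
                        double& t, double& b1, double& b2) {
    Vec3 e1 = sub(p1, p0), e2 = sub(p2, p0), s = sub(o, p0);
    Vec3 s1 = cross(d, e2), s2 = cross(s, e1);
    double det = dot(s1, e1);
    if (std::fabs(det) < 1e-12) return false;   // ray parallel to the plane
    double inv = 1.0 / det;
    t  = dot(s2, e2) * inv;
    b1 = dot(s1, s)  * inv;
    b2 = dot(s2, d)  * inv;
    // Hit only if t lies in the valid segment and the barycentric
    // coordinates keep the point inside the triangle.
    return t >= min_t && t <= max_t &&
           b1 >= 0 && b2 >= 0 && b1 + b2 <= 1;
}
```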
- For the ray-sphere intersection, we followed the steps in the class slides. We set the equation of the ray equal
- to the equation of the sphere and solved for the intersection with the quadratic formula. We checked to see if the
- discriminant was positive so that we could find the times of intersection. Because it was a quadratic equation,
+ For the ray-sphere intersection, I followed the steps in the class slides. I set the equation of the ray equal
+ to the equation of the sphere and solved for the intersection with the quadratic formula. I checked to see if the
+ discriminant was positive so that I could find the times of intersection. Because it was a quadratic equation,
- there could be up to two solutions and assigned the smaller one to t1 and
- the larger one to t2. If these times of intersection were within the ray's time range, we updated the intersection struct.
+ there could be up to two solutions, so I assigned the smaller one to t1 and
+ the larger one to t2. If these times of intersection were within the ray's time range, I updated the intersection struct.
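A short sketch of that quadratic solve is below; the function name and the stand-in Vec3 are assumptions, and the caller is expected to clip the returned roots against the ray's valid time range.

```cpp
#include <cmath>

struct Vec3 { double x, y, z; };
static double dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
static Vec3 sub(Vec3 a, Vec3 b)   { return {a.x-b.x, a.y-b.y, a.z-b.z}; }

// Solve |o + t*d - c|^2 = r^2 for t; on success reports roots t1 <= t2.
bool intersect_sphere(Vec3 o, Vec3 d, Vec3 c, double r,
                      double& t1, double& t2) {
    Vec3 oc = sub(o, c);
    double a  = dot(d, d);
    double b  = 2.0 * dot(oc, d);
    double cc = dot(oc, oc) - r * r;
    double disc = b * b - 4.0 * a * cc;
    if (disc < 0) return false;        // negative discriminant: ray misses
    double sq = std::sqrt(disc);
    t1 = (-b - sq) / (2.0 * a);        // smaller time of intersection
    t2 = (-b + sq) / (2.0 * a);        // larger time of intersection
    return true;
}
// Callers then keep a root only if it lies within [min_t, max_t]
// before updating the intersection struct.
```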
- We implemented a recursive BVH construction algorithm. These were the formal steps and cases.
+ I implemented a recursive BVH construction algorithm. These were the formal steps and cases.
- If the number of primitives was at most max_leaf_size, then we created a leaf node and assigned its start and end to
- the passed in start and end iterators. Finally, we returned this leaf node.
+ If the number of primitives was at most max_leaf_size, then I created a leaf node and assigned its start and end to
+ the passed-in start and end iterators. Finally, I returned this leaf node.
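The recursion can be sketched as follows. BBox, Primitive, and BVHNode are simplified stand-ins for the project's classes, and the longest-axis midpoint split used here is one common heuristic; the excerpt above does not specify which split point was actually chosen.

```cpp
#include <algorithm>
#include <vector>

struct BBox {
    double lo[3] = {1e30, 1e30, 1e30}, hi[3] = {-1e30, -1e30, -1e30};
    void expand(const BBox& b) {
        for (int i = 0; i < 3; i++) {
            lo[i] = std::min(lo[i], b.lo[i]);
            hi[i] = std::max(hi[i], b.hi[i]);
        }
    }
    double centroid(int ax) const { return 0.5 * (lo[ax] + hi[ax]); }
};
struct Primitive { BBox get_bbox() const; };   // declaration only, for the sketch
struct BVHNode {
    BBox bb; BVHNode *l = nullptr, *r = nullptr;
    std::vector<Primitive*>::iterator start, end;  // leaf's primitive range
};
using It = std::vector<Primitive*>::iterator;

BVHNode* construct_bvh(It start, It end, size_t max_leaf_size) {
    BBox bbox;
    for (It p = start; p != end; ++p) bbox.expand((*p)->get_bbox());
    BVHNode* node = new BVHNode{bbox};
    // Base case: few enough primitives, so make a leaf holding [start, end).
    if ((size_t)(end - start) <= max_leaf_size) {
        node->start = start; node->end = end;
        return node;
    }
    // Recursive case: split on the longest axis at the bbox midpoint.
    int ax = 0;
    for (int i = 1; i < 3; i++)
        if (bbox.hi[i] - bbox.lo[i] > bbox.hi[ax] - bbox.lo[ax]) ax = i;
    double mid = 0.5 * (bbox.lo[ax] + bbox.hi[ax]);
    It split = std::partition(start, end, [&](Primitive* p) {
        return p->get_bbox().centroid(ax) < mid; });
    if (split == start || split == end) split = start + (end - start) / 2;
    node->l = construct_bvh(start, split, max_leaf_size);
    node->r = construct_bvh(split, end, max_leaf_size);
    return node;
}
```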
- As shown in the table below, we found significant speedups in rendering times when using BVH acceleration. We used
+ As shown in the table below, I found significant speedups in rendering times when using BVH acceleration. I used
  three .dae scenes with differing numbers of primitives. It appears that the
  rendering time is proportional to the average number of intersection tests per ray. Without BVH acceleration, we
  had to test every ray against every primitive, so rendering time scales linearly with the number of primitives. With
- BVH acceleration, we split the primitives into two different nodes so effectively reduced it down logarithmically
+ BVH acceleration, I split the primitives into two child nodes at each level, which reduces traversal to logarithmic cost,
  so we do not need to check as many primitives and the number of intersection tests remains relatively constant. The BVH
  data structure helps us to quickly find the intersection of a ray with the scene and thus significantly reduces
  the time it takes to render the scene.
@@ -374,33 +374,33 @@
  Direct lighting is zero bounce lighting, the light that comes directly from the light source, plus one bounce
- lighting, the light that comes back to the camera after reflecting off the scene once. For zero bounce, we only
- need to return the light from the light source without any bounces. However, for one bounce, we need to determine
- how much light is reflected back to the camera after the ray intersects with the scene. Because we cannot compute
- an infinite integral, we instead used a Monte-Carlo Estimator of the reflectance.
+ lighting, the light that comes back to the camera after reflecting off the scene once. For zero bounce, I only
+ need to return the light from the light source without any bounces. However, for one bounce, I need to determine
+ how much light is reflected back to the camera after the ray intersects with the scene. Because I cannot compute
+ the continuous integral over the hemisphere analytically, I instead used a Monte Carlo estimator of the reflectance.
- For uniform hemisphere sampling, we iterated through the number of samples and sampled a vector uniformly from the
- hemisphere and converted it into the world space. Afterwards, we created the ray with this vector as the
- direction. If the ray intersected the scene, we would calculate the BSDF $f(\text{w\_out}, \text{w\_in})$, the
- emitted radiance $L_i$, and the angle between the
- surface normal and the sampled vector. Finally, we computed the sample mean of the reflectance calculations from
+ For uniform hemisphere sampling, I iterated through the number of samples and sampled a vector uniformly from the
+ hemisphere and converted it into world space. Afterwards, I created the ray with this vector as the
+ direction. If the ray intersected the scene, I would calculate the BSDF $f(\text{w\_out}, \text{w\_in})$, the
+ emitted radiance $L_i$, and the angle between the
+ surface normal and the sampled vector. Finally, I computed the sample mean of the reflectance calculations from
  lecture using the following formula and previous calculations:
  $$\frac{1}{N} \sum_{j = 1}^{N} \frac{f_r(\text{p}, \omega_j \rightarrow \omega_r)\, L_i(\text{p}, \omega_j)\, \cos\theta_j}{p(\omega_j)}$$
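A sketch of that estimator is below. The helpers (the hemisphere sampler, the object-to-world rotation, the scene intersection test, and the BSDF evaluation) are declared as assumptions rather than reproduced from the project.

```cpp
struct Vec3 {
    double x = 0, y = 0, z = 0;
    Vec3 operator+(Vec3 b) const { return {x+b.x, y+b.y, z+b.z}; }
    Vec3 operator*(double s) const { return {x*s, y*s, z*s}; }
    Vec3 mul(Vec3 b) const { return {x*b.x, y*b.y, z*b.z}; }  // per channel
};

Vec3 sample_hemisphere_uniform();      // object-space dir, pdf = 1/(2*pi)
Vec3 to_world(Vec3 v);                 // o2w rotation at the hit point
bool scene_intersect(Vec3 o, Vec3 d, Vec3& emission);  // emission if hit
Vec3 bsdf_f(Vec3 w_out, Vec3 w_in);    // BSDF evaluation

Vec3 estimate_direct_hemisphere(Vec3 hit_p, Vec3 w_out, int N) {
    const double TWO_PI = 6.28318530717958647692;
    Vec3 L{0, 0, 0};
    for (int i = 0; i < N; ++i) {
        Vec3 w_in = sample_hemisphere_uniform();
        Vec3 L_i;
        if (scene_intersect(hit_p, to_world(w_in), L_i)) {
            double cos_theta = w_in.z;   // normal is +z in object space
            // f * L_i * cos(theta) / pdf, with pdf = 1/(2*pi)
            L = L + bsdf_f(w_out, w_in).mul(L_i) * (cos_theta * TWO_PI);
        }
    }
    return L * (1.0 / N);                // sample mean
}
```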
- For importance lighting sampling, instead of sampling from a uniform hemisphere we iterated through each of the
- light sources and calculated the number of
- samples needed based on if it was a delta light and sampled uniformly from each light source. Then, we iterated
- through the number of samples and calculated the emitted radiance along with the sampled world space vector for
- our ray. If the ray intersected the scene, we would calculate the BSDF and the angle between the surface normal
- and the sampled vector and rejected rays that
- were on the opposite side of the surface. For each light, we computed the mean reflectance using the formula from
- above. Finally, we added this mean reflectance to the total reflectance and returned the total reflectance.
+ For light importance sampling, instead of sampling from a uniform hemisphere I iterated through each of the
+ light sources, calculated the number of
+ samples needed based on whether it was a delta light, and sampled directions toward each light source. Then, I iterated
+ through the number of samples and calculated the emitted radiance along with the sampled world space vector for
+ the shadow ray. If the shadow ray reached the light unoccluded, I would calculate the BSDF and the angle between the surface normal
+ and the sampled vector, rejecting samples that
+ were on the opposite side of the surface. For each light, I computed the mean reflectance using the formula
+ above. Finally, I added this mean reflectance to the total reflectance and returned the total reflectance.
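The per-light loop can be sketched like this. SceneLight::sample_L mirrors the project's interface, but the surrounding helpers are simplified assumptions.

```cpp
#include <vector>

struct Vec3 {
    double x = 0, y = 0, z = 0;
    Vec3 operator+(Vec3 b) const { return {x+b.x, y+b.y, z+b.z}; }
    Vec3 operator*(double s) const { return {x*s, y*s, z*s}; }
    Vec3 mul(Vec3 b) const { return {x*b.x, y*b.y, z*b.z}; }
};

struct SceneLight {
    bool is_delta_light() const;
    // Emitted radiance toward hit_p; fills the sampled direction, the
    // distance to the light, and the pdf of that direction.
    Vec3 sample_L(Vec3 hit_p, Vec3& wi, double& dist, double& pdf) const;
};
bool occluded(Vec3 hit_p, Vec3 wi, double dist);  // shadow-ray test
Vec3 bsdf_f(Vec3 w_out, Vec3 w_in_object);
Vec3 to_object(Vec3 w_world);                     // w2o rotation

Vec3 estimate_direct_importance(Vec3 hit_p, Vec3 w_out,
                                const std::vector<SceneLight>& lights,
                                int ns_area_light) {
    Vec3 L{0, 0, 0};
    for (const SceneLight& light : lights) {
        // One sample suffices for a delta light; area lights get many.
        int ns = light.is_delta_light() ? 1 : ns_area_light;
        Vec3 L_light{0, 0, 0};
        for (int i = 0; i < ns; ++i) {
            Vec3 wi; double dist, pdf;
            Vec3 L_i = light.sample_L(hit_p, wi, dist, pdf);
            Vec3 wi_o = to_object(wi);
            if (pdf <= 0 || wi_o.z < 0) continue;  // reject back-facing samples
            if (occluded(hit_p, wi, dist)) continue;
            L_light = L_light + bsdf_f(w_out, wi_o).mul(L_i) * (wi_o.z / pdf);
        }
        L = L + L_light * (1.0 / ns);              // mean reflectance per light
    }
    return L;
}
```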
  Shown in the images above, when there is a low number of light rays, there is more noise in the soft shadows, with
- individual dots making it up. As we increase the number of light rays, however, we noticed that the noise
+ individual dots making it up. As I increase the number of light rays, however, I noticed that the noise
  decreased dramatically. At 64 light rays, the noise was almost completely gone and the soft shadows were much
- smoother. This is because with more light rays, we are able to sample more points on the light source and thus get
+ smoother. This is because with more light rays, I am able to sample more points on the light source and thus get
  a better estimate of the light intensity at a point on the surface. This is especially important for area lights,
- where the light intensity can vary across the light source. Thus, the more light rays we have, the more accurate
- our estimate of the light intensity at a point on the surface and the smoother the soft shadows will be.
+ where the light intensity can vary across the light source. Thus, the more light rays I have, the more accurate
+ my estimate of the light intensity at a point on the surface will be, and the smoother the soft shadows will be.
@@ -504,7 +504,7 @@
- We noticed that importance sampling converged much faster than uniform hemisphere sampling. The soft shadow noise
+ I noticed that importance sampling converged much faster than uniform hemisphere sampling. The soft shadow noise
  in hemisphere sampling comes from the fact that only a small portion of the rays cast actually hit a light source. In
  contrast, importance light sampling only considers the rays that actually contribute to the illumination of the
@@ -529,21 +529,21 @@
- We implemented a recursive function to calculate the indirect lighting as each bounce of the light will need to be
+ I implemented a recursive function to calculate the indirect lighting, as each bounce of the light needs to be
  calculated with the previous bounce. These were the formal steps and cases.
  In the base case, when the depth reaches 1, we simply return one_bounce_radiance.
  In the recursive case, we first calculate one_bounce_radiance for
- the current bounce. Afterwards, we sampled with the BSDF to figure out the next direction the ray will go and
- set the depth to be one less than the current depth. If this ray intersected with the scene, we would recurse to
+ the current bounce. Afterwards, I sampled with the BSDF to figure out the next direction the ray will go and
+ set the depth to be one less than the current depth. If this ray intersected with the scene, I would recurse to
  find the next emitted radiance and apply the reflectance formula. If isAccumBounces
- is true, we add it to the running total radiance and else we
+ is true, I add it to the running total radiance; otherwise, I
  would return just the current level's radiance.
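Here is a sketch of that recursion, including a Russian-roulette continuation probability like the one mentioned later in this writeup; the helper declarations and the 0.7 value are assumptions.

```cpp
struct Vec3 {
    double x = 0, y = 0, z = 0;
    Vec3 operator+(Vec3 b) const { return {x+b.x, y+b.y, z+b.z}; }
    Vec3 operator*(double s) const { return {x*s, y*s, z*s}; }
    Vec3 mul(Vec3 b) const { return {x*b.x, y*b.y, z*b.z}; }
};

Vec3 one_bounce_radiance(Vec3 hit_p, Vec3 w_out);
Vec3 bsdf_sample(Vec3 w_out, Vec3& w_in, double& pdf);  // next direction
bool next_hit(Vec3 hit_p, Vec3 w_in_world, Vec3& p2, Vec3& w_out2);
Vec3 to_world(Vec3 v);
bool coin_flip(double p);                                // true with prob. p

Vec3 at_least_one_bounce(Vec3 hit_p, Vec3 w_out, int depth, bool isAccumBounces) {
    Vec3 L = one_bounce_radiance(hit_p, w_out);   // current bounce
    const double cpdf = 0.7;                      // continuation probability
    if (depth <= 1 || !coin_flip(cpdf)) return L; // base case / roulette stop

    Vec3 w_in; double pdf;
    Vec3 f = bsdf_sample(w_out, w_in, pdf);       // sample the BSDF
    Vec3 p2, w_out2;
    if (next_hit(hit_p, to_world(w_in), p2, w_out2)) {
        // Recurse with depth - 1 and apply the reflectance formula,
        // dividing by the pdf and the continuation probability.
        Vec3 L_next = at_least_one_bounce(p2, w_out2, depth - 1, isAccumBounces);
        Vec3 bounce = f.mul(L_next) * (w_in.z / pdf / cpdf);
        // isAccumBounces: accumulate every level vs. keep only the deepest.
        L = isAccumBounces ? L + bounce : bounce;
    }
    return L;
}
```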
  The first scene, with max_ray_depth=0, only has zero bounce
  lighting, which is the light source itself. However, in the second scene with
- max_ray_depth=1, the light bounces off the floor and onto the bunny and we can
+ max_ray_depth=1, the light bounces off the floor and onto the bunny and I can
  observe the top of the bunny illuminated. Later bounces will also highlight the underside of the bunny as the
  light bounces off the floor and walls and ultimately onto the bunny. This helps to diffuse all lighting.
@@ -800,7 +800,7 @@
  The first scene, with max_ray_depth=0, only has zero bounce
  lighting, which is the light source itself. However, in the second scene with
- max_ray_depth=1, the light bounces off the floor and onto the bunny and we can
+ max_ray_depth=1, the light bounces off the floor and onto the bunny and I can
  observe the top of the bunny illuminated. Later bounces will also highlight the underside of the bunny as the
  light bounces off the floor and walls and ultimately onto the bunny. This helps to diffuse all lighting. However,
  at the 100 depth layer, there are not many very long bounce paths because the continuation probability
@@ -880,10 +880,10 @@
- smoother areas where details are less noticeable. We implemented adaptive sampling by updating \(s_1\) and \(s_2\)
- as defined in the spec. After a multiple of samplesPerBatch, we calculated
- the mean, standard deviation, and \(I\). If \(I \leq \text{maxTolerance} \cdot \mu\), we would stop sampling the
- pixel and save the total number of samples that we have taken to calculate the color correctly. We used the
+ smoother areas where details are less noticeable. I implemented adaptive sampling by updating \(s_1\) and \(s_2\)
+ as defined in the spec. After each multiple of samplesPerBatch samples, I calculated
+ the mean, standard deviation, and \(I\). If \(I \leq \text{maxTolerance} \cdot \mu\), I would stop sampling the
+ pixel and saved the total number of samples taken so the color could be averaged correctly. I used the
  following equations:
$$s_1 = \sum_{i = 1}^{n} x_i$$
$$s_2 = \sum_{i = 1}^{n} x_i^2$$
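Since \(s_1\) and \(s_2\) are running sums, the convergence test is cheap to evaluate per batch. Here is a self-contained sketch; the variance formula and the 1.96 constant for the 95% confidence interval follow the spec's definitions as I understand them.

```cpp
#include <algorithm>
#include <cmath>

struct PixelStats { double s1 = 0, s2 = 0; int n = 0; };

// Accumulate one illuminance sample x; returns true once the pixel's
// estimate has converged and sampling can stop.
bool add_sample_and_check(PixelStats& st, double x,
                          int samplesPerBatch, double maxTolerance) {
    st.s1 += x;        // s1 = sum of x_i
    st.s2 += x * x;    // s2 = sum of x_i^2
    st.n  += 1;
    if (st.n % samplesPerBatch != 0 || st.n < 2) return false;
    double mu    = st.s1 / st.n;
    double var   = (st.s2 - st.s1 * st.s1 / st.n) / (st.n - 1);
    double sigma = std::sqrt(std::max(var, 0.0));
    double I     = 1.96 * sigma / std::sqrt((double)st.n);  // 95% interval
    // Stop sampling; st.n is then saved so the color is averaged correctly.
    return I <= maxTolerance * mu;
}
```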
diff --git a/hw4/CS 184 Mesh Editor.html b/hw4/CS 184 Mesh Editor.html
new file mode 100644
index 0000000..d3a56c3
--- /dev/null
+++ b/hw4/CS 184 Mesh Editor.html
@@ -0,0 +1,1143 @@
+ Give a high-level overview of what you implemented in this project. Think about what you've built as a whole. Share your thoughts on what interesting things you've learned from completing the project.
+ Take some screenshots of scene/pinned2.json from a viewing angle where you can clearly see the cloth wireframe
+ to show the structure of your point masses and springs.
+ (empty screenshot table placeholder)
+ Show us what the wireframe looks like (1) without any shearing constraints,
+ (2) with only shearing constraints, and (3) with all constraints.
+ (empty screenshot table placeholder)
+ Experiment with some of the parameters in the simulation.
+ To do so, pause the simulation at the start with P, modify the values of interest, and then resume by pressing P again.
+ You can also restart the simulation at any time from the cloth's starting position by pressing R.
+ Describe the effects of changing the spring constant ks;
+ how does the cloth behave from start to rest with a very low ks? A high ks?
+ TODO
+ What about for density?
+ TODO
+ What about for damping?
+ TODO
+ For each of the above, observe any noticeable differences in the cloth compared to the default parameters
+ and show us some screenshots of those interesting differences and describe when they occur.
+ (empty screenshot table placeholder)
+ TODO
+ Show us a screenshot of your shaded cloth from scene/pinned4.json in its final resting state!
+ If you choose to use different parameters than the default ones, please list them.
+ Show us screenshots of your shaded cloth from scene/sphere.json in its final resting state
+ on the sphere using the default ks = 5000 as well as with ks = 500 and ks = 50000.
+ (empty screenshot table placeholder)
+ Describe the differences in the results.
+ TODO
+ Show us a screenshot of your shaded cloth lying peacefully at rest on the plane.
+ If you haven't by now, feel free to express your colorful creativity with the cloth!
+ (You will need to complete the shaders portion first to show custom colors.)
+ Show us at least 3 screenshots that document how your cloth falls and folds on itself,
+ starting with an early, initial self-collision
+ and ending with the cloth at a more restful state (even if it is still slightly bouncy on the ground).
+ (empty screenshot table placeholder)
+ Vary the density as well as ks
+ and describe with words and screenshots how they affect the behavior of the cloth as it falls on itself.
+ (empty screenshot table placeholder)
+ TODO
+ Explain in your own words what a shader program is and how vertex and fragment shaders work together to create lighting and material effects.
+ TODO
+ Explain the Blinn-Phong shading model in your own words.
+ Show a screenshot of your Blinn-Phong shader outputting only the ambient component, one outputting only the diffuse
+ component, one outputting only the specular component, and one using the entire Blinn-Phong model.
+ TODO
+ (empty screenshot table placeholder)
+ Show a screenshot of your texture mapping shader using your own custom texture by modifying the textures in /textures/.
+ Show a screenshot of bump mapping on the cloth and on the sphere.
+ Show a screenshot of displacement mapping on the sphere.
+ Use the same texture for both renders.
+ You can either provide your own texture or use one of the ones in the textures directory,
+ BUT choose one that's not the default texture_2.png.
+ Compare the two approaches and resulting renders in your own words.
+ Compare how the two shaders react to the sphere by changing the sphere mesh's coarseness by using -o 16 -a 16 and then -o 128 -a 128.
+ (empty screenshot table placeholder)
+ TODO
+ Show a screenshot of your mirror shader on the cloth and on the sphere.
+ (empty screenshot table placeholder)
+ Explain what you did in your custom shader, if you made one.
+ TODO
+ Partner A worked on TODO.
+ Partner B worked on TODO.
+ The final (optional) part for the mesh competition is where you have the opportunity to be creative and individual,
+ so be sure to provide a good description of what you were going for, what you did, and how you did it.
+ N/A
+ If you implemented any additional technical features for the cloth simulation,
+ clearly describe what you did and provide screenshots that illustrate your work.
+ If it is an improvement compared to something already existing on the cloth simulation,
+ compare and contrast them both in words and in images.
+ N/A
+ + + +PointMass
objects' new positions to simulate the cloth's movement. Then, I
+ implemented a way to detect cloth collisions with outside objects as well as a way to resolve self-collisions. Finally,
+ I added wind forces to the cloth to simulate its movement in the wind.
+ Because the cloth's point masses needed to be stored in row-major order, I first looped through the number of height
+ points, as this represented each individual row, before iterating through each of the width points to create
+ the required grid of point masses. I calculated the x position in order to fit
+ num_width_points within the cloth's width. Depending on whether the
+ cloth was horizontal or vertical, I set y to either be 1
+ or the correctly spaced position fitting the necessary number of height points within the cloth's height, and set z
+ to either the correctly spaced position fitting the necessary number
+ of height points or a small random offset. Before inserting the PointMass,
+ I checked whether the point mass should be pinned or not. Finally, I iterated through the two-dimensional grid
+ positions, converted them into the PointMass one-dimensional
+ vector position, and checked for boundary conditions before creating the STRUCTURAL, SHEARING, and BENDING
+ springs as listed in the homework description.
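A sketch of this grid construction is below. The PointMass/Spring structs are simplified stand-ins for the project's classes, the plus/minus 1/1000 z-offset for vertical cloths follows the spec's suggestion, and pin positions are assumed to be passed in as (i, j) pairs. It assumes at least 2 points along each axis.

```cpp
#include <cstdlib>
#include <utility>
#include <vector>

enum Orientation { HORIZONTAL, VERTICAL };
enum SpringType  { STRUCTURAL, SHEARING, BENDING };
struct Vec3 { double x, y, z; };
struct PointMass { Vec3 pos; bool pinned; };
struct Spring { int a, b; SpringType type; };

void build_grid(double width, double height, int nw, int nh,
                Orientation o, const std::vector<std::pair<int,int>>& pins,
                std::vector<PointMass>& pm, std::vector<Spring>& springs) {
    // Row-major: outer loop over height rows j, inner loop over width columns i.
    for (int j = 0; j < nh; ++j)
      for (int i = 0; i < nw; ++i) {
        double x = width * i / (nw - 1);
        double y, z;
        if (o == HORIZONTAL) { y = 1.0; z = height * j / (nh - 1); }
        else { y = height * j / (nh - 1);
               z = ((double)rand() / RAND_MAX - 0.5) / 500.0; }  // +/- 1/1000
        bool pinned = false;
        for (auto& p : pins) pinned |= (p.first == i && p.second == j);
        pm.push_back({{x, y, z}, pinned});
      }
    auto idx = [&](int i, int j) { return j * nw + i; };  // 2D -> 1D position
    for (int j = 0; j < nh; ++j)
      for (int i = 0; i < nw; ++i) {
        // Structural: left and above; shearing: diagonals; bending: two away.
        if (i > 0) springs.push_back({idx(i-1, j), idx(i, j), STRUCTURAL});
        if (j > 0) springs.push_back({idx(i, j-1), idx(i, j), STRUCTURAL});
        if (i > 0 && j > 0)    springs.push_back({idx(i-1, j-1), idx(i, j), SHEARING});
        if (i < nw-1 && j > 0) springs.push_back({idx(i+1, j-1), idx(i, j), SHEARING});
        if (i > 1) springs.push_back({idx(i-2, j), idx(i, j), BENDING});
        if (j > 1) springs.push_back({idx(i, j-2), idx(i, j), BENDING});
      }
}
```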
+
+ Take some screenshots of scene/pinned2.json from a viewing angle
+ where you can clearly see the cloth wireframe to show the structure of your point masses and springs.
+ Here are some screenshots of scene/pinned2.json:
+ (screenshot: pinned2.json close up view)
+ (screenshot: pinned2.json above view)
+ (screenshot: pinned2.json no shearing constraints)
+ (screenshot: pinned2.json only shearing constraints)
+ (screenshot: pinned2.json all constraints)
+ Here are screenshots of scene/pinned4.json in its final resting state for both the wireframe and shaded versions:
+ (screenshot: pinned4.json wireframe)
+ (screenshot: pinned4.json normals)