diff --git a/assets/CS-184-Pathtracer-Writeup.pdf b/assets/CS-184-Pathtracer-Writeup.pdf
new file mode 100644
index 0000000..1f69420
Binary files /dev/null and b/assets/CS-184-Pathtracer-Writeup.pdf differ
diff --git a/assets/cloth-sim.png b/assets/cloth-sim.png
new file mode 100644
index 0000000..3b407b2
Binary files /dev/null and b/assets/cloth-sim.png differ
diff --git a/hw3/index.html b/hw3/index.html
index e1a03ac..29e149a 100644
--- a/hw3/index.html
+++ b/hw3/index.html
@@ -108,7 +108,7 @@

CS 184: Computer Graphics and Imaging, Spring 2024

Homework 3: PathTracer

- Ian Dong and Colin Steidtmann
+ Ian Dong



@@ -117,13 +117,13 @@

Ian Dong and Colin Steidtmann

Overview

- In this homework, we implemented a path tracing renderer. First, we worked on generating camera rays from image
+ In this homework, I implemented a path tracing renderer. First, I worked on generating camera rays from image
  space
- to sensor in camera space and their intersection with triangles and spheres. Then, we built a bounding volume
- hierarchy to accelerate ray intersection tests and speed up the path tracers rendering. Afterwards, we explored
- direct illumination to simulate light sources and render images with realistic shadowing. Then, we implemented
+ to sensor in camera space and their intersection with triangles and spheres. Then, I built a bounding volume
+ hierarchy to accelerate ray intersection tests and speed up the path tracer's rendering. Afterwards, I explored
+ direct illumination to simulate light sources and render images with realistic shadowing. Then, I implemented
  global
- illumination to simulate indirect lighting and reflections using diffuse BSDF. Finally, we implemented adaptive
+ illumination to simulate indirect lighting and reflections using a diffuse BSDF. Finally, I implemented adaptive
  sampling to reduce noise in the rendered images.

@@ -140,7 +140,7 @@

Overview



- Section I: Ray Generation and Scene Intersection (20 Points)
+ Section I: Ray Generation and Scene Intersection

@@ -152,22 +152,22 @@

- For the ray generation portion of the rendering pipeline, we first made sure to find the boundaries of the
+ For the ray generation portion of the rendering pipeline, I first made sure to find the boundaries of the
  camera space by calculating \(\text{tan}(\frac{\text{hFov}}{2})\) and \(\text{tan}(\frac{\text{vFov}}{2})\),
  since the bottom left corner is defined as (\(-\text{tan}(\frac{\text{hFov}}{2})\),
  \(-\text{tan}(\frac{\text{vFov}}{2})\), \(-1\)) and the top right corner is defined as
  (\(\text{tan}(\frac{\text{hFov}}{2})\), \(\text{tan}(\frac{\text{vFov}}{2})\), \(-1\)). Then, we used the
  instance variables hFov and vFov, which are in degrees (converted to radians), to calculate the height and
  width of the sensor plane before using linear interpolation to find the camera image coordinates.
- Afterwards, we used this->c2w to convert my camera image coordinates into
- world space coordinates and also normalized the direction vector. Finally, we constructed the ray with this vector
+ Afterwards, I used this->c2w to convert my camera image coordinates into
+ world space coordinates and also normalized the direction vector. Finally, I constructed the ray with this vector
  and defined the min_t and max_t.
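A minimal sketch of this mapping, assuming the starter code's Vector3D, Matrix3x3 c2w, Ray type, a radians() helper, and nClip/fClip clipping constants:

<pre><code class="language-cpp">
// Sketch: map normalized image coordinates (x, y) in [0,1]^2 to a world-space ray.
Ray Camera::generate_ray(double x, double y) const {
  // Half-extents of the sensor plane at z = -1, from the fields of view (degrees -> radians).
  double w_half = tan(radians(hFov) * 0.5);
  double h_half = tan(radians(vFov) * 0.5);

  // Linearly interpolate between bottom-left (-w_half, -h_half, -1)
  // and top-right (w_half, h_half, -1).
  Vector3D d_camera(w_half * (2.0 * x - 1.0), h_half * (2.0 * y - 1.0), -1.0);

  // Rotate into world space and normalize the direction.
  Vector3D d_world = (c2w * d_camera).unit();

  Ray ray(pos, d_world);
  ray.min_t = nClip;  // near clipping plane
  ray.max_t = fClip;  // far clipping plane
  return ray;
}
</code></pre>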

- For the primitive intersection portion of the rendering pipeline, we generated num_samples using
- this->gridSampler->get_sample(). We made sure to normalize the coordinates
- before calling on the previously implemented method to generate the ray. Finally, we called
+ For the pixel sampling portion of the rendering pipeline, I generated num_samples samples using
+ this->gridSampler->get_sample(). I made sure to normalize the coordinates
+ before calling the previously implemented method to generate the ray. Finally, I called
  this->est_radiance_global_illumination() to get the sample radiance and
  averaged the radiance to update the pixel in the buffer.
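As a rough sketch, assuming the starter's ns_aa sample count, a sampleBuffer with an update_pixel method, and a Vector2D jitter from gridSampler:

<pre><code class="language-cpp">
// Sketch: average ns_aa radiance estimates for the pixel at (x, y).
void PathTracer::raytrace_pixel(size_t x, size_t y) {
  Vector3D radiance_sum(0, 0, 0);
  for (int i = 0; i < ns_aa; i++) {
    Vector2D offset = gridSampler->get_sample(); // jitter inside the pixel
    double u = (x + offset.x) / sampleBuffer.w;  // normalize to [0, 1] image space
    double v = (y + offset.y) / sampleBuffer.h;
    Ray ray = camera->generate_ray(u, v);
    radiance_sum += est_radiance_global_illumination(ray);
  }
  sampleBuffer.update_pixel(radiance_sum / (double)ns_aa, x, y);
}
</code></pre>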
@@ -180,24 +180,24 @@
Explain the triangle/sphere intersection algorithm you implemented in your own words.

- For the ray-triangle intersection, we implemented the Moller-Trumbore formula. This algorithm takes in a ray with
+ For the ray-triangle intersection, I implemented the Moller-Trumbore formula. This algorithm takes in a ray with
  origin, \(o\), and direction, \(d\), as well as a triangle with vertices, \(p_0\), \(p_1\), and \(p_2\), and
  solves the following equation:
  $$\vec{O} + t\vec{D} = (1 - b_1 - b_2) \vec{p}_0 + b_1 \vec{p}_1 + b_2 \vec{p}_2.$$
- We followed the algorithm by defining each of the variables and solving for \(t\), \(b_1\), and \(b_2\). If \(t\)
+ I followed the algorithm by defining each of the variables and solving for \(t\), \(b_1\), and \(b_2\). If \(t\)
  was not within the ray's minimum and maximum time range, the ray would not intersect the triangle within this
  time range.
- Otherwise, the ray would intersect within the triangle's plane. However, we needed to make sure it was within the
- triangle so we checked the barycentric coordinates to ensure they were both within [0, 1]. If they were, we
+ Otherwise, the ray would intersect within the triangle's plane. However, I needed to make sure it was within the
+ triangle, so I checked the barycentric coordinates to ensure they were both within [0, 1] and summed to at most
+ 1. If they were, I updated the intersection struct.
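A compact sketch of the test under these definitions (cross, dot, and the Ray/Vector3D types are assumed from the starter code; the near-zero determinant cutoff is an arbitrary epsilon):

<pre><code class="language-cpp">
// Sketch of Moller-Trumbore: solve o + t*d = (1-b1-b2)*p0 + b1*p1 + b2*p2 via Cramer's rule.
bool moller_trumbore(const Ray &r, const Vector3D &p0, const Vector3D &p1,
                     const Vector3D &p2, double &t, double &b1, double &b2) {
  Vector3D e1 = p1 - p0, e2 = p2 - p0, s = r.o - p0;
  Vector3D s1 = cross(r.d, e2), s2 = cross(s, e1);

  double det = dot(s1, e1);
  if (fabs(det) < 1e-12) return false;   // ray parallel to the triangle's plane

  t  = dot(s2, e2)  / det;
  b1 = dot(s1, s)   / det;
  b2 = dot(s2, r.d) / det;

  // Inside the triangle iff all barycentric weights lie in [0, 1].
  if (b1 < 0 || b2 < 0 || b1 + b2 > 1) return false;
  // Valid only within the ray's active interval.
  return t >= r.min_t && t <= r.max_t;
}
</code></pre>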

- For the ray-sphere intersection, we followed the steps in the class slides. We set the equation of the ray equal
- to the equation of the sphere and solved for the intersection with the quadratic formula. We checked to see if the
- discriminant was positive so that we could find the times of intersection. Because it was a quadratic equation,
+ For the ray-sphere intersection, I followed the steps in the class slides. I set the equation of the ray equal
+ to the equation of the sphere and solved for the intersection with the quadratic formula. I checked to see if the
+ discriminant was positive so that I could find the times of intersection. Because it was a quadratic equation,
  there could be up to two solutions, and I assigned the smaller one to t1 and
- the larger one to t2. If these times of intersection were within the ray's time range, we updated the intersection struct.
+ the larger one to t2. If these times of intersection were within the ray's time range, I updated the intersection struct.
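A sketch of the quadratic solve, again assuming the starter's Vector3D and Ray types; expanding \(|o + td - c|^2 = R^2\) gives the coefficients below:

<pre><code class="language-cpp">
// Sketch: (d.d) t^2 + 2 d.(o-c) t + |o-c|^2 - R^2 = 0 for a sphere with center c, radius R.
bool sphere_intersect(const Ray &r, const Vector3D &c, double R,
                      double &t1, double &t2) {
  Vector3D oc = r.o - c;
  double a  = dot(r.d, r.d);
  double b  = 2.0 * dot(r.d, oc);
  double cc = dot(oc, oc) - R * R;

  double disc = b * b - 4.0 * a * cc;
  if (disc < 0) return false;       // no real roots: the ray misses the sphere

  double sq = sqrt(disc);
  t1 = (-b - sq) / (2.0 * a);       // smaller root (entry point)
  t2 = (-b + sq) / (2.0 * a);       // larger root (exit point)
  return t2 >= r.min_t && t1 <= r.max_t;
}
</code></pre>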


@@ -248,22 +248,22 @@

Walk through your BVH construction algorithm. Explain the heuristic you chose for picking the splitting point.

- We implemented a recursive BVH construction algorithm. These were the formal steps and cases.
+ I implemented a recursive BVH construction algorithm. These were the formal steps and cases.

  1.
    Base Case: If the number of primitives is less than or equal to
-   max_leaf_size, then we created a leaf node and assigned its start and end to
-   the passed in start and end iterators. Finally, we returned this leaf node.
+   max_leaf_size, then I created a leaf node and assigned its start and end to
+   the passed-in start and end iterators. Finally, I returned this leaf node.
  2.
-   Recursive Case: Otherwise, we needed to find the best split point to create the left and right BVH nodes. First,
-   we iterated through all three dimensions and created a new function to find the median of the primitives for the
-   current dimension. we temporarily split the primitives into the two nodes based on this median axis. The
-   heuristic we used was the sum of the surface areas of the two bounding boxes and chose the axis that minimized
-   this sum. Afterwards, we split the primitives into the two nodes, updated the iterator to connect them, and
+   Recursive Case: Otherwise, I needed to find the best split point to create the left and right BVH nodes. First,
+   I iterated through all three dimensions and created a new function to find the median of the primitives for the
+   current dimension. I temporarily split the primitives into the two nodes based on this median axis. The
+   heuristic I used was the sum of the surface areas of the two bounding boxes, and I chose the axis that minimized
+   this sum. Afterwards, I split the primitives into the two nodes, updated the iterator to connect them, and
    found the midpoint before passing the new start and end iterators into the recursive BVH construction
-   algorithm. If at any time a split led to all of the primitives being in one node, we would just follow the base
-   case logic and assign the start and end to the node. Finally, we returned the node.
+   algorithm. If at any time a split led to all of the primitives being in one node, I would just follow the base
+   case logic and assign the start and end to the node. Finally, I returned the node (see the sketch after this
+   list).
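A condensed sketch of this construction, assuming starter types (BVHNode, BBox with centroid(), Primitive, INF_D); the surface_area() helper on BBox is an assumption, and splitting strictly at the median sidesteps the empty-side edge case mentioned above:

<pre><code class="language-cpp">
// Sketch of the recursive build with a "sum of child surface areas" split heuristic.
BVHNode *construct_bvh(std::vector<Primitive *>::iterator start,
                       std::vector<Primitive *>::iterator end,
                       size_t max_leaf_size) {
  BBox bbox;
  for (auto p = start; p != end; p++) bbox.expand((*p)->get_bbox());
  BVHNode *node = new BVHNode(bbox);

  // Base case: few enough primitives to store directly in a leaf.
  if ((size_t)(end - start) <= max_leaf_size) {
    node->start = start;
    node->end = end;
    return node;
  }

  // Try the median split on each axis; keep the one minimizing the
  // sum of the two children's bounding-box surface areas.
  int best_axis = 0;
  double best_cost = INF_D;
  for (int axis = 0; axis < 3; axis++) {
    std::sort(start, end, [axis](Primitive *a, Primitive *b) {
      return a->get_bbox().centroid()[axis] < b->get_bbox().centroid()[axis];
    });
    auto mid = start + (end - start) / 2;
    BBox left, right;
    for (auto p = start; p != mid; p++) left.expand((*p)->get_bbox());
    for (auto p = mid; p != end; p++) right.expand((*p)->get_bbox());
    double cost = left.surface_area() + right.surface_area();
    if (cost < best_cost) { best_cost = cost; best_axis = axis; }
  }

  // Re-sort along the winning axis and recurse on the two halves.
  std::sort(start, end, [best_axis](Primitive *a, Primitive *b) {
    return a->get_bbox().centroid()[best_axis] < b->get_bbox().centroid()[best_axis];
  });
  auto mid = start + (end - start) / 2;
  node->l = construct_bvh(start, mid, max_leaf_size);
  node->r = construct_bvh(mid, end, max_leaf_size);
  return node;
}
</code></pre>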

@@ -311,11 +311,11 @@

Present your results in a one-paragraph analysis.

- As shown in the table below, we found significant speedups in rendering times when using BVH acceleration. We used
+ As shown in the table below, I found significant speedups in rendering times when using BVH acceleration. I used
  three .dae scenes with differing numbers of primitives. It appears that the
  rendering time is proportional to the average number of intersection tests per ray. Without BVH acceleration, we
  had to test every ray against every primitive, so rendering time scales linearly with the number of primitives. With
- BVH acceleration, we split the primitives into two different nodes so effectively reduced it down logarithmically
+ BVH acceleration, I split the primitives into two nodes at each level, which effectively reduces the search to
+ logarithmic time, so far fewer primitives need to be checked and the intersection tests per ray remain relatively
+ constant.
  The BVH data structure helps us to quickly find the intersection of a ray with the scene and thus significantly
  reduces the time it takes to render the scene.

@@ -374,33 +374,33 @@

  Direct lighting is zero bounce lighting, the light that comes directly from the light source, plus one bounce
- lighting, the light that comes back to the camera after reflecting off the scene once. For zero bounce, we only
- need to return the light from the light source without any bounces. However, for one bounce, we need to determine
- how much light is reflected back to the camera after the ray intersects with the scene. Because we cannot compute
- an infinite integral, we instead used a Monte-Carlo Estimator of the reflectance.
+ lighting, the light that comes back to the camera after reflecting off the scene once. For zero bounce, I only
+ need to return the light from the light source without any bounces. However, for one bounce, I need to determine
+ how much light is reflected back to the camera after the ray intersects with the scene. Because I cannot compute
+ the reflectance integral analytically, I instead used a Monte Carlo estimator of the reflectance.

- For uniform hemisphere sampling, we iterated through the number of samples and sampled a vector uniformly from the
- hemisphere and converted it into the world space. Afterwards, we created the ray with this vector as the
- direction. If the ray intersected the scene, we would calculate the BSDF $f(\text{w_out}, \text{w_in})$, the
+ For uniform hemisphere sampling, I iterated through the number of samples and sampled a vector uniformly from the
+ hemisphere and converted it into world space. Afterwards, I created the ray with this vector as the
+ direction. If the ray intersected the scene, I would calculate the BSDF $f(\omega_{\text{out}}, \omega_{\text{in}})$, the
  emitted radiance $L_i$, and the angle between the
- surface normal and the sampled vector. Finally, we computed the sample mean of the reflectance calculations from
+ surface normal and the sampled vector. Finally, I computed the sample mean of the reflectance calculations from
  lecture using the following formula and previous calculations:
  $$\frac{1}{N} \sum_{j = 1}^{N} \frac{f_r(\text{p}, \omega_j \rightarrow \omega_r) L_i(\text{p}, \omega_j) \cos\theta_j}{p(\omega_j)}$$
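A sketch of this estimator, assuming the starter's make_coord_space frame helper, hemisphereSampler, and BVH intersect API; the hemisphere pdf is \(\frac{1}{2\pi}\), so dividing by it appears as multiplication by \(2\pi\):

<pre><code class="language-cpp">
// Sketch: uniform-hemisphere Monte Carlo estimate of one-bounce direct lighting.
Vector3D estimate_direct_hemisphere(const Ray &r, const Intersection &isect) {
  Matrix3x3 o2w;
  make_coord_space(o2w, isect.n);            // frame with the surface normal as +z
  Matrix3x3 w2o = o2w.T();

  Vector3D hit_p = r.o + r.d * isect.t;
  Vector3D w_out = w2o * (-r.d);
  Vector3D L_out(0, 0, 0);
  int num_samples = scene->lights.size() * ns_area_light;

  for (int i = 0; i < num_samples; i++) {
    Vector3D w_in = hemisphereSampler->get_sample();   // object space; w_in.z = cos(theta)
    Ray sample_ray(hit_p, o2w * w_in);
    sample_ray.min_t = EPS_F;                          // offset to avoid self-intersection

    Intersection light_isect;
    if (bvh->intersect(sample_ray, &light_isect)) {
      // f * L_i * cos(theta) / pdf, with pdf = 1 / (2*pi) over the hemisphere.
      L_out += isect.bsdf->f(w_out, w_in)
             * light_isect.bsdf->get_emission()
             * w_in.z * 2.0 * PI;
    }
  }
  return L_out / (double)num_samples;   // sample mean
}
</code></pre>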

- For importance lighting sampling, instead of sampling from a uniform hemisphere we iterated through each of the
+ For importance lighting sampling, instead of sampling from a uniform hemisphere I iterated through each of the
  light sources and calculated the number of
- samples needed based on if it was a delta light and sampled uniformly from each light source. Then, we iterated
+ samples needed based on whether it was a delta light and sampled uniformly from each light source. Then, I iterated
  through the number of samples and calculated the emitted radiance along with the sampled world space vector for
  our ray. If the ray intersected the scene, we would calculate the BSDF and the angle
  between the surface normal and the sampled vector and rejected rays that
- were on the opposite side of the surface. For each light, we computed the mean reflectance using the formula from
- above. Finally, we added this mean reflectance to the total reflectance and returned the total reflectance.
+ were on the opposite side of the surface. For each light, I computed the mean reflectance using the formula from
+ above. Finally, I added this mean reflectance to the total reflectance and returned the total reflectance.
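A sketch of the light-sampling loop, assuming the starter's SceneLight::sample_L signature (returning the radiance and filling in the world-space direction, distance to the light, and pdf):

<pre><code class="language-cpp">
// Sketch: importance-sample each light, shadow-test, and accumulate per-light means.
Vector3D estimate_direct_importance(const Ray &r, const Intersection &isect) {
  Matrix3x3 o2w;
  make_coord_space(o2w, isect.n);
  Matrix3x3 w2o = o2w.T();
  Vector3D hit_p = r.o + r.d * isect.t;
  Vector3D w_out = w2o * (-r.d);
  Vector3D L_out(0, 0, 0);

  for (SceneLight *light : scene->lights) {
    int n = light->is_delta_light() ? 1 : ns_area_light;  // one sample suffices for point lights
    Vector3D L_light(0, 0, 0);
    for (int i = 0; i < n; i++) {
      Vector3D w_in_world;
      double dist, pdf;
      Vector3D radiance = light->sample_L(hit_p, &w_in_world, &dist, &pdf);
      Vector3D w_in = w2o * w_in_world;
      if (w_in.z < 0) continue;        // reject directions behind the surface

      // Shadow ray: the light contributes only if nothing blocks it.
      Ray shadow(hit_p, w_in_world);
      shadow.min_t = EPS_F;
      shadow.max_t = dist - EPS_F;
      Intersection blocked;
      if (!bvh->intersect(shadow, &blocked))
        L_light += isect.bsdf->f(w_out, w_in) * radiance * w_in.z / pdf;
    }
    L_out += L_light / (double)n;      // per-light sample mean
  }
  return L_out;
}
</code></pre>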


@@ -448,7 +448,7 @@


- We noticed that importance sampling converged much faster than uniform hemisphere sampling. The soft shadow noise
+ I noticed that importance sampling converged much faster than uniform hemisphere sampling. The soft shadow noise
  in hemisphere sampling comes from the fact that only a small portion of the rays cast actually hit the light
  source. In contrast, importance lighting sampling only considers the rays that actually
  contribute to the illumination of the

@@ -489,11 +489,11 @@

  As shown in the images above, when there is a low number of light rays, there is more noise in the soft shadows,
- with individual dots making it up. As we increase the number of light rays, however, we noticed that the noise
+ with individual dots making them up. As I increased the number of light rays, however, I noticed that the noise
  decreased dramatically. At 64 light rays, the noise was almost completely gone and the soft shadows were much
- smoother. This is because with more light rays, we are able to sample more points on the light source and thus get
+ smoother. This is because with more light rays, I am able to sample more points on the light source and thus get
  a better estimate of the light intensity at a point on the surface. This is especially important for area lights,
- where the light intensity can vary across the light source. Thus, the more light rays we have, the more accurate
+ where the light intensity can vary across the light source. Thus, the more light rays I have, the more accurate
  my estimate of the light intensity at a point on the surface and the smoother the soft shadows will be.

@@ -504,7 +504,7 @@

Compare the results between uniform hemisphere sampling and lighting sampling in a one-paragraph analysis.

- We noticed that importance sampling converged much faster than uniform hemisphere sampling. The soft shadow noise
+ I noticed that importance sampling converged much faster than uniform hemisphere sampling. The soft shadow noise
  in hemisphere sampling comes from the fact that only a small portion of the rays cast actually hit the light
  source. In contrast, importance lighting sampling only considers the rays that actually
  contribute to the illumination of the

@@ -529,21 +529,21 @@

Walk through your implementation of the indirect lighting function.

- We implemented a recursive function to calculate the indirect lighting as each bounce of the light will need to be
+ I implemented a recursive function to calculate the indirect lighting, as each bounce of the light needs to be
  calculated from the previous bounce. These were the formal steps and cases.

  1.
-   Base Case: If the ray's depth reaches 1, we can just return one_bounce_radiance.
+   Base Case: If the ray's depth reaches 1, I can just return one_bounce_radiance.
  2.
-   Recursive Case: Otherwise, we will flip a biased coin and continue path tracing with probability
-   \(\text{continuation}\_\text{prob}\). Then, we calculated the one_bounce_radiance for
-   the current bounce. Afterwards, we sampled with the BSDF to figure out the next direction the ray will go and
-   set the depth to be one less than the current depth. If this ray intersected with the scene, we would recurse to
+   Recursive Case: Otherwise, I will flip a biased coin and continue path tracing with probability
+   \(\text{continuation}\_\text{prob}\). Then, I calculated the one_bounce_radiance for
+   the current bounce. Afterwards, I sampled with the BSDF to figure out the next direction the ray will go and
+   set the depth to be one less than the current depth. If this ray intersected with the scene, I would recurse to
    find the next emitted radiance and apply the reflectance formula. If
-   isAccumBounces is true, we add it to the running total radiance and else we
+   isAccumBounces is true, I add it to the running total radiance; otherwise, I
    would return just the current level's radiance (see the sketch after this list).
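A sketch of the recursion, assuming the starter's coin_flip, sample_f (returning the BSDF value and filling in the direction and pdf), and Ray depth field; the isAccumBounces branch is omitted for brevity, and the 0.65 continuation probability is just an example value:

<pre><code class="language-cpp">
// Sketch: Russian-roulette recursion over indirect bounces.
Vector3D at_least_one_bounce_radiance(const Ray &r, const Intersection &isect) {
  Matrix3x3 o2w;
  make_coord_space(o2w, isect.n);
  Vector3D hit_p = r.o + r.d * isect.t;
  Vector3D w_out = o2w.T() * (-r.d);

  Vector3D L_out = one_bounce_radiance(r, isect);
  if (r.depth <= 1) return L_out;                 // base case: no deeper bounces

  double continuation_prob = 0.65;
  if (coin_flip(continuation_prob)) {             // Russian roulette
    Vector3D w_in;
    double pdf;
    Vector3D f = isect.bsdf->sample_f(w_out, &w_in, &pdf);

    Ray bounce(hit_p, o2w * w_in);
    bounce.min_t = EPS_F;
    bounce.depth = r.depth - 1;                   // one fewer bounce remaining

    Intersection next;
    if (bvh->intersect(bounce, &next)) {
      // Divide by both the sampling pdf and the continuation probability
      // so the estimator stays unbiased.
      L_out += at_least_one_bounce_radiance(bounce, next)
             * f * w_in.z / pdf / continuation_prob;
    }
  }
  return L_out;
}
</code></pre>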
@@ -609,7 +609,7 @@

  illumination only illuminates the portions of the scene that the light rays can directly reach, so they miss out
  on the ceiling and undersides of the spheres. Indirect lighting is the opposite, as it only illuminates the
  portions of the scene that the light rays cannot directly reach, so they miss out on the light source and the
- direct light and illuminate the underside of the spheres. Thus, we need both direct and indirect illumination to
+ direct light and illuminate the underside of the spheres. Thus, I need both direct and indirect illumination to
  get a complete picture of the scene.

@@ -738,7 +738,7 @@

  As shown in the images above, each bounce helps to convey more information about the scene as it lights up more
  portions. The first scene, with max_ray_depth=0, only has zero bounce
  lighting, which is the light source itself. However, in the second scene with
- max_ray_depth=1, the light bounces off the floor and onto the bunny and we can
+ max_ray_depth=1, the light bounces off the floor and onto the bunny and I can
  observe the top of the bunny illuminated. Later bounces will also highlight the underside of the bunny as the
  light bounces off the floor and walls and ultimately onto the bunny. This helps to diffuse all lighting.

@@ -800,7 +800,7 @@

  As shown in the images above, each bounce helps to convey more information about the scene as it lights up more
  portions. The first scene, with max_ray_depth=0, only has zero bounce
  lighting, which is the light source itself. However, in the second scene with
- max_ray_depth=1, the light bounces off the floor and onto the bunny and we can
+ max_ray_depth=1, the light bounces off the floor and onto the bunny and I can
  observe the top of the bunny illuminated. Later bounces will also highlight the underside of the bunny as the
  light bounces off the floor and walls and ultimately onto the bunny. This helps to diffuse all lighting.
  However, at a max depth of 100, there are not many very-high-bounce rays because the continuation probability

@@ -880,10 +880,10 @@

  increasing the sample rate and thus the rendering time, adaptive sampling concentrates the samples in the more
  difficult parts of the image, as there are some pixels that converge quicker than others. The idea is to allocate
  more samples to regions that require higher fidelity representation, while reducing the number of samples in
- smoother areas where details are less noticeable. We implemented adaptive sampling by updating \(s_1\) and \(s_2\)
- as defined in the spec. After a multiple of samplesPerBatch, we calculated
- the mean, standard deviation, and \(I\). If \(I \leq \text{maxTolerance} \cdot \mu\), we would stop sampling the
- pixel and save the total number of samples that we have taken to calculate the color correctly. We used the
+ smoother areas where details are less noticeable. I implemented adaptive sampling by updating \(s_1\) and \(s_2\)
+ as defined in the spec. After a multiple of samplesPerBatch, I calculated
+ the mean, standard deviation, and \(I\). If \(I \leq \text{maxTolerance} \cdot \mu\), I would stop sampling the
+ pixel and save the total number of samples that I have taken to calculate the color correctly. I used the
  following equations:
  $$s_1 = \sum_{i = 1}^{n} x_i$$
  $$s_2 = \sum_{i = 1}^{n} x_i^2$$
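Here \(\mu = \frac{s_1}{n}\), \(\sigma^2 = \frac{1}{n-1}\left(s_2 - \frac{s_1^2}{n}\right)\), and \(I = 1.96 \cdot \frac{\sigma}{\sqrt{n}}\), per the spec. A fragment-level sketch of the batch test inside raytrace_pixel (u/v jittering as in the earlier pixel-sampling sketch; illum() and sampleCountBuffer are assumed from the starter code):

<pre><code class="language-cpp">
// Fragment: test convergence at the start of each batch, then take a sample.
double s1 = 0.0, s2 = 0.0;
Vector3D radiance_sum(0, 0, 0);
int n = 0;
for (; n < ns_aa; n++) {
  if (n > 0 && n % samplesPerBatch == 0) {
    double mu = s1 / n;
    double sigma2 = (s2 - s1 * s1 / n) / (n - 1);
    double I = 1.96 * sqrt(sigma2 / n);
    if (I <= maxTolerance * mu) break;      // pixel has converged; stop early
  }
  Vector2D offset = gridSampler->get_sample();
  Ray ray = camera->generate_ray((x + offset.x) / sampleBuffer.w,
                                 (y + offset.y) / sampleBuffer.h);
  Vector3D radiance = est_radiance_global_illumination(ray);
  double xi = radiance.illum();             // scalar illuminance x_i
  s1 += xi;
  s2 += xi * xi;
  radiance_sum += radiance;
}
sampleBuffer.update_pixel(radiance_sum / (double)n, x, y);
sampleCountBuffer[x + y * sampleBuffer.w] = n;  // record the actual sample count
</code></pre>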
diff --git a/hw4/CS 184 Mesh Editor.html b/hw4/CS 184 Mesh Editor.html
new file mode 100644
index 0000000..d3a56c3
--- /dev/null
+++ b/hw4/CS 184 Mesh Editor.html
@@ -0,0 +1,1143 @@
+ CS 184 Mesh Editor
CS 184: Computer Graphics and Imaging, Spring 2023

+

Project 4: Cloth Simulator

+

TODO: student names, CS184-???

+ +

+ +
+

Overview

+

Give a high-level overview of what you implemented in this project. Think about what you've built as a whole. Share your thoughts on what interesting things you've learned from completing the project.

+
+ + +

Part 1: Masses and springs

+
+
+ Take some screenshots of scene/pinned2.json from a viewing angle where you can clearly see the cloth wireframe + to show the structure of your point masses and springs. +
+ +
+ + + + + + +
+ + + +
Initial configuration (ks=5,000)
+
+ + +
+
+ + +
+ Show us what the wireframe looks like (1) without any shearing constraints, + (2) with only shearing constraints, and (3) with all constraints. +
+ +
+ + + + + + + + + + +
+ +
No shearing constraints
+
+ +
Only shearing constraints
+
+ +
All constraints
+
+
+ + + + +

Part 2: Simulation via numerical integration

+
+
Experiment with some of the parameters in the simulation.
    To do so, pause the simulation at the start with P, modify the values of interest, and then resume by pressing P again.
    You can also restart the simulation at any time from the cloth's starting position by pressing R.
+ + Describe the effects of changing the spring constant ks; how does the cloth behave from start to rest with a very low ks? + A high ks? + +
+ +

+ TODO +

+ + +
+ + What about for density? + +
+ +

+ TODO +

+ + +
+ + What about for damping? + +
+ +

+ TODO +

+ + +
+ + For each of the above, observe any noticeable differences in the cloth compared to the default parameters + and show us some screenshots of those interesting differences and describe when they occur. + +
+ +
+ + + + + +
+ +
Default Parameters
+
+ +
Default Parameters
+
+
+
+ +

+ TODO +

+ + +
+ Show us a screenshot of your shaded cloth from scene/pinned4.json in its final resting state! + If you choose to use different parameters than the default ones, please list them. +
+ +
+ +
+ + + + +

Part 3: Handling collisions with other objects

+
+
+ Show us screenshots of your shaded cloth from scene/sphere.json in its final resting state + on the sphere using the default ks = 5000 as well as with ks = 500 and ks = 50000. +
+ +
+ + + + + + +
+ +
ks=500
+
+ +
Default configuration (ks=5,000)
+
+ +
ks=50,000
+
+
+
+ + +
+ Describe the differences in the results. +
+ +

+ TODO +

+ + +
+ Show us a screenshot of your shaded cloth lying peacefully at rest on the plane. + If you haven't by now, feel free to express your colorful creativity with the cloth! + (You will need to complete the shaders portion first to show custom colors.) +
+ +
+ +
+ + + + +

Part 4: Handling self-collisions

+
+ +
+ Show us at least 3 screenshots that document how your cloth falls and folds on itself, + starting with an early, initial self-collision + and ending with the cloth at a more restful state (even if it is still slightly bouncy on the ground). +
+ +
+ + + + + + +
+ +
Self collision 1
+
+ +
Self collision 2
+
+ +
Self collision 3
+
+
+
+ + +
+ Vary the density as well as ks + + and describe with words and screenshots how they affect the behavior of the cloth as it falls on itself. +
+ +
+ + + + + + + + + +
+ +
density=1
+ +
+ +
density=50
+
+ +
ks=1,000
+
+ +
ks=7,500
+
+
+ +

+ TODO +

+ + + + +

Part 5: Cloth Sim

+
+ +
+ Explain in your own words what is a shader program and how vertex and fragment shaders work together to create lighting and material effects. +
+ +

+ TODO +

+ + +
Explain the Blinn-Phong shading model in your own words.
    Show a screenshot of your Blinn-Phong shader outputting only the ambient component, a screenshot only outputting the diffuse component, a screenshot only outputting the specular component, and one using the entire Blinn-Phong model.
+ +

+ TODO +

+ +
+ + + + + + + + + +
+ +
Ambient component only
+ +
+ +
Diffuse component only
+
+ +
Specular component only
+
+ +
Complete Blinn-Phong model
+
+
+ + +
+ Show a screenshot of your texture mapping shader using your own custom texture by modifying the textures in /textures/. +
+ +
+ +
+ + +
Show a screenshot of bump mapping on the cloth and on the sphere.
    Show a screenshot of displacement mapping on the sphere.
    Use the same texture for both renders.
    You can either provide your own texture or use one of the ones in the textures directory,
    BUT choose one that's not the default texture_2.png.
    Compare the two approaches and resulting renders in your own words.
    Compare how the two shaders react to the sphere by changing the sphere mesh's coarseness using -o 16 -a 16 and then -o 128 -a 128.
+ +
+ + + + + + + + + +
+ +
Bump Mapping on the Cloth
+
+ +
Bump Mapping on the Sphere
+
+
+ +
Displacement Mapping on the Sphere
+
+ +
Displacement Mapping on the Sphere (coarser mesh)
+
+
+
+ +

+ TODO +

+ + +
+ Show a screenshot of your mirror shader on the cloth and on the sphere. +
+ +
+ + + + + +
+ +
Mirror Shader on the Cloth
+
+ +
Mirror Shader on the Sphere
+
+
+
+ + +
+ Explain what you did in your custom shader, if you made one. +
+ +

+ TODO +

+ + + +

Contributions

+

+ Partner A worked on TODO. +

+
+

+ Partner B worked on TODO. +

+ + + +

Mesh Competition Extra Credit (optional)

+
+ The final (optional) part for the mesh competition is where you have the opportunity to be creative and individual, + so be sure to provide a good description of what you were going for, what you did, and how you did it. +
+ +

+ N/A +

+ + + +

Extra Credit (optional)

+
+ If you implemented any additional technical features for the cloth simulation, + clearly describe what you did and provide screenshots that illustrate your work. + If it is an improvement compared to something already existing on the cloth simulation, + compare and contrast them both in words and in images. +
+ +

+ N/A +

+ + + +
\ No newline at end of file
diff --git a/hw4/Images/Task1/sp24-clothsim-task1-all.png b/hw4/Images/Task1/sp24-clothsim-task1-all.png
new file mode 100644
index 0000000..20ad6df
Binary files /dev/null and b/hw4/Images/Task1/sp24-clothsim-task1-all.png differ
diff --git a/hw4/Images/Task1/sp24-clothsim-task1-grid1.png b/hw4/Images/Task1/sp24-clothsim-task1-grid1.png
new file mode 100644
index 0000000..6fd4fde
Binary files /dev/null and b/hw4/Images/Task1/sp24-clothsim-task1-grid1.png differ
diff --git a/hw4/Images/Task1/sp24-clothsim-task1-grid2.png b/hw4/Images/Task1/sp24-clothsim-task1-grid2.png
new file mode 100644
index 0000000..ada24a0
Binary files /dev/null and b/hw4/Images/Task1/sp24-clothsim-task1-grid2.png differ
diff --git a/hw4/Images/Task1/sp24-clothsim-task1-no-shearing.png b/hw4/Images/Task1/sp24-clothsim-task1-no-shearing.png
new file mode 100644
index 0000000..b3b3600
Binary files /dev/null and b/hw4/Images/Task1/sp24-clothsim-task1-no-shearing.png differ
diff --git a/hw4/Images/Task1/sp24-clothsim-task1-only-shearing.png b/hw4/Images/Task1/sp24-clothsim-task1-only-shearing.png
new file mode 100644
index 0000000..7363ae6
Binary files /dev/null and b/hw4/Images/Task1/sp24-clothsim-task1-only-shearing.png differ
diff --git a/hw4/Images/Task2/sp24-clothsim-pinned4-normals.png b/hw4/Images/Task2/sp24-clothsim-pinned4-normals.png
new file mode 100644
index 0000000..835a5a7
Binary files /dev/null and b/hw4/Images/Task2/sp24-clothsim-pinned4-normals.png differ
diff --git a/hw4/Images/Task2/sp24-clothsim-pinned4-wireframe.png b/hw4/Images/Task2/sp24-clothsim-pinned4-wireframe.png
new file mode 100644
index 0000000..81af4aa
Binary files /dev/null and b/hw4/Images/Task2/sp24-clothsim-pinned4-wireframe.png differ
diff --git a/hw4/index.html b/hw4/index.html
index c1ca00c..6fccc34 100644
--- a/hw4/index.html
+++ b/hw4/index.html
@@ -1,7 +1,231 @@
- Homework 4 index.html here
+ CS 184 Mesh Edit

CS 184: Computer Graphics and Imaging, Spring 2024

+

Homework 4: Cloth Simulation

+

Ian Dong

+ + +
+

Overview

In this homework, I explored the physics behind cloth simulation by implementing a mass-spring system to
    represent the cloth. I used Verlet integration to compute each of the PointMass objects' new positions and simulate the cloth's movement. Then, I
    implemented a way to detect cloth collisions with outside objects as well as to resolve self-collisions. Finally,
    I added wind forces to the cloth to simulate its movement in the wind.
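A minimal sketch of the Verlet update mentioned above, assuming the starter's PointMass fields (position, last_position, forces, pinned) and a damping parameter given in percent:

<pre><code class="language-cpp">
// Sketch: one Verlet position update per point mass.
void verlet_step(std::vector<PointMass> &masses, double mass,
                 double damping, double delta_t) {
  for (PointMass &pm : masses) {
    if (pm.pinned) continue;                 // pinned masses never move
    Vector3D accel = pm.forces / mass;       // a = F / m
    Vector3D curr = pm.position;
    // x_{t+dt} = x_t + (1 - d) * (x_t - x_{t-dt}) + a * dt^2
    pm.position = curr
                + (1.0 - damping / 100.0) * (curr - pm.last_position)
                + accel * delta_t * delta_t;
    pm.last_position = curr;
  }
}
</code></pre>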
+
+ +

+ +

Section I: Masses and Springs

+
+

+ + Explain how you implemented the mass-spring system to represent the cloth. + +

+

Because the cloth's springs needed to be in row-major order, I first looped through the number of height
    points, as this represented each individual row, before iterating through each of the width points to create
    the required springs. I calculated the x position in order to fit
    num_width_points within the cloth's width. Depending on whether the
    cloth was horizontal or vertical, I set y to either 1
    or the correctly spaced position that fits the necessary number of height points, and set z to either the correctly spaced position that fits the necessary number
    of height points or a small random offset. Before inserting the PointMass,
    I checked whether the point should be pinned. Finally, I iterated through the two-dimensional grid
    positions, converted them into the PointMass one-dimensional
    vector index, and checked the boundary conditions before creating the STRUCTURAL, SHEARING, and BENDING springs as listed in the homework description.
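A condensed sketch of this construction, assuming the starter's Cloth fields and spring types; the vertical cloth's z offset drawn from [-1/1000, 1/1000] follows the spec:

<pre><code class="language-cpp">
// Sketch: build the point-mass grid in row-major order, then wire up springs.
void Cloth::buildGrid() {
  for (int j = 0; j < num_height_points; j++) {        // rows first: row-major order
    for (int i = 0; i < num_width_points; i++) {
      double x = width * i / (num_width_points - 1);
      double step = height * j / (num_height_points - 1);
      Vector3D pos = (orientation == HORIZONTAL)
          ? Vector3D(x, 1.0, step)                      // horizontal: y fixed at 1
          : Vector3D(x, step,                           // vertical: tiny random z offset
                     (rand() / (double)RAND_MAX) * 0.002 - 0.001);
      bool is_pinned = std::find(pinned.begin(), pinned.end(),
                                 std::vector<int>{i, j}) != pinned.end();
      point_masses.emplace_back(pos, is_pinned);
    }
  }
  // Springs: grid index (i, j) flattens to j * num_width_points + i.
  auto pm = [&](int i, int j) { return &point_masses[j * num_width_points + i]; };
  for (int j = 0; j < num_height_points; j++) {
    for (int i = 0; i < num_width_points; i++) {
      if (i > 0) springs.emplace_back(pm(i, j), pm(i - 1, j), STRUCTURAL);
      if (j > 0) springs.emplace_back(pm(i, j), pm(i, j - 1), STRUCTURAL);
      if (i > 0 && j > 0) springs.emplace_back(pm(i, j), pm(i - 1, j - 1), SHEARING);
      if (i < num_width_points - 1 && j > 0)
        springs.emplace_back(pm(i, j), pm(i + 1, j - 1), SHEARING);
      if (i > 1) springs.emplace_back(pm(i, j), pm(i - 2, j), BENDING);
      if (j > 1) springs.emplace_back(pm(i, j), pm(i, j - 2), BENDING);
    }
  }
}
</code></pre>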

+
+
+
+

+ + Take some screenshots of scene/pinned2.json from a viewing angle + where you can clearly see the cloth wireframe to show the structure of your point masses and springs. + +

+ Here are some screenshots of the cloth wireframe from scene/pinned2.json: + +
+ + + + + +
+ +
pinned2.json close up view
+
+ +
pinned2.json above view
+
+
+
+
+
+

+ Show us what the wireframe looks like (1) without any shearing constraints, (2) with only shearing + constraints, and (3) with all constraints. +

+
+ + + + + + +
+ +
pinned2.json no shearing constraints +
+
+ +
pinned2.json only shearing constraints +
+
+ +
pinned2.json all constraints
+
+
+
+
+
+

Section II: Simulation via Numerical Integration

+
+

+ Show us a screenshot of your shaded cloth from scene/pinned4.json in its final resting state! If you choose to use different parameters than the default ones, please list them. +

Here are screenshots of the cloth from scene/pinned4.json in its final resting state, in both wireframe and shaded (normals) versions:
+ + + + + +
+ +
pinned4.json wireframe +
+
+ +
pinned4.json normals +
+
+
+
+ + \ No newline at end of file