From 44ea9ca4cd68bda1507618f3de502c8248598560 Mon Sep 17 00:00:00 2001
From: priya0004
Date: Wed, 9 Oct 2024 18:09:52 -0700
Subject: [PATCH] [proj3] image and sizing corrections

---
 proj3/index.html | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/proj3/index.html b/proj3/index.html
index 42f547d..ac1b4e0 100644
--- a/proj3/index.html
+++ b/proj3/index.html
@@ -681,4 +681,4 @@
 background-image: url("data:image/svg+xml;charset=UTF-8,%3Csvg%20width%3D%2216%22%20height%3D%2216%22%20viewBox%3D%220%200%2016%2016%22%20fill%3D%22none%22%20xmlns%3D%22http%3A%2F%2Fwww.w3.org%2F2000%2Fsvg%22%3E%0A%3Crect%20x%3D%220.75%22%20y%3D%220.75%22%20width%3D%2214.5%22%20height%3D%2214.5%22%20fill%3D%22white%22%20stroke%3D%22%2336352F%22%20stroke-width%3D%221.5%22%2F%3E%0A%3C%2Fsvg%3E");
 }


cs180: proj3

Face Morphing

Part 1: Defining Correspondences

I chose two photos (one of me, one of Betty White!) and took time in Photoshop to crop and align them so that both images ended up with the same dimensions. To define the corresponding points on the two faces, I used a previous student’s labeling tool (as mentioned in the project spec) to manually select them. I included the four corners of both images so that the triangulation covers the entire image.

From there, I calculated the averages of the corresponding keypoints and used scipy.spatial.Delaunay to create a triangulation of that average shape. Here are the correspondences for both images as well as the average correspondences’ triangulation mesh.

  • imgA-me.jpg with correspondences
  • imgB-betty.jpg with correspondences
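
Roughly, the averaging and triangulation step boils down to the minimal sketch below (the .npy file names are placeholders for however the labeling tool saves its points, not my actual files):

```python
# Minimal sketch of Part 1 (file names are assumed): average the two keypoint sets
# and triangulate the mean shape.
import numpy as np
from scipy.spatial import Delaunay

ptsA = np.load("ptsA.npy")      # (N, 2) labeled points for imgA-me.jpg, incl. the four corners
ptsB = np.load("ptsB.npy")      # (N, 2) labeled points for imgB-betty.jpg, same ordering

avg_pts = (ptsA + ptsB) / 2.0   # the average shape
tri = Delaunay(avg_pts)         # tri.simplices holds the triangle indices used in Part 2
```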

Part 2: Computing the "Mid-Way Face"

Using Part 1’s average shape and triangulation mesh, I iterated over each triangle in the mesh and found the transformation matrix A that maps the current triangle to the average shape’s corresponding triangle. To compute the inverse warp, I used sk.draw.polygon to extract all pixel coordinates inside each triangle of the average shape and multiplied them by A^{-1} to find the corresponding locations in imgA-me.jpg. For values that fall in between pixel coordinates, I used scipy.interpolate.RegularGridInterpolator (it’s quicker than griddata). As my last step, I averaged the colors of warpedA and warpedB. A sketch of the per-triangle warp follows; below that are the original images and the midway face.
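
Here is a minimal sketch of one triangle’s inverse warp (the helper names and the (x, y) point layout are assumptions, not my exact code):

```python
# Minimal sketch of one triangle's inverse warp; helper names and the (x, y) point
# layout are assumptions.
import numpy as np
import skimage.draw as skdraw
from scipy.interpolate import RegularGridInterpolator

def affine_from_triangles(src_tri, dst_tri):
    """Solve for the 3x3 matrix A with A @ [x, y, 1]^T mapping src_tri onto dst_tri."""
    src = np.hstack([src_tri, np.ones((3, 1))])   # rows are [x, y, 1]
    dst = np.hstack([dst_tri, np.ones((3, 1))])
    return np.linalg.solve(src, dst).T

def warp_triangle(img, src_tri, dst_tri, out):
    """Fill dst_tri in `out` with colors pulled back from `img` via the inverse affine."""
    A = affine_from_triangles(src_tri, dst_tri)
    # Rasterize the destination (average-shape) triangle ...
    rr, cc = skdraw.polygon(dst_tri[:, 1], dst_tri[:, 0], shape=out.shape[:2])
    dst_hom = np.stack([cc, rr, np.ones_like(rr)]).astype(float)
    # ... map its pixels back into the source image with A^-1 ...
    src_hom = np.linalg.inv(A) @ dst_hom
    # ... and sample the source at those (generally fractional) coordinates.
    interp = RegularGridInterpolator(
        (np.arange(img.shape[0]), np.arange(img.shape[1])),
        img, bounds_error=False, fill_value=0)
    out[rr, cc] = interp(np.stack([src_hom[1], src_hom[0]], axis=-1))
```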

Note: My mid-way face is pretty creepy for a few reasons:

  • I used the only photo of mine that has a blank background, but it wasn’t ideal. For example, my head is slightly tilted, my face isn’t expressionless, and I’m wearing glasses.
  • I scoured the internet for a celebrity photo that was taken at a similar angle (so that I could at least do some cropping), and this image of Betty White was the best I could find.
    • It does not have a blank background, and her expression is difficult to morph with.
  • The transition with our hair isn’t great (I didn’t add any correspondence points on my hair because it worsened the morph, and we were discouraged from including non-face points).
  • imgA-me.jpg
  • imgB-betty.jpg

Part 3: The Morph Sequence

To create my gif (45 frames at 30 fps), I implemented the morph function and varied its warp_frac and dissolve_frac arguments to produce each frame.
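
A rough sketch of the frame loop, reusing the Part 1 points (morph’s exact signature follows the project spec as an assumption, and writing the gif with Pillow is just one option, not necessarily what I used):

```python
# Rough sketch of the Part 3 frame loop (morph's signature and the gif writer are assumptions).
import numpy as np
from PIL import Image

frames = []
for t in np.linspace(0, 1, 45):                      # 45 frames from imgA to imgB
    frame = morph(imgA, imgB, ptsA, ptsB, tri.simplices,
                  warp_frac=t, dissolve_frac=t)      # same fraction for shape and color here
    frames.append(Image.fromarray((np.clip(frame, 0, 1) * 255).astype(np.uint8)))

frames[0].save("morph.gif", save_all=True, append_images=frames[1:],
               duration=1000 // 30, loop=0)          # duration is ms per frame, so ~30 fps
```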

Part 4: The "Mean Face" of a Population

Using the IMM Face (Danes) Database, I computed the average shape of the dataset. I decided not to pick a subpopulation since the dataset was so skewed (30 out of 37 were men). From there, I morphed each face in the dataset to the average shape.

  • 22-1f.bmp
  • 22-1f.bmp morphed into average shape
  • 30-1f.bmp
  • 30-1f.bmp morphed into average shape
  • 35-1f.bmp
  • 35-1f.bmp morphed into average shape
  • 37-1f.bmp
  • 37-1f.bmp morphed into average shape

I computed the average face of the population by taking the average of the morphed faces. Then, I warped an aligned version of my picture (imgA-me-avg.jpg) into the average shape and warped the average face into aligned imgA-me-avg.jpg’s geometry.
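
A rough sketch of this mean-face pipeline (load_keypoints, read_image, warp_to_shape, annotation_paths, and image_paths are hypothetical names; the IMM annotations come as .asf point files, and the warp is Part 2’s):

```python
# Rough sketch of the Part 4 pipeline; the helper and variable names are assumptions.
import numpy as np

shapes = np.stack([load_keypoints(p) for p in annotation_paths])   # (num_faces, num_points, 2)
mean_shape = shapes.mean(axis=0)                                   # average shape of the population

warped = [warp_to_shape(read_image(p), pts, mean_shape)            # each face warped into the mean shape
          for p, pts in zip(image_paths, shapes)]
mean_face = np.mean(warped, axis=0)                                # pixelwise average of the warped faces
```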

  • imgA-me-avg.jpg
  • imgB-avg.jpg
  • me warped into average geometry
  • average face warped into my geometry

Some funky results! A lot of this is due to the differing camera angles and expressions.

  • me-warped-to-average image’s standouts:
    • reduced my wide-open smile since imgB-avg.jpg’s expression is more neutral, with no teeth showing
    • squished nose to match imgB-avg.jpg’s shape
    • more rectangular forehead to match imgB-avg.jpg’s
      • resulted in the weird line on either side of the final image, which is really the pixels from my hairline
    • aligned eyes (mine were slightly tilted so the resulting image corrects that)
      • resulted in a weird warp of my glasses as well
  • average-warped-to-me image’s standouts:
    • wider lips to fill in imgA-me-avg.jpg’s smile with teeth
    • warped nose to more closely match mine
    • more curved forehead/hairline
    • slightly tilted, more squinted eyes to match mine (when I smile, my eyes crinkle)

Part 5: Caricatures — Extrapolating from the Mean

Using Part 4’s IMM Face Database average shape, I created caricature-like images of my face by extrapolating from the average shape (with alpha controlling the level of extrapolation). I achieved this by testing alpha values outside of the [0, 1] range.
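
One possible reading of the extrapolation (a hedged sketch reusing the hypothetical warp_to_shape helper from the Part 4 sketch; my_shape and my_image stand in for my labeled points and photo):

```python
# alpha = 0 reproduces the population mean shape, alpha = 1 my own shape;
# values outside [0, 1] push past one end or the other.
caricature_shape = (1 - alpha) * mean_shape + alpha * my_shape
caricature = warp_to_shape(my_image, my_shape, caricature_shape)
```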

  • alpha = -0.5
  • alpha = 1.5

You can definitely see the difference (one exaggerates the mean face’s features, the other my own).

Bells and Whistles: Changing the Gender of My Face

I morphed my face into the mean face of an Indian male for the following 3 scenarios (after rescaling/resizing and manually plotting their correspondence points):

  1. Shape: Warp my face into the geometry of the average Indian male’s face.
  2. Appearance: Warp the average Indian male’s face to my face shape and cross-dissolve.
  3. Both: Morph shape and appearance (i.e., warp + cross-dissolve); see the sketch after this list.
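
A hedged sketch of the three scenarios in terms of the morph parameters (my_img, male_img, their point sets, and the triangulation stand in for the pre-aligned inputs described above; the exact fractions, especially for the combined case, are a choice rather than my exact values):

```python
# Three scenarios expressed with the morph parameters; input names and fractions are assumptions.
args = (my_img, male_img, my_pts, male_pts, triangles)             # triangulation of the averaged points

shape_only      = morph(*args, warp_frac=1.0, dissolve_frac=0.0)   # my colors in his geometry
appearance_only = morph(*args, warp_frac=0.0, dissolve_frac=1.0)   # his colors in my geometry
both            = morph(*args, warp_frac=0.5, dissolve_frac=0.5)   # blend both shape and appearance
```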

Note: I had to crop my image quite a lot to align with the Indian male mean face.

  • overlay points for me
  • overlay points for Indian male mean face
  • shape only
  • appearance only
  • shape and appearance

I ran into issues similar to those discussed in Part 4: my glasses obstruct facial features like my eyebrows and the edges of my face, my slightly tilted eyes also affect the quality of the warp, and the camera angles differ since I used the same photo of myself throughout this project.

Reflection & Bloopers

Fun (and sometimes slightly creepy lol) project where I learned a lot! The results definitely depend on which photos you choose initially, and mine were quite different from each other. I found there’s only so much you can do with resizing in Photoshop, and my main struggle was with camera angles (and my slight head tilt). Here are some bloopers I saved from this project (specifically from Part 4).

  • my initial workaround for blending issues (pixels that land on the triangulation mesh’s lines) produced this!
  • I flipped my coordinates when trying to produce the average image for the population