In the apply_warping_field function, you add grid to warp_field. However, it looks like warp_field is obtained from WarpGenerator, which already outputs warped coordinates rather than offsets. Is this a bug, or did I miss something?
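For reference, here is a minimal sketch of the two conventions (not your actual code; the (N, 3, D, H, W) layout and function names are my assumptions). If warp_field already holds absolute sampling coordinates in [-1, 1], adding the identity grid would shift every sample:

```python
import torch
import torch.nn.functional as F

def apply_offsets(x, warp_field):
    # Convention A: warp_field is a *displacement* (offset) field.
    # The identity grid must be added before sampling.
    n, _, d, h, w = warp_field.shape  # assumed layout: (N, 3, D, H, W)
    theta = torch.eye(3, 4).unsqueeze(0).expand(n, -1, -1).to(x)
    grid = F.affine_grid(theta, size=(n, 1, d, h, w),
                         align_corners=False)             # identity grid, (N, D, H, W, 3)
    coords = grid + warp_field.permute(0, 2, 3, 4, 1)     # offsets -> absolute coordinates
    return F.grid_sample(x, coords, align_corners=False)

def apply_coordinates(x, warp_field):
    # Convention B: warp_field already holds absolute coordinates in [-1, 1].
    # Adding the identity grid here would double-count positions.
    return F.grid_sample(x, warp_field.permute(0, 2, 3, 4, 1), align_corners=False)
```

If WarpGenerator produces convention B, apply_warping_field should skip the grid addition.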
What's your definition of the predicted head pose in your implementation? Does it map from canonical (world) space to camera space, or the reverse? Head pose is normally represented as an RT matrix from canonical (world) space to camera space. When doing src -> canonical with backward warping, we should therefore use RT itself rather than the inverse matrix, right?
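To make the backward-warping argument concrete, a hedged sketch (hypothetical names and shapes, assuming x_cam = R @ x_canon + t): each canonical output location is mapped forward into camera (source) space to find where to sample, so the grid is built with RT, not RT^-1.

```python
import torch

def canonical_grid_to_source(grid_canon, R, t):
    # grid_canon: (N, D, H, W, 3) canonical coordinates in [-1, 1]
    # R: (N, 3, 3), t: (N, 3) -- illustrative shapes only
    coords = torch.einsum('nij,ndhwj->ndhwi', R, grid_canon)
    coords = coords + t.view(-1, 1, 1, 1, 3)
    return coords  # feed to F.grid_sample(source_volume, coords)
```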
The original paper is unclear about whether the emotion warp field or the RT is applied first. My understanding is that the emotion warp field lives in canonical space, so the emotion deformation should only be applied there: after RT when doing src -> canonical, and before RT when doing canonical -> driving. In your implementation, however, it is always applied after RT, regardless of direction.
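To make the proposed ordering concrete, a minimal sketch (all helper names are hypothetical; I chain grid_sample calls for readability, though a real implementation would compose the warp fields into a single grid to avoid double resampling):

```python
import torch.nn.functional as F

def sample(x, warp):
    # warp: absolute sampling coordinates, shape (N, D, H, W, 3), in [-1, 1]
    return F.grid_sample(x, warp, align_corners=False)

def src_to_canonical(x_src, pose_removal_warp, emotion_warp):
    x = sample(x_src, pose_removal_warp)  # undo the source head pose (the RT step)
    return sample(x, emotion_warp)        # emotion deformation, now in canonical space

def canonical_to_driving(x_canon, emotion_warp, pose_warp):
    x = sample(x_canon, emotion_warp)     # emotion deformation while still canonical
    return sample(x, pose_warp)           # then apply the driving head pose (RT)
```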
I created a PR. Separately, I'm working on something else - https://github.com/johndpope/IMF
It's the latest paper from Microsoft, from a few weeks back - it's supposed to supersede dense flow / warp fields.
The EmoPortraits code was supposed to drop last month... it should clear up these questions on warping. @JZArray has been outspoken about the warp code, but I think the PR should hopefully resolve things.