How is expression-FID calculated? #177

Open
Darius-H opened this issue Oct 18, 2024 · 0 comments
In paper Sec4.1 it says: "E-FID leverages the Inception network’s features to critically evaluate the authenticity of the generated images, providing a more nuanced gauge of image fidelity. First, E-FID employs face reconstruction method, as detailed in the [2], to extract expression parameters. Then, it calculates the FID of these extracted parameters to quantitatively assess the disparity between the facial expressions present in the generated videos and those found in the GT dataset."

This confuses me, since the expression parameters have shape (B, 64). Did you feed the expression parameters through an Inception network to extract features before computing FID, or did you compute FID directly on the expression parameters themselves? Could you share the evaluation code?
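For reference, here is a minimal sketch of the second interpretation: computing the Fréchet distance directly on the extracted (B, 64) expression parameters, with no Inception features involved. The function name and the direct-on-parameters approach are my assumptions, not your released code.

```python
import numpy as np
from scipy import linalg

def frechet_distance(feats_real: np.ndarray, feats_gen: np.ndarray) -> float:
    """Fréchet distance between two sets of feature vectors, each of shape (N, D)."""
    # Fit a Gaussian to each set of feature vectors.
    mu_r, mu_g = feats_real.mean(axis=0), feats_gen.mean(axis=0)
    sigma_r = np.cov(feats_real, rowvar=False)
    sigma_g = np.cov(feats_gen, rowvar=False)

    diff = mu_r - mu_g
    # Matrix square root of the product of covariances.
    covmean, _ = linalg.sqrtm(sigma_r @ sigma_g, disp=False)
    if np.iscomplexobj(covmean):
        # Numerical error can introduce tiny imaginary components.
        covmean = covmean.real

    return float(diff @ diff + np.trace(sigma_r + sigma_g - 2.0 * covmean))

# expr_real, expr_gen: expression parameters extracted from GT and generated
# frames, stacked over all frames -> shape (num_frames, 64)
# e_fid = frechet_distance(expr_real, expr_gen)
```

Is this what E-FID does, or is there an additional feature-extraction step in between?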
