turning on flip #11
Ok, so the error is caused by lines 70-71 in . I believe the poses sampled from the AMASS dataset are not represented as quaternions (one can see that the pose has shape 63, not 21 x 4). I've checked the original repo, and in the original implementation the code is the same. I think we have to correct it on our own. No idea how they obtained the original results, though. A good question is how long they trained the network. Maybe I should continue the training without the flip and wait until I get similar results.
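The shape 63 would be consistent with the poses being stored as per-joint axis-angle vectors (21 joints x 3 values). Below is a minimal sketch of converting such a pose to per-joint quaternions so a sign flip could even be applied; the function name, the (w, x, y, z) ordering, and the flat (63,) input layout are my assumptions, not the repo's actual code.

```python
import numpy as np

def axis_angle_to_quaternion(pose_aa):
    """Convert a flat axis-angle pose (21 joints x 3 = 63 values) to
    unit quaternions of shape (21, 4), ordered (w, x, y, z)."""
    aa = pose_aa.reshape(-1, 3)                          # (21, 3)
    angles = np.linalg.norm(aa, axis=1, keepdims=True)   # rotation angle per joint
    # Avoid division by zero for joints with (near-)zero rotation.
    axes = np.where(angles > 1e-8, aa / np.maximum(angles, 1e-8), 0.0)
    half = angles / 2.0
    w = np.cos(half)                                     # (21, 1)
    xyz = axes * np.sin(half)                            # (21, 3)
    return np.concatenate([w, xyz], axis=1)              # (21, 4)

# Example: a dummy AMASS-style pose vector of shape (63,)
pose = np.zeros(63)
quat = axis_angle_to_quaternion(pose)
print(quat.shape)  # (21, 4), identity quaternion per joint for the zero pose
```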
The one published by the authors is ok. One simply has to turn off the flip (check issue #11 for a more detailed discussion) and use the correct paths to the data.
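For reference, this is roughly what that override could look like if the configs are plain YAML files; the file path and the `flip` / `data_path` keys are hypothetical and need to be matched to the repo's actual config schema.

```python
# Sketch: load a shared config, disable the flip, and fix the data path
# before launching training/evaluation with the published checkpoint.
import yaml

with open("configs/pretrained.yaml") as f:   # path is an assumption
    cfg = yaml.safe_load(f)

cfg["flip"] = False                  # turn off the quaternion sign-flip
cfg["data_path"] = "/path/to/amass"  # point at the local AMASS copy

with open("configs/pretrained_noflip.yaml", "w") as f:
    yaml.safe_dump(cfg, f)
```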
Ok, so based on my preliminary observations using
Training the model using configs shared with the pretrained models:
results in the following error:
Setting the `flip` to `false` allows the training to start, so the mistake is most likely caused by this functionality. Based on the paper, the flip provides additional information to the network that there are always two quaternions representing the same rotation (i.e. q and -q).
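For clarity, a sketch of what such a flip augmentation could look like on quaternion-valued poses; the function name and random sign scheme are mine, not the repo's implementation, and it assumes the input already has shape (..., 4).

```python
import numpy as np

def random_quaternion_flip(quats, p=0.5, rng=None):
    """Randomly negate some quaternions. Since q and -q encode the same
    rotation, this changes only the representation, not the pose, which is
    the property the 'flip' augmentation exposes to the network."""
    rng = np.random.default_rng() if rng is None else rng
    signs = np.where(rng.random(quats.shape[:-1] + (1,)) < p, -1.0, 1.0)
    return quats * signs

# The identity rotation and its negation describe the same rotation.
q = np.array([[1.0, 0.0, 0.0, 0.0]])
print(random_quaternion_flip(q, p=1.0))  # [[-1. -0. -0. -0.]]
```

This only works if the poses are actually quaternions, which ties back to the shape-63 observation above: applying the flip to axis-angle data negates the rotation itself instead of switching to an equivalent representation.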