Hi, I'm training SCANimate on the ClothSeq dataset provided by Neural-GIF (https://github.com/garvita-tiwari/neuralgif). However, I ran into some problems and got bad results.

Here are the logs of the first stage when training on a CAPE sequence:
And here are the logs of the first stage when training on a ClothSeq sequence:
I notice that the pretraining of the skinning net converges very quickly on the CAPE sequence, while the loss drops slowly on the ClothSeq sequence. I have checked the minimal_body file, the scans, and the SMPL parameters of each frame; all of them have the same format as the CAPE data and are well aligned. So I'm wondering what the problem is: do I just need more epochs in the first stage, or is something else wrong? I'd appreciate any advice, thank you!
I have not had a chance to test the code on the ClothSeq dataset. My guess is that either the global scale or the vertex density of the input scans is different, which may require different hyperparameters.
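If it helps, a quick way to compare the two datasets along those lines is a minimal sketch like the one below, assuming each scan is a mesh loadable with trimesh (the file paths are hypothetical placeholders):

```python
# Minimal sketch: compare global scale and vertex density of one scan
# from each dataset. Paths are hypothetical placeholders; point them at
# real scan files before running.
import trimesh

def scan_stats(path):
    mesh = trimesh.load(path, process=False)
    extents = mesh.bounding_box.extents   # global scale along x, y, z
    n_verts = len(mesh.vertices)
    density = n_verts / mesh.area         # vertices per unit of surface area
    return extents, n_verts, density

for name, path in [("CAPE", "cape_scan.ply"), ("ClothSeq", "clothseq_scan.ply")]:
    extents, n_verts, density = scan_stats(path)
    print(f"{name}: extents={extents}, vertices={n_verts}, density={density:.1f}")
```

If the extents differ by a large factor (e.g., meters vs. millimeters) or the vertex densities are far apart, that would point to rescaling the scans or adjusting the corresponding hyperparameters.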