The current data-preparation scripts (such as `data/prepare_data.py`) are very messy. Among other problems:

- Paths from the original paper author's machine are hard-coded.
- Data files nested more than one directory level deep are not picked up automatically by `data/prepare_data.py`; they have to be moved by hand. This should be automated.
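Recursively discovering files would remove the manual-move step. A minimal sketch, assuming the datasets are `.npz` archives under a single root directory (the function name, root path, and file pattern are illustrative, not the repo's actual API):

```python
from pathlib import Path


def find_motion_files(data_root, pattern="*.npz"):
    """Collect data files at any nesting depth under `data_root`.

    The `*.npz` pattern is an assumption about the dataset layout;
    adjust it to whatever extension the repo actually uses.
    """
    return sorted(Path(data_root).rglob(pattern))
```

A preparation script could then iterate over `find_motion_files("data/amass")` instead of expecting every file to sit at the top level.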
Overall, the current data-preparation process is extremely unclear. Open questions include:

- Which AMASS datasets should be downloaded?
- How can a model be trained on a smaller subset of datasets?
- After downloading the datasets, how exactly should the data be organised for the scripts to work?