Ability to save multi-animal pose tracks to single-animal files #71
Conversation
Kudos, SonarCloud Quality Gate passed! 0 Bugs, no coverage information.
Hey @DhruvSkyy, thanks a lot for contributing!
I left several comments. The main two points to be addressed are:
- The two functions are now quite big and hard to parse, I think we would benefit from refactoring repeated bits into separate functions. I have pointed this out in some concrete examples.
- We don't really handle the case where there is one individual in the dataset but the user has chosen `split_individuals=True`. I think in this case we should save one single-animal DataFrame (without the "individuals" level), as there is nothing to split. But we should raise a warning to tell the user that we are ignoring the `split_individuals` setting.
Let me know if you think these are sensible choices. Have a shot at implementing them and we can have another round of review then.
```python
tracks_with_scores = np.concatenate(
    (
        ds.pose_tracks.data,
        ds.confidence.data[..., np.newaxis],
    ),
    axis=-1,
)
```
This snippet is repeated twice, once with `ds` and once with `individual_data`. This is a good candidate to be refactored into a separate function that takes the xarray dataset as input and returns the concatenated numpy array.
```python
# Create the DLC-style multi-index columns
index_levels = ["scorer", "individuals", "bodyparts", "coords"]
```
There is also some repetition here. You can define `index_levels = ["scorer", "bodyparts", "coords"]` near the top (before the if statements), and then add the "individuals" level in the second position here, only when it's needed.
""" Create a single DataFrame with | ||
multi-index columns for each individual """ | ||
df = pd.DataFrame( |
This bit is also repeated and can be refactored into its own function that takes a numpy array and the columns as arguments.
Optionally this can be combined with the other refactoring suggestion (the one that takes an xarray dataset and returns a numpy array). So, for example, you could have one function that takes an xarray dataset and the index levels, and returns a DataFrame.
"likelihood", and stored in the "coords" level (as DeepLabCut expects). | ||
The DataFrame(s) will have a multi-index column with the following levels: | ||
"scorer", "individuals", "bodyparts", "coords" | ||
(if multi_individual is True), |
This bit is outdated. We no longer have a `multi_individual` argument; it has to be rewritten to reflect the current arguments.
```python
def to_dlc_file(
    ds: xr.Dataset,
    file_path: Union[str, Path],
    split_individuals: Union[bool, None] = None,
```
I would change this to the following, as described in the docstring:

```diff
- split_individuals: Union[bool, None] = None,
+ split_individuals: Union[bool, Literal["auto"]] = "auto",
```

I think "auto" is more explicit and informative than None in this case. You would also have to modify the corresponding if statement, of course.
```python
# Sets default behaviour for the function
if split_individuals is None:
    individuals = ds.coords["individuals"].data.tolist()
    if len(individuals) == 1:
```
The splitting is needed when there is more than one individual (not when there is only one):

```diff
- if len(individuals) == 1:
+ if len(individuals) > 1:
```
You could also write this as a one-liner, for example:
split_individuals = True if len(individuals) > 1 else False
We also may want to throw an error if the user passes an invalid type, something like:
```python
if split_individuals == "auto":
    individuals = ds.coords["individuals"].data.tolist()
    split_individuals = True if len(individuals) > 1 else False
elif not isinstance(split_individuals, bool):
    error_msg = (
        f"Expected 'split_individuals' to be a boolean or 'auto', but got "
        f"{type(split_individuals)}."
    )
    log_error(ValueError, error_msg)
```
For the auto function, would we want it to save a single individual xarray as a single individual dataframe and a multi-individual xarray as a multi-individual dataframe, or save both as single individual dataframes?
I would say this one:
save a single individual xarray as a single individual dataframe and a multi-individual xarray as a multi-individual dataframe
I think that's what the users would expect.
"""If split_individuals is True then it will split the file into a | ||
dictionary of pandas dataframes for each individual.""" | ||
if split_individuals: | ||
dfdict = to_dlc_df(ds, True) |
In general, I would always explicitly provide the keyword arguments, so people don't have to look them up to understand the meaning of this boolean:

```diff
- dfdict = to_dlc_df(ds, True)
+ df_dict = to_dlc_df(ds, split_individuals=True)
```
```python
def to_dlc_file(ds: xr.Dataset, file_path: Union[str, Path]) -> None:
if split_individuals:
```
You also need to check whether there is actually more than one individual in the data here.
I would say if there is only one individual, then the `split_individuals` argument should not matter at all, and we should always output one single-animal DataFrame (with "scorer", "bodyparts", "coords").
We should only care about `split_individuals` when there are actually many of them to be split. In that case:
- if `split_individuals == True`, we should output multiple single-animal DataFrames in a dictionary (with the individual names as keys and with "scorer", "bodyparts", "coords" as index levels)
- if `split_individuals == False`, we should output one combined DataFrame (with "scorer", "individuals", "bodyparts", "coords")

The docstring also has to be updated to reflect this behavior.
Hi @niksirbi,
If split_individuals == True and we have only one individual, it will just output one single-animal DataFrame (with "scorer", "bodyparts", "coords"). If split_individuals == False and we have only one individual, it will output one single-animal DataFrame (with "scorer", "individuals", "bodyparts", "coords").
Although for the second case when split_individuals == False the individuals column will just be filled with one individual, it might still be important to have this feature. It might be useful in case they wanted to merge the dataframes with other multi-individual dataframes, where pandas would want the dataframes to have the same format.
It also might be unexpected if they ran the function with split_individuals == False on a set of data containing a mixture of single- and multi-individual xarrays and saw the output being a mixture of single-animal and multi-animal dataframes, since the single-individual xarrays would automatically turn into single-individual dataframes with no way to make them multi-individual.
The auto function in to_dlc_file handles cases when the user might want all single individual xarrays to be stored as single-individual dataframes and multi-individual xarrays to be stored as multi-individual dataframes, I can make a separate function for this auto feature and also use it for to_dlc_df if preferable.
If split_individuals == True and we have only one individual, it will just output one single-animal DataFrame (with "scorer", "bodyparts", "coords"). If split_individuals == False and we have only one individual, it will output one single-animal DataFrame (with "scorer", "individuals", "bodyparts", "coords").
Hm, I actually like this suggestion and the flexibility it gives the user. The way you are proposing it means that `split_individuals == True` will always give an output csv in "single-animal" format, while `split_individuals == False` will always return a csv in "multi-animal" format, regardless of how many animals are in the project.
The arguments you make for it are convincing, so let's go ahead and do this! We just have to be careful to write the docstrings in an understandable way.
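The agreed contract can be sketched with a toy dispatcher (names and the stand-in values are illustrative, not the PR's API): `split_individuals=True` always yields one output per individual, `False` always yields a single combined output, regardless of the number of individuals.

```python
from typing import Any, Dict, List, Union

def dispatch_output(
    per_individual: Dict[str, Any], split_individuals: bool
) -> Union[Dict[str, Any], List[Any]]:
    """Toy model of the agreed behavior.

    True  -> a dict of per-individual outputs ("single-animal" format).
    False -> one combined output ("multi-animal" format).
    """
    if split_individuals:
        return dict(per_individual)
    return list(per_individual.values())
```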
```python
for key, df in dfdict.items():
    """Iterates over dictionary, the key is the name of the
    individual and the value is the corresponding df."""
    filepath = (
```
I find f-strings more readable, so I would rewrite this as:

```python
filepath = f"{file_path.with_suffix('')}_{key}.csv"
df.to_csv(Path(filepath), sep=",")
```
"""If split_individuals is True then it will split the file into a | ||
dictionary of pandas dataframes for each individual.""" | ||
if split_individuals: |
Again here, as in the above function, we have to check whether there is more than one individual to split; otherwise, output only one single-animal file.
If split_individuals is True and there is only one individual to split it should already automatically output only one single-animal file.
This PR has been superseded by #83
Description
What does this PR do?
This PR adds the functionality to save xarrays as a dictionary of per-individual pandas DataFrames in `to_dlc_df`, and as multiple files (one per individual) in `to_dlc_file`. This is based on the `split_individuals: bool` parameter: if True, it will split the data by individual; if False, it will save it as a multi-animal file.
This issue was described in Issue #39.
How has this PR been tested?
The PR was tested in a Jupyter Notebook against the four scenarios (credit to @niksirbi).
Todo
Need to write tests.