
Loading function for Anipose data #358

Merged: 46 commits, merged on Dec 11, 2024

Changes from all commits (46 commits)
86b2a77
first draft of loading function
vigji Dec 6, 2024
6badcce
[pre-commit.ci] auto fixes from pre-commit.com hooks
pre-commit-ci[bot] Dec 6, 2024
e54c600
adapted to new dimensions order
vigji Dec 6, 2024
7534cff
Merge branch 'anipose-loader' of https://github.com/neuroinformatics-…
vigji Dec 6, 2024
1ea3476
adapted to work with new dims arrangement
vigji Dec 6, 2024
68e6a02
[pre-commit.ci] auto fixes from pre-commit.com hooks
pre-commit-ci[bot] Dec 6, 2024
bc90d1c
anipose loader test
vigji Dec 9, 2024
fdaf4e3
[pre-commit.ci] auto fixes from pre-commit.com hooks
pre-commit-ci[bot] Dec 9, 2024
5493a65
validator for anipose file
vigji Dec 9, 2024
eb21f0f
Merge branch 'anipose-loader' of https://github.com/neuroinformatics-…
vigji Dec 9, 2024
b6e24b6
[pre-commit.ci] auto fixes from pre-commit.com hooks
pre-commit-ci[bot] Dec 9, 2024
dd21bc1
anipose validator finished
vigji Dec 9, 2024
6a22f33
Merge branch 'anipose-loader' of https://github.com/neuroinformatics-…
vigji Dec 9, 2024
fae010c
[pre-commit.ci] auto fixes from pre-commit.com hooks
pre-commit-ci[bot] Dec 9, 2024
ce4a874
linting fixes
vigji Dec 10, 2024
f105619
Merge branch 'anipose-loader' of https://github.com/neuroinformatics-…
vigji Dec 10, 2024
da91251
[pre-commit.ci] auto fixes from pre-commit.com hooks
pre-commit-ci[bot] Dec 10, 2024
e5ea4ab
Update tests/test_unit/test_validators/test_files_validators.py
vigji Dec 10, 2024
7ec6c44
simplified validator test
vigji Dec 10, 2024
e829d60
Update movement/io/load_poses.py
vigji Dec 10, 2024
46a816e
Update movement/validators/files.py
vigji Dec 10, 2024
c5f035a
[pre-commit.ci] auto fixes from pre-commit.com hooks
pre-commit-ci[bot] Dec 10, 2024
4942d2c
Update movement/validators/files.py
vigji Dec 10, 2024
b9fadb6
Update movement/validators/files.py
vigji Dec 10, 2024
1ae4ec3
implementing fixes
vigji Dec 10, 2024
589c80d
Merge branch 'anipose-loader' of https://github.com/neuroinformatics-…
vigji Dec 10, 2024
5f7dadc
[pre-commit.ci] auto fixes from pre-commit.com hooks
pre-commit-ci[bot] Dec 10, 2024
1a1b8f3
more consistency fixes
vigji Dec 10, 2024
7117bcf
moved anipose loading test to load_poses
vigji Dec 10, 2024
d204d57
fixed validators tests
vigji Dec 10, 2024
5cea278
tests for anipose loading done properly
vigji Dec 10, 2024
c479298
docstring fixes
vigji Dec 10, 2024
f390e0b
[pre-commit.ci] auto fixes from pre-commit.com hooks
pre-commit-ci[bot] Dec 10, 2024
8addfd5
Implementing direct anipose load from from_file
vigji Dec 10, 2024
c7d1fcd
Merge branch 'anipose-loader' of https://github.com/neuroinformatics-…
vigji Dec 10, 2024
ead27dc
[pre-commit.ci] auto fixes from pre-commit.com hooks
pre-commit-ci[bot] Dec 10, 2024
100008c
ruffed
vigji Dec 10, 2024
481e80b
Merge branch 'anipose-loader' of https://github.com/neuroinformatics-…
vigji Dec 10, 2024
1318a42
trying to fix mypy check
vigji Dec 10, 2024
830d6f8
Update movement/io/load_poses.py
vigji Dec 11, 2024
83d5814
Update movement/io/load_poses.py
vigji Dec 11, 2024
772c9e7
Update movement/io/load_poses.py
vigji Dec 11, 2024
9cd49d8
Update movement/io/load_poses.py
vigji Dec 11, 2024
1f56e75
final touches to docstrings
vigji Dec 11, 2024
78e954f
added entry in input_output docs
vigji Dec 11, 2024
bbb5ab9
define anipose link in conf.py
niksirbi Dec 11, 2024
1 change: 1 addition & 0 deletions docs/source/conf.py
@@ -201,6 +201,7 @@
"xarray": "https://docs.xarray.dev/en/stable/{{path}}#{{fragment}}",
"lp": "https://lightning-pose.readthedocs.io/en/stable/{{path}}#{{fragment}}",
"via": "https://www.robots.ox.ac.uk/~vgg/software/via/{{path}}#{{fragment}}",
"anipose": "https://anipose.readthedocs.io/en/latest/",
}

intersphinx_mapping = {
17 changes: 17 additions & 0 deletions docs/source/user_guide/input_output.md
@@ -10,6 +10,7 @@ To analyse pose tracks, `movement` supports loading data from various frameworks:
- [DeepLabCut](dlc:) (DLC)
- [SLEAP](sleap:) (SLEAP)
- [LightningPose](lp:) (LP)
- [Anipose](anipose:) (Anipose)

To analyse bounding boxes' tracks, `movement` currently supports the [VGG Image Annotator](via:) (VIA) format for [tracks annotation](via:docs/face_track_annotation.html).

@@ -84,6 +85,22 @@ ds = load_poses.from_file(
```
:::

:::{tab-item} Anipose

To load Anipose files in .csv format:
```python
# Optionally specify the individual name; it defaults to "individual_0"
ds = load_poses.from_anipose_file(
    "/path/to/file.analysis.csv", fps=30, individual_name="individual_0"
)

# or equivalently
ds = load_poses.from_file(
"/path/to/file.analysis.csv", source_software="Anipose", fps=30, individual_name="individual_0"
)

```
:::

:::{tab-item} From NumPy

In the example below, we create random position data for two individuals, ``Alice`` and ``Bob``,
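Before reaching for the loader, you can sanity-check which keypoints an Anipose-style file contains by inspecting its header with plain pandas. This is a minimal sketch with made-up column names (`fnum`, `snout_*`); a real Anipose file also carries the metadata columns listed in the validator below.

```python
import io

import pandas as pd

# Hypothetical (truncated) header of an Anipose triangulation csv
csv_text = "fnum,snout_x,snout_y,snout_z,snout_score\n0,1.0,2.0,3.0,0.9\n"
df = pd.read_csv(io.StringIO(csv_text))

# Keypoint names are the stems of the *_x / *_y / *_z columns
keypoints = sorted(
    {c.rsplit("_", 1)[0] for c in df.columns if c.endswith(("_x", "_y", "_z"))}
)
print(keypoints)  # ['snout']
```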
133 changes: 130 additions & 3 deletions movement/io/load_poses.py
@@ -13,7 +13,12 @@

from movement.utils.logging import log_error, log_warning
from movement.validators.datasets import ValidPosesDataset
from movement.validators.files import ValidDeepLabCutCSV, ValidFile, ValidHDF5
from movement.validators.files import (
ValidAniposeCSV,
ValidDeepLabCutCSV,
ValidFile,
ValidHDF5,
)

logger = logging.getLogger(__name__)

@@ -91,8 +96,11 @@ def from_numpy(

def from_file(
file_path: Path | str,
source_software: Literal["DeepLabCut", "SLEAP", "LightningPose"],
source_software: Literal[
"DeepLabCut", "SLEAP", "LightningPose", "Anipose"
],
fps: float | None = None,
**kwargs,
) -> xr.Dataset:
"""Create a ``movement`` poses dataset from any supported file.

@@ -104,11 +112,14 @@
``from_slp_file()`` or ``from_lp_file()`` functions. One of these
functions will be called internally, based on
the value of ``source_software``.
source_software : "DeepLabCut", "SLEAP" or "LightningPose"
source_software : "DeepLabCut", "SLEAP", "LightningPose", or "Anipose"
The source software of the file.
fps : float, optional
The number of frames per second in the video. If None (default),
the ``time`` coordinates will be in frame numbers.
**kwargs : dict, optional
Additional keyword arguments to pass to the software-specific
loading functions that are listed under "See Also".

Returns
-------
@@ -121,6 +132,7 @@
movement.io.load_poses.from_dlc_file
movement.io.load_poses.from_sleap_file
movement.io.load_poses.from_lp_file
movement.io.load_poses.from_anipose_file

Examples
--------
@@ -136,6 +148,8 @@
return from_sleap_file(file_path, fps)
elif source_software == "LightningPose":
return from_lp_file(file_path, fps)
elif source_software == "Anipose":
return from_anipose_file(file_path, fps, **kwargs)
else:
raise log_error(
ValueError, f"Unsupported source software: {source_software}"
@@ -696,3 +710,116 @@ def _ds_from_valid_data(data: ValidPosesDataset) -> xr.Dataset:
"ds_type": "poses",
},
)


def from_anipose_style_df(
df: pd.DataFrame,
fps: float | None = None,
individual_name: str = "individual_0",
) -> xr.Dataset:
"""Create a ``movement`` poses dataset from an Anipose 3D dataframe.

Parameters
----------
df : pd.DataFrame
Anipose triangulation dataframe
fps : float, optional
The number of frames per second in the video. If None (default),
the ``time`` coordinates will be in frame units.
individual_name : str, optional
Name of the individual, by default "individual_0"

Returns
-------
xarray.Dataset
``movement`` dataset containing the pose tracks, confidence scores,
and associated metadata.


Notes
-----
Reshapes a dataframe with columns ``keypoint1_x``, ``keypoint1_y``,
``keypoint1_z``, ``keypoint1_score``, ``keypoint2_x``, etc. into a
position array with dimensions time, space, keypoints, individuals,
and a confidence array (from the score columns) with dimensions
time, keypoints, individuals.

"""
keypoint_names = sorted(
list(
set(
[
col.rsplit("_", 1)[0]
for col in df.columns
if any(col.endswith(f"_{s}") for s in ["x", "y", "z"])
]
)
)
)

n_frames = len(df)
n_keypoints = len(keypoint_names)

# Initialize arrays and fill
position_array = np.zeros(
(n_frames, 3, n_keypoints, 1)
) # 1 for single individual
confidence_array = np.zeros((n_frames, n_keypoints, 1))
for i, kp in enumerate(keypoint_names):
for j, coord in enumerate(["x", "y", "z"]):
position_array[:, j, i, 0] = df[f"{kp}_{coord}"]
confidence_array[:, i, 0] = df[f"{kp}_score"]

individual_names = [individual_name]

return from_numpy(
position_array=position_array,
confidence_array=confidence_array,
individual_names=individual_names,
keypoint_names=keypoint_names,
source_software="Anipose",
fps=fps,
)


def from_anipose_file(
file_path: Path | str,
fps: float | None = None,
individual_name: str = "individual_0",
) -> xr.Dataset:
"""Create a ``movement`` poses dataset from an Anipose 3D .csv file.

Parameters
----------
file_path : pathlib.Path
Path to the Anipose triangulation .csv file
fps : float, optional
The number of frames per second in the video. If None (default),
the ``time`` coordinates will be in frame units.
individual_name : str, optional
Name of the individual, by default "individual_0"

Returns
-------
xarray.Dataset
``movement`` dataset containing the pose tracks, confidence scores,
and associated metadata.

Notes
-----
We currently load only the x, y, z coordinates and the score
(confidence) for each keypoint. Future versions may also load the
number of cameras and the error.

"""
file = ValidFile(
file_path,
expected_permission="r",
expected_suffix=[".csv"],
)
anipose_file = ValidAniposeCSV(file.path)
anipose_df = pd.read_csv(anipose_file.path)

return from_anipose_style_df(
anipose_df, fps=fps, individual_name=individual_name
)
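The reshaping that `from_anipose_style_df` performs can be sketched standalone, without `movement` installed. The dataframe below is hypothetical (keypoints `snout` and `tail` over two frames); the keypoint-name extraction and the array-filling loop mirror the logic in the diff above, producing a position array with dimensions (time, space, keypoints, individuals) and a matching confidence array.

```python
import numpy as np
import pandas as pd

# Hypothetical two-keypoint Anipose-style triangulation output
df = pd.DataFrame(
    {
        "snout_x": [0.0, 1.0], "snout_y": [0.5, 1.5], "snout_z": [1.0, 2.0],
        "snout_score": [0.9, 0.8],
        "tail_x": [2.0, 3.0], "tail_y": [2.5, 3.5], "tail_z": [3.0, 4.0],
        "tail_score": [0.7, 0.6],
    }
)

# Derive keypoint names from the coordinate columns, as the loader does
keypoint_names = sorted(
    {
        col.rsplit("_", 1)[0]
        for col in df.columns
        if any(col.endswith(f"_{s}") for s in ("x", "y", "z"))
    }
)

n_frames, n_keypoints = len(df), len(keypoint_names)
position = np.zeros((n_frames, 3, n_keypoints, 1))  # 1: single individual
confidence = np.zeros((n_frames, n_keypoints, 1))
for i, kp in enumerate(keypoint_names):
    for j, coord in enumerate(("x", "y", "z")):
        position[:, j, i, 0] = df[f"{kp}_{coord}"]
    confidence[:, i, 0] = df[f"{kp}_score"]

print(keypoint_names)   # ['snout', 'tail']
print(position.shape)   # (2, 3, 2, 1)
```

These two arrays are exactly what the loader then hands to `from_numpy` together with the individual and keypoint names.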
88 changes: 88 additions & 0 deletions movement/validators/files.py
@@ -221,6 +221,94 @@ def _file_contains_expected_levels(self, attribute, value):
)


@define
class ValidAniposeCSV:
"""Class for validating Anipose-style 3D pose .csv files.

The validator ensures that the file contains the
expected column names in its header (first row).

Attributes
----------
path : pathlib.Path
Path to the .csv file.

Raises
------
ValueError
If the .csv file does not contain the expected Anipose columns.

"""

path: Path = field(validator=validators.instance_of(Path))

@path.validator
def _file_contains_expected_columns(self, attribute, value):
"""Ensure that the .csv file contains the expected columns."""
expected_column_suffixes = [
"_x",
"_y",
"_z",
"_score",
"_error",
"_ncams",
]
expected_non_keypoint_columns = [
"fnum",
"center_0",
"center_1",
"center_2",
"M_00",
"M_01",
"M_02",
"M_10",
"M_11",
"M_12",
"M_20",
"M_21",
"M_22",
]

# Read the first line of the CSV to get the headers
with open(value) as f:
columns = f.readline().strip().split(",")

# Check that all expected headers are present
if not all(col in columns for col in expected_non_keypoint_columns):
raise log_error(
ValueError,
"CSV file is missing some expected columns. "
f"Expected: {expected_non_keypoint_columns}.",
)

# For other headers, check they have expected suffixes and base names
other_columns = [
col for col in columns if col not in expected_non_keypoint_columns
]
for column in other_columns:
# Check suffix
if not any(
column.endswith(suffix) for suffix in expected_column_suffixes
):
raise log_error(
ValueError,
f"Column {column} ends with an unexpected suffix.",
)
# Get base name by removing suffix
base = column.rsplit("_", 1)[0]
# Check base name has all expected suffixes
if not all(
f"{base}{suffix}" in columns
for suffix in expected_column_suffixes
):
raise log_error(
ValueError,
f"Keypoint {base} is missing some expected suffixes. "
f"Expected: {expected_column_suffixes}; "
f"Got: {columns}.",
)


@define
class ValidVIATracksCSV:
"""Class for validating VIA tracks .csv files.
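The validator's header checks reduce to pure list logic, which can be sketched independently of the `attrs` class. This is a simplified, hypothetical helper (not part of the PR) that applies the same three rules: all metadata columns present, every other column carries a known suffix, and every keypoint has the full set of suffixes.

```python
SUFFIXES = ["_x", "_y", "_z", "_score", "_error", "_ncams"]
METADATA = [
    "fnum", "center_0", "center_1", "center_2",
    "M_00", "M_01", "M_02", "M_10", "M_11", "M_12", "M_20", "M_21", "M_22",
]


def check_anipose_columns(columns: list[str]) -> list[str]:
    """Return a list of problems found in an Anipose-style header."""
    problems = []
    if not all(col in columns for col in METADATA):
        problems.append("missing metadata columns")
    for col in (c for c in columns if c not in METADATA):
        if not any(col.endswith(s) for s in SUFFIXES):
            problems.append(f"unexpected column: {col}")
            continue
        base = col.rsplit("_", 1)[0]
        if not all(f"{base}{s}" in columns for s in SUFFIXES):
            problems.append(f"incomplete keypoint: {base}")
    return problems


good = METADATA + [f"kp0{s}" for s in SUFFIXES]
bad = METADATA + ["kp0_x", "kp0_y"]  # missing z/score/error/ncams
print(check_anipose_columns(good))  # []
print(check_anipose_columns(bad))   # flags keypoint kp0 as incomplete
```

The class in the diff raises on the first problem via `log_error` instead of collecting them, but the conditions it tests are the same.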
55 changes: 55 additions & 0 deletions tests/conftest.py
@@ -199,6 +199,61 @@ def dlc_style_df():
return pd.read_hdf(pytest.DATA_PATHS.get("DLC_single-wasp.predictions.h5"))


@pytest.fixture
def missing_keypoint_columns_anipose_csv_file(tmp_path):
"""Return the path to a fake Anipose-style .csv missing a keypoint column."""
file_path = tmp_path / "missing_keypoint_columns.csv"
columns = [
"fnum",
"center_0",
"center_1",
"center_2",
"M_00",
"M_01",
"M_02",
"M_10",
"M_11",
"M_12",
"M_20",
"M_21",
"M_22",
]
# Here we are missing kp0_z:
columns.extend(["kp0_x", "kp0_y", "kp0_score", "kp0_error", "kp0_ncams"])
with open(file_path, "w") as f:
f.write(",".join(columns))
f.write("\n")
f.write(",".join(["1"] * len(columns)))
return file_path


@pytest.fixture
def spurious_column_anipose_csv_file(tmp_path):
"""Return the path to a fake Anipose-style .csv with a spurious extra column."""
file_path = tmp_path / "spurious_column.csv"
columns = [
"fnum",
"center_0",
"center_1",
"center_2",
"M_00",
"M_01",
"M_02",
"M_10",
"M_11",
"M_12",
"M_20",
"M_21",
"M_22",
]
columns.extend(["funny_column"])
with open(file_path, "w") as f:
f.write(",".join(columns))
f.write("\n")
f.write(",".join(["1"] * len(columns)))
return file_path


@pytest.fixture(
params=[
"SLEAP_single-mouse_EPM.analysis.h5",
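Both fixtures above build invalid headers by hand; the complementary case, a structurally valid file, can be generated the same way. This is a hypothetical helper (not in the PR, which instead uses real sample data) assuming the metadata columns and suffixes from the validator: 13 metadata columns plus 6 columns per keypoint.

```python
import csv
import tempfile
from pathlib import Path

METADATA = [
    "fnum", "center_0", "center_1", "center_2",
    "M_00", "M_01", "M_02", "M_10", "M_11", "M_12", "M_20", "M_21", "M_22",
]
SUFFIXES = ["_x", "_y", "_z", "_score", "_error", "_ncams"]


def write_valid_anipose_csv(path, keypoints=("kp0", "kp1"), n_frames=2):
    """Write a minimal, structurally valid Anipose-style .csv file."""
    columns = METADATA + [f"{kp}{s}" for kp in keypoints for s in SUFFIXES]
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(columns)
        for frame in range(n_frames):
            writer.writerow([frame] + [1.0] * (len(columns) - 1))
    return columns


tmp = Path(tempfile.mkdtemp()) / "valid.csv"
cols = write_valid_anipose_csv(tmp)
print(len(cols))  # 13 metadata + 2 keypoints * 6 suffixes = 25
```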