 Parameters
 ----------
 file_path : pathlib.Path or str
-    Path to the file containing the SLEAP predictions in ".h5"
-    (analysis) format. Alternatively, an ".slp" (labels) file can
+    Path to the file containing the SLEAP predictions in .h5
+    (analysis) format. Alternatively, a .slp (labels) file can
     also be supplied (but this feature is experimental, see Notes).
 fps : float, optional
     The number of frames per second in the video. If None (default),
@@ -490,18 +516,18 @@
Source code for movement.io.load_poses
 Notes
 -----
-    The SLEAP predictions are normally saved in ".slp" files, e.g.
+    The SLEAP predictions are normally saved in .slp files, e.g.
     "v1.predictions.slp". An analysis file, suffixed with ".h5" can be exported
-    from the ".slp" file, using either the command line tool `sleap-convert`
+    from the .slp file, using either the command line tool `sleap-convert`
     (with the "--format analysis" option enabled) or the SLEAP GUI (Choose
     "Export Analysis HDF5…" from the "File" menu) [1]_. This is the
     preferred format for loading pose tracks from SLEAP into *movement*.

-    You can also directly load the ".slp" file. However, if the file contains
+    You can also directly load the .slp file. However, if the file contains
     multiple videos, only the pose tracks from the first video will be loaded.
     If the file contains a mix of user-labelled and predicted instances, user
     labels are prioritised over predicted instances to mirror SLEAP's approach
-    when exporting ".h5" analysis files [2]_.
+    when exporting .h5 analysis files [2]_.

     *movement* expects the tracks to be assigned and proofread before loading
     them, meaning each track is interpreted as a single individual/animal. If
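For orientation, a minimal loading sketch for the two formats described above (the file names are illustrative placeholders):

```python
from movement.io import load_poses

# Preferred: load from an exported SLEAP analysis file (.h5)
ds = load_poses.from_sleap_file("v1.predictions.analysis.h5", fps=30)

# Experimental: load directly from the labels (.slp) file (see caveats above)
ds = load_poses.from_sleap_file("v1.predictions.slp", fps=30)
```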
@@ -548,6 +574,36 @@
     return ds
+
+
+def from_lp_file(
+    file_path: Union[Path, str], fps: Optional[float] = None
+) -> xr.Dataset:
+"""Load pose tracking data from a LightningPose (LP) output file
+ into an xarray Dataset.
+
+ Parameters
+ ----------
+ file_path : pathlib.Path or str
+ Path to the file containing the LP predicted poses, in .csv format.
+ fps : float, optional
+ The number of frames per second in the video. If None (default),
+ the `time` coordinates will be in frame numbers.
+
+ Returns
+ -------
+ xarray.Dataset
+ Dataset containing the pose tracks, confidence scores, and metadata.
+
+ Examples
+ --------
+ >>> from movement.io import load_poses
+ >>> ds = load_poses.from_lp_file("path/to/file.csv", fps=30)
+ """
+
+    return _from_lp_or_dlc_file(
+        file_path=file_path, source_software="LightningPose", fps=fps
+    )
 Parameters
 ----------
 file_path : pathlib.Path or str
-    Path to the file containing the DLC predicted poses, either in ".h5"
-    or ".csv" format.
+    Path to the file containing the DLC predicted poses, either in .h5
+    or .csv format.
 fps : float, optional
     The number of frames per second in the video. If None (default),
     the `time` coordinates will be in frame numbers.
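A brief usage sketch for the above (the paths are placeholders):

```python
from movement.io import load_poses

# Load DLC predictions from .h5 (or .csv), with a known frame rate
ds = load_poses.from_dlc_file("/path/to/file.h5", fps=30)

# With fps omitted (None), `time` coordinates are frame numbers
ds = load_poses.from_dlc_file("/path/to/file.csv")
```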
@@ -578,10 +634,41 @@
+
+
+def _from_lp_or_dlc_file(
+    file_path: Union[Path, str],
+    source_software: Literal["LightningPose", "DeepLabCut"],
+    fps: Optional[float] = None,
+) -> xr.Dataset:
+    """Load pose tracking data from a DeepLabCut (DLC) or
+    a LightningPose (LP) output file into an xarray Dataset.
+
+ Parameters
+ ----------
+ file_path : pathlib.Path or str
+ Path to the file containing the DLC predicted poses, either in .h5
+ or .csv format.
+    source_software : {'LightningPose', 'DeepLabCut'}
+        Name of the software from which the pose tracks were loaded.
+ fps : float, optional
+ The number of frames per second in the video. If None (default),
+ the `time` coordinates will be in frame numbers.
+
+ Returns
+ -------
+ xarray.Dataset
+ Dataset containing the pose tracks, confidence scores, and metadata.
+ """
+
+    expected_suffix = [".csv"]
+    if source_software == "DeepLabCut":
+        expected_suffix.append(".h5")
+
     file = ValidFile(
-        file_path,
-        expected_permission="r",
-        expected_suffix=[".csv", ".h5"],
+        file_path, expected_permission="r", expected_suffix=expected_suffix
     )

     # Load the DLC poses into a DataFrame
@@ -595,12 +682,18 @@
     ds = from_dlc_df(df=df, fps=fps)

     # Add metadata as attrs
-    ds.attrs["source_software"] = "DeepLabCut"
+    ds.attrs["source_software"] = source_software
     ds.attrs["source_file"] = file.path.as_posix()

+    # If source_software="LightningPose", we need to re-validate (because the
+    # validation call in from_dlc_df was run with source_software="DeepLabCut").
+    # This rerun enforces a single individual for LightningPose datasets.
+    if source_software == "LightningPose":
+        ds.poses.validate()
+
     logger.info(f"Loaded pose tracks from {file.path}:")
     logger.info(ds)
     return ds
     This function only considers SLEAP instances in the first video
     of the SLEAP `Labels` object. User-labelled instances are
     prioritised over predicted instances, mirroring SLEAP's approach
-    when exporting ".h5" analysis files [1]_.
+    when exporting .h5 analysis files [1]_.

     This function is adapted from `Labels.numpy()`
     from the `sleap_io` package [2]_.
@@ -906,23 +1001,15 @@
     to_dlc_file : Save the xarray dataset containing pose tracks directly
         to a DeepLabCut-style .h5 or .csv file.
     """
-    if not isinstance(ds, xr.Dataset):
-        raise log_error(
-            ValueError, f"Expected an xarray Dataset, but got {type(ds)}."
-        )
-
-    ds.poses.validate()  # validate the dataset
+    _validate_dataset(ds)

     scorer = ["movement"]
     bodyparts = ds.coords["keypoints"].data.tolist()
     coords = ds.coords["space"].data.tolist() + ["likelihood"]
@@ -558,17 +580,18 @@
Source code for movement.io.save_poses
 file_path : pathlib.Path or str
     Path to the file to save the DLC poses to. The file extension
     must be either .h5 (recommended) or .csv.
-split_individuals : bool, optional
+split_individuals : bool or "auto", optional
+    Whether to save individuals to separate files or to the same file.\n
     If True, each individual will be saved to a separate file,
     formatted as in a single-animal DeepLabCut project - i.e. without
     the "individuals" column level. The individual's name will be
     appended to the file path, just before the file extension, i.e.
-    "/path/to/filename_individual1.h5".
+    "/path/to/filename_individual1.h5".\n
     If False, all individuals will be saved to the same file,
     formatted as in a multi-animal DeepLabCut project - i.e. the
     columns will include the "individuals" level. The file path will
-    not be modified.
-    If "auto" the argument's value be determined based on the number of
+    not be modified.\n
+    If "auto", the argument's value is determined based on the number of
     individuals in the dataset: True if there is only one, and False
     if there are more than one. This is the default.
@@ -581,19 +604,11 @@
     if isinstance(df_all, pd.DataFrame):
         _save_dlc_df(file.path, df_all)
     logger.info(f"Saved PoseTracks dataset to {file.path}.")
+
+
+
+def to_lp_file(
+    ds: xr.Dataset,
+    file_path: Union[str, Path],
+) -> None:
+"""Save the xarray dataset containing pose tracks to a LightningPose-style
+ .csv file. See Notes for more details.
+
+ Parameters
+ ----------
+ ds : xarray.Dataset
+ Dataset containing pose tracks, confidence scores, and metadata.
+ file_path : pathlib.Path or str
+ Path to the .csv file to save the poses to.
+
+ Notes
+ -----
+ LightningPose saves pose estimation outputs as .csv files, using the same
+ format as single-animal DeepLabCut projects. Therefore, under the hood,
+ this function calls ``to_dlc_file`` with ``split_individuals=True``. This
+ setting means that each individual is saved to a separate file, with
+ the individual's name appended to the file path, just before the file
+ extension, i.e. "/path/to/filename_individual1.csv".
+
+ See Also
+ --------
+ to_dlc_file : Save the xarray dataset containing pose tracks to a
+ DeepLabCut-style .h5 or .csv file.
+ """
+
+    file = _validate_file_path(file_path=file_path, expected_suffix=[".csv"])
+    _validate_dataset(ds)
+    to_dlc_file(ds, file.path, split_individuals=True)
+
+
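As the Notes state, the following two calls should be equivalent (the path is a placeholder):

```python
from movement.io import save_poses

save_poses.to_lp_file(ds, "/path/to/file.csv")
save_poses.to_dlc_file(ds, "/path/to/file.csv", split_individuals=True)
```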
+
+def to_sleap_analysis_file(
+    ds: xr.Dataset, file_path: Union[str, Path]
+) -> None:
+"""Save the xarray dataset containing pose tracks to a SLEAP-style
+ .h5 analysis file.
+
+ Parameters
+ ----------
+ ds : xarray.Dataset
+ Dataset containing pose tracks, confidence scores, and metadata.
+ file_path : pathlib.Path or str
+ Path to the file to save the poses to. The file extension must be .h5.
+
+ Notes
+ -----
+ The output file will contain the following keys (as in SLEAP .h5 analysis
+ files):
+ "track_names", "node_names", "tracks", "track_occupancy", "point_scores",
+ "instance_scores", "tracking_scores", "labels_path", "edge_names",
+ "edge_inds", "video_path", "video_ind", "provenance" [1]_.
+ However, only "track_names", "node_names", "tracks", "track_occupancy"
+ and "point_scores" will contain data extracted from the input dataset.
+ "labels_path" will contain the path to the input file only if the source
+ file of the dataset is a SLEAP .slp file. Otherwise, it will be an empty
+ string.
+ The other attributes and data variables that are not present in the input
+ dataset will contain default (empty) values.
+
+ References
+ ----------
+ .. [1] https://sleap.ai/api/sleap.info.write_tracking_h5.html
+
+ Examples
+ --------
+ >>> from movement.io import save_poses, load_poses
+ >>> ds = load_poses.from_dlc_file("path/to/file.h5")
+ >>> save_poses.to_sleap_analysis_file(
+ ... ds, "/path/to/file_sleap.analysis.h5"
+ ... )
+ """
+
+ file=_validate_file_path(file_path=file_path,expected_suffix=[".h5"])
+ _validate_dataset(ds)
+
+ ds=_remove_unoccupied_tracks(ds)
+
+ # Target shapes:
+ # "track_occupancy" n_frames * n_individuals
+ # "tracks" n_individuals * n_space * n_keypoints * n_frames
+ # "track_names" n_individuals
+ # "point_scores" n_individuals * n_keypoints * n_frames
+ # "instance_scores" n_individuals * n_frames
+ # "tracking_scores" n_individuals * n_frames
+ individual_names=ds.individuals.values.tolist()
+ n_individuals=len(individual_names)
+ keypoint_names=ds.keypoints.values.tolist()
+ # Compute frame indices from fps, if set
+ ifds.fpsisnotNone:
+ frame_idxs=np.rint(ds.time.values*ds.fps).astype(int).tolist()
+ else:
+ frame_idxs=ds.time.values.astype(int).tolist()
+ n_frames=frame_idxs[-1]-frame_idxs[0]+1
+ pos_x=ds.pose_tracks.sel(space="x").values
+ # Mask denoting which individuals are present in each frame
+ track_occupancy=(~np.all(np.isnan(pos_x),axis=2)).astype(int)
+ tracks=np.transpose(ds.pose_tracks.data,(1,3,2,0))
+ point_scores=np.transpose(ds.confidence.data,(1,2,0))
+ instance_scores=np.full((n_individuals,n_frames),np.nan,dtype=float)
+ tracking_scores=np.full((n_individuals,n_frames),np.nan,dtype=float)
+ labels_path=(
+ ds.source_fileifPath(ds.source_file).suffix==".slp"else""
+ )
+ data_dict=dict(
+ track_names=individual_names,
+ node_names=keypoint_names,
+ tracks=tracks,
+ track_occupancy=track_occupancy,
+ point_scores=point_scores,
+ instance_scores=instance_scores,
+ tracking_scores=tracking_scores,
+ labels_path=labels_path,
+ edge_names=[],
+ edge_inds=[],
+ video_path="",
+ video_ind=0,
+ provenance="{}",
+ )
+ withh5py.File(file.path,"w")asf:
+ forkey,valindata_dict.items():
+ ifisinstance(val,np.ndarray):
+ f.create_dataset(
+ key,
+ data=val,
+ compression="gzip",
+ compression_opts=9,
+ )
+ else:
+ f.create_dataset(key,data=val)
+ logger.info(f"Saved PoseTracks dataset to {file.path}.")
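A quick way to sanity-check the output is to reopen it with `h5py` and inspect the keys and array shapes listed in the Notes (the path is hypothetical):

```python
import h5py

with h5py.File("/path/to/file_sleap.analysis.h5", "r") as f:
    print(sorted(f.keys()))  # track_names, node_names, tracks, ...
    print(f["tracks"].shape)  # (n_individuals, n_space, n_keypoints, n_frames)
    print(f["track_occupancy"].shape)  # (n_frames, n_individuals)
```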
+
+
+def _remove_unoccupied_tracks(ds: xr.Dataset):
+"""Remove tracks that are completely unoccupied in the xarray dataset.
+
+ Parameters
+ ----------
+ ds : xarray.Dataset
+ Dataset containing pose tracks, confidence scores, and metadata.
+
+ Returns
+ -------
+ xarray.Dataset
+ The input dataset without the unoccupied tracks.
+ """
+
+    all_nan = ds.pose_tracks.isnull().all(dim=["keypoints", "space", "time"])
+    return ds.where(~all_nan, drop=True)
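A self-contained sketch of what this helper does, on a toy dataset with the same dimension names (the individual names are made up):

```python
import numpy as np
import xarray as xr

data = np.random.rand(5, 2, 3, 2)  # time, individuals, keypoints, space
data[:, 1, :, :] = np.nan  # the second track is never occupied
ds = xr.Dataset(
    {"pose_tracks": (("time", "individuals", "keypoints", "space"), data)},
    coords={"individuals": ["mouse0", "mouse1"]},
)
all_nan = ds.pose_tracks.isnull().all(dim=["keypoints", "space", "time"])
print(ds.where(~all_nan, drop=True).individuals.values)  # ['mouse0']
```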
+
+
+def _validate_file_path(
+    file_path: Union[str, Path], expected_suffix: list[str]
+) -> ValidFile:
+"""Validate the input file path by checking that the file has
+ write permission and expected suffix(es). If the file is not valid,
+ an appropriate error is raised.
+
+ Parameters
+ ----------
+ file_path : pathlib.Path or str
+ Path to the file to validate.
+ expected_suffix : list of str
+ Expected suffix(es) for the file.
+
+ Returns
+ -------
+ ValidFile
+ The validated file.
+
+ Raises
+ ------
+ OSError
+ If the file cannot be written.
+ ValueError
+ If the file does not have the expected suffix.
+ """
+
+    try:
+        file = ValidFile(
+            file_path,
+            expected_permission="w",
+            expected_suffix=expected_suffix,
+        )
+    except (OSError, ValueError) as error:
+        logger.error(error)
+        raise error
+    return file
+
+
+def _validate_dataset(ds: xr.Dataset) -> None:
+    """Validate that the input is an xarray Dataset with valid PoseTracks.
+
+ Parameters
+ ----------
+ ds : xarray.Dataset
+ Dataset to validate.
+
+ Raises
+ ------
+ ValueError
+ If `ds` is not an xarray Dataset with valid PoseTracks.
+ """
+
+    if not isinstance(ds, xr.Dataset):
+        raise log_error(
+            ValueError, f"Expected an xarray Dataset, but got {type(ds)}."
+        )
+    ds.poses.validate()  # validate the dataset
     etc.
     fps : float, optional
         Frames per second of the video. Defaults to None.
+    source_software : str, optional
+        Name of the software from which the pose tracks were loaded.
+        Defaults to None.
     """

     # Define class attributes
@@ -673,6 +702,10 @@
diff --git a/_sources/api/movement.io.load_poses.from_lp_file.rst.txt b/_sources/api/movement.io.load_poses.from_lp_file.rst.txt
new file mode 100644
index 00000000..8c807f45
--- /dev/null
+++ b/_sources/api/movement.io.load_poses.from_lp_file.rst.txt
@@ -0,0 +1,6 @@
+movement.io.load\_poses.from\_lp\_file
+======================================
+
+.. currentmodule:: movement.io.load_poses
+
+.. autofunction:: from_lp_file
\ No newline at end of file
diff --git a/_sources/api/movement.io.save_poses.to_lp_file.rst.txt b/_sources/api/movement.io.save_poses.to_lp_file.rst.txt
new file mode 100644
index 00000000..7e293669
--- /dev/null
+++ b/_sources/api/movement.io.save_poses.to_lp_file.rst.txt
@@ -0,0 +1,6 @@
+movement.io.save\_poses.to\_lp\_file
+====================================
+
+.. currentmodule:: movement.io.save_poses
+
+.. autofunction:: to_lp_file
\ No newline at end of file
diff --git a/_sources/api/movement.io.save_poses.to_sleap_analysis_file.rst.txt b/_sources/api/movement.io.save_poses.to_sleap_analysis_file.rst.txt
new file mode 100644
index 00000000..197ac9af
--- /dev/null
+++ b/_sources/api/movement.io.save_poses.to_sleap_analysis_file.rst.txt
@@ -0,0 +1,6 @@
+movement.io.save\_poses.to\_sleap\_analysis\_file
+=================================================
+
+.. currentmodule:: movement.io.save_poses
+
+.. autofunction:: to_sleap_analysis_file
\ No newline at end of file
diff --git a/_sources/api/movement.io.validators.ValidPoseTracks.rst.txt b/_sources/api/movement.io.validators.ValidPoseTracks.rst.txt
index 244571fc..d02a8d8f 100644
--- a/_sources/api/movement.io.validators.ValidPoseTracks.rst.txt
+++ b/_sources/api/movement.io.validators.ValidPoseTracks.rst.txt
@@ -28,5 +28,6 @@
~ValidPoseTracks.individual_names
~ValidPoseTracks.keypoint_names
~ValidPoseTracks.fps
+ ~ValidPoseTracks.source_software
\ No newline at end of file
diff --git a/_sources/api_index.rst.txt b/_sources/api_index.rst.txt
index b51367eb..c191e086 100644
--- a/_sources/api_index.rst.txt
+++ b/_sources/api_index.rst.txt
@@ -13,6 +13,7 @@ Input/Output
from_sleap_file
from_dlc_file
from_dlc_df
+ from_lp_file
.. currentmodule:: movement.io.save_poses
.. autosummary::
@@ -20,6 +21,8 @@ Input/Output
to_dlc_file
to_dlc_df
+ to_sleap_analysis_file
+ to_lp_file
.. currentmodule:: movement.io.validators
.. autosummary::
diff --git a/_sources/community/roadmap.md.txt b/_sources/community/roadmap.md.txt
index e5e5629b..799f4c95 100644
--- a/_sources/community/roadmap.md.txt
+++ b/_sources/community/roadmap.md.txt
@@ -19,7 +19,7 @@ The following features are being considered for the first stable version `v1.0`.
## Short-term milestone - `v0.1`
We plan to release version `v0.1` of movement in early 2024, providing a minimal set of features to demonstrate the project's potential and to gather feedback from users. At minimum, it should include the following features:
-- Importing pose tracks from [DeepLabCut](dlc:) and [SLEAP](sleap:) into a common `xarray.Dataset` structure. This has been largely accomplished, but some remaining work is required to handle special cases.
+- Importing pose tracks from [DeepLabCut](dlc:), [SLEAP](sleap:) and [LightningPose](lp:) into a common `xarray.Dataset` structure. This has already been accomplished.
- Visualisation of pose tracks using [napari](napari:). We aim to represent pose tracks via the [napari tracks layer](napari:howtos/layers/tracks) and overlay them on a video frame. This should be accompanied by a minimal GUI widget to allow selection of a subset of the tracks to plot. This line of work is still in a pilot phase. We may decide to use a different visualisation framework if we encounter roadblocks.
- At least one function for cleaning the pose tracks. Once the first one is in place, it can serve as a template for others.
- Computing velocity and acceleration from pose tracks. Again, this should serve as a template for other kinematic variables.
diff --git a/_sources/examples/load_and_explore_poses.rst.txt b/_sources/examples/load_and_explore_poses.rst.txt
index fb605a3e..f6f8292b 100644
--- a/_sources/examples/load_and_explore_poses.rst.txt
+++ b/_sources/examples/load_and_explore_poses.rst.txt
@@ -148,7 +148,7 @@ Load the dataset
.. GENERATED FROM PYTHON SOURCE LINES 40-43
The loaded dataset contains two data variables:
-``pose_tracks`` and ``confidence```
+``pose_tracks`` and ``confidence``.
To get the pose tracks:
.. GENERATED FROM PYTHON SOURCE LINES 43-45
@@ -224,7 +224,7 @@ using ``xarray``'s built-in plotting methods:
.. code-block:: none
-
+
@@ -255,7 +255,7 @@ for all individuals:
.. code-block:: none
-
+
@@ -299,14 +299,14 @@ For example, we can use ``matplotlib`` to plot trajectories
.. code-block:: none
-
+
.. rst-class:: sphx-glr-timing
- **Total running time of the script:** (0 minutes 0.720 seconds)
+ **Total running time of the script:** (0 minutes 0.741 seconds)
.. _sphx_glr_download_examples_load_and_explore_poses.py:
diff --git a/_sources/examples/sg_execution_times.rst.txt b/_sources/examples/sg_execution_times.rst.txt
index 0dedf237..1fa84721 100644
--- a/_sources/examples/sg_execution_times.rst.txt
+++ b/_sources/examples/sg_execution_times.rst.txt
@@ -6,7 +6,7 @@
Computation times
=================
-**00:00.720** total execution time for 1 file **from examples**:
+**00:00.741** total execution time for 1 file **from examples**:
.. container::
@@ -33,5 +33,5 @@ Computation times
- Time
- Mem (MB)
* - :ref:`sphx_glr_examples_load_and_explore_poses.py` (``load_and_explore_poses.py``)
- - 00:00.720
+ - 00:00.741
- 0.0
diff --git a/_sources/getting_started.md.txt b/_sources/getting_started.md.txt
index d3513543..bf7b7a26 100644
--- a/_sources/getting_started.md.txt
+++ b/_sources/getting_started.md.txt
@@ -61,13 +61,13 @@ First import the `movement.io.load_poses` module:
from movement.io import load_poses
```
-Then, use the `from_dlc_file` or `from_sleap_file` functions to load the data.
+Then, depending on the source of your data, use one of the following functions:
::::{tab-set}
:::{tab-item} SLEAP
-Load from [SLEAP analysis files](sleap:tutorials/analysis) (`.h5`):
+Load from [SLEAP analysis files](sleap:tutorials/analysis) (.h5):
```python
ds = load_poses.from_sleap_file("/path/to/file.analysis.h5", fps=30)
```
@@ -75,12 +75,12 @@ ds = load_poses.from_sleap_file("/path/to/file.analysis.h5", fps=30)
:::{tab-item} DeepLabCut
-Load pose estimation outputs from `.h5` files:
+Load pose estimation outputs from .h5 files:
```python
ds = load_poses.from_dlc_file("/path/to/file.h5", fps=30)
```
-You may also load `.csv` files (assuming they are formatted as DeepLabCut expects them):
+You may also load .csv files (assuming they are formatted as DeepLabCut expects them):
```python
ds = load_poses.from_dlc_file("/path/to/file.csv", fps=30)
```
@@ -95,6 +95,14 @@ ds = load_poses.from_dlc_df(df, fps=30)
```
:::
+:::{tab-item} LightningPose
+
+Load from LightningPose (LP) files (.csv):
+```python
+ds = load_poses.from_lp_file("/path/to/file.analysis.csv", fps=30)
+```
+:::
+
::::
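
The `fps` argument controls the `time` coordinates of the returned dataset (this mirrors the docstrings above; the exact values depend on your data):

```python
ds = load_poses.from_sleap_file("/path/to/file.analysis.h5")  # fps=None
print(ds.time.values[:3])  # frame numbers, e.g. [0 1 2]

ds = load_poses.from_sleap_file("/path/to/file.analysis.h5", fps=30)
print(ds.time.values[:3])  # seconds, e.g. [0.     0.0333 0.0667]
```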
You can also try movement out on some sample data included in the package.
@@ -144,6 +152,8 @@ representation of the dataset by simply typing its name - e.g. `ds` - in a cell.
### Dataset structure
+![](_static/dataset_structure.png)
+
The movement `xarray.Dataset` has the following dimensions:
- `time`: the number of frames in the video
- `individuals`: the number of individuals in the video
@@ -206,19 +216,67 @@ to visualise the data. Check out the [Load and explore pose tracks](./examples/l
example for inspiration.
## Saving data
-You can save movement datasets to disk in a variety of formats.
-Currently, only saving to DeepLabCut-style files is supported.
+You can save movement datasets to disk in a variety of formats, including
+DeepLabCut-style files (.h5 or .csv) and [SLEAP-style analysis files](sleap:tutorials/analysis) (.h5).
+
+First import the `movement.io.save_poses` module:
```python
from movement.io import save_poses
+```
-save_poses.to_dlc_file(ds, "/path/to/file.h5") # preferred
+Then, depending on the desired format, use one of the following functions:
+
+:::::{tab-set}
+
+::::{tab-item} SLEAP
+
+Save to SLEAP-style analysis files (.h5):
+```python
+save_poses.to_sleap_analysis_file(ds, "/path/to/file.h5")
+```
+
+:::{note}
+When saving to SLEAP-style files, only `track_names`, `node_names`, `tracks`, `track_occupancy`,
+and `point_scores` are saved. `labels_path` will only be saved if the source
+file of the dataset is a SLEAP .slp file. Otherwise, it will be an empty string.
+Other attributes and data variables
+(i.e., `instance_scores`, `tracking_scores`, `edge_names`, `edge_inds`, `video_path`,
+`video_ind`, and `provenance`) are not currently supported. To learn more about what
+each attribute and data variable represents, see the
+[SLEAP documentation](sleap:api/sleap.info.write_tracking_h5.html#module-sleap.info.write_tracking_h5).
+:::
+::::
+
+::::{tab-item} DeepLabCut
+
+Save to DeepLabCut-style files (.h5 or .csv):
+```python
+save_poses.to_dlc_file(ds, "/path/to/file.h5") # preferred format
save_poses.to_dlc_file(ds, "/path/to/file.csv")
```
-Instead of saving to file directly, you can also convert the dataset to a
-DeepLabCut-style `pandas.DataFrame` first:
+Alternatively, you can first convert the dataset to a
+DeepLabCut-style `pandas.DataFrame` using the `to_dlc_df` function:
```python
df = save_poses.to_dlc_df(ds)
```
and then save it to file using any `pandas` method, e.g. `to_hdf` or `to_csv`.
+::::
+
+::::{tab-item} LightningPose
+
+Save to LightningPose (LP) files (.csv):
+```python
+save_poses.to_lp_file(ds, "/path/to/file.csv")
+```
+:::{note}
+Because LP saves pose estimation outputs in the same format as single-animal
+DeepLabCut projects, the above command is equivalent to:
+```python
+save_poses.to_dlc_file(ds, "/path/to/file.csv", split_individuals=True)
+```
+:::
+
+::::
+:::::
diff --git a/_sources/index.md.txt b/_sources/index.md.txt
index da59b6f0..95863aaa 100644
--- a/_sources/index.md.txt
+++ b/_sources/index.md.txt
@@ -1,7 +1,7 @@
(target-movement)=
# movement
-Python tools for analysing body movements across space and time, to aid the study of animal behaviour in neuroscience.
+A Python toolbox for analysing body movements across space and time, to aid the study of animal behaviour in neuroscience.
::::{grid} 1 2 2 3
:gutter: 3
@@ -28,6 +28,8 @@ Get in touch and contribute.
:::
::::
+![](_static/movement_overview.png)
+
## Overview
Pose estimation tools, such as [DeepLabCut](dlc:) and [SLEAP](sleap:) are now commonplace when processing video data of animal behaviour. There is not yet a standardised, easy-to-use way to process the *pose tracks* produced from these software packages.
diff --git a/_sources/sg_execution_times.rst.txt b/_sources/sg_execution_times.rst.txt
index 5ef4ce2c..f8c672db 100644
--- a/_sources/sg_execution_times.rst.txt
+++ b/_sources/sg_execution_times.rst.txt
@@ -6,7 +6,7 @@
Computation times
=================
-**00:00.720** total execution time for 1 file **from all galleries**:
+**00:00.741** total execution time for 1 file **from all galleries**:
.. container::
@@ -33,5 +33,5 @@ Computation times
- Time
- Mem (MB)
* - :ref:`sphx_glr_examples_load_and_explore_poses.py` (``../../examples/load_and_explore_poses.py``)
- - 00:00.720
+ - 00:00.741
- 0.0
diff --git a/_sources/snippets/get-in-touch.md.txt b/_sources/snippets/get-in-touch.md.txt
index ab5f95c3..6e10dc3e 100644
--- a/_sources/snippets/get-in-touch.md.txt
+++ b/_sources/snippets/get-in-touch.md.txt
@@ -1,3 +1,3 @@
:::{admonition} Get in touch
-You are welcome to chat with the team on [Zulip](movement-zulip:). You may also [open an issue](movement-github:issues) to report a bug or request a new feature.
+You are welcome to chat with the team on [Zulip](movement-zulip:). You can also [open an issue](movement-github:issues) to report a bug or request a new feature.
:::
diff --git a/_sources/snippets/status-warning.md.txt b/_sources/snippets/status-warning.md.txt
index 64ba8431..a9ccbbb7 100644
--- a/_sources/snippets/status-warning.md.txt
+++ b/_sources/snippets/status-warning.md.txt
@@ -1,6 +1,4 @@
:::{admonition} Status
:class: warning
-- 🏗️ The package is currently in early development. Stay tuned ⌛
-- It is not sufficiently tested to be used for scientific analysis.
-- The interface is subject to changes.
+The package is currently in early development and the interface is subject to change. Feel free to play around and provide feedback.
:::
diff --git a/_static/css/custom.css b/_static/css/custom.css
new file mode 100644
index 00000000..40e09e63
--- /dev/null
+++ b/_static/css/custom.css
@@ -0,0 +1,32 @@
+html[data-theme=dark] {
+  --pst-color-primary: #04B46D;
+  --pst-color-link: var(--pst-color-primary);
+}
+
+html[data-theme=light] {
+  --pst-color-primary: #03A062;
+  --pst-color-link: var(--pst-color-primary);
+}
+
+body .bd-article-container {
+  max-width: 100em !important;
+}
+
+.col {
+  flex: 0 0 50%;
+  max-width: 50%;
+}
+
+.img-sponsor {
+  height: 50px;
+  padding-top: 5px;
+  padding-right: 5px;
+  padding-bottom: 5px;
+  padding-left: 5px;
+}
+
+.things-in-a-row {
+  display: flex;
+  flex-wrap: wrap;
+  justify-content: space-between;
+}
diff --git a/_static/dark-logo-gatsby.png b/_static/dark-logo-gatsby.png
new file mode 100644
index 00000000..97cb5036
Binary files /dev/null and b/_static/dark-logo-gatsby.png differ
diff --git a/_static/dark-logo-niu.png b/_static/dark-logo-niu.png
new file mode 100644
index 00000000..324b1bca
Binary files /dev/null and b/_static/dark-logo-niu.png differ
diff --git a/_static/dark-logo-swc.png b/_static/dark-logo-swc.png
new file mode 100644
index 00000000..845c93cd
Binary files /dev/null and b/_static/dark-logo-swc.png differ
diff --git a/_static/dark-logo-ucl.png b/_static/dark-logo-ucl.png
new file mode 100644
index 00000000..2f9b77c2
Binary files /dev/null and b/_static/dark-logo-ucl.png differ
diff --git a/_static/dark-wellcome-logo.png b/_static/dark-wellcome-logo.png
new file mode 100644
index 00000000..9af15310
Binary files /dev/null and b/_static/dark-wellcome-logo.png differ
diff --git a/_static/dataset_structure.png b/_static/dataset_structure.png
new file mode 100644
index 00000000..36bc595d
Binary files /dev/null and b/_static/dataset_structure.png differ
diff --git a/_static/documentation_options.js b/_static/documentation_options.js
index f69e53b4..6270b2c8 100644
--- a/_static/documentation_options.js
+++ b/_static/documentation_options.js
@@ -1,6 +1,6 @@
var DOCUMENTATION_OPTIONS = {
URL_ROOT: document.getElementById("documentation_options").getAttribute('data-url_root'),
- VERSION: '0.0.11',
+ VERSION: '0.0.12',
LANGUAGE: 'en',
COLLAPSE_INDEX: false,
BUILDER: 'html',
diff --git a/_static/light-logo-gatsby.png b/_static/light-logo-gatsby.png
new file mode 100644
index 00000000..d191f1aa
Binary files /dev/null and b/_static/light-logo-gatsby.png differ
diff --git a/_static/light-logo-niu.png b/_static/light-logo-niu.png
new file mode 100644
index 00000000..efba9404
Binary files /dev/null and b/_static/light-logo-niu.png differ
diff --git a/_static/light-logo-swc.png b/_static/light-logo-swc.png
new file mode 100644
index 00000000..98f29491
Binary files /dev/null and b/_static/light-logo-swc.png differ
diff --git a/_static/light-logo-ucl.png b/_static/light-logo-ucl.png
new file mode 100644
index 00000000..9222f876
Binary files /dev/null and b/_static/light-logo-ucl.png differ
diff --git a/_static/light-wellcome-logo.png b/_static/light-wellcome-logo.png
new file mode 100644
index 00000000..762cb392
Binary files /dev/null and b/_static/light-wellcome-logo.png differ
diff --git a/_static/movement_overview.png b/_static/movement_overview.png
new file mode 100644
index 00000000..8af12daa
Binary files /dev/null and b/_static/movement_overview.png differ
diff --git a/api/movement.datasets.fetch_pose_data_path.html b/api/movement.datasets.fetch_pose_data_path.html
index 2330741c..2eaa907b 100644
--- a/api/movement.datasets.fetch_pose_data_path.html
+++ b/api/movement.datasets.fetch_pose_data_path.html
@@ -124,7 +126,7 @@
-You can also directly load the ".slp" file. However, if the file contains
+You can also directly load the .slp file. However, if the file contains
multiple videos, only the pose tracks from the first video will be loaded.
If the file contains a mix of user-labelled and predicted instances, user
labels are prioritised over predicted instances to mirror SLEAP’s approach
-when exporting ".h5" analysis files [2].
+when exporting .h5 analysis files [2].
movement expects the tracks to be assigned and proofread before loading
them, meaning each track is interpreted as a single individual/animal. If
no tracks are found in the file, movement assumes that this is a
@@ -570,23 +599,15 @@
 file_path (pathlib.Path or str) – Path to the .csv file to save the poses to.
+
+Return type:
+    None
+
+Notes
+
+LightningPose saves pose estimation outputs as .csv files, using the same
+format as single-animal DeepLabCut projects. Therefore, under the hood,
+this function calls to_dlc_file with split_individuals=True. This
+setting means that each individual is saved to a separate file, with
+the individual’s name appended to the file path, just before the file
+extension, i.e. “/path/to/filename_individual1.csv”.
 file_path (pathlib.Path or str) – Path to the file to save the poses to. The file extension must be .h5.
+
+Return type:
+    None
+
+Notes
+
+The output file will contain the following keys (as in SLEAP .h5 analysis
+files):
+“track_names”, “node_names”, “tracks”, “track_occupancy”, “point_scores”,
+“instance_scores”, “tracking_scores”, “labels_path”, “edge_names”,
+“edge_inds”, “video_path”, “video_ind”, “provenance” [1].
+However, only “track_names”, “node_names”, “tracks”, “track_occupancy”
+and “point_scores” will contain data extracted from the input dataset.
+“labels_path” will contain the path to the input file only if the source
+file of the dataset is a SLEAP .slp file. Otherwise, it will be an empty
+string.
+The other attributes and data variables that are not present in the input
+dataset will contain default (empty) values.
Convert an xarray dataset containing pose tracks into a single DeepLabCut-style pandas DataFrame or a dictionary of DataFrames per individual, depending on the 'split_individuals' argument.
At its core, movement handles trajectories of keypoints, which are specific body parts of an individual. An individual’s posture or pose is represented by a set of keypoint coordinates, given in 2D (x,y) or 3D (x,y,z). The sequential collection of poses over time forms pose tracks. In neuroscience, these tracks are typically extracted from video data using software like DeepLabCut or SLEAP.
With movement, our vision is to present a consistent interface for pose tracks and to analyze them using modular and accessible tools. We aim to accommodate data from a range of pose estimation packages, in 2D or 3D, tracking single or multiple individuals. The focus will be on providing functionalities for data cleaning, visualisation and motion quantification (see the Roadmap for details).
While movement is not designed for behaviour classification or action segmentation, it may extract features useful for these tasks. We are planning to develop separate packages for this purpose, which will be compatible with movement and the existing ecosystem of related tools.
The roadmap outlines current development priorities and aims to guide core developers and to encourage community contributions. It is a living document and will be updated as the project evolves.
The roadmap is not meant to limit movement features, as we are open to suggestions and contributions. Join our Zulip chat to share your ideas. We will take community demand and feedback into account when planning future releases.
We plan to release version v0.1 of movement in early 2024, providing a minimal set of features to demonstrate the project’s potential and to gather feedback from users. At minimum, it should include the following features:
-Importing pose tracks from DeepLabCut and SLEAP into a common xarray.Dataset structure. This has been largely accomplished, but some remaining work is required to handle special cases.
+Importing pose tracks from DeepLabCut, SLEAP and LightningPose into a common xarray.Dataset structure. This has already been accomplished.
Visualisation of pose tracks using napari. We aim to represent pose tracks via the napari tracks layer and overlay them on a video frame. This should be accompanied by a minimal GUI widget to allow selection of a subset of the tracks to plot. This line of work is still in a pilot phase. We may decide to use a different visualisation framework if we encounter roadblocks.
At least one function for cleaning the pose tracks. Once the first one is in place, it can serve as a template for others.
Computing velocity and acceleration from pose tracks. Again, this should serve as a template for other kinematic variables.