Update overview, mission, scope, and roadmaps #352

Merged
merged 12 commits on Nov 28, 2024
2 changes: 1 addition & 1 deletion .pre-commit-config.yaml
@@ -49,7 +49,7 @@ repos:
hooks:
- id: check-manifest
args: [--no-build-isolation]
additional_dependencies: [setuptools-scm]
additional_dependencies: [setuptools-scm, wheel]
- repo: https://github.com/codespell-project/codespell
# Configuration for codespell is in pyproject.toml
rev: v2.3.0
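For context, the amended `check-manifest` hook entry would read roughly as follows once this change is applied. The surrounding keys are reconstructed from the hunk, and the `rev` pin for the check-manifest repo is a placeholder, not taken from this diff:

```yaml
repos:
  - repo: https://github.com/mgedmin/check-manifest
    rev: "0.49"  # placeholder pin; the actual rev is outside this hunk
    hooks:
      - id: check-manifest
        args: [--no-build-isolation]
        # With --no-build-isolation, build requirements such as
        # setuptools-scm and wheel must be supplied explicitly.
        additional_dependencies: [setuptools-scm, wheel]
  - repo: https://github.com/codespell-project/codespell
    # Configuration for codespell is in pyproject.toml
    rev: v2.3.0
```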
17 changes: 12 additions & 5 deletions README.md
@@ -9,7 +9,7 @@

# movement

A Python toolbox for analysing body movements across space and time, to aid the study of animal behaviour in neuroscience.
A Python toolbox for analysing animal body movements across space and time.


![](docs/source/_static/movement_overview.png)
@@ -27,10 +27,17 @@ conda activate movement-env

## Overview

Pose estimation tools, such as [DeepLabCut](https://www.mackenziemathislab.org/deeplabcut) and [SLEAP](https://sleap.ai/) are now commonplace when processing video data of animal behaviour. There is not yet a standardised, easy-to-use way to process the pose tracks produced from these software packages.

movement aims to provide a consistent modular interface to analyse pose tracks, allowing steps such as data cleaning, visualisation and motion quantification.
We aim to support a range of pose estimation packages, along with 2D or 3D tracking of single or multiple individuals.
Machine learning-based tools such as
[DeepLabCut](https://www.mackenziemathislab.org/deeplabcut) and
[SLEAP](https://sleap.ai/) have become commonplace for tracking the
movements of animals and their body parts in videos.
However, there is still a need for a standardized, easy-to-use method
to process the tracks generated by these tools.

`movement` aims to provide a consistent, modular interface for analyzing
motion tracks, enabling steps such as data cleaning, visualization,
and motion quantification. We aim to support all popular animal tracking
frameworks and common file formats.
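As an illustration of the kind of motion quantification described here, velocity can be estimated from a keypoint trajectory with a finite-difference sketch like the one below. This is plain NumPy on made-up data, not `movement`'s actual API:

```python
import numpy as np

# Hypothetical track: 100 frames of one keypoint in 2D (x, y),
# sampled at 30 fps. In practice these values would come from a
# tracking tool such as DeepLabCut or SLEAP.
fps = 30
t = np.arange(100) / fps
positions = np.stack([np.cos(t), np.sin(t)], axis=1)  # shape (100, 2)

# Central differences along the time axis give per-frame velocity.
velocity = np.gradient(positions, 1 / fps, axis=0)    # shape (100, 2)
speed = np.linalg.norm(velocity, axis=1)              # shape (100,)
```

For this synthetic point moving on the unit circle at 1 rad/s, the estimated speed stays close to 1 everywhere except for small finite-difference error at the edges.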

Find out more on our [mission and scope](https://movement.neuroinformatics.dev/community/mission-scope.html) statement and our [roadmap](https://movement.neuroinformatics.dev/community/roadmaps.html).

2 changes: 1 addition & 1 deletion docs/source/community/index.md
@@ -1,6 +1,6 @@
# Community

Contributions to movement are absolutely encouraged, whether to fix a bug,
Contributions to `movement` are absolutely encouraged, whether to fix a bug,
develop a new feature, or improve the documentation.
To help you get started, we have prepared a statement on the project's [mission and scope](target-mission),
a [roadmap](target-roadmaps) outlining our current priorities, and a detailed [contributing guide](target-contributing).
41 changes: 33 additions & 8 deletions docs/source/community/mission-scope.md
@@ -3,25 +3,50 @@

## Mission

[movement](target-movement) aims to **facilitate the study of animal behaviour in neuroscience** by providing a suite of **Python tools to analyse body movements** across space and time.
`movement` aims to **facilitate the study of animal behaviour**
by providing a suite of **Python tools to analyse body movements**
across space and time.

## Scope

At its core, movement handles trajectories of *keypoints*, which are specific body parts of an *individual*. An individual's posture or *pose* is represented by a set of keypoint coordinates, given in 2D (x,y) or 3D (x,y,z). The sequential collection of poses over time forms *pose tracks*. In neuroscience, these tracks are typically extracted from video data using software like [DeepLabCut](dlc:) or [SLEAP](sleap:).

With movement, our vision is to present a **consistent interface for pose tracks** and to **analyze them using modular and accessible tools**. We aim to accommodate data from a range of pose estimation packages, in **2D or 3D**, tracking **single or multiple individuals**. The focus will be on providing functionalities for data cleaning, visualisation and motion quantification (see the [Roadmap](target-roadmaps) for details).

While movement is not designed for behaviour classification or action segmentation, it may extract features useful for these tasks. We are planning to develop separate packages for this purpose, which will be compatible with movement and the existing ecosystem of related tools.
At its core, `movement` handles the positions of one or more individuals
tracked over time. An individual's position at a given time can be represented
in various ways: a single keypoint (usually the centroid), a set of keypoints
(also known as the pose), a bounding box, or a segmentation mask.
The spatial coordinates of these representations may be defined in 2D (x, y)
or 3D (x, y, z). The pose and mask representations also carry some information
about the individual's posture.
> **Review suggestion (Contributor):** shorten "or 3D (x, y, z). The pose and mask representations also carry some information about the individual's posture." to just "or 3D (x, y, z)."

Animal tracking frameworks such as [DeepLabCut](dlc:) or [SLEAP](sleap:) can
generate these representations from video data by detecting body parts and
> **Review suggestion (Contributor):** change "generate these representations from video data by detecting body parts and" to "generate keypoint representations from video data by detecting body parts and" (but maybe I would consider removing this sentence entirely).
>
> **Reply (niksirbi, Member Author, Nov 28, 2024):** Well, the two examples mentioned here do produce "keypoints", but not all "animal tracking frameworks" do. I will decide whether to keep this sentence depending on what happens to the above section (the one with the long discussion).

tracking them across frames. In the context of `movement`, we refer to the
resulting tracks according to their respective representations—for
example, pose tracks, bounding boxes' tracks, or motion tracks in general.
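To make these representations concrete, here is a minimal sketch of each as a plain NumPy array. The shapes are illustrative assumptions, not `movement`'s actual data model:

```python
import numpy as np

n_frames = 50  # hypothetical number of video frames

# Centroid track: one (x, y) point per frame.
centroid = np.zeros((n_frames, 2))

# Pose track: K keypoints per frame, each with (x, y) coordinates.
n_keypoints = 6
pose = np.zeros((n_frames, n_keypoints, 2))

# Bounding-box track: (x_min, y_min, x_max, y_max) per frame.
bboxes = np.zeros((n_frames, 4))

# Segmentation-mask track: one boolean image per frame.
height, width = 128, 128
masks = np.zeros((n_frames, height, width), dtype=bool)
```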

Our vision is to present a **consistent interface for motion tracks** paired
with **modular and accessible analysis tools**. We aim to accommodate data
from a range of animal tracking frameworks, in **2D or 3D**, tracking
**single or multiple individuals**. As such, `movement` can be considered as
downstream of tools like DeepLabCut and SLEAP. The focus is on providing
functionalities for data cleaning, visualization, and motion quantification
(see the [Roadmap](target-roadmaps) for details).

In the study of animal behavior, motion tracks are often used to extract and
label discrete actions, sometimes referred to as behavioral syllables or
states. While `movement` is not designed for such tasks, it may generate
features useful for action segmentation and recognition. We may develop
packages specialized for this purpose, which will be compatible with
`movement` and the existing ecosystem of related tools.

## Design principles

movement is committed to:
`movement` is committed to:
- __Ease of installation and use__. We aim for a cross-platform installation and are mindful of dependencies that may compromise this goal.
- __User accessibility__, catering to varying coding expertise by offering both a GUI and a Python API.
- __Comprehensive documentation__, enriched with tutorials and examples.
- __Robustness and maintainability__ through high test coverage.
- __Scientific accuracy and reproducibility__ by validating inputs and outputs.
- __Performance and responsiveness__, especially for large datasets, using parallel processing where appropriate.
- __Modularity and flexibility__. We envision movement as a platform for new tools and analyses, offering users the building blocks to craft their own workflows.
- __Modularity and flexibility__. We envision `movement` as a platform for new tools and analyses, offering users the building blocks to craft their own workflows.
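The "scientific accuracy and reproducibility by validating inputs and outputs" principle above could be sketched as follows. This is a hypothetical helper written for illustration, not part of `movement`:

```python
import numpy as np

def validate_pose_track(positions) -> np.ndarray:
    """Check that a pose track has shape (time, keypoints, 2 or 3).

    Hypothetical validator illustrating the principle of validating
    inputs; raises ValueError on malformed data.
    """
    arr = np.asarray(positions, dtype=float)
    if arr.ndim != 3 or arr.shape[-1] not in (2, 3):
        raise ValueError(
            f"expected shape (time, keypoints, 2|3), got {arr.shape}"
        )
    if not np.isfinite(arr).all():
        # Missed detections are often encoded as NaN by tracking tools.
        raise ValueError("pose track contains NaN or infinite values")
    return arr
```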

Some of these principles are shared with, and were inspired by, napari's [Mission and Values](napari:community/mission_and_values) statement.
23 changes: 13 additions & 10 deletions docs/source/community/roadmaps.md
@@ -1,28 +1,31 @@
(target-roadmaps)=
# Roadmaps

The roadmap outlines **current development priorities** and aims to **guide core developers** and to **encourage community contributions**. It is a living document and will be updated as the project evolves.
This page outlines **current development priorities** and aims to **guide core developers** and to **encourage community contributions**. It is a living document and will be updated as the project evolves.

The roadmap is **not meant to limit** movement features, as we are open to suggestions and contributions. Join our [Zulip chat](movement-zulip:) to share your ideas. We will take community demand and feedback into account when planning future releases.
The roadmaps are **not meant to limit** `movement` features, as we are open to suggestions and contributions. Join our [Zulip chat](movement-zulip:) to share your ideas. We will take community demand and feedback into account when planning future releases.

## Long-term vision
The following features are being considered for the first stable version `v1.0`.

- __Import/Export pose tracks from/to diverse formats__. We aim to interoperate with leading tools for animal pose estimation and behaviour classification, and to enable conversions between their formats.
- __Standardise the representation of pose tracks__. We represent pose tracks as [xarray data structures](xarray:user-guide/data-structures.html) to allow for labelled dimensions and performant processing.
- __Interactively visualise pose tracks__. We are considering [napari](napari:) as a visualisation and GUI framework.
- __Clean pose tracks__, including, but not limited to, handling of missing values, filtering, smoothing, and resampling.
- __Derive kinematic variables__ like velocity, acceleration, joint angles, etc., focusing on those prevalent in neuroscience.
- __Integrate spatial data about the animal's environment__ for combined analysis with pose tracks. This covers regions of interest (ROIs) such as the arena in which the animal is moving and the location of objects within it.
- __Import/Export motion tracks from/to diverse formats__. We aim to interoperate with leading tools for animal tracking and behaviour classification, and to enable conversions between their formats.
- __Standardise the representation of motion tracks__. We represent tracks as [xarray data structures](xarray:user-guide/data-structures.html) to allow for labelled dimensions and performant processing.
- __Interactively visualise motion tracks__. We are experimenting with [napari](napari:) as a visualisation and GUI framework.
- __Clean motion tracks__, including, but not limited to, handling of missing values, filtering, smoothing, and resampling.
- __Derive kinematic variables__ like velocity, acceleration, joint angles, etc., focusing on those prevalent in neuroscience and ethology.
- __Integrate spatial data about the animal's environment__ for combined analysis with motion tracks. This covers regions of interest (ROIs) such as the arena in which the animal is moving and the location of objects within it.
- __Define and transform coordinate systems__. Coordinates can be relative to the camera, environment, or the animal itself (egocentric).
- __Provide common metrics for specialised applications__. These applications could include gait analysis, pupillometry, spatial
navigation, social interactions, etc.
- __Integrate with neurophysiological data analysis tools__. We eventually aim to facilitate combined analysis of motion and neural data.
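The labelled-dimensions idea behind the "standardise the representation of motion tracks" item above can be sketched with a small `xarray.DataArray`. The dimension and coordinate names here follow the pattern described in this PR, but treat the exact layout as an assumption rather than `movement`'s published schema:

```python
import numpy as np
import xarray as xr

# Hypothetical motion tracks: 100 frames, 2 individuals,
# 3 keypoints, 2D coordinates.
data = np.random.default_rng(0).random((100, 2, 3, 2))

tracks = xr.DataArray(
    data,
    dims=("time", "individuals", "keypoints", "space"),
    coords={
        "individuals": ["mouse_0", "mouse_1"],
        "keypoints": ["snout", "centre", "tail_base"],
        "space": ["x", "y"],
    },
)

# Labelled dimensions make selections self-documenting.
snout_x = tracks.sel(individuals="mouse_0", keypoints="snout", space="x")
```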

## Short-term milestone - `v0.1`
We plan to release version `v0.1` of movement in early 2024, providing a minimal set of features to demonstrate the project's potential and to gather feedback from users. At minimum, it should include:
We plan to release version `v0.1` of `movement` in early 2025, providing a minimal set of features to demonstrate the project's potential and to gather feedback from users. At minimum, it should include:

- [x] Ability to import pose tracks from [DeepLabCut](dlc:), [SLEAP](sleap:) and [LightningPose](lp:) into a common `xarray.Dataset` structure.
- [x] At least one function for cleaning the pose tracks.
- [x] Ability to compute velocity and acceleration from pose tracks.
- [x] Public website with [documentation](target-movement).
- [x] Package released on [PyPI](https://pypi.org/project/movement/).
- [x] Package released on [conda-forge](https://anaconda.org/conda-forge/movement).
- [ ] Ability to visualise pose tracks using [napari](napari:). We aim to represent pose tracks via napari's [Points](napari:howtos/layers/points) and [Tracks](napari:howtos/layers/tracks) layers and overlay them on video frames.
- [ ] Ability to visualise pose tracks using [napari](napari:). We aim to represent pose tracks as napari [layers](napari:howtos/layers/index.html), overlaid on video frames.
15 changes: 10 additions & 5 deletions docs/source/index.md
@@ -1,7 +1,7 @@
(target-movement)=
# movement

A Python toolbox for analysing body movements across space and time, to aid the study of animal behaviour in neuroscience.
A Python toolbox for analysing animal body movements across space and time.

::::{grid} 1 2 2 3
:gutter: 3
@@ -17,7 +17,7 @@ Installation, first steps and key concepts.
:link: examples/index
:link-type: doc

A gallery of examples using movement.
A gallery of examples using `movement`.
:::

:::{grid-item-card} {fas}`comments;sd-text-primary` Join the movement
@@ -32,10 +32,15 @@ How to get in touch and contribute.

## Overview

Pose estimation tools, such as [DeepLabCut](dlc:) and [SLEAP](sleap:) are now commonplace when processing video data of animal behaviour. There is not yet a standardised, easy-to-use way to process the *pose tracks* produced from these software packages.
Machine learning-based tools such as [DeepLabCut](dlc:) and [SLEAP](sleap:)
have become commonplace for tracking the movements of animals and their body
parts in videos. However, there is still a need for a standardized, easy-to-use method
to process the tracks generated by these tools.

movement aims to provide a consistent modular interface to analyse pose tracks, allowing steps such as data cleaning, visualisation and motion quantification.
We aim to support a range of pose estimation packages, along with 2D or 3D tracking of single or multiple individuals.
``movement`` aims to provide a consistent, modular interface for analyzing
motion tracks, enabling steps such as data cleaning, visualization,
and motion quantification. We aim to support all popular animal tracking
frameworks and common file formats.

Find out more on our [mission and scope](target-mission) statement and our [roadmap](target-roadmaps).
