diff --git a/.github/workflows/docs_build_and_deploy.yml b/.github/workflows/docs_build_and_deploy.yml
index a3a4995f..58f0942b 100644
--- a/.github/workflows/docs_build_and_deploy.yml
+++ b/.github/workflows/docs_build_and_deploy.yml
@@ -1,4 +1,4 @@
-name: Build Sphinx docs and deploy to GitHub Pages
+name: Docs

 # Generate the documentation on all merges to main, all pull requests, or by
 # manual workflow dispatch. The build job can be used as a CI check that the
@@ -19,7 +19,9 @@ jobs:
     name: Build Sphinx Docs
     runs-on: ubuntu-latest
     steps:
-      - uses: neuroinformatics-unit/actions/build_sphinx_docs@v2
+      - uses: neuroinformatics-unit/actions/build_sphinx_docs@main
+        with:
+          python-version: 3.11

   deploy_sphinx_docs:
     name: Deploy Sphinx Docs
@@ -29,6 +31,6 @@ jobs:
     if: github.event_name == 'push' && github.ref_type == 'tag'
     runs-on: ubuntu-latest
     steps:
-      - uses: neuroinformatics-unit/actions/deploy_sphinx_docs@v2
+      - uses: neuroinformatics-unit/actions/deploy_sphinx_docs@main
         with:
           secret_input: ${{ secrets.GITHUB_TOKEN }}
diff --git a/.gitignore b/.gitignore
index 085834ba..34ae639d 100644
--- a/.gitignore
+++ b/.gitignore
@@ -58,8 +58,8 @@ instance/

 # Sphinx documentation
 docs/build/
-docs/source/auto_examples/
-docs/source/auto_api/
+docs/source/examples/
+docs/source/api/

 # MkDocs documentation
 /site/
diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md
index aeaae3b5..d434628b 100644
--- a/CONTRIBUTING.md
+++ b/CONTRIBUTING.md
@@ -1,11 +1,5 @@
 # How to Contribute

-**Contributors to movement are absolutely encouraged**, whether to fix a bug,
-develop a new feature, or improve the documentation.
-If you're unsure about any part of the contributing process, please get in touch.
-It's best to reach out in public, e.g. by [opening an issue](https://github.com/neuroinformatics-unit/movement/issues)
-so that others can benefit from the discussion.
-
 ## Contributing code

 ### Creating a development environment
@@ -193,7 +187,7 @@ My new module
 --------------
 .. currentmodule:: movement.new_module
 .. autosummary::
-    :toctree: auto_api
+    :toctree: api

     new_function
     NewClass
@@ -204,7 +198,7 @@ that follow the [numpydoc](https://numpydoc.readthedocs.io/en/latest/format.html
 ### Updating the examples

 We use [sphinx-gallery](https://sphinx-gallery.github.io/stable/index.html)
-to create the [examples](https://movement.neuroinformatics.dev/auto_examples/index.html).
+to create the [examples](https://movement.neuroinformatics.dev/examples/index.html).
 To add new examples, you will need to create a new `.py` file in `examples/`.
 The file should be structured as specified in the relevant
 [sphinx-gallery documentation](https://sphinx-gallery.github.io/stable/syntax.html).
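The "Updating the examples" hunk above expects each new gallery entry to be a `.py` file in `examples/` following sphinx-gallery's syntax. As a reference, here is a minimal sketch of such a file, assuming the standard sphinx-gallery conventions; the filename, title, and the `load_poses` call are illustrative, not an existing example:

```python
"""Load and explore pose tracks
===============================

The module-level docstring must come first: its RST title becomes the
page heading, and the rest is rendered as introductory text.
"""

# %%
# Comments in a "# %%" cell are rendered as prose between code blocks.

from movement.io import load_poses

# %%
# Code in each cell runs at docs build time and its output is captured.
ds = load_poses.from_dlc_file("path/to/tracks.h5")  # placeholder path
print(ds)
```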
diff --git a/LICENSE b/LICENSE
index 0e0c07a8..3fee1ac4 100644
--- a/LICENSE
+++ b/LICENSE
@@ -1,5 +1,6 @@
-Copyright (c) 2023, Niko Sirmpilatze
+Copyright (c) 2023, University College London
+
 All rights reserved.

 Redistribution and use in source and binary forms, with or without
diff --git a/README.md b/README.md
index 5017d535..d5c319af 100644
--- a/README.md
+++ b/README.md
@@ -8,11 +8,26 @@

 # movement

-Kinematic analysis of animal 🐝 🦀 🐀 🐒 body movements for neuroscience and ethology research 🔬.
+Python tools for analysing body movements across space and time, to aid the study of animal behaviour in neuroscience.

-- Read the [documentation](https://movement.neuroinformatics.dev) for more information.
-- If you wish to contribute, please read the [contributing guide](./CONTRIBUTING.md).
-- Join us on [zulip](https://neuroinformatics.zulipchat.com/#narrow/stream/406001-Movement/topic/Welcome!) to chat with the team. We welcome your questions and suggestions.
+> **Note**
+> Read the [documentation](https://movement.neuroinformatics.dev) for more information, including [installation instructions](https://movement.neuroinformatics.dev/getting_started.html#installation) and [examples](https://movement.neuroinformatics.dev/examples/index.html).
+
+- [Overview](#overview)
+- [Status](#status)
+- [Join the movement](#join-the-movement)
+- [License](#license)
+- [Package template](#package-template)
+
+
+## Overview
+
+Pose estimation tools, such as [DeepLabCut](https://www.mackenziemathislab.org/deeplabcut) and [SLEAP](https://sleap.ai/), are now commonplace for processing video data of animal behaviour. There is not yet a standardised, easy-to-use way to process the pose tracks produced by these software packages.
+
+movement aims to provide a consistent, modular interface for analysing pose tracks, supporting steps such as data cleaning, visualisation and motion quantification.
+We aim to support a range of pose estimation packages, along with 2D or 3D tracking of single or multiple animals.
+
+Find out more in our [mission and scope](https://movement.neuroinformatics.dev/community/mission-scope.html) statement and our [roadmap](https://movement.neuroinformatics.dev/community/roadmap.html).

 ## Status
 > **Warning**
@@ -20,21 +35,15 @@ Kinematic analysis of animal 🐝 🦀 🐀 🐒 body movements for neuroscience
 > - It is not sufficiently tested to be used for scientific analysis
 > - The interface is subject to changes.

-## Aims
-* Load pose tracks from pose estimation software packages (e.g. [DeepLabCut](http://www.mackenziemathislab.org/deeplabcut) or [SLEAP](https://sleap.ai/))
-* Evaluate the quality of the tracks and perform data cleaning operations
-* Calculate kinematic variables (e.g. speed, acceleration, joint angles, etc.)
-* Produce reports and visualise the results
+## Join the movement
+
+Contributions to movement are absolutely encouraged, whether to fix a bug, develop a new feature, or improve the documentation.
+To help you get started, we have prepared a detailed [contributing guide](https://movement.neuroinformatics.dev/community/contributing.html).

-## Related projects
-The following projects cover related needs and served as inspiration for this project:
-* [DLC2Kinematics](https://github.com/AdaptiveMotorControlLab/DLC2Kinematics)
-* [PyRat](https://github.com/pyratlib/pyrat)
-* [Kino](https://github.com/BrancoLab/Kino)
-* [WAZP](https://github.com/SainsburyWellcomeCentre/WAZP)
+You are welcome to chat with the team on [zulip](https://neuroinformatics.zulipchat.com/#narrow/stream/406001-Movement/topic/Welcome!). You may also [open an issue](https://github.com/neuroinformatics-unit/movement/issues) to report a bug or request a new feature.

 ## License
 ⚖️ [BSD 3-Clause](./LICENSE)

-## Template
+## Package template
 This package layout and configuration (including pre-commit hooks and GitHub actions) have been copied from the [python-cookiecutter](https://github.com/neuroinformatics-unit/python-cookiecutter) template.
diff --git a/docs/source/api_index.rst b/docs/source/api_index.rst
index 32b9a7ce..cbec1bc9 100644
--- a/docs/source/api_index.rst
+++ b/docs/source/api_index.rst
@@ -6,7 +6,7 @@ Input/Output
 ------------
 .. currentmodule:: movement.io.load_poses
 .. autosummary::
-    :toctree: auto_api
+    :toctree: api

     from_sleap_file
     from_dlc_file
@@ -14,14 +14,14 @@ Input/Output

 .. currentmodule:: movement.io.save_poses
 .. autosummary::
-    :toctree: auto_api
+    :toctree: api

     to_dlc_file
     to_dlc_df

 .. currentmodule:: movement.io.validators
 .. autosummary::
-    :toctree: auto_api
+    :toctree: api

     ValidFile
     ValidHDF5
@@ -32,7 +32,7 @@ Datasets
 --------
 .. currentmodule:: movement.datasets
 .. autosummary::
-    :toctree: auto_api
+    :toctree: api

     list_pose_data
     fetch_pose_data_path
@@ -41,7 +41,7 @@ Logging
 -------
 .. currentmodule:: movement.logging
 .. autosummary::
-    :toctree: auto_api
+    :toctree: api

     configure_logging
     log_error
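The Input/Output entries listed above already sketch a round-trip workflow. A minimal usage sketch based on those function names, assuming each takes a file path (the `fps` keyword and all paths are illustrative placeholders, not confirmed signatures):

```python
from movement.io import load_poses, save_poses

# Load pose tracks predicted by SLEAP into movement's common dataset
# structure; the path and fps value are placeholders.
ds = load_poses.from_sleap_file("path/to/predictions.analysis.h5", fps=30)

# ... data cleaning / kinematics would happen here ...

# Export the tracks in DeepLabCut format, or as a DeepLabCut-style
# pandas DataFrame for further inspection.
save_poses.to_dlc_file(ds, "path/to/tracks_dlc.h5")
df = save_poses.to_dlc_df(ds)
```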
diff --git a/docs/source/community/contributing.rst b/docs/source/community/contributing.rst
new file mode 100644
index 00000000..97af5203
--- /dev/null
+++ b/docs/source/community/contributing.rst
@@ -0,0 +1,3 @@
+.. _target-contributing:
+.. include:: ../../../CONTRIBUTING.md
+   :parser: myst_parser.sphinx_
diff --git a/docs/source/community/index.md b/docs/source/community/index.md
new file mode 100644
index 00000000..44d0832a
--- /dev/null
+++ b/docs/source/community/index.md
@@ -0,0 +1,20 @@
+# Community
+
+Contributions to movement are absolutely encouraged, whether to fix a bug,
+develop a new feature, or improve the documentation.
+To help you get started, we have prepared a statement on the project's [mission and scope](target-mission),
+a [roadmap](target-roadmap) outlining our current priorities, and a detailed [contributing guide](target-contributing).
+
+```{include} ../snippets/get-in-touch.md
+```
+
+```{toctree}
+:maxdepth: 2
+:hidden:
+
+mission-scope
+roadmap
+contributing
+related-projects
+license
+```
diff --git a/docs/source/community/license.md b/docs/source/community/license.md
new file mode 100644
index 00000000..cd32ea81
--- /dev/null
+++ b/docs/source/community/license.md
@@ -0,0 +1,6 @@
+# License
+
+[The 3-Clause BSD License](https://opensource.org/licenses/BSD-3-Clause)
+
+```{include} ../../../LICENSE
+```
diff --git a/docs/source/community/mission-scope.md b/docs/source/community/mission-scope.md
new file mode 100644
index 00000000..ba6d4b65
--- /dev/null
+++ b/docs/source/community/mission-scope.md
@@ -0,0 +1,27 @@
+(target-mission)=
+# Mission & Scope
+
+## Mission
+
+[movement](https://movement.neuroinformatics.dev/) aims to **facilitate the study of animal behaviour in neuroscience** by providing a suite of **Python tools to analyse body movements** across space and time.
+
+## Scope
+
+At its core, movement handles trajectories of *keypoints*, which are specific body parts of an *individual*. An individual's posture, or *pose*, is represented by a set of keypoint coordinates, given in 2D (x,y) or 3D (x,y,z). The sequential collection of poses over time forms *pose tracks*. In neuroscience, these tracks are typically extracted from video data using software like [DeepLabCut](https://www.mackenziemathislab.org/deeplabcut) or [SLEAP](https://sleap.ai/).
+
+With movement, our vision is to present a **consistent interface for pose tracks** and to **analyse them using modular and accessible tools**. We aim to accommodate data from a range of pose estimation packages, in **2D or 3D**, tracking **single or multiple individuals**. The focus will be on providing functionalities for data cleaning, visualisation and motion quantification (see the [Roadmap](target-roadmap) for details).
+
+While movement is not designed for behaviour classification or action segmentation, it may extract features useful for these tasks. We are planning to develop separate packages for this purpose, which will be compatible with movement and the existing ecosystem of related tools.
+
+## Design principles
+
+movement is committed to:
+- __Ease of installation and use__. We aim for a cross-platform installation and are mindful of dependencies that may compromise this goal.
+- __User accessibility__, catering to varying coding expertise by offering both a GUI and a Python API.
+- __Comprehensive documentation__, enriched with tutorials and examples.
+- __Robustness and maintainability__ through high test coverage.
+- __Scientific accuracy and reproducibility__ by validating inputs and outputs.
+- __Performance and responsiveness__, especially for large datasets, using parallel processing where appropriate.
+- __Modularity and flexibility__. We envision movement as a platform for new tools and analyses, offering users the building blocks to craft their own workflows.
+
+Some of these principles are shared with, and were inspired by, napari's [Mission and Values](https://napari.org/stable/community/mission_and_values.html) statement.
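The scope description above (keypoints, poses, pose tracks) maps naturally onto the labelled arrays that the roadmap below commits to. A hypothetical sketch of that structure, with dimension and keypoint names chosen for illustration only (not movement's final schema):

```python
import numpy as np
import xarray as xr

# 100 frames at 30 fps, one individual, two keypoints, tracked in 2D.
rng = np.random.default_rng(0)
tracks = xr.DataArray(
    rng.random((100, 1, 2, 2)),
    dims=["time", "individuals", "keypoints", "space"],
    coords={
        "time": np.arange(100) / 30,  # seconds
        "individuals": ["individual_0"],
        "keypoints": ["snout", "tail_base"],
        "space": ["x", "y"],
    },
)

# Labelled dimensions make queries self-documenting:
snout_xy = tracks.sel(individuals="individual_0", keypoints="snout")
```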
diff --git a/docs/source/community/related-projects.md b/docs/source/community/related-projects.md
new file mode 100644
index 00000000..0b339b65
--- /dev/null
+++ b/docs/source/community/related-projects.md
@@ -0,0 +1,7 @@
+# Related projects
+
+The following projects cover related needs and served as inspiration for this project:
+* [DLC2Kinematics](https://github.com/AdaptiveMotorControlLab/DLC2Kinematics)
+* [PyRat](https://github.com/pyratlib/pyrat)
+* [Kino](https://github.com/BrancoLab/Kino)
+* [WAZP](https://github.com/SainsburyWellcomeCentre/WAZP)
diff --git a/docs/source/community/roadmap.md b/docs/source/community/roadmap.md
new file mode 100644
index 00000000..d04a5d4c
--- /dev/null
+++ b/docs/source/community/roadmap.md
@@ -0,0 +1,26 @@
+(target-roadmap)=
+# Roadmap
+
+The roadmap outlines **current development priorities** and aims to **guide core developers** and to **encourage community contributions**. It is a living document and will be updated as the project evolves.
+
+The roadmap is **not meant to limit** movement features, as we are open to suggestions and contributions. Join our [Zulip chat](https://neuroinformatics.zulipchat.com/#narrow/stream/406001-Movement/topic/Welcome!) to share your ideas. We will take community demand and feedback into account when planning future releases.
+
+## Long-term vision
+The following features are being considered for the first stable version `v1.0`.
+
+- __Import/Export pose tracks from/to diverse formats__. We aim to interoperate with leading tools for animal pose estimation and behaviour classification, and to enable conversions between their formats.
+- __Standardise the representation of pose tracks__. We represent pose tracks as [xarray data structures](https://docs.xarray.dev/en/latest/user-guide/data-structures.html) to allow for labelled dimensions and performant processing.
+- __Interactively visualise pose tracks__. We are considering [napari](https://napari.org/) as a visualisation and GUI framework.
+- __Clean pose tracks__, including, but not limited to, handling of missing values, filtering, smoothing, and resampling.
+- __Derive kinematic variables__ like velocity, acceleration, joint angles, etc., focusing on those prevalent in neuroscience.
+- __Integrate spatial data about the animal's environment__ for combined analysis with pose tracks. This covers regions of interest (ROIs) such as the arena in which the animal is moving and the location of objects within it.
+- __Define and transform coordinate systems__. Coordinates can be relative to the camera, environment, or the animal itself (egocentric).
+
+## Short-term milestone - `v0.1`
+We plan to release version `v0.1` of movement in early 2024, providing a minimal set of features to demonstrate the project's potential and to gather feedback from users. At minimum, it should include the following features:
+
+- Importing pose tracks from [DeepLabCut](https://www.mackenziemathislab.org/deeplabcut) and [SLEAP](https://sleap.ai/) into a common `xarray.Dataset` structure. This has been largely accomplished, but some remaining work is required to handle special cases.
+- Visualisation of pose tracks using [napari](https://napari.org/). We aim to represent pose tracks via the [napari tracks layer](https://napari.org/stable/howtos/layers/tracks.html) and overlay them on a video frame. This should be accompanied by a minimal GUI widget to allow selection of a subset of the tracks to plot. This line of work is still in a pilot phase. We may decide to use a different visualisation framework if we encounter roadblocks.
+- At least one function for cleaning the pose tracks. Once the first one is in place, it can serve as a template for others.
+- Computing velocity and acceleration from pose tracks. Again, this should serve as a template for other kinematic variables.
+- Package release on PyPI and conda-forge, along with documentation. The package is already available on [PyPI](https://pypi.org/project/movement/) and the [documentation website](https://movement.neuroinformatics.dev/) is up and running. We plan to also release it on conda-forge to enable one-line installation.
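The velocity/acceleration milestone above amounts to numerical differentiation along the time dimension. A sketch of one way this could be done with xarray alone, reusing the hypothetical `tracks` array from the earlier snippet (this is not movement's implementation):

```python
# `tracks` has a "time" coordinate in seconds, so xarray can
# differentiate along it directly (central differences).
velocity = tracks.differentiate("time")
acceleration = velocity.differentiate("time")

# Speed of each keypoint: Euclidean norm over the spatial dimension.
speed = (velocity**2).sum(dim="space") ** 0.5
```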
diff --git a/docs/source/conf.py b/docs/source/conf.py
index f183e110..2320e432 100644
--- a/docs/source/conf.py
+++ b/docs/source/conf.py
@@ -21,7 +21,7 @@ sys.path.insert(0, os.path.abspath("../.."))

 project = "movement"
-copyright = "2022, UCL"
+copyright = "2023, University College London"
 author = "Niko Sirmpilatze"
 try:
     release = setuptools_scm.get_version(root="../..", relative_to=__file__)
@@ -83,16 +83,17 @@
     # to ensure that include files (partial pages) aren't built, exclude them
     # https://github.com/sphinx-doc/sphinx/issues/1965#issuecomment-124732907
     "**/includes/**",
-    # exclude .py and .ipynb files in auto_examples generated by sphinx-gallery
+    # exclude .py and .ipynb files in examples generated by sphinx-gallery
     # this is to prevent sphinx from complaining about duplicate source files
-    "auto_examples/*.ipynb",
-    "auto_examples/*.py",
+    "examples/*.ipynb",
+    "examples/*.py",
 ]

 # Configure Sphinx gallery
 sphinx_gallery_conf = {
     "examples_dirs": ["../../examples"],
     "filename_pattern": "/*.py",  # which files to execute before inclusion
+    "gallery_dirs": ["examples"],  # output directory
 }

 # -- Options for HTML output -------------------------------------------------
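For clarity, here is how the two `conf.py` pieces touched above fit together after this change: the gallery output directory must match the paths being excluded (and git-ignored). This is a paraphrased fragment, not the full file:

```python
# docs/source/conf.py (fragment)

# sphinx-gallery executes the scripts in the top-level examples/ folder
# and writes the generated gallery under docs/source/examples/ ...
sphinx_gallery_conf = {
    "examples_dirs": ["../../examples"],  # input: example scripts
    "filename_pattern": "/*.py",          # which files to execute
    "gallery_dirs": ["examples"],         # output directory
}

# ... so the generated .py/.ipynb copies are excluded, to stop Sphinx
# complaining about duplicate source files (exclude_patterns has more
# entries than shown here).
exclude_patterns = [
    "examples/*.ipynb",
    "examples/*.py",
]
```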
diff --git a/docs/source/contributing.rst b/docs/source/contributing.rst
deleted file mode 100644
index 498202d4..00000000
--- a/docs/source/contributing.rst
+++ /dev/null
@@ -1,13 +0,0 @@
-.. include:: ../../CONTRIBUTING.md
-   :parser: myst_parser.sphinx_
-   :end-before: **Contributors
-
-.. important::
-   .. include:: ../../CONTRIBUTING.md
-      :parser: myst_parser.sphinx_
-      :start-after: How to Contribute
-      :end-before: ## Contributing code
-
-.. include:: ../../CONTRIBUTING.md
-   :parser: myst_parser.sphinx_
-   :start-after: from the discussion.
diff --git a/docs/source/getting_started.md b/docs/source/getting_started.md
index 981f5358..c3ca7220 100644
--- a/docs/source/getting_started.md
+++ b/docs/source/getting_started.md
@@ -45,7 +45,7 @@
 pip install -e '.[dev]'  # works on zsh (the default shell on macOS)
 ```

 This will install the package in editable mode, including all `dev` dependencies.
-Please see the [contributing guide](./contributing.rst) for more information.
+Please see the [contributing guide](target-contributing) for more information.
 :::

 ::::
@@ -202,7 +202,7 @@ You may also use all the other powerful [indexing and selection](https://docs.xa
 ### Plotting

 You can also use the built-in [`xarray` plotting methods](https://docs.xarray.dev/en/latest/user-guide/plotting.html)
-to visualise the data. Check out the [Load and explore pose tracks](./auto_examples/load_and_explore_poses.rst)
+to visualise the data. Check out the [Load and explore pose tracks](./examples/load_and_explore_poses.rst)
 example for inspiration.

 ## Saving data
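The "Plotting" hunk above leans on xarray's native tooling. A short sketch of the kind of selection-then-plot call it has in mind, with the dataset variable and coordinate names as placeholders:

```python
import matplotlib.pyplot as plt

# Select one keypoint of one individual; the result is a (time, space)
# DataArray, which xarray can plot directly.
da = ds.pose_tracks.sel(individuals="individual_0", keypoints="snout")
da.plot.line(x="time", hue="space")  # x and y coordinates over time
plt.show()
```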
diff --git a/docs/source/index.md b/docs/source/index.md
index e03541ee..5d04856d 100644
--- a/docs/source/index.md
+++ b/docs/source/index.md
@@ -1,6 +1,6 @@
 # movement

-Kinematic analysis of animal 🐝 🦀 🐀 🐒 body movements for neuroscience and ethology research.
+Python tools for analysing body movements across space and time, to aid the study of animal behaviour in neuroscience.

 ::::{grid} 1 2 2 3
 :gutter: 3
@@ -13,52 +13,38 @@ Install and try it out.
 :::

 :::{grid-item-card} {fas}`chalkboard-user;sd-text-primary` Examples
-:link: auto_examples/index
+:link: examples/index
 :link-type: doc

 Example use cases.
 :::

-:::{grid-item-card} {fas}`code;sd-text-primary` API Reference
-:link: api_index
+:::{grid-item-card} {fas}`comments;sd-text-primary` Join the movement
+:link: community/index
 :link-type: doc

-Index of all functions, classes, and methods.
+Get in touch and contribute.
 :::

 ::::

-:::{admonition} Chat with us!
-We welcome your questions and suggestions. Join us on [zulip](https://neuroinformatics.zulipchat.com/#narrow/stream/406001-Movement/topic/Welcome!) to chat with the team.
-:::
-
-## Status
-:::{warning}
-- 🏗️ The package is currently in early development. Stay tuned ⌛
-- It is not sufficiently tested to be used for scientific analysis
-- The interface is subject to changes
-:::
+## Overview

+Pose estimation tools, such as [DeepLabCut](https://www.mackenziemathislab.org/deeplabcut) and [SLEAP](https://sleap.ai/), are now commonplace for processing video data of animal behaviour. There is not yet a standardised, easy-to-use way to process the *pose tracks* produced by these software packages.

-## Aims
-* Load pose tracks from pose estimation software packages (e.g. [DeepLabCut](http://www.mackenziemathislab.org/deeplabcut) or [SLEAP](https://sleap.ai/))
-* Evaluate the quality of the tracks and perform data cleaning operations
-* Calculate kinematic variables (e.g. speed, acceleration, joint angles, etc.)
-* Produce reports and visualise the results
+movement aims to provide a consistent, modular interface for analysing pose tracks, supporting steps such as data cleaning, visualisation and motion quantification.
+We aim to support a range of pose estimation packages, along with 2D or 3D tracking of single or multiple animals.

-
-## Related projects
-The following projects cover related needs and served as inspiration for this project:
-* [DLC2Kinematics](https://github.com/AdaptiveMotorControlLab/DLC2Kinematics)
-* [PyRat](https://github.com/pyratlib/pyrat)
-* [Kino](https://github.com/BrancoLab/Kino)
-* [WAZP](https://github.com/SainsburyWellcomeCentre/WAZP)
+Find out more in our [mission and scope](target-mission) statement and our [roadmap](target-roadmap).

+```{include} /snippets/status-warning.md
+```

 ```{toctree}
 :maxdepth: 2
 :hidden:

 getting_started
-auto_examples/index
+examples/index
+community/index
 api_index
-contributing
 ```
diff --git a/docs/source/snippets/get-in-touch.md b/docs/source/snippets/get-in-touch.md
new file mode 100644
index 00000000..262331f1
--- /dev/null
+++ b/docs/source/snippets/get-in-touch.md
@@ -0,0 +1,3 @@
+:::{admonition} Get in touch
+You are welcome to chat with the team on [zulip](https://neuroinformatics.zulipchat.com/#narrow/stream/406001-Movement/topic/Welcome!). You may also [open an issue](https://github.com/neuroinformatics-unit/movement/issues) to report a bug or request a new feature.
+:::
diff --git a/docs/source/snippets/status-warning.md b/docs/source/snippets/status-warning.md
new file mode 100644
index 00000000..64ba8431
--- /dev/null
+++ b/docs/source/snippets/status-warning.md
@@ -0,0 +1,6 @@
+:::{admonition} Status
+:class: warning
+- 🏗️ The package is currently in early development. Stay tuned ⌛
+- It is not sufficiently tested to be used for scientific analysis.
+- The interface is subject to changes.
+:::