Showing 1 changed file with 4 additions and 4 deletions.
```diff
@@ -1,10 +1,10 @@
 # Behaviour

 ## movement
-[movement](https://movement.neuroinformatics.dev/) aims to **facilitate the study of animal behaviour in neuroscience** by providing a suite of **Python tools to process body movement data** across space and time.
+[movement](https://movement.neuroinformatics.dev/) aims to **facilitate the study of animal behaviour in neuroscience** by providing a suite of **Python tools to analyse body movements** across space and time.

-At its core, movement processes trajectories of *keypoints*, which are specific body parts of an *individual*. An individual's posture or *pose* in a given frame is represented by a set of keypoint coordinates, which are given in 2D (x,y) or 3D (x,y,z). The sequential collection of poses over time is referred to as *pose tracks*. In the field of neuroscience, pose tracks are typically extracted from video data using tools like [DeepLabCut](http://www.mackenziemathislab.org/deeplabcut) or [SLEAP](https://sleap.ai/).
+At its core, movement handles trajectories of *keypoints*, which are specific body parts of an *individual*. An individual's posture or *pose* is represented by a set of keypoint coordinates, given in 2D (x,y) or 3D (x,y,z). The sequential collection of poses over time forms *pose tracks*. In neuroscience, these tracks are typically extracted from video data using software like [DeepLabCut](http://www.mackenziemathislab.org/deeplabcut) or [SLEAP](https://sleap.ai/).

-movement's goal is to offer a **unified interface for pose tracks** and to **process them via modular and accessible tools**. We aim to support data produced by a variety of pose estimation tools, in **2D or 3D space**, tracking a **single or multiple individuals**. The focus will be on providing functionalities for data cleaning, visualisation and motion quantification.
+With movement, our vision is to present a consistent interface for pose tracks and to analyse them using modular and accessible tools. We aim to accommodate data from a range of pose estimation packages, in 2D or 3D, tracking a single or multiple individuals. The focus will be on providing functionalities for data cleaning, visualisation and motion quantification.

-Movement is not designed for behaviour classification or action segmentation, but it may extract features useful for these tasks. We are planning to develop separate tools for these purposes, which will be compatible with movement.
+While movement isn't designed for behaviour classification or action segmentation, it may extract features useful for these tasks. We are planning to develop separate packages for this purpose, which will be compatible with movement and the existing ecosystem of related tools.
```
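The keypoint/pose/pose-track structure described above can be sketched with plain NumPy arrays. This is an illustrative assumption, not movement's actual API or internal representation: a pose track is modelled here as an array of shape `(frames, individuals, keypoints, space)`, with frame-to-frame displacement and speed as a minimal example of motion quantification.

```python
import numpy as np

# Hypothetical sketch (not movement's real data model): pose tracks as an
# array of keypoint coordinates with shape (frames, individuals, keypoints, space).
n_frames, n_individuals, n_keypoints = 100, 2, 3  # e.g. snout, centroid, tail base
rng = np.random.default_rng(seed=42)
pose_tracks = rng.random((n_frames, n_individuals, n_keypoints, 2))  # 2D: (x, y)

# A single pose: all keypoint coordinates for one individual in one frame.
pose = pose_tracks[0, 0]  # shape: (n_keypoints, 2)

# Minimal motion quantification: per-keypoint displacement between
# consecutive frames, and its magnitude (speed in pixels per frame).
displacement = np.diff(pose_tracks, axis=0)    # (frames-1, individuals, keypoints, 2)
speed = np.linalg.norm(displacement, axis=-1)  # (frames-1, individuals, keypoints)

print(pose.shape, displacement.shape, speed.shape)
```

Extending the last axis from 2 to 3 gives the 3D (x, y, z) case with no other changes, which is the kind of uniformity a unified pose-tracks interface relies on.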