diff --git a/develop/.buildinfo b/develop/.buildinfo index 068b73894..ff92ce9d0 100644 --- a/develop/.buildinfo +++ b/develop/.buildinfo @@ -1,4 +1,4 @@ # Sphinx build info version 1 # This file hashes the configuration used when building these files. When it is not found, a full rebuild will be done. -config: bafb023a743a251419508374ad5197e6 +config: a7b7f9800faa1bcc2746e1b3ba7ad6cf tags: 645f666f9bcd5a90fca523b33c5a78b7 diff --git a/develop/CODE_OF_CONDUCT.html b/develop/CODE_OF_CONDUCT.html index 1027a9c2d..1c50c1822 100644 --- a/develop/CODE_OF_CONDUCT.html +++ b/develop/CODE_OF_CONDUCT.html @@ -9,7 +9,7 @@ - Contributor Covenant Code of Conduct — SLEAP (v1.4.1a1) + Contributor Covenant Code of Conduct — SLEAP (v1.4.1a2) @@ -34,7 +34,6 @@ - diff --git a/develop/CONTRIBUTING.html b/develop/CONTRIBUTING.html index ebcee91b8..f247fa178 100644 --- a/develop/CONTRIBUTING.html +++ b/develop/CONTRIBUTING.html @@ -9,7 +9,7 @@ - Contributing to SLEAP — SLEAP (v1.4.1a1) + Contributing to SLEAP — SLEAP (v1.4.1a2) @@ -34,7 +34,6 @@ - diff --git a/develop/_sources/installation.md b/develop/_sources/installation.md index e50b19e18..c0ab66580 100644 --- a/develop/_sources/installation.md +++ b/develop/_sources/installation.md @@ -1,21 +1,10 @@ # Installation -SLEAP can be installed as a Python package on Windows, Linux, and Mac OS. For quick install using conda, see below: +SLEAP can be installed as a Python package on Windows, Linux, Mac OS X, and Mac OS Apple Silicon. -````{tabs} - ```{group-tab} Windows and Linux - ```bash - conda create -y -n sleap -c conda-forge -c nvidia -c sleap -c anaconda sleap=1.4.1a1 - ``` - ``` - ```{group-tab} Mac OS - ```bash - conda create -y -n sleap -c conda-forge -c anaconda -c sleap sleap=1.4.1a1 - ``` - ``` -```` +SLEAP requires many complex dependencies, so we **strongly** recommend using [Mambaforge](https://mamba.readthedocs.io/en/latest/installation.html) to install it in its own isolated environment. See {ref}`Installing Mambaforge` below for more instructions. -. For more in-depth installation instructions, see the [installation methods](installation-methods). The newest version of SLEAP can always be found in the [Releases page](https://github.com/talmolab/sleap/releases). +The newest version of SLEAP can always be found in the [Releases page](https://github.com/talmolab/sleap/releases). ```{contents} Contents --- @@ -23,30 +12,66 @@ local: --- ``` -`````{hint} - Installation requires entering commands in a terminal. To open one: - ````{tabs} - ```{tab} Windows - Open the *Start menu* and search for the *Anaconda Prompt* (if using Miniconda) or the *Command Prompt* if not. - ```{note} - On Windows, our personal preference is to use alternative terminal apps like [Cmder](https://cmder.net) or [Windows Terminal](https://aka.ms/terminal). - ``` - ``` - ```{tab} Linux - Launch a new terminal by pressing Ctrl + Alt + T. - ``` - ```{group-tab} Mac OS - Launch a new terminal by pressing Cmd + Space and searching for _Terminal_. - ``` +````{hint} +Installation requires entering commands in a terminal. To open one: + +**Windows:** Open the *Start menu* and search for the *Miniforge Prompt* (if using Mambaforge) or the *Command Prompt* if not. +```{note} +On Windows, our personal preference is to use alternative terminal apps like [Cmder](https://cmder.net) or [Windows Terminal](https://aka.ms/terminal). +``` + +**Linux:** Launch a new terminal by pressing Ctrl + Alt + T. + +**Mac:** Launch a new terminal by pressing Cmd + Space and searching for _Terminal_. 
+
+````
+
+(apple-silicon)=
+
+### Macs (Apple Silicon) Pre-Installation
+
+SLEAP can be installed on Apple Silicon Macs by first following these instructions:
+
+1. Make sure you're on **macOS Monterey** or later, i.e., version 12+.
+
+2. If you don't have it yet, [install **homebrew**](https://brew.sh/), a convenient package manager for Macs (skip this if you can run `brew` from the terminal):
+
+   ```bash
+   /bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
+   ```
+
+   This might take a little while since it'll also install the Xcode Command Line Tools (which we'll need later). Once it's finished, your terminal should give you two extra commands to run listed under **Next Steps**.
+
+   ````{note}
+   We recommend running the commands given in your terminal, which will be similar to (but may differ slightly from) the commands below:
+   ```bash
+   echo 'eval "$(/opt/homebrew/bin/brew shellenv)"' >> ~/.zprofile
+   ```
+
+   ```bash
+   eval "$(/opt/homebrew/bin/brew shellenv)"
+   ```
+   ````

-`````
-## Package Manager
+
+   Then, close and re-open the terminal for it to take effect.
+
+3. Install wget, a command-line download utility (this also verifies that your homebrew setup worked):
+
+   ```bash
+   brew install wget
+   ```
+
+(mambaforge)=

-SLEAP requires many complex dependencies, so we **strongly** recommend using a package manager such as [Miniconda](https://docs.anaconda.com/free/miniconda/) to install SLEAP in its own isolated environment. See the [Miniconda website](https://docs.anaconda.com/free/miniconda/) for installation instructions. The Anaconda or Mamba package managers will also work well; however, take care not to install multiple different conda-based package managers - choose one and stick with it.
+## Installing Mambaforge
+
+**Anaconda** is a Python distribution and environment manager that makes it easy to install SLEAP and its necessary dependencies without affecting other Python software on your computer.
+
+[**Mambaforge**](https://mamba.readthedocs.io/en/latest/installation.html) is a lightweight alternative to Anaconda with speedy package resolution, and it is the installer we recommend.

````{note}
-If you already have Anaconda on your computer (and it is an [older installation](https://conda.org/blog/2023-11-06-conda-23-10-0-release/)), then make sure to [set the solver to `libmamba`](https://www.anaconda.com/blog/a-faster-conda-for-a-growing-community) in the `base` environment.
+If you already have Anaconda on your computer, then you can [set the solver to `libmamba`](https://www.anaconda.com/blog/a-faster-conda-for-a-growing-community) in the `base` environment (and skip the Mambaforge installation):

```bash
conda update -n base conda
@@ -55,144 +80,195 @@ conda config --set solver libmamba
```

```{warning}
-Any subsequent `conda` commands in the docs will need to be replaced with `mamba` if you have [Mamba](https://mamba.readthedocs.io/en/latest/) installed instead of Anaconda or Miniconda.
+Any subsequent `mamba` commands in the docs will need to be replaced with `conda` if you choose to use your existing Anaconda installation.
```
````

-(installation-methods)=
+Otherwise, to install Mambaforge:
+
+**On Windows**, just click through the installation steps:
+
+1. Go to: https://github.com/conda-forge/miniforge#mambaforge
+2. Download the latest version for your OS.
+3. Follow the installer instructions. 
+
+We recommend using the following settings:
+
+- Install for: All Users (requires admin privileges)
+- Destination folder: `C:\mambaforge`
+- Advanced Options: Add Mambaforge to the system PATH environment variable
+- Advanced Options: Register Mambaforge as the system Python 3.X
+
+These settings make sure that Mambaforge is easily accessible from most places on your computer.
+
+**On Linux**, it might be easier to do this straight from the terminal (Ctrl + Alt + T) with this one-liner:
+
+```bash
+wget -nc https://github.com/conda-forge/miniforge/releases/latest/download/Mambaforge-Linux-x86_64.sh && bash Mambaforge-Linux-x86_64.sh -b && ~/mambaforge/bin/conda init bash
+```
+
+Restart the terminal after running this command.
+
+```{note}
+For other Linux architectures (arm64 and POWER8/9), replace the `.sh` filenames above with the correct installer name for your architecture. See the Download column in [this table](https://github.com/conda-forge/miniforge#mambaforge) for the correct filename.
+```
+
+**On Macs (pre-M1)**, you can run the installer using this terminal command:
+
+```bash
+wget -nc https://github.com/conda-forge/miniforge/releases/latest/download/Mambaforge-MacOSX-x86_64.sh && bash Mambaforge-MacOSX-x86_64.sh -b && ~/mambaforge/bin/conda init zsh
+```
+
+**On Macs (Apple Silicon)**, use this terminal command:
+
+```bash
+curl -fsSL --compressed https://github.com/conda-forge/miniforge/releases/latest/download/Mambaforge-MacOSX-arm64.sh -o Mambaforge3-MacOSX-arm64.sh && chmod +x Mambaforge3-MacOSX-arm64.sh && ./Mambaforge3-MacOSX-arm64.sh -b -p ~/mambaforge3 && rm Mambaforge3-MacOSX-arm64.sh && ~/mambaforge3/bin/conda init "$(basename "${SHELL}")" && source "$HOME/.$(basename "${SHELL}")rc"
+```
+
## Installation methods

SLEAP can be installed three different ways: via {ref}`conda package`, {ref}`conda from source`, or {ref}`pip package`. Select one of the methods below to install SLEAP. We recommend {ref}`conda package`.

-````{tabs}
- ```{tab} conda package
- **This is the recommended installation method**.
- ````{tabs}
- ```{group-tab} Windows and Linux
- ```bash
- conda create -y -n sleap -c conda-forge -c nvidia -c sleap -c anaconda sleap=1.4.1a1
- ```
- ```{note}
- - This comes with CUDA to enable GPU support. All you need is to have an NVIDIA GPU and [updated drivers](https://nvidia.com/drivers).
- - If you already have CUDA installed on your system, this will not conflict with it.
- - This will also work in CPU mode if you don't have a GPU on your machine.
- ```
- ```
- ```{group-tab} Mac OS
- ```bash
- conda create -y -n sleap -c conda-forge -c anaconda -c sleap sleap=1.4.1a1
- ```
- ```{note}
- This will also work in CPU mode if you don't have a GPU on your machine.
- ```
- ```
- ````
+(condapackage)=
+
+### `conda` package
+
+**This is the recommended installation method**.
+
+**Windows** and **Linux**
+
+```bash
+mamba create -y -n sleap -c conda-forge -c nvidia -c sleap -c anaconda sleap=1.4.1a2
+```
+
+**Mac OS X** and **Apple Silicon**
+
+```bash
+mamba create -y -n sleap -c conda-forge -c anaconda -c sleap sleap=1.4.1a2
+```
+
+```{note}
+- This comes with CUDA to enable GPU support. All you need is to have an NVIDIA GPU and [updated drivers](https://nvidia.com/drivers).
+- If you already have CUDA installed on your system, this will not conflict with it.
+- This will also work in CPU mode if you don't have a GPU on your machine.
+```
+
+(condasource)=
+
+### `conda` from source
+
+1. 
First, ensure git is installed:
+
+   ```bash
+   git --version
+   ```
+
+   If `git` is not recognized, then [install git](https://git-scm.com/book/en/v2/Getting-Started-Installing-Git).
+
+2. Then, clone the repository:
+
+   ```bash
+   git clone https://github.com/talmolab/sleap && cd sleap
+   ```
+
+3. Finally, install from the environment file (differs based on OS and GPU):
+
+   **Windows** and **Linux**
+
+   ```bash
+   mamba env create -f environment.yml -n sleap
   ```
- ```{tab} conda from source
- This is the **recommended method for development**.
- 1. First, ensure git is installed:
- ```bash
- git --version
- ```
- If `git` is not recognized, then [install git](https://git-scm.com/book/en/v2/Getting-Started-Installing-Git).
- 2. Then, clone the repository:
- ```bash
- git clone https://github.com/talmolab/sleap && cd sleap
- ```
- 3. Finally, install SLEAP from the environment file:
- ````{tabs}
- ```{group-tab} Windows and Linux
- ````{tabs}
- ```{group-tab} NVIDIA GPU
- ```bash
- conda env create -f environment.yml -n sleap
- ```
- ```
- ```{group-tab} CPU or other GPU
- ```bash
- conda env create -f environment_no_cuda.yml -n sleap
- ```
- ```
- ````
- ```
- ```{group-tab} Mac OS
- ```bash
- conda env create -f environment_mac.yml -n sleap
- ```
- ```
- ````
- ```{note}
- - This installs SLEAP in development mode, which means that edits to the source code will be applied the next time you run SLEAP.
- - Change the `-n sleap` in the command to create an environment with a different name (e.g., `-n sleap_develop`).
- ```
 ```
+
+   If you do not have an NVIDIA GPU, then you should use the no CUDA environment file:
+
+   ```bash
+   mamba env create -f environment_no_cuda.yml -n sleap
   ```
- ```{tab} pip package
- This is the **recommended method for Google Colab only**.
- ```{warning}
- This will uninstall existing libraries and potentially install conflicting ones.
-
- We strongly recommend that you **only use this method if you know what you're doing**!
- ```
- ````{tabs}
- ```{group-tab} Windows and Linux
- ```{note}
- - Requires Python 3.7
- - To enable GPU support, make sure that you have **CUDA Toolkit v11.3** and **cuDNN v8.2** installed.
- ```
- Although you do not need Miniconda installed to perform a `pip install`, we recommend [installing Miniconda](https://docs.anaconda.com/free/miniconda/) to create a new environment where we can isolate the `pip install`. Alternatively, you can use a venv if you have an existing Python 3.7 installation. If you are working on **Google Colab**, skip to step 3 to perform the `pip install` without using a conda environment.
- 1. Otherwise, create a new conda environment where we will `pip install sleap`:
- ````{tabs}
- ```{group-tab} NVIDIA GPU
- ```bash
- conda create --name sleap pip python=3.7.12 cudatoolkit=11.3 cudnn=8.2
- ```
- ```
- ```{group-tab} CPU or other GPU
- ```bash
- conda create --name sleap pip python=3.7.12
- ```
- ```
- ````
- 2. Then activate the environment to isolate the `pip install` from other environments on your computer:
- ```bash
- conda activate sleap
- ```
- ```{warning}
- Refrain from installing anything into the `base` environment. Always create a new environment to install new packages.
- ```
- 3. Finally, we can perform the `pip install`:
- ```bash
- pip install sleap[pypi]==1.4.1a1
- ```
- ```{note}
- The pypi distributed package of SLEAP ships with the following extras:
- - **pypi**: For installation without an conda environment file. All dependencies come from PyPI. 
- **jupyter**: This installs all *pypi* and jupyter lab dependencies.
- **dev**: This installs all *jupyter* dependencies and developement tools for testing and building docs.
- **conda_jupyter**: For installation using a conda environment file included in the source code. Most dependencies are listed as conda packages in the environment file and only a few come from PyPI to allow jupyter lab support.
- **conda_dev**: For installation using [a conda environment](https://github.com/search?q=repo%3Atalmolab%2Fsleap+path%3Aenvironment*.yml&type=code) with a few PyPI dependencies for development tools.
- ```
- ```
- ```{group-tab} Mac OS
- Not supported.
- ```
- ````
+
+   **Mac OS X** and **Apple Silicon**
+
+   ```bash
+   mamba env create -f environment_mac.yml -n sleap
+   ```
+
+   This is the **recommended method for development**.
+
+```{note}
+- This installs SLEAP in development mode, which means that edits to the source code will be applied the next time you run SLEAP.
+- Change the `-n sleap` in the command to create an environment with a different name (e.g., `-n sleap_develop`).
+```
+
+(pippackage)=
+
+### `pip` package
+
+Although you do not need Mambaforge installed to perform a `pip install`, we recommend {ref}`installing Mambaforge` to create a new environment where we can isolate the `pip install`. Alternatively, you can use a venv if you have an existing Python 3.7 installation. If you are working on **Google Colab**, skip to step 3 to perform the `pip install` without using a conda environment.
+
+1. Otherwise, create a new conda environment where we will `pip install sleap`:
+
+   either without GPU support:
+
+   ```bash
+   mamba create --name sleap pip python=3.7.12
+   ```
+
+   or with GPU support:
+
+   ```bash
+   mamba create --name sleap pip python=3.7.12 cudatoolkit=11.3 cudnn=8.2
+   ```
+
+2. Then activate the environment to isolate the `pip install` from other environments on your computer:
+
+   ```bash
+   mamba activate sleap
+   ```
+
+   ```{warning}
+   Refrain from installing anything into the `base` environment. Always create a new environment to install new packages.
+   ```
+
+3. Finally, we can perform the `pip install`:
+
+   ```bash
+   pip install sleap[pypi]==1.4.1a2
+   ```
+
+   This works on **any OS except Apple Silicon** and on **Google Colab**.
+
+   ```{note}
+   The pypi distributed package of SLEAP ships with the following extras:
+   - **pypi**: For installation without a mamba environment file. All dependencies come from PyPI.
+   - **jupyter**: This installs all *pypi* and jupyter lab dependencies.
+   - **dev**: This installs all *jupyter* dependencies and development tools for testing and building docs.
+   - **conda_jupyter**: For installation using a mamba environment file included in the source code. Most dependencies are listed as conda packages in the environment file and only a few come from PyPI to allow jupyter lab support.
+   - **conda_dev**: For installation using [a mamba environment](https://github.com/search?q=repo%3Atalmolab%2Fsleap+path%3Aenvironment*.yml&type=code) with a few PyPI dependencies for development tools.
+   ```
+
+   ```{note}
+   - Requires Python 3.7
+   - To enable GPU support, make sure that you have **CUDA Toolkit v11.3** and **cuDNN v8.2** installed.
+   ```
+
+   ```{warning}
+   This will uninstall existing libraries and potentially install conflicting ones.
+
+   We strongly recommend that you **only use this method if you know what you're doing**! 
``` -```` ## Testing that things are working -If you installed using `conda`, first activate the `sleap` environment by opening a terminal and typing: +If you installed using `mamba`, first activate the `sleap` environment by opening a terminal and typing: ```bash -conda activate sleap +mamba activate sleap ``` ````{hint} -Not sure what `conda` environments you already installed? You can get a list of the environments on your system with: +Not sure what `mamba` environments you already installed? You can get a list of the environments on your system with: ``` -conda env list +mamba env list ``` ```` @@ -225,7 +301,7 @@ python -c "import sleap; sleap.versions()" ### GPU support -Assuming you installed using either of the `conda`-based methods on Windows or Linux, SLEAP should automatically have GPU support enabled. +Assuming you installed using either of the `mamba`-based methods on Windows or Linux, SLEAP should automatically have GPU support enabled. To check, verify that SLEAP can detect the GPUs on your system: @@ -286,7 +362,7 @@ file: No such file or directory then activate the environment: ```bash -conda activate sleap +mamba activate sleap ``` and run the commands: @@ -315,13 +391,13 @@ We **strongly recommend** installing SLEAP in a fresh environment when updating. To uninstall an existing environment named `sleap`: ```bash -conda env remove -n sleap +mamba env remove -n sleap ``` ````{hint} -Not sure what `conda` environments you already installed? You can get a list of the environments on your system with: +Not sure what `mamba` environments you already installed? You can get a list of the environments on your system with: ```bash -conda env list +mamba env list ``` ```` @@ -337,10 +413,10 @@ If you get any errors or the GUI fails to launch, try running the diagnostics to sleap-diagnostic ``` -If you were not able to get SLEAP installed, activate the conda environment it is in and generate a list of the package versions installed: +If you were not able to get SLEAP installed, activate the mamba environment it is in and generate a list of the package versions installed: ```bash -conda list +mamba list ``` Then, [open a new Issue](https://github.com/talmolab/sleap/issues) providing the versions from either command above, as well as any errors you saw in the console during the installation. Or [start a discussion](https://github.com/talmolab/sleap/discussions) to get help from the community. 
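+
+For example, here is a minimal sketch of the commands that collect everything an issue report needs (this assumes your environment is named `sleap`; the output filenames are arbitrary):
+
+```bash
+# Activate the environment that SLEAP was installed into.
+mamba activate sleap
+
+# Save the installed package versions to a file you can attach to the issue.
+mamba list > sleap-packages.txt
+
+# Save the diagnostic output as well.
+sleap-diagnostic > sleap-diagnostic.txt
+```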
diff --git a/develop/_static/css/tabs.css b/develop/_static/css/tabs.css deleted file mode 100644 index 10914e8a4..000000000 --- a/develop/_static/css/tabs.css +++ /dev/null @@ -1,93 +0,0 @@ -.sphinx-tabs { - margin-bottom: 1rem; - } - - [role="tablist"] { - border-bottom: 1px solid #a0b3bf; - } - - .sphinx-tabs-tab { - position: relative; - font-family: Lato,'Helvetica Neue',Arial,Helvetica,sans-serif; - color: var(--pst-color-link); - line-height: 24px; - margin: 3px; - font-size: 16px; - font-weight: 400; - background-color: var(--bs-body-color); - border-radius: 5px 5px 0 0; - border: 0; - padding: 1rem 1.5rem; - margin-bottom: 0; - } - - .sphinx-tabs-tab[aria-selected="true"] { - font-weight: 700; - border: 1px solid #a0b3bf; - border-bottom: 1px solid rgb(50, 50, 50); - margin: -1px; - background-color: rgb(50, 50, 50); - } - - .admonition .sphinx-tabs-tab[aria-selected="true"]:last-child { - margin-bottom: -1px; - } - - .sphinx-tabs-tab:focus { - z-index: 1; - outline-offset: 1px; - } - - .sphinx-tabs-panel { - position: relative; - padding: 1rem; - border: 1px solid #a0b3bf; - margin: 0px -1px -1px -1px; - border-radius: 0 0 5px 5px; - border-top: 0; - background: rgb(50, 50, 50); - } - - .sphinx-tabs-panel.code-tab { - padding: 0.4rem; - } - - .sphinx-tab img { - margin-bottom: 24 px; - } - - /* Dark theme preference styling */ - - @media (prefers-color-scheme: dark) { - body[data-theme="auto"] .sphinx-tabs-panel { - color: white; - background-color: rgb(50, 50, 50); - } - - body[data-theme="auto"] .sphinx-tabs-tab { - color: white; - background-color: rgba(255, 255, 255, 0.05); - } - - body[data-theme="auto"] .sphinx-tabs-tab[aria-selected="true"] { - border-bottom: 1px solid rgb(50, 50, 50); - background-color: rgb(50, 50, 50); - } - } - - /* Explicit dark theme styling */ - - body[data-theme="dark"] .sphinx-tabs-panel { - color: white; - background-color: rgb(50, 50, 50); - } - - body[data-theme="dark"] .sphinx-tabs-tab { - color: white; - background-color: rgba(255, 255, 255, 0.05); - } - - body[data-theme="dark"] .sphinx-tabs-tab[aria-selected="true"] { - border-bottom: 2px solid rgb(50, 50, 50); - background-color: rgb(50, 50, 50); - } \ No newline at end of file diff --git a/develop/_static/documentation_options.js b/develop/_static/documentation_options.js index aa3a1677a..c1be250f4 100644 --- a/develop/_static/documentation_options.js +++ b/develop/_static/documentation_options.js @@ -1,6 +1,6 @@ var DOCUMENTATION_OPTIONS = { URL_ROOT: document.getElementById("documentation_options").getAttribute('data-url_root'), - VERSION: 'v1.4.1a1', + VERSION: 'v1.4.1a2', LANGUAGE: 'None', COLLAPSE_INDEX: false, BUILDER: 'html', diff --git a/develop/_static/tabs.css b/develop/_static/tabs.css deleted file mode 100644 index 957ba60d6..000000000 --- a/develop/_static/tabs.css +++ /dev/null @@ -1,89 +0,0 @@ -.sphinx-tabs { - margin-bottom: 1rem; -} - -[role="tablist"] { - border-bottom: 1px solid #a0b3bf; -} - -.sphinx-tabs-tab { - position: relative; - font-family: Lato,'Helvetica Neue',Arial,Helvetica,sans-serif; - color: #1D5C87; - line-height: 24px; - margin: 0; - font-size: 16px; - font-weight: 400; - background-color: rgba(255, 255, 255, 0); - border-radius: 5px 5px 0 0; - border: 0; - padding: 1rem 1.5rem; - margin-bottom: 0; -} - -.sphinx-tabs-tab[aria-selected="true"] { - font-weight: 700; - border: 1px solid #a0b3bf; - border-bottom: 1px solid white; - margin: -1px; - background-color: white; -} - -.sphinx-tabs-tab:focus { - z-index: 1; - outline-offset: 1px; -} - 
-.sphinx-tabs-panel { - position: relative; - padding: 1rem; - border: 1px solid #a0b3bf; - margin: 0px -1px -1px -1px; - border-radius: 0 0 5px 5px; - border-top: 0; - background: white; -} - -.sphinx-tabs-panel.code-tab { - padding: 0.4rem; -} - -.sphinx-tab img { - margin-bottom: 24 px; -} - -/* Dark theme preference styling */ - -@media (prefers-color-scheme: dark) { - body[data-theme="auto"] .sphinx-tabs-panel { - color: white; - background-color: rgb(50, 50, 50); - } - - body[data-theme="auto"] .sphinx-tabs-tab { - color: white; - background-color: rgba(255, 255, 255, 0.05); - } - - body[data-theme="auto"] .sphinx-tabs-tab[aria-selected="true"] { - border-bottom: 1px solid rgb(50, 50, 50); - background-color: rgb(50, 50, 50); - } -} - -/* Explicit dark theme styling */ - -body[data-theme="dark"] .sphinx-tabs-panel { - color: white; - background-color: rgb(50, 50, 50); -} - -body[data-theme="dark"] .sphinx-tabs-tab { - color: white; - background-color: rgba(255, 255, 255, 0.05); -} - -body[data-theme="dark"] .sphinx-tabs-tab[aria-selected="true"] { - border-bottom: 2px solid rgb(50, 50, 50); - background-color: rgb(50, 50, 50); -} diff --git a/develop/_static/tabs.js b/develop/_static/tabs.js deleted file mode 100644 index 48dc303c8..000000000 --- a/develop/_static/tabs.js +++ /dev/null @@ -1,145 +0,0 @@ -try { - var session = window.sessionStorage || {}; -} catch (e) { - var session = {}; -} - -window.addEventListener("DOMContentLoaded", () => { - const allTabs = document.querySelectorAll('.sphinx-tabs-tab'); - const tabLists = document.querySelectorAll('[role="tablist"]'); - - allTabs.forEach(tab => { - tab.addEventListener("click", changeTabs); - }); - - tabLists.forEach(tabList => { - tabList.addEventListener("keydown", keyTabs); - }); - - // Restore group tab selection from session - const lastSelected = session.getItem('sphinx-tabs-last-selected'); - if (lastSelected != null) selectNamedTabs(lastSelected); -}); - -/** - * Key focus left and right between sibling elements using arrows - * @param {Node} e the element in focus when key was pressed - */ -function keyTabs(e) { - const tab = e.target; - let nextTab = null; - if (e.keyCode === 39 || e.keyCode === 37) { - tab.setAttribute("tabindex", -1); - // Move right - if (e.keyCode === 39) { - nextTab = tab.nextElementSibling; - if (nextTab === null) { - nextTab = tab.parentNode.firstElementChild; - } - // Move left - } else if (e.keyCode === 37) { - nextTab = tab.previousElementSibling; - if (nextTab === null) { - nextTab = tab.parentNode.lastElementChild; - } - } - } - - if (nextTab !== null) { - nextTab.setAttribute("tabindex", 0); - nextTab.focus(); - } -} - -/** - * Select or deselect clicked tab. If a group tab - * is selected, also select tab in other tabLists. 
- * @param {Node} e the element that was clicked - */ -function changeTabs(e) { - // Use this instead of the element that was clicked, in case it's a child - const notSelected = this.getAttribute("aria-selected") === "false"; - const positionBefore = this.parentNode.getBoundingClientRect().top; - const notClosable = !this.parentNode.classList.contains("closeable"); - - deselectTabList(this); - - if (notSelected || notClosable) { - selectTab(this); - const name = this.getAttribute("name"); - selectNamedTabs(name, this.id); - - if (this.classList.contains("group-tab")) { - // Persist during session - session.setItem('sphinx-tabs-last-selected', name); - } - } - - const positionAfter = this.parentNode.getBoundingClientRect().top; - const positionDelta = positionAfter - positionBefore; - // Scroll to offset content resizing - window.scrollTo(0, window.scrollY + positionDelta); -} - -/** - * Select tab and show associated panel. - * @param {Node} tab tab to select - */ -function selectTab(tab) { - tab.setAttribute("aria-selected", true); - - // Show the associated panel - document - .getElementById(tab.getAttribute("aria-controls")) - .removeAttribute("hidden"); -} - -/** - * Hide the panels associated with all tabs within the - * tablist containing this tab. - * @param {Node} tab a tab within the tablist to deselect - */ -function deselectTabList(tab) { - const parent = tab.parentNode; - const grandparent = parent.parentNode; - - Array.from(parent.children) - .forEach(t => t.setAttribute("aria-selected", false)); - - Array.from(grandparent.children) - .slice(1) // Skip tablist - .forEach(panel => panel.setAttribute("hidden", true)); -} - -/** - * Select grouped tabs with the same name, but no the tab - * with the given id. - * @param {Node} name name of grouped tab to be selected - * @param {Node} clickedId id of clicked tab - */ -function selectNamedTabs(name, clickedId=null) { - const groupedTabs = document.querySelectorAll(`.sphinx-tabs-tab[name="${name}"]`); - const tabLists = Array.from(groupedTabs).map(tab => tab.parentNode); - - tabLists - .forEach(tabList => { - // Don't want to change the tabList containing the clicked tab - const clickedTab = tabList.querySelector(`[id="${clickedId}"]`); - if (clickedTab === null ) { - // Select first tab with matching name - const tab = tabList.querySelector(`.sphinx-tabs-tab[name="${name}"]`); - deselectTabList(tab); - selectTab(tab); - } - }) -} - -if (typeof exports === 'undefined') { - exports = {}; -} - -exports.keyTabs = keyTabs; -exports.changeTabs = changeTabs; -exports.selectTab = selectTab; -exports.deselectTabList = deselectTabList; -exports.selectNamedTabs = selectNamedTabs; diff --git a/develop/api.html b/develop/api.html index e2f8a6e0c..4d24d74bb 100644 --- a/develop/api.html +++ b/develop/api.html @@ -9,7 +9,7 @@ - Developer API — SLEAP (v1.4.1a1) + Developer API — SLEAP (v1.4.1a2) @@ -34,7 +34,6 @@ - diff --git a/develop/api/sleap.info.align.html b/develop/api/sleap.info.align.html index 01bd53d8c..0eb9626c1 100644 --- a/develop/api/sleap.info.align.html +++ b/develop/api/sleap.info.align.html @@ -9,7 +9,7 @@ - sleap.info.align — SLEAP (v1.4.1a1) + sleap.info.align — SLEAP (v1.4.1a2) @@ -34,7 +34,6 @@ - @@ -344,49 +343,49 @@

sleap.info.align

which doesn’t yet have all points).

-sleap.info.align.align_instance_points(source_points_array, target_points_array)[source]#
+sleap.info.align.align_instance_points(source_points_array, target_points_array)[source]#

Transforms source for best fit on to target.

-sleap.info.align.align_instances(all_points_arrays: numpy.ndarray, node_a: int, node_b: int, rotate_on_node_a: bool = False) numpy.ndarray[source]#
+sleap.info.align.align_instances(all_points_arrays: numpy.ndarray, node_a: int, node_b: int, rotate_on_node_a: bool = False) numpy.ndarray[source]#

Rotates every instance so that line from node_a to node_b aligns.

-sleap.info.align.align_instances_on_most_stable(all_points_arrays: numpy.ndarray, min_stable_dist: float = 4.0) numpy.ndarray[source]#
+sleap.info.align.align_instances_on_most_stable(all_points_arrays: numpy.ndarray, min_stable_dist: float = 4.0) numpy.ndarray[source]#

Gets most stable pair of nodes and aligned instances along these nodes.

-sleap.info.align.get_instances_points(instances: List[sleap.instance.Instance]) numpy.ndarray[source]#
+sleap.info.align.get_instances_points(instances: List[sleap.instance.Instance]) numpy.ndarray[source]#

Returns single (instance, node, 2) matrix with points for all instances.

-sleap.info.align.get_mean_and_std_for_points(aligned_points_arrays: numpy.ndarray) Tuple[numpy.ndarray, numpy.ndarray][source]#
+sleap.info.align.get_mean_and_std_for_points(aligned_points_arrays: numpy.ndarray) Tuple[numpy.ndarray, numpy.ndarray][source]#

Returns mean and standard deviation for every node given aligned points.

-sleap.info.align.get_most_stable_node_pair(all_points_arrays: numpy.ndarray, min_dist: float = 0.0) Tuple[int, int][source]#
+sleap.info.align.get_most_stable_node_pair(all_points_arrays: numpy.ndarray, min_dist: float = 0.0) Tuple[int, int][source]#

Returns pair of nodes which are at stable distance (over min threshold).

-sleap.info.align.get_stable_node_pairs(all_points_arrays: numpy.ndarray, node_names, min_dist: float = 0.0)[source]#
+sleap.info.align.get_stable_node_pairs(all_points_arrays: numpy.ndarray, node_names, min_dist: float = 0.0)[source]#

Returns sorted list of node pairs with mean and standard dev distance.

-sleap.info.align.get_template_points_array(instances: List[sleap.instance.Instance]) numpy.ndarray[source]#
+sleap.info.align.get_template_points_array(instances: List[sleap.instance.Instance]) numpy.ndarray[source]#

Returns mean of aligned points for instances.

diff --git a/develop/api/sleap.info.feature_suggestions.html b/develop/api/sleap.info.feature_suggestions.html index 5fd5363f1..e2f42130d 100644 --- a/develop/api/sleap.info.feature_suggestions.html +++ b/develop/api/sleap.info.feature_suggestions.html @@ -9,7 +9,7 @@ - sleap.info.feature_suggestions — SLEAP (v1.4.1a1) + sleap.info.feature_suggestions — SLEAP (v1.4.1a2) @@ -34,7 +34,6 @@ - @@ -323,7 +322,7 @@

sleap.info.feature_suggestions

Module for generating lists of frames using frame features, pca, kmeans, etc.

-class sleap.info.feature_suggestions.FrameGroupSet(method: str, item_group: Dict[sleap.info.feature_suggestions.FrameItem, int] = NOTHING, group_data: Dict[int, dict] = NOTHING, groupset_data: Dict = NOTHING)[source]#
+class sleap.info.feature_suggestions.FrameGroupSet(method: str, item_group: Dict[sleap.info.feature_suggestions.FrameItem, int] = NOTHING, group_data: Dict[int, dict] = NOTHING, groupset_data: Dict = NOTHING)[source]#

Class for a set of groups of FrameItem objects.

Each item can have at most one group; each group is represented as an int.

@@ -379,19 +378,19 @@

sleap.info.feature_suggestions

-append_to_group(group: int, item: sleap.info.feature_suggestions.FrameItem)[source]#
+append_to_group(group: int, item: sleap.info.feature_suggestions.FrameItem)[source]#

Adds item to group.

-extend_group_items(group: int, item_list: List[sleap.info.feature_suggestions.FrameItem])[source]#
+extend_group_items(group: int, item_list: List[sleap.info.feature_suggestions.FrameItem])[source]#

Adds all items in list to group.

-get_item_group(item: sleap.info.feature_suggestions.FrameItem)[source]#
+get_item_group(item: sleap.info.feature_suggestions.FrameItem)[source]#

Returns group that contain item.

@@ -403,7 +402,7 @@

sleap.info.feature_suggestions

-sample(per_group: int, unique_samples: bool = True)[source]#
+sample(per_group: int, unique_samples: bool = True)[source]#

Returns new FrameGroupSet with groups sampled from current groups.

Note that the order of items in the new groups will not match order of items in the groups from which samples are drawn.

@@ -425,13 +424,13 @@

sleap.info.feature_suggestions

-class sleap.info.feature_suggestions.FrameItem(video: sleap.io.video.Video, frame_idx: int)[source]#
+class sleap.info.feature_suggestions.FrameItem(video: sleap.io.video.Video, frame_idx: int)[source]#

Just a simple wrapper for (video, frame_idx), plus method to get image.

-class sleap.info.feature_suggestions.ItemStack(items: List = NOTHING, data: Optional[numpy.ndarray] = None, ownership: Optional[List[tuple]] = None, meta: List = NOTHING, group_sets: List[sleap.info.feature_suggestions.FrameGroupSet] = NOTHING)[source]#
+class sleap.info.feature_suggestions.ItemStack(items: List = NOTHING, data: Optional[numpy.ndarray] = None, ownership: Optional[List[tuple]] = None, meta: List = NOTHING, group_sets: List[sleap.info.feature_suggestions.FrameGroupSet] = NOTHING)[source]#

Container for items, each item can “own” one or more rows of data.

@@ -491,7 +490,7 @@

sleap.info.feature_suggestions

-brisk_bag_of_features(brisk_threshold=40, vocab_size=20)[source]#
+brisk_bag_of_features(brisk_threshold=40, vocab_size=20)[source]#

Transform data using bag of features based on brisk features.

@@ -503,67 +502,67 @@

sleap.info.feature_suggestions

-extend_ownership(ownership, row_count)[source]#
+extend_ownership(ownership, row_count)[source]#

Extends an ownership list with number of rows owned by next item.

-flatten()[source]#
+flatten()[source]#

Flattens each row of data to 1-d array.

-get_all_items_from_group()[source]#
+get_all_items_from_group()[source]#

Sets items for Stack to all items from current GroupSet.

-get_item_data(item)[source]#
+get_item_data(item)[source]#

Returns rows of data which belong to item.

-get_item_data_idxs(item)[source]#
+get_item_data_idxs(item)[source]#

Returns indexes of rows in data which belong to item.

-get_raw_images(scale=0.5)[source]#
+get_raw_images(scale=0.5)[source]#

Sets data to raw image for each FrameItem.

-hog_bag_of_features(brisk_threshold=40, vocab_size=20)[source]#
+hog_bag_of_features(brisk_threshold=40, vocab_size=20)[source]#

Transforms data into bag of features vector of hog descriptors.

-kmeans(n_clusters: int)[source]#
+kmeans(n_clusters: int)[source]#

Adds GroupSet using k-means clustering on data.

-make_sample_group(videos: List[sleap.io.video.Video], samples_per_video: int, sample_method: str = 'stride')[source]#
+make_sample_group(videos: List[sleap.io.video.Video], samples_per_video: int, sample_method: str = 'stride')[source]#

Adds GroupSet by sampling frames from each video.

-pca(n_components: int)[source]#
+pca(n_components: int)[source]#

Transforms data by applying PCA.

-sample_groups(samples_per_group: int)[source]#
+sample_groups(samples_per_group: int)[source]#

Adds GroupSet by sampling items from current GroupSet.

@@ -571,7 +570,7 @@

sleap.info.feature_suggestions

-class sleap.info.feature_suggestions.ParallelFeaturePipeline(pipeline: sleap.info.feature_suggestions.FeatureSuggestionPipeline, videos_as_dicts: List[Dict])[source]#
+class sleap.info.feature_suggestions.ParallelFeaturePipeline(pipeline: sleap.info.feature_suggestions.FeatureSuggestionPipeline, videos_as_dicts: List[Dict])[source]#

Enables easy per-video pipeline parallelization for feature suggestions.

Create a FeatureSuggestionPipeline with the desired parameters, and then call ParallelFeaturePipeline.run() with the pipeline and the list @@ -580,25 +579,25 @@

sleap.info.feature_suggestions

the results back into a single list of SuggestionFrame objects.

-get(video_idx)[source]#
+get(video_idx)[source]#

Apply pipeline to single video by idx. Can be called in process.

-classmethod make(pipeline, videos)[source]#
+classmethod make(pipeline, videos)[source]#

Make class object from pipeline and list of videos.

-classmethod run(pipeline, videos, parallel=True)[source]#
+classmethod run(pipeline, videos, parallel=True)[source]#

Runs pipeline on all videos in parallel and returns suggestions.

-classmethod tuples_to_suggestions(tuples, videos)[source]#
+classmethod tuples_to_suggestions(tuples, videos)[source]#

Converts serialized data from processes back into SuggestionFrames.

diff --git a/develop/api/sleap.info.labels.html b/develop/api/sleap.info.labels.html index cbb49ea6f..d6c715684 100644 --- a/develop/api/sleap.info.labels.html +++ b/develop/api/sleap.info.labels.html @@ -9,7 +9,7 @@ - sleap.info.labels — SLEAP (v1.4.1a1) + sleap.info.labels — SLEAP (v1.4.1a2) @@ -34,7 +34,6 @@ - diff --git a/develop/api/sleap.info.metrics.html b/develop/api/sleap.info.metrics.html index 9cebf10e2..579c1f2f1 100644 --- a/develop/api/sleap.info.metrics.html +++ b/develop/api/sleap.info.metrics.html @@ -9,7 +9,7 @@ - sleap.info.metrics — SLEAP (v1.4.1a1) + sleap.info.metrics — SLEAP (v1.4.1a2) @@ -34,7 +34,6 @@ - @@ -323,40 +322,40 @@

sleap.info.metrics

Module for producing prediction metrics for SLEAP datasets.

-sleap.info.metrics.calculate_pairwise_cost(instances_a: List[Union[sleap.instance.Instance, sleap.instance.PredictedInstance]], instances_b: List[Union[sleap.instance.Instance, sleap.instance.PredictedInstance]], cost_function: Callable) numpy.ndarray[source]#
+sleap.info.metrics.calculate_pairwise_cost(instances_a: List[Union[sleap.instance.Instance, sleap.instance.PredictedInstance]], instances_b: List[Union[sleap.instance.Instance, sleap.instance.PredictedInstance]], cost_function: Callable) numpy.ndarray[source]#

Calculate (a * b) matrix of pairwise costs using cost function.

-sleap.info.metrics.compare_instance_lists(instances_a: List[Union[sleap.instance.Instance, sleap.instance.PredictedInstance]], instances_b: List[Union[sleap.instance.Instance, sleap.instance.PredictedInstance]]) numpy.ndarray[source]#
+sleap.info.metrics.compare_instance_lists(instances_a: List[Union[sleap.instance.Instance, sleap.instance.PredictedInstance]], instances_b: List[Union[sleap.instance.Instance, sleap.instance.PredictedInstance]]) numpy.ndarray[source]#

Given two lists of corresponding Instances, returns (instances * nodes) matrix of distances between corresponding nodes.

-sleap.info.metrics.list_points_array(instances: List[Union[sleap.instance.Instance, sleap.instance.PredictedInstance]]) numpy.ndarray[source]#
+sleap.info.metrics.list_points_array(instances: List[Union[sleap.instance.Instance, sleap.instance.PredictedInstance]]) numpy.ndarray[source]#

Given list of Instances, returns (instances * nodes * 2) matrix.

-sleap.info.metrics.match_instance_lists(instances_a: List[Union[sleap.instance.Instance, sleap.instance.PredictedInstance]], instances_b: List[Union[sleap.instance.Instance, sleap.instance.PredictedInstance]], cost_function: Callable) Tuple[List[Union[sleap.instance.Instance, sleap.instance.PredictedInstance]], List[Union[sleap.instance.Instance, sleap.instance.PredictedInstance]]][source]#
+sleap.info.metrics.match_instance_lists(instances_a: List[Union[sleap.instance.Instance, sleap.instance.PredictedInstance]], instances_b: List[Union[sleap.instance.Instance, sleap.instance.PredictedInstance]], cost_function: Callable) Tuple[List[Union[sleap.instance.Instance, sleap.instance.PredictedInstance]], List[Union[sleap.instance.Instance, sleap.instance.PredictedInstance]]][source]#

Sorts two lists of Instances to find best overall correspondence for a given cost function (e.g., total distance between points).

-sleap.info.metrics.match_instance_lists_nodewise(instances_a: List[Union[sleap.instance.Instance, sleap.instance.PredictedInstance]], instances_b: List[Union[sleap.instance.Instance, sleap.instance.PredictedInstance]], thresh: float = 5) Tuple[List[Union[sleap.instance.Instance, sleap.instance.PredictedInstance]], List[Union[sleap.instance.Instance, sleap.instance.PredictedInstance]]][source]#
+sleap.info.metrics.match_instance_lists_nodewise(instances_a: List[Union[sleap.instance.Instance, sleap.instance.PredictedInstance]], instances_b: List[Union[sleap.instance.Instance, sleap.instance.PredictedInstance]], thresh: float = 5) Tuple[List[Union[sleap.instance.Instance, sleap.instance.PredictedInstance]], List[Union[sleap.instance.Instance, sleap.instance.PredictedInstance]]][source]#

For each node for each instance in the first list, pairs it with the closest corresponding node from any instance in the second list.

-sleap.info.metrics.matched_instance_distances(labels_gt: sleap.io.dataset.Labels, labels_pr: sleap.io.dataset.Labels, match_lists_function: typing.Callable = <function match_instance_lists_nodewise>, frame_range: typing.Optional[range] = None) Tuple[List[int], numpy.ndarray, numpy.ndarray, numpy.ndarray][source]#
+sleap.info.metrics.matched_instance_distances(labels_gt: sleap.io.dataset.Labels, labels_pr: sleap.io.dataset.Labels, match_lists_function: typing.Callable = <function match_instance_lists_nodewise>, frame_range: typing.Optional[range] = None) Tuple[List[int], numpy.ndarray, numpy.ndarray, numpy.ndarray][source]#

Distances between ground truth and predicted nodes over a set of frames.

Parameters
@@ -387,26 +386,26 @@

sleap.info.metrics

-sleap.info.metrics.nodeless_point_dist(inst_a: Union[sleap.instance.Instance, sleap.instance.PredictedInstance], inst_b: Union[sleap.instance.Instance, sleap.instance.PredictedInstance]) numpy.ndarray[source]#
+sleap.info.metrics.nodeless_point_dist(inst_a: Union[sleap.instance.Instance, sleap.instance.PredictedInstance], inst_b: Union[sleap.instance.Instance, sleap.instance.PredictedInstance]) numpy.ndarray[source]#

Given two instances, returns array of distances for closest points ignoring node identities.

-sleap.info.metrics.point_dist(inst_a: Union[sleap.instance.Instance, sleap.instance.PredictedInstance], inst_b: Union[sleap.instance.Instance, sleap.instance.PredictedInstance]) numpy.ndarray[source]#
+sleap.info.metrics.point_dist(inst_a: Union[sleap.instance.Instance, sleap.instance.PredictedInstance], inst_b: Union[sleap.instance.Instance, sleap.instance.PredictedInstance]) numpy.ndarray[source]#

Given two instances, returns array of distances for corresponding nodes.

-sleap.info.metrics.point_match_count(dist_array: numpy.ndarray, thresh: float = 5) int[source]#
+sleap.info.metrics.point_match_count(dist_array: numpy.ndarray, thresh: float = 5) int[source]#

Given an array of distances, returns number which are <= threshold.

-sleap.info.metrics.point_nonmatch_count(dist_array: numpy.ndarray, thresh: float = 5) int[source]#
+sleap.info.metrics.point_nonmatch_count(dist_array: numpy.ndarray, thresh: float = 5) int[source]#

Given an array of distances, returns number which are not <= threshold.

diff --git a/develop/api/sleap.info.summary.html b/develop/api/sleap.info.summary.html index 30379c72e..b51ab9283 100644 --- a/develop/api/sleap.info.summary.html +++ b/develop/api/sleap.info.summary.html @@ -9,7 +9,7 @@ - sleap.info.summary — SLEAP (v1.4.1a1) + sleap.info.summary — SLEAP (v1.4.1a2) @@ -34,7 +34,6 @@ - @@ -324,7 +323,7 @@

sleap.info.summary

data for each frame of some labeled video.

-class sleap.info.summary.StatisticSeries(labels: sleap.io.dataset.Labels)[source]#
+class sleap.info.summary.StatisticSeries(labels: sleap.io.dataset.Labels)[source]#

Class to calculate various statistical series for labeled frames.

Each method returns a series which is a dictionary in which keys are frame index and value are some numerical value for the frame.

@@ -335,7 +334,7 @@

sleap.info.summary

-get_instance_score_series(video, reduction='sum') Dict[int, float][source]#
+get_instance_score_series(video, reduction='sum') Dict[int, float][source]#

Get series with statistic of instance scores in each frame.

Parameters
@@ -354,13 +353,13 @@

sleap.info.summary

-get_point_count_series(video: sleap.io.video.Video) Dict[int, float][source]#
+get_point_count_series(video: sleap.io.video.Video) Dict[int, float][source]#

Get series with total number of labeled points in each frame.

-get_point_displacement_series(video, reduction='sum') Dict[int, float][source]#
+get_point_displacement_series(video, reduction='sum') Dict[int, float][source]#

Get series with statistic of point displacement in each frame.

Point displacement is the distance between the point location in frame and the location of the corresponding point (same node, @@ -383,7 +382,7 @@

sleap.info.summary

-get_point_score_series(video: sleap.io.video.Video, reduction: str = 'sum') Dict[int, float][source]#
+get_point_score_series(video: sleap.io.video.Video, reduction: str = 'sum') Dict[int, float][source]#

Get series with statistic of point scores in each frame.

Parameters
@@ -402,7 +401,7 @@

sleap.info.summary

-get_primary_point_displacement_series(video, reduction='sum', primary_node=None)[source]#
+get_primary_point_displacement_series(video, reduction='sum', primary_node=None)[source]#

Get sum of displacement for single node of each instance per frame.

Parameters
diff --git a/develop/api/sleap.info.trackcleaner.html b/develop/api/sleap.info.trackcleaner.html index 1304041d3..109ae6a7e 100644 --- a/develop/api/sleap.info.trackcleaner.html +++ b/develop/api/sleap.info.trackcleaner.html @@ -9,7 +9,7 @@ - sleap.info.trackcleaner — SLEAP (v1.4.1a1) + sleap.info.trackcleaner — SLEAP (v1.4.1a2) @@ -34,7 +34,6 @@ - @@ -329,7 +328,7 @@

sleap.info.trackcleaner

it will be better to use the sleap-track CLI.

-sleap.info.trackcleaner.fit_tracks(filename: str, instance_count: int)[source]#
+sleap.info.trackcleaner.fit_tracks(filename: str, instance_count: int)[source]#

Wraps TrackCleaner for easier cli api.

diff --git a/develop/api/sleap.info.write_tracking_h5.html b/develop/api/sleap.info.write_tracking_h5.html index 9de7e4c3d..962ac1075 100644 --- a/develop/api/sleap.info.write_tracking_h5.html +++ b/develop/api/sleap.info.write_tracking_h5.html @@ -9,7 +9,7 @@ - sleap.info.write_tracking_h5 — SLEAP (v1.4.1a1) + sleap.info.write_tracking_h5 — SLEAP (v1.4.1a2) @@ -34,7 +34,6 @@ - @@ -344,19 +343,19 @@

sleap.info.write_tracking_h5

Note: the datasets are stored column-major as expected by MATLAB.

-sleap.info.write_tracking_h5.get_edges_as_np_strings(labels: sleap.io.dataset.Labels) List[Tuple[numpy.bytes_, numpy.bytes_]][source]#
+sleap.info.write_tracking_h5.get_edges_as_np_strings(labels: sleap.io.dataset.Labels) List[Tuple[numpy.bytes_, numpy.bytes_]][source]#

Get list of edge names as np.string_.

-sleap.info.write_tracking_h5.get_nodes_as_np_strings(labels: sleap.io.dataset.Labels) List[numpy.bytes_][source]#
+sleap.info.write_tracking_h5.get_nodes_as_np_strings(labels: sleap.io.dataset.Labels) List[numpy.bytes_][source]#

Get list of node names as np.string_.

-sleap.info.write_tracking_h5.get_occupancy_and_points_matrices(labels: sleap.io.dataset.Labels, all_frames: bool, video: Optional[sleap.io.video.Video] = None) Tuple[numpy.ndarray, numpy.ndarray, numpy.ndarray, numpy.ndarray, numpy.ndarray][source]#
+sleap.info.write_tracking_h5.get_occupancy_and_points_matrices(labels: sleap.io.dataset.Labels, all_frames: bool, video: Optional[sleap.io.video.Video] = None) Tuple[numpy.ndarray, numpy.ndarray, numpy.ndarray, numpy.ndarray, numpy.ndarray][source]#

Builds numpy matrices with track occupancy and point location data.

Note: This function assumes either all instances have tracks or no instances have tracks.

@@ -391,13 +390,13 @@

sleap.info.write_tracking_h5

-sleap.info.write_tracking_h5.get_tracks_as_np_strings(labels: sleap.io.dataset.Labels) List[numpy.bytes_][source]#
+sleap.info.write_tracking_h5.get_tracks_as_np_strings(labels: sleap.io.dataset.Labels) List[numpy.bytes_][source]#

Get list of track names as np.string_.

-sleap.info.write_tracking_h5.main(labels: sleap.io.dataset.Labels, output_path: str, labels_path: Optional[str] = None, all_frames: bool = True, video: Optional[sleap.io.video.Video] = None, csv: bool = False)[source]#
+sleap.info.write_tracking_h5.main(labels: sleap.io.dataset.Labels, output_path: str, labels_path: Optional[str] = None, all_frames: bool = True, video: Optional[sleap.io.video.Video] = None, csv: bool = False)[source]#

Writes HDF5 file with matrices of track occupancy and coordinates.

Parameters
@@ -423,7 +422,7 @@

sleap.info.write_tracking_h5

-sleap.info.write_tracking_h5.remove_empty_tracks_from_matrices(track_names: List, occupancy_matrix: numpy.ndarray, locations_matrix: numpy.ndarray, point_scores: numpy.ndarray, instance_scores: numpy.ndarray, tracking_scores: numpy.ndarray) Tuple[List, numpy.ndarray, numpy.ndarray, numpy.ndarray, numpy.ndarray, numpy.ndarray][source]#
+sleap.info.write_tracking_h5.remove_empty_tracks_from_matrices(track_names: List, occupancy_matrix: numpy.ndarray, locations_matrix: numpy.ndarray, point_scores: numpy.ndarray, instance_scores: numpy.ndarray, tracking_scores: numpy.ndarray) Tuple[List, numpy.ndarray, numpy.ndarray, numpy.ndarray, numpy.ndarray, numpy.ndarray][source]#

Removes matrix rows/columns for unoccupied tracks.

Parameters
@@ -445,7 +444,7 @@

sleap.info.write_tracking_h5

-sleap.info.write_tracking_h5.write_csv_file(output_path, data_dict)[source]#
+sleap.info.write_tracking_h5.write_csv_file(output_path, data_dict)[source]#

Write CSV file with data from given dictionary.

Parameters
@@ -463,7 +462,7 @@

sleap.info.write_tracking_h5

-sleap.info.write_tracking_h5.write_occupancy_file(output_path: str, data_dict: Dict[str, Any], transpose: bool = True)[source]#
+sleap.info.write_tracking_h5.write_occupancy_file(output_path: str, data_dict: Dict[str, Any], transpose: bool = True)[source]#

Write HDF5 file with data from given dictionary.

Parameters
diff --git a/develop/api/sleap.instance.html b/develop/api/sleap.instance.html index 89e421947..81b3fe62b 100644 --- a/develop/api/sleap.instance.html +++ b/develop/api/sleap.instance.html @@ -9,7 +9,7 @@ - sleap.instance — SLEAP (v1.4.1a1) + sleap.instance — SLEAP (v1.4.1a2) @@ -34,7 +34,6 @@ - @@ -335,7 +334,7 @@

sleap.instance

-class sleap.instance.Instance(skeleton: sleap.skeleton.Skeleton, track: sleap.instance.Track = None, from_predicted: Optional[PredictedInstance] = None, points: sleap.instance.PointArray = None, nodes: List = None, frame: Optional[LabeledFrame] = None)[source]#
+class sleap.instance.Instance(skeleton: sleap.skeleton.Skeleton, track: sleap.instance.Track = None, from_predicted: Optional[PredictedInstance] = None, points: sleap.instance.PointArray = None, nodes: List = None, frame: Optional[LabeledFrame] = None)[source]#

This class represents a labeled instance.

Parameters
@@ -373,7 +372,7 @@

sleap.instance

-fill_missing(max_x: Optional[float] = None, max_y: Optional[float] = None)[source]#
+fill_missing(max_x: Optional[float] = None, max_y: Optional[float] = None)[source]#

Add points for skeleton nodes that are missing in the instance.

This is useful when modifying the skeleton so the nodes appears in the GUI.

@@ -394,7 +393,7 @@

sleap.instance

-classmethod from_numpy(points: numpy.ndarray, skeleton: sleap.skeleton.Skeleton, track: Optional[sleap.instance.Track] = None) sleap.instance.Instance[source]#
+classmethod from_numpy(points: numpy.ndarray, skeleton: sleap.skeleton.Skeleton, track: Optional[sleap.instance.Track] = None) sleap.instance.Instance[source]#

Create an instance from a numpy array.

Parameters
@@ -419,7 +418,7 @@

sleap.instance

-classmethod from_pointsarray(points: numpy.ndarray, skeleton: sleap.skeleton.Skeleton, track: Optional[sleap.instance.Track] = None) sleap.instance.Instance[source]#
+classmethod from_pointsarray(points: numpy.ndarray, skeleton: sleap.skeleton.Skeleton, track: Optional[sleap.instance.Track] = None) sleap.instance.Instance[source]#

Create an instance from an array of points.

Parameters
@@ -440,7 +439,7 @@

sleap.instance

-get_points_array(copy: bool = True, invisible_as_nan: bool = False, full: bool = False) Union[numpy.ndarray, numpy.recarray][source]#
+get_points_array(copy: bool = True, invisible_as_nan: bool = False, full: bool = False) Union[numpy.ndarray, numpy.recarray][source]#

Return the instance’s points in array form.

Parameters
@@ -468,7 +467,7 @@

sleap.instance

-matches(other: sleap.instance.Instance) bool[source]#
+matches(other: sleap.instance.Instance) bool[source]#

Whether two instances match by value.

Checks the types, points, track, and frame index.

@@ -507,7 +506,7 @@

sleap.instance

-numpy() numpy.ndarray[source]#
+numpy() numpy.ndarray[source]#

Return the instance node coordinates as a numpy array.

Alias for points_array.

@@ -540,7 +539,7 @@

sleap.instance

-transform_points(transformation_matrix)[source]#
+transform_points(transformation_matrix)[source]#

Apply affine transformation matrix to points in the instance.

Parameters
@@ -558,9 +557,79 @@

sleap.instance

+
+
+class sleap.instance.InstancesList(*args, labeled_frame: Optional[sleap.instance.LabeledFrame] = None)[source]#
+

A list of `Instance`s associated with a `LabeledFrame`.

+

This class should only be used for the LabeledFrame.instances attribute.

+
+
+append(instance: Union[sleap.instance.Instance, sleap.instance.PredictedInstance])[source]#
+

Append an Instance or PredictedInstance to the list, setting the frame.

+
+
Parameters
+

item – The Instance or PredictedInstance to append to the list.

+
+
+
+ +
+
+clear() None[source]#
+

Remove all instances from list, setting instance.frame to None.

+
+ +
+
+copy() list[source]#
+

Return a shallow copy of the list of instances as a list.

+

Note: This will not return an InstancesList object, but a normal list.

+
+ +
+
+extend(instances: List[Union[sleap.instance.PredictedInstance, sleap.instance.Instance]])[source]#
+

Extend the list with a list of `Instance`s or `PredictedInstance`s.

+
+
Parameters
+

instances – A list of Instance or PredictedInstance objects to add to the +list.

+
+
Returns
+

None

+
+
+
+ +
+
+insert(index: int, instance: Union[sleap.instance.Instance, sleap.instance.PredictedInstance]) None[source]#
+

Insert object before index.

+
+ +
+
+property labeled_frame: sleap.instance.LabeledFrame#
+

Return the LabeledFrame associated with this list of instances.

+
+ +
+
+pop(index: int) Union[sleap.instance.Instance, sleap.instance.PredictedInstance][source]#
+

Remove and return instance at index, setting instance.frame to None.

+
+ +
+
+remove(instance: Union[sleap.instance.Instance, sleap.instance.PredictedInstance]) None[source]#
+

Remove instance from list, setting instance.frame to None.

+
+ +
+
-class sleap.instance.LabeledFrame(video: sleap.io.video.Video, frame_idx, instances: Union[List[sleap.instance.Instance], List[sleap.instance.PredictedInstance]] = NOTHING)[source]#
+class sleap.instance.LabeledFrame(video: sleap.io.video.Video, frame_idx, instances: sleap.instance.InstancesList = NOTHING)[source]#

Holds labeled data for a single frame of a video.

Parameters
@@ -573,7 +642,7 @@

sleap.instance

-classmethod complex_frame_merge(base_frame: sleap.instance.LabeledFrame, new_frame: sleap.instance.LabeledFrame) Tuple[List[sleap.instance.Instance], List[sleap.instance.Instance], List[sleap.instance.Instance]][source]#
+classmethod complex_frame_merge(base_frame: sleap.instance.LabeledFrame, new_frame: sleap.instance.LabeledFrame) Tuple[List[sleap.instance.Instance], List[sleap.instance.Instance], List[sleap.instance.Instance]][source]#

Merge two frames, return conflicts if any.

A conflict occurs when: each frame has Instances which don’t perfectly match those

@@ -607,7 +676,7 @@

sleap.instance

-classmethod complex_merge_between(base_labels: Labels, new_frames: List[LabeledFrame]) Tuple[Dict[sleap.io.video.Video, Dict[int, List[sleap.instance.Instance]]], List[sleap.instance.Instance], List[sleap.instance.Instance]][source]#
+classmethod complex_merge_between(base_labels: Labels, new_frames: List[LabeledFrame]) Tuple[Dict[sleap.io.video.Video, Dict[int, List[sleap.instance.Instance]]], List[sleap.instance.Instance], List[sleap.instance.Instance]][source]#

Merge data from new frames into a Labels object.

Everything that can be merged cleanly is merged, any conflicts are returned.

@@ -623,7 +692,7 @@

sleap.instance

Returns
-

The merged list of `LabeledFrame`s.

+

The merged list of `LabeledFrame`s.

@@ -765,13 +834,13 @@

sleap.instance

-numpy() numpy.ndarray[source]#
+numpy() numpy.ndarray[source]#

Return the instances as an array of shape (instances, nodes, 2).
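For example, a sketch assuming `lf` is a `LabeledFrame` from a loaded project:

```python
arr = lf.numpy()
print(arr.shape)  # (n_instances, n_nodes, 2), per the description above
```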

-plot(image: bool = True, scale: float = 1.0)[source]#
+plot(image: bool = True, scale: float = 1.0)[source]#

Plot the frame with all instances.

Parameters
@@ -791,7 +860,7 @@

sleap.instance

-plot_predicted(image: bool = True, scale: float = 1.0)[source]#
+plot_predicted(image: bool = True, scale: float = 1.0)[source]#

Plot the frame with all predicted instances.

Parameters
@@ -817,13 +886,13 @@

sleap.instance

-remove_empty_instances()[source]#
+remove_empty_instances()[source]#

Remove instances with no visible nodes from the labeled frame.

-remove_untracked()[source]#
+remove_untracked()[source]#

Removes any instances without a track assignment.

@@ -857,7 +926,7 @@

sleap.instance

-class sleap.instance.Point(x: float = nan, y: float = nan, visible: bool = True, complete: bool = False)[source]#
+class sleap.instance.Point(x: float = nan, y: float = nan, visible: bool = True, complete: bool = False)[source]#

A labelled point and any metadata associated with it.

Parameters
@@ -876,7 +945,7 @@

sleap.instance

-isnan() bool[source]#
+isnan() bool[source]#

Are either of the coordinates a NaN value.

Returns
@@ -887,7 +956,7 @@

sleap.instance

-numpy() numpy.ndarray[source]#
+numpy() numpy.ndarray[source]#

Return the point as a numpy array.
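A small sketch of constructing and converting a `Point`; the `[x, y]` ordering is an assumption consistent with the 2-D points used throughout this module:

```python
import numpy as np
from sleap.instance import Point

p = Point(x=10.5, y=20.0, visible=True)
print(p.numpy())  # assumed: array([10.5, 20.0])
print(p.isnan())  # False, since both coordinates are set
```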

@@ -895,12 +964,12 @@

sleap.instance

-class sleap.instance.PointArray(shape, buf=None, offset=0, strides=None, formats=None, names=None, titles=None, byteorder=None, aligned=False, order='C')[source]#
+class sleap.instance.PointArray(shape, buf=None, offset=0, strides=None, formats=None, names=None, titles=None, byteorder=None, aligned=False, order='C')[source]#

PointArray is a sub-class of numpy recarray which stores Point objects as records.

-classmethod from_array(a: sleap.instance.PointArray) sleap.instance.PointArray[source]#
+classmethod from_array(a: sleap.instance.PointArray) sleap.instance.PointArray[source]#

Converts a PointArray (or child) to a new instance.

This will convert an object to the same type as itself, so a PredictedPointArray will result in another PredictedPointArray.

@@ -918,7 +987,7 @@

sleap.instance

-classmethod make_default(size: int) sleap.instance.PointArray[source]#
+classmethod make_default(size: int) sleap.instance.PointArray[source]#

Construct a point array where points are all set to default.

The constructed PointArray will have specified size and each value in the array is assigned the default values for @@ -937,7 +1006,7 @@

sleap.instance

-class sleap.instance.PredictedInstance(skeleton: sleap.skeleton.Skeleton, track: sleap.instance.Track = None, from_predicted: Optional[PredictedInstance] = None, points: sleap.instance.PointArray = None, nodes: List = None, frame: Optional[LabeledFrame] = None, score=0.0, tracking_score=0.0)[source]#
+class sleap.instance.PredictedInstance(skeleton: sleap.skeleton.Skeleton, track: sleap.instance.Track = None, from_predicted: Optional[PredictedInstance] = None, points: sleap.instance.PointArray = None, nodes: List = None, frame: Optional[LabeledFrame] = None, score=0.0, tracking_score=0.0)[source]#

A predicted instance is an output of the inference procedure.

Parameters
@@ -949,7 +1018,7 @@

sleap.instance

-classmethod from_arrays(points: numpy.ndarray, point_confidences: numpy.ndarray, instance_score: float, skeleton: sleap.skeleton.Skeleton, track: Optional[sleap.instance.Track] = None) sleap.instance.PredictedInstance[source]#
+classmethod from_arrays(points: numpy.ndarray, point_confidences: numpy.ndarray, instance_score: float, skeleton: sleap.skeleton.Skeleton, track: Optional[sleap.instance.Track] = None) sleap.instance.PredictedInstance[source]#

Create a predicted instance from data arrays.

Parameters
@@ -974,7 +1043,7 @@
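A hedged sketch of the call; the (n_nodes, 2) and (n_nodes,) shapes are assumptions, and `skeleton` is a `sleap.Skeleton` defined elsewhere:

```python
import numpy as np
from sleap.instance import PredictedInstance

points = np.array([[10.0, 20.0], [30.0, 40.0]])  # assumed shape (n_nodes, 2)
confidences = np.array([0.9, 0.8])               # assumed shape (n_nodes,)
inst = PredictedInstance.from_arrays(
    points=points,
    point_confidences=confidences,
    instance_score=0.85,
    skeleton=skeleton,  # a two-node sleap.Skeleton, defined elsewhere
)
```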

sleap.instance

-classmethod from_instance(instance: sleap.instance.Instance, score: float) sleap.instance.PredictedInstance[source]#
+classmethod from_instance(instance: sleap.instance.Instance, score: float) sleap.instance.PredictedInstance[source]#

Create a PredictedInstance from an Instance.

The fields are copied in a shallow manner with the exception of points. For each point in the instance a PredictedPoint is created with score set to default @@ -994,7 +1063,7 @@

sleap.instance

-classmethod from_numpy(points: numpy.ndarray, point_confidences: numpy.ndarray, instance_score: float, skeleton: sleap.skeleton.Skeleton, track: Optional[sleap.instance.Track] = None) sleap.instance.PredictedInstance[source]#
+classmethod from_numpy(points: numpy.ndarray, point_confidences: numpy.ndarray, instance_score: float, skeleton: sleap.skeleton.Skeleton, track: Optional[sleap.instance.Track] = None) sleap.instance.PredictedInstance[source]#

Create a predicted instance from data arrays.

Parameters
@@ -1019,7 +1088,7 @@

sleap.instance

-classmethod from_pointsarray(points: numpy.ndarray, point_confidences: numpy.ndarray, instance_score: float, skeleton: sleap.skeleton.Skeleton, track: Optional[sleap.instance.Track] = None) sleap.instance.PredictedInstance[source]#
+classmethod from_pointsarray(points: numpy.ndarray, point_confidences: numpy.ndarray, instance_score: float, skeleton: sleap.skeleton.Skeleton, track: Optional[sleap.instance.Track] = None) sleap.instance.PredictedInstance[source]#

Create a predicted instance from data arrays.

Parameters
@@ -1061,7 +1130,7 @@

sleap.instance

-class sleap.instance.PredictedPoint(x: float = nan, y: float = nan, visible: bool = True, complete: bool = False, score: float = 0.0)[source]#
+class sleap.instance.PredictedPoint(x: float = nan, y: float = nan, visible: bool = True, complete: bool = False, score: float = 0.0)[source]#

A predicted point is an output of the inference procedure.

It has all the properties of a labeled point, plus a score.

@@ -1082,7 +1151,7 @@

sleap.instance

-classmethod from_point(point: sleap.instance.Point, score: float = 0.0) sleap.instance.PredictedPoint[source]#
+classmethod from_point(point: sleap.instance.Point, score: float = 0.0) sleap.instance.PredictedPoint[source]#

Create a PredictedPoint from a Point

Parameters
@@ -1101,12 +1170,12 @@

sleap.instance

-class sleap.instance.PredictedPointArray(shape, buf=None, offset=0, strides=None, formats=None, names=None, titles=None, byteorder=None, aligned=False, order='C')[source]#
+class sleap.instance.PredictedPointArray(shape, buf=None, offset=0, strides=None, formats=None, names=None, titles=None, byteorder=None, aligned=False, order='C')[source]#

PredictedPointArray is analogous to PointArray except for predicted points.

-classmethod to_array(a: sleap.instance.PredictedPointArray) sleap.instance.PointArray[source]#
+classmethod to_array(a: sleap.instance.PredictedPointArray) sleap.instance.PointArray[source]#

Convert a PredictedPointArray to a normal PointArray.

Parameters
@@ -1122,7 +1191,7 @@

sleap.instance

-class sleap.instance.Track(spawned_on=0, name='')[source]#
+class sleap.instance.Track(spawned_on=0, name='')[source]#

A track object is associated with a set of animal/object instances across multiple frames of video. This allows tracking of unique entities in the video over time and space.

@@ -1136,7 +1205,7 @@

sleap.instance

-matches(other: sleap.instance.Track)[source]#
+matches(other: sleap.instance.Track)[source]#

Check if two tracks match by value.

Parameters
@@ -1152,7 +1221,7 @@

sleap.instance

-sleap.instance.make_instance_cattr() cattr.converters.Converter[source]#
+sleap.instance.make_instance_cattr() cattr.converters.Converter[source]#

Create a cattr converter for Lists of Instances/PredictedInstances.

This is required because cattrs doesn’t automatically detect the class when the attributes of one class are a subset of another.
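A sketch of typical usage; the unstructure call is standard cattrs API rather than anything specific documented here:

```python
from sleap.instance import make_instance_cattr

converter = make_instance_cattr()
data = converter.unstructure(instances)  # `instances` from a LabeledFrame
```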

diff --git a/develop/api/sleap.io.asyncvideo.html b/develop/api/sleap.io.asyncvideo.html index 80b29ea19..3c10f1676 100644 --- a/develop/api/sleap.io.asyncvideo.html +++ b/develop/api/sleap.io.asyncvideo.html @@ -9,7 +9,7 @@ - sleap.io.asyncvideo — SLEAP (v1.4.1a1) + sleap.io.asyncvideo — SLEAP (v1.4.1a2) @@ -34,7 +34,6 @@ - @@ -323,7 +322,7 @@

sleap.io.asyncvideo

Support for loading video frames (by chunk) in background process.

-class sleap.io.asyncvideo.AsyncVideo(base_port: int = 9010)[source]#
+class sleap.io.asyncvideo.AsyncVideo(base_port: int = 9010)[source]#

Supports fetching chunks from video in background process.

@@ -339,19 +338,19 @@

sleap.io.asyncvideo

-close()[source]#
+close()[source]#

Close the async video server and communication ports.

-classmethod from_video(video: sleap.io.video.Video, frame_idxs: Optional[Iterable[int]] = None, frames_per_chunk: int = 64) sleap.io.asyncvideo.AsyncVideo[source]#
+classmethod from_video(video: sleap.io.video.Video, frame_idxs: Optional[Iterable[int]] = None, frames_per_chunk: int = 64) sleap.io.asyncvideo.AsyncVideo[source]#

Create object and start loading frames in background process.
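A hedged lifecycle sketch, assuming `video` is a `sleap.io.video.Video`; how the loaded chunks are consumed is not shown on this page:

```python
from sleap.io.asyncvideo import AsyncVideo

async_video = AsyncVideo.from_video(video, frames_per_chunk=64)
# ... consume the loaded chunks via the object's API ...
async_video.close()  # shut down the server and communication ports
```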

-load_by_chunk(video: sleap.io.video.Video, frame_idxs: Optional[Iterable[int]] = None, frames_per_chunk: int = 64)[source]#
+load_by_chunk(video: sleap.io.video.Video, frame_idxs: Optional[Iterable[int]] = None, frames_per_chunk: int = 64)[source]#

Sends request for loading video in background process.

Parameters
@@ -372,13 +371,13 @@

sleap.io.asyncvideo

-class sleap.io.asyncvideo.AsyncVideoServer(base_port: int)[source]#
+class sleap.io.asyncvideo.AsyncVideoServer(base_port: int)[source]#

Class which loads video frames in background on request.

All interactions with the video server should go through AsyncVideo, which runs in the local thread.

-run()[source]#
+run()[source]#

Method to be run in the sub-process; can be overridden in a sub-class.

diff --git a/develop/api/sleap.io.convert.html b/develop/api/sleap.io.convert.html index 9654fc948..dbde3f7fd 100644 --- a/develop/api/sleap.io.convert.html +++ b/develop/api/sleap.io.convert.html @@ -9,7 +9,7 @@ - sleap.io.convert — SLEAP (v1.4.1a1) + sleap.io.convert — SLEAP (v1.4.1a2) @@ -34,7 +34,6 @@ - @@ -355,7 +354,7 @@

sleap.io.convert

first transpose the datasets so they match the shapes described above.

-sleap.io.convert.main(args: Optional[list] = None)[source]#
+sleap.io.convert.main(args: Optional[list] = None)[source]#

Entrypoint for sleap-convert CLI for converting .slp to different formats.

Parameters
diff --git a/develop/api/sleap.io.dataset.html b/develop/api/sleap.io.dataset.html index c43a2ab05..3a22b6894 100644 --- a/develop/api/sleap.io.dataset.html +++ b/develop/api/sleap.io.dataset.html @@ -9,7 +9,7 @@ - sleap.io.dataset — SLEAP (v1.4.1a1) + sleap.io.dataset — SLEAP (v1.4.1a2) @@ -34,7 +34,6 @@ - @@ -356,7 +355,7 @@

sleap.io.dataset

default extension to use if none is provided in the filename.

-class sleap.io.dataset.Labels(labeled_frames: List[sleap.instance.LabeledFrame] = NOTHING, videos: List[sleap.io.video.Video] = NOTHING, skeletons: List[sleap.skeleton.Skeleton] = NOTHING, nodes: List[sleap.skeleton.Node] = NOTHING, tracks: List[sleap.instance.Track] = NOTHING, suggestions: List[sleap.gui.suggestions.SuggestionFrame] = NOTHING, negative_anchors: Dict[sleap.io.video.Video, list] = NOTHING, provenance: Dict[str, Union[str, int, float, bool]] = NOTHING)[source]#
+class sleap.io.dataset.Labels(labeled_frames: List[sleap.instance.LabeledFrame] = NOTHING, videos: List[sleap.io.video.Video] = NOTHING, skeletons: List[sleap.skeleton.Skeleton] = NOTHING, nodes: List[sleap.skeleton.Node] = NOTHING, tracks: List[sleap.instance.Track] = NOTHING, suggestions: List[sleap.gui.suggestions.SuggestionFrame] = NOTHING, negative_anchors: Dict[sleap.io.video.Video, list] = NOTHING, provenance: Dict[str, Union[str, int, float, bool]] = NOTHING)[source]#

The Labels class collects the data for a SLEAP project.

This class is the front-end for all interactions with loading, writing, and modifying these labels. The actual storage backend for the data @@ -449,13 +448,13 @@
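A minimal sketch of interacting with a `Labels` object (the filename is hypothetical):

```python
import sleap

labels = sleap.load_file("labels.v001.slp")  # hypothetical project file
print(len(labels))       # number of labeled frames
print(labels.videos)     # videos referenced by the project
print(labels.skeletons)  # skeletons used by the instances
```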

sleap.io.dataset

-add_instance(frame: sleap.instance.LabeledFrame, instance: sleap.instance.Instance)[source]#
+add_instance(frame: sleap.instance.LabeledFrame, instance: sleap.instance.Instance)[source]#

Add instance to frame, updating track occupancy.

-add_suggestion(video: sleap.io.video.Video, frame_idx: int)[source]#
+add_suggestion(video: sleap.io.video.Video, frame_idx: int)[source]#

Add a suggested frame to the labels.

Parameters
@@ -469,13 +468,13 @@

sleap.io.dataset

-add_track(video: sleap.io.video.Video, track: sleap.instance.Track)[source]#
+add_track(video: sleap.io.video.Video, track: sleap.instance.Track)[source]#

Add track to labels, updating occupancy.

-add_video(video: sleap.io.video.Video)[source]#
+add_video(video: sleap.io.video.Video)[source]#

Add a video to the labels if it is not already in it.

Video instances are added automatically when adding labeled frames, but this function allows for adding videos to the labels before any @@ -495,25 +494,25 @@

sleap.io.dataset

-append(value: sleap.instance.LabeledFrame)[source]#
+append(value: sleap.instance.LabeledFrame)[source]#

Add labeled frame to list of labeled frames.

-append_suggestions(suggestions: List[sleap.gui.suggestions.SuggestionFrame])[source]#
+append_suggestions(suggestions: List[sleap.gui.suggestions.SuggestionFrame])[source]#

Append the suggested frames.

-clear_suggestions()[source]#
+clear_suggestions()[source]#

Delete all suggestions.

-classmethod complex_merge_between(base_labels: sleap.io.dataset.Labels, new_labels: sleap.io.dataset.Labels, unify: bool = True) tuple[source]#
+classmethod complex_merge_between(base_labels: sleap.io.dataset.Labels, new_labels: sleap.io.dataset.Labels, unify: bool = True) tuple[source]#

Merge frames and other data from one dataset into another.

Anything that can be merged cleanly is merged into base_labels.

Frames conflict just in case each labels object has a matching @@ -557,7 +556,7 @@

sleap.io.dataset

-copy() sleap.io.dataset.Labels[source]#
+copy() sleap.io.dataset.Labels[source]#

Return a full deep copy of the labels.

@@ -569,19 +568,19 @@

sleap.io.dataset

-delete_suggestions(video)[source]#
+delete_suggestions(video)[source]#

Delete suggestions for specified video.

-describe()[source]#
+describe()[source]#

Print basic statistics about the labels dataset.

-export(filename: str)[source]#
+export(filename: str)[source]#

Export labels to analysis HDF5 format.

This expects the labels to contain data for a single video (e.g., predictions).

@@ -611,7 +610,7 @@
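A sketch of the export call for a single-video predictions file (filenames are hypothetical):

```python
import sleap

predictions = sleap.load_file("predictions.slp")  # single-video predictions
predictions.export("predictions.analysis.h5")     # analysis HDF5 output
```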

sleap.io.dataset

-export_csv(filename: str)[source]#
+export_csv(filename: str)[source]#

Export labels to CSV format.

Parameters
@@ -626,7 +625,7 @@

sleap.io.dataset

-export_nwb(filename: str, overwrite: bool = False, session_description: str = 'Processed SLEAP pose data', identifier: Optional[str] = None, session_start_time: Optional[datetime.datetime] = None)[source]#
+export_nwb(filename: str, overwrite: bool = False, session_description: str = 'Processed SLEAP pose data', identifier: Optional[str] = None, session_start_time: Optional[datetime.datetime] = None)[source]#

Export all PredictedInstance objects in a Labels object to an NWB file.

Use Labels.numpy to create a pynwb.NWBFile with a separate @@ -686,7 +685,7 @@

sleap.io.dataset

-extend_from(new_frames: Union[sleap.io.dataset.Labels, List[sleap.instance.LabeledFrame]], unify: bool = False)[source]#
+extend_from(new_frames: Union[sleap.io.dataset.Labels, List[sleap.instance.LabeledFrame]], unify: bool = False)[source]#

Merge data from another Labels object or LabeledFrame list.

Args:

new_frames: the object from which to copy data @@ -705,7 +704,7 @@

sleap.io.dataset

-extract(inds, copy: bool = False) sleap.io.dataset.Labels[source]#
+extract(inds, copy: bool = False) sleap.io.dataset.Labels[source]#

Extract labeled frames from indices and return a new Labels object.

Parameters

inds – Any valid indexing keys, e.g., a range, slice, list of label indices,

@@ -738,7 +737,7 @@

sleap.io.dataset

-find(video: sleap.io.video.Video, frame_idx: Optional[Union[int, Iterable[int]]] = None, return_new: bool = False) List[sleap.instance.LabeledFrame][source]#
+find(video: sleap.io.video.Video, frame_idx: Optional[Union[int, Iterable[int]]] = None, return_new: bool = False) List[sleap.instance.LabeledFrame][source]#

Search for labeled frames given video and/or frame index.

Parameters
@@ -763,7 +762,7 @@

sleap.io.dataset

-find_first(video: sleap.io.video.Video, frame_idx: Optional[int] = None, use_cache: bool = False) Optional[sleap.instance.LabeledFrame][source]#
+find_first(video: sleap.io.video.Video, frame_idx: Optional[int] = None, use_cache: bool = False) Optional[sleap.instance.LabeledFrame][source]#

Find the first occurrence of a matching labeled frame.

Matches on frames for the given video and/or frame index.

@@ -787,7 +786,7 @@

sleap.io.dataset

-find_last(video: sleap.io.video.Video, frame_idx: Optional[int] = None) Optional[sleap.instance.LabeledFrame][source]#
+find_last(video: sleap.io.video.Video, frame_idx: Optional[int] = None) Optional[sleap.instance.LabeledFrame][source]#

Find the last occurrence of a matching labeled frame.

Matches on frames for the given video and/or frame index.

@@ -808,13 +807,13 @@

sleap.io.dataset

-find_suggestion(video, frame_idx)[source]#
+find_suggestion(video, frame_idx)[source]#

Find SuggestionFrame by video and frame index.

-find_track_occupancy(video: sleap.io.video.Video, track: Union[sleap.instance.Track, int], frame_range=None) List[sleap.instance.Instance][source]#
+find_track_occupancy(video: sleap.io.video.Video, track: Union[sleap.instance.Track, int], frame_range=None) List[sleap.instance.Instance][source]#

Get instances for a given video, track, and range of frames.

Parameters
@@ -833,7 +832,7 @@

sleap.io.dataset

-static finish_complex_merge(base_labels: sleap.io.dataset.Labels, resolved_frames: List[sleap.instance.LabeledFrame])[source]#
+static finish_complex_merge(base_labels: sleap.io.dataset.Labels, resolved_frames: List[sleap.instance.LabeledFrame])[source]#

Finish conflicted merge from complex_merge_between.

Parameters
@@ -847,7 +846,7 @@

sleap.io.dataset

-frames(video: sleap.io.video.Video, from_frame_idx: int = - 1, reverse=False)[source]#
+frames(video: sleap.io.video.Video, from_frame_idx: int = - 1, reverse=False)[source]#

Return an iterator over all labeled frames in a video.

Parameters
@@ -866,7 +865,7 @@

sleap.io.dataset

-get(key: Union[int, slice, numpy.integer, numpy.ndarray, list, range, sleap.io.video.Video, Tuple[sleap.io.video.Video, Union[numpy.integer, numpy.ndarray, int, list, range]]], *secondary_key: Union[int, slice, numpy.integer, numpy.ndarray, list, range], use_cache: bool = False, raise_errors: bool = False) Union[sleap.instance.LabeledFrame, List[sleap.instance.LabeledFrame]][source]#
+get(key: Union[int, slice, numpy.integer, numpy.ndarray, list, range, sleap.io.video.Video, Tuple[sleap.io.video.Video, Union[numpy.integer, numpy.ndarray, int, list, range]]], *secondary_key: Union[int, slice, numpy.integer, numpy.ndarray, list, range], use_cache: bool = False, raise_errors: bool = False) Union[sleap.instance.LabeledFrame, List[sleap.instance.LabeledFrame]][source]#

Return labeled frames matching key or return None if not found.

This is a safe version of labels[...] that will not raise an exception if the item is not found.

@@ -899,31 +898,31 @@
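A sketch of the safe accessor; the two-argument video/frame form is inferred from the signature above:

```python
lf = labels.get(0)          # by index; None if not found
lf = labels.get(video, 42)  # by video and frame index (inferred form)
if lf is not None:
    print(len(lf.instances))
```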

sleap.io.dataset

-get_next_suggestion(video, frame_idx, seek_direction=1)[source]#
+get_next_suggestion(video, frame_idx, seek_direction=1)[source]#

Return a (video, frame_idx) tuple seeking from given frame.

-get_suggestions() List[sleap.gui.suggestions.SuggestionFrame][source]#
+get_suggestions() List[sleap.gui.suggestions.SuggestionFrame][source]#

Return all suggestions as a list of SuggestionFrame items.

-get_track_count(video: sleap.io.video.Video) int[source]#
+get_track_count(video: sleap.io.video.Video) int[source]#

Return the number of occupied tracks for a given video.

-get_track_occupancy(video: sleap.io.video.Video) List[source]#
+get_track_occupancy(video: sleap.io.video.Video) List[source]#

Return track occupancy list for given video.

-get_unlabeled_suggestion_inds() List[int][source]#
+get_unlabeled_suggestion_inds() List[int][source]#

Find labeled frames for unlabeled suggestions and return their indices.

This is useful for generating a list of example indices for inference on unlabeled suggestions.

@@ -941,7 +940,7 @@

sleap.io.dataset

-get_video_suggestions(video: sleap.io.video.Video, user_labeled: bool = True) List[int][source]#
+get_video_suggestions(video: sleap.io.video.Video, user_labeled: bool = True) List[int][source]#

Return a list of suggested frame indices.

Parameters
@@ -960,7 +959,7 @@

sleap.io.dataset

-has_frame(lf: Optional[sleap.instance.LabeledFrame] = None, video: Optional[sleap.io.video.Video] = None, frame_idx: Optional[int] = None, use_cache: bool = True) bool[source]#
+has_frame(lf: Optional[sleap.instance.LabeledFrame] = None, video: Optional[sleap.io.video.Video] = None, frame_idx: Optional[int] = None, use_cache: bool = True) bool[source]#

Check if the labels contain a specified frame.

Parameters
@@ -997,25 +996,25 @@

sleap.io.dataset

-index(value) int[source]#
+index(value) int[source]#

Return index of labeled frame in list of labeled frames.

-insert(index, value: sleap.instance.LabeledFrame)[source]#
+insert(index, value: sleap.instance.LabeledFrame)[source]#

Insert labeled frame at given index.

-instance_count(video: sleap.io.video.Video, frame_idx: int) int[source]#
+instance_count(video: sleap.io.video.Video, frame_idx: int) int[source]#

Return number of instances matching video/frame index.

-instances(video: Optional[sleap.io.video.Video] = None, skeleton: Optional[sleap.skeleton.Skeleton] = None)[source]#
+instances(video: Optional[sleap.io.video.Video] = None, skeleton: Optional[sleap.skeleton.Skeleton] = None)[source]#

Iterate over instances in the labels, optionally with filters.

Parameters
@@ -1044,13 +1043,13 @@

sleap.io.dataset

-classmethod load_file(filename: str, video_search: Optional[Union[Callable, List[str]]] = None, *args, **kwargs)[source]#
+classmethod load_file(filename: str, video_search: Optional[Union[Callable, List[str]]] = None, *args, **kwargs)[source]#

Load file, detecting format from filename.

-classmethod make_video_callback(search_paths: Optional[List] = None, use_gui: bool = False, context: Optional[Dict[str, bool]] = None) Callable[source]#
+classmethod make_video_callback(search_paths: Optional[List] = None, use_gui: bool = False, context: Optional[Dict[str, bool]] = None) Callable[source]#

Create a callback for finding missing videos.

The callback can be used while loading a saved project and allows the user to find videos which have been moved (or have @@ -1073,13 +1072,13 @@

sleap.io.dataset

-static merge_container_dicts(dict_a: Dict, dict_b: Dict) Dict[source]#
+static merge_container_dicts(dict_a: Dict, dict_b: Dict) Dict[source]#

Merge data from dict_b into dict_a.

-merge_matching_frames(video: Optional[sleap.io.video.Video] = None)[source]#
+merge_matching_frames(video: Optional[sleap.io.video.Video] = None)[source]#

Merge LabeledFrame objects that are for the same video frame.

Parameters
@@ -1090,7 +1089,7 @@

sleap.io.dataset

-merge_nodes(base_node: str, merge_node: str)[source]#
+merge_nodes(base_node: str, merge_node: str)[source]#

Merge two nodes and update data accordingly.

Parameters
@@ -1114,7 +1113,7 @@

sleap.io.dataset

-numpy(video: Optional[Union[sleap.io.video.Video, int]] = None, all_frames: bool = True, untracked: bool = False, return_confidence: bool = False) numpy.ndarray[source]#
+numpy(video: Optional[Union[sleap.io.video.Video, int]] = None, all_frames: bool = True, untracked: bool = False, return_confidence: bool = False) numpy.ndarray[source]#

Construct a numpy array from instance points.

Parameters
@@ -1156,25 +1155,25 @@
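A sketch of the call; the 4-D (frames, tracks, nodes, 2) output shape is an assumption based on tracked instance points:

```python
tracks = labels.numpy(video=labels.videos[0], return_confidence=False)
print(tracks.shape)  # assumed: (n_frames, n_tracks, n_nodes, 2)
```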

sleap.io.dataset

-remove(value: sleap.instance.LabeledFrame)[source]#
+remove(value: sleap.instance.LabeledFrame)[source]#

Remove given labeled frame.

-remove_all_tracks()[source]#
+remove_all_tracks()[source]#

Remove all tracks from labels, updating (but not removing) instances.

-remove_empty_frames()[source]#
+remove_empty_frames()[source]#

Remove frames with no instances.

-remove_empty_instances(keep_empty_frames: bool = True)[source]#
+remove_empty_instances(keep_empty_frames: bool = True)[source]#

Remove instances with no visible points.

Parameters
@@ -1191,7 +1190,7 @@

sleap.io.dataset

-remove_frame(lf: sleap.instance.LabeledFrame, update_cache: bool = True)[source]#
+remove_frame(lf: sleap.instance.LabeledFrame, update_cache: bool = True)[source]#

Remove a given labeled frame.

Parameters
@@ -1206,7 +1205,7 @@

sleap.io.dataset

-remove_frames(lfs: List[sleap.instance.LabeledFrame])[source]#
+remove_frames(lfs: List[sleap.instance.LabeledFrame])[source]#

Remove a list of frames from the labels.

Parameters
@@ -1217,13 +1216,13 @@

sleap.io.dataset

-remove_instance(frame: sleap.instance.LabeledFrame, instance: sleap.instance.Instance, in_transaction: bool = False)[source]#
+remove_instance(frame: sleap.instance.LabeledFrame, instance: sleap.instance.Instance, in_transaction: bool = False)[source]#

Remove instance from frame, updating track occupancy.

-remove_predictions(new_labels: Optional[sleap.io.dataset.Labels] = None)[source]#
+remove_predictions(new_labels: Optional[sleap.io.dataset.Labels] = None)[source]#

Clear predicted instances from the labels.

Useful prior to merging operations to prevent overlapping instances from new predictions.

@@ -1246,7 +1245,7 @@

sleap.io.dataset

-remove_suggestion(video: sleap.io.video.Video, frame_idx: int)[source]#
+remove_suggestion(video: sleap.io.video.Video, frame_idx: int)[source]#

Remove a suggestion from the list by video and frame index.

Parameters
@@ -1260,13 +1259,13 @@

sleap.io.dataset

-remove_track(track: sleap.instance.Track)[source]#
+remove_track(track: sleap.instance.Track)[source]#

Remove a track from the labels, updating (but not removing) instances.

-remove_untracked_instances(remove_empty_frames: bool = True)[source]#
+remove_untracked_instances(remove_empty_frames: bool = True)[source]#

Remove instances that do not have a track assignment.

Parameters
@@ -1278,13 +1277,13 @@

sleap.io.dataset

-remove_unused_tracks()[source]#
+remove_unused_tracks()[source]#

Remove tracks that are not used by any instances.

-remove_user_instances(new_labels: Optional[sleap.io.dataset.Labels] = None)[source]#
+remove_user_instances(new_labels: Optional[sleap.io.dataset.Labels] = None)[source]#

Clear user instances from the labels.

Useful prior to merging operations to prevent overlapping instances from new labels.

@@ -1307,7 +1306,7 @@

sleap.io.dataset

-remove_video(video: sleap.io.video.Video)[source]#
+remove_video(video: sleap.io.video.Video)[source]#

Remove a video from the labels and all associated labeled frames.

Parameters
@@ -1318,7 +1317,7 @@

sleap.io.dataset

-save(filename: str, with_images: bool = False, embed_all_labeled: bool = False, embed_suggested: bool = False)[source]#
+save(filename: str, with_images: bool = False, embed_all_labeled: bool = False, embed_suggested: bool = False)[source]#

Save the labels to a file.

Parameters
@@ -1346,7 +1345,7 @@
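A sketch using the flags from the signature above (the filename is hypothetical):

```python
# Save a self-contained package with labeled and suggested frame images embedded.
labels.save("labels.pkg.slp", with_images=True, embed_suggested=True)
```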

sleap.io.dataset

-classmethod save_file(labels: sleap.io.dataset.Labels, filename: str, default_suffix: str = '', *args, **kwargs)[source]#
+classmethod save_file(labels: sleap.io.dataset.Labels, filename: str, default_suffix: str = '', *args, **kwargs)[source]#

Save file, detecting format from filename.

Parameters
@@ -1367,7 +1366,7 @@

sleap.io.dataset

-save_frame_data_hdf5(output_path: str, format: str = 'png', user_labeled: bool = True, all_labeled: bool = False, suggested: bool = False, progress_callback: Optional[Callable[[int, int], None]] = None) List[sleap.io.video.HDF5Video][source]#
+save_frame_data_hdf5(output_path: str, format: str = 'png', user_labeled: bool = True, all_labeled: bool = False, suggested: bool = False, progress_callback: Optional[Callable[[int, int], None]] = None) List[sleap.io.video.HDF5Video][source]#

Write images for labeled frames from all videos to hdf5 file.

Note that this will make an HDF5 video, not an HDF5 labels dataset.

@@ -1399,7 +1398,7 @@

sleap.io.dataset

-save_frame_data_imgstore(output_dir: str = './', format: str = 'png', all_labeled: bool = False, suggested: bool = False, progress_callback: Optional[Callable[[int, int], None]] = None) List[sleap.io.video.ImgStoreVideo][source]#
+save_frame_data_imgstore(output_dir: str = './', format: str = 'png', all_labeled: bool = False, suggested: bool = False, progress_callback: Optional[Callable[[int, int], None]] = None) List[sleap.io.video.ImgStoreVideo][source]#

Write images for labeled frames from all videos to imgstore datasets.

This only writes frames that have been labeled. Videos without any labeled frames will be included as empty imgstores.

@@ -1433,7 +1432,7 @@

sleap.io.dataset

-set_suggestions(suggestions: List[sleap.gui.suggestions.SuggestionFrame])[source]#
+set_suggestions(suggestions: List[sleap.gui.suggestions.SuggestionFrame])[source]#

Set the suggested frames.

@@ -1445,7 +1444,7 @@

sleap.io.dataset

-split(n: Union[float, int], copy: bool = True) Tuple[sleap.io.dataset.Labels, sleap.io.dataset.Labels][source]#
+split(n: Union[float, int], copy: bool = True) Tuple[sleap.io.dataset.Labels, sleap.io.dataset.Labels][source]#

Split labels randomly.

Parameters
@@ -1476,7 +1475,7 @@
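A sketch of a fractional split, per the Union[float, int] signature:

```python
train, val = labels.split(n=0.8)  # roughly 80% / 20% random split
print(len(train), len(val))
```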

sleap.io.dataset

-to_dict(skip_labels: bool = False) Dict[str, Any][source]#
+to_dict(skip_labels: bool = False) Dict[str, Any][source]#

Serialize all labels to dicts.

Serializes the labels in the underlying list of LabeledFrames to a dict structure. This function returns a nested dict structure composed entirely of @@ -1507,7 +1506,7 @@

sleap.io.dataset

-to_json()[source]#
+to_json()[source]#

Serialize all labels in the underlying list of LabeledFrame(s) to JSON.

Returns
@@ -1518,7 +1517,7 @@

sleap.io.dataset

-to_pipeline(batch_size: Optional[int] = None, prefetch: bool = True, frame_indices: Optional[List[int]] = None, user_labeled_only: bool = True) sleap.pipelines.Pipeline[source]#
+to_pipeline(batch_size: Optional[int] = None, prefetch: bool = True, frame_indices: Optional[List[int]] = None, user_labeled_only: bool = True) sleap.pipelines.Pipeline[source]#

Create a pipeline for reading the dataset.

Parameters
@@ -1543,13 +1542,13 @@
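A sketch of the call with parameters from the signature above:

```python
pipeline = labels.to_pipeline(batch_size=4, prefetch=True, user_labeled_only=True)
```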

sleap.io.dataset

-track_set_instance(frame: sleap.instance.LabeledFrame, instance: sleap.instance.Instance, new_track: sleap.instance.Track)[source]#
+track_set_instance(frame: sleap.instance.LabeledFrame, instance: sleap.instance.Instance, new_track: sleap.instance.Track)[source]#

Set track on given instance, updating occupancy.

-track_swap(video: sleap.io.video.Video, new_track: sleap.instance.Track, old_track: Optional[sleap.instance.Track], frame_range: tuple)[source]#
+track_swap(video: sleap.io.video.Video, new_track: sleap.instance.Track, old_track: Optional[sleap.instance.Track], frame_range: tuple)[source]#

Swap track assignment for instances in two tracks.

If you need to change the track to or from None, you’ll need to use track_set_instance() for each specific @@ -1601,7 +1600,7 @@

sleap.io.dataset

-with_user_labels_only(user_instances_only: bool = True, with_track_only: bool = False, copy: bool = True) sleap.io.dataset.Labels[source]#
+with_user_labels_only(user_instances_only: bool = True, with_track_only: bool = False, copy: bool = True) sleap.io.dataset.Labels[source]#

Return a new Labels containing only user labels.

This is useful as a preprocessing step to train on only user-labeled data.

@@ -1627,89 +1626,89 @@

sleap.io.dataset

-class sleap.io.dataset.LabelsDataCache(labels: Labels)[source]#
+class sleap.io.dataset.LabelsDataCache(labels: Labels)[source]#

Class for maintaining cache of data in labels dataset.

-add_instance(frame: sleap.instance.LabeledFrame, instance: sleap.instance.Instance)[source]#
+add_instance(frame: sleap.instance.LabeledFrame, instance: sleap.instance.Instance)[source]#

Add an instance to the labels.

-add_track(video: sleap.io.video.Video, track: sleap.instance.Track)[source]#
+add_track(video: sleap.io.video.Video, track: sleap.instance.Track)[source]#

Add a track to the labels.

-find_fancy_frame_idxs(video, from_frame_idx, reverse)[source]#
+find_fancy_frame_idxs(video, from_frame_idx, reverse)[source]#

Return a list of frame idxs, with optional start position/order.

-find_frames(video: sleap.io.video.Video, frame_idx: Optional[Union[int, Iterable[int]]] = None) Optional[List[sleap.instance.LabeledFrame]][source]#
+find_frames(video: sleap.io.video.Video, frame_idx: Optional[Union[int, Iterable[int]]] = None) Optional[List[sleap.instance.LabeledFrame]][source]#

Return list of LabeledFrames matching video/frame_idx, or None.

-get_filtered_frame_idxs(video: Optional[sleap.io.video.Video] = None, filter: str = '') Set[Tuple[int, int]][source]#
+get_filtered_frame_idxs(video: Optional[sleap.io.video.Video] = None, filter: str = '') Set[Tuple[int, int]][source]#

Return list of (video_idx, frame_idx) tuples matching video/filter.

-get_frame_count(video: Optional[sleap.io.video.Video] = None, filter: str = '') int[source]#
+get_frame_count(video: Optional[sleap.io.video.Video] = None, filter: str = '') int[source]#

Return (possibly cached) count of frames matching video/filter.

-get_track_occupancy(video: sleap.io.video.Video, track: sleap.instance.Track) sleap.rangelist.RangeList[source]#
+get_track_occupancy(video: sleap.io.video.Video, track: sleap.instance.Track) sleap.rangelist.RangeList[source]#

Access track occupancy cache that adds video/track as needed.

-get_video_track_occupancy(video: sleap.io.video.Video) Dict[sleap.instance.Track, sleap.rangelist.RangeList][source]#
+get_video_track_occupancy(video: sleap.io.video.Video) Dict[sleap.instance.Track, sleap.rangelist.RangeList][source]#

Return track occupancy information for specified video.

-remove_frame(frame: sleap.instance.LabeledFrame)[source]#
+remove_frame(frame: sleap.instance.LabeledFrame)[source]#

Remove frame and update cache as needed.

-remove_instance(frame: sleap.instance.LabeledFrame, instance: sleap.instance.Instance)[source]#
+remove_instance(frame: sleap.instance.LabeledFrame, instance: sleap.instance.Instance)[source]#

Remove an instance and update the cache as needed.

-remove_video(video: sleap.io.video.Video)[source]#
+remove_video(video: sleap.io.video.Video)[source]#

Remove video and update cache as needed.

-track_swap(video: sleap.io.video.Video, new_track: sleap.instance.Track, old_track: Optional[sleap.instance.Track], frame_range: tuple)[source]#
+track_swap(video: sleap.io.video.Video, new_track: sleap.instance.Track, old_track: Optional[sleap.instance.Track], frame_range: tuple)[source]#

Swap tracks and update cache as needed.

-update(new_frame: Optional[sleap.instance.LabeledFrame] = None)[source]#
+update(new_frame: Optional[sleap.instance.LabeledFrame] = None)[source]#

Build (or rebuild) various caches.

-update_counts_for_frame(frame: sleap.instance.LabeledFrame)[source]#
+update_counts_for_frame(frame: sleap.instance.LabeledFrame)[source]#

Update the cached count. Should be called after the frame is modified.

@@ -1717,7 +1716,7 @@

sleap.io.dataset

-sleap.io.dataset.find_path_using_paths(missing_path: str, search_paths: List[str]) str[source]#
+sleap.io.dataset.find_path_using_paths(missing_path: str, search_paths: List[str]) str[source]#

Find a path to a missing file given a set of paths to search in.

Parameters
@@ -1734,7 +1733,7 @@

sleap.io.dataset

-sleap.io.dataset.load_file(filename: str, detect_videos: bool = True, search_paths: Optional[Union[List[str], str]] = None, match_to: Optional[sleap.io.dataset.Labels] = None) sleap.io.dataset.Labels[source]#
+sleap.io.dataset.load_file(filename: str, detect_videos: bool = True, search_paths: Optional[Union[List[str], str]] = None, match_to: Optional[sleap.io.dataset.Labels] = None) sleap.io.dataset.Labels[source]#

Load a SLEAP labels file.

SLEAP labels files (slp) contain all the metadata for a labeling project or the predicted labels from a video. This includes the skeleton, videos, labeled frames, diff --git a/develop/api/sleap.io.format.adaptor.html b/develop/api/sleap.io.format.adaptor.html index 59c9ad70f..aeb787e8e 100644 --- a/develop/api/sleap.io.format.adaptor.html +++ b/develop/api/sleap.io.format.adaptor.html @@ -9,7 +9,7 @@ - sleap.io.format.adaptor — SLEAP (v1.4.1a1) + sleap.io.format.adaptor — SLEAP (v1.4.1a2) @@ -34,7 +34,6 @@ - @@ -323,7 +322,7 @@

sleap.io.format.adaptor

File format adaptor base class.

-class sleap.io.format.adaptor.Adaptor[source]#
+class sleap.io.format.adaptor.Adaptor[source]#

File format adaptor base class.

An adaptor handles reading and/or writing a specific file format. To add support for a new file format, you’ll create a new class which inherits from @@ -336,13 +335,13 @@
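A hedged sketch of a new adaptor; `.myfmt` is a made-up extension, and the real base class defines additional members (e.g., the properties listed on this page) that are omitted here:

```python
from sleap.io.format.adaptor import Adaptor
from sleap.io.format.filehandle import FileHandle

class MyFormatAdaptor(Adaptor):
    def does_read(self) -> bool:
        return True

    def does_write(self) -> bool:
        return False

    def can_read_file(self, file: FileHandle) -> bool:
        return file.filename.endswith(".myfmt")  # assumed extension check

    def can_write_filename(self, filename: str) -> bool:
        return False

    def read(self, file: FileHandle) -> object:
        ...  # parse the file and return a Labels object
```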

sleap.io.format.adaptor

-can_read_file(file: sleap.io.format.filehandle.FileHandle) bool[source]#
+can_read_file(file: sleap.io.format.filehandle.FileHandle) bool[source]#

Returns whether this adaptor can read this file.

-can_write_filename(filename: str) bool[source]#
+can_write_filename(filename: str) bool[source]#

Returns whether this adaptor can write format of this filename.

@@ -354,19 +353,19 @@

sleap.io.format.adaptor

-does_match_ext(filename: str) bool[source]#
+does_match_ext(filename: str) bool[source]#

Returns whether this adaptor can write format of this filename.

-does_read() bool[source]#
+does_read() bool[source]#

Returns whether this adaptor supports reading.

-does_write() bool[source]#
+does_write() bool[source]#

Returns whether this adaptor supports writing.

@@ -392,13 +391,13 @@

sleap.io.format.adaptor

-read(file: sleap.io.format.filehandle.FileHandle) object[source]#
+read(file: sleap.io.format.filehandle.FileHandle) object[source]#

Reads the file and returns the appropriate deserialized object.

-write(filename: str, source_object: object)[source]#
+write(filename: str, source_object: object)[source]#

Writes the object to a file.

@@ -406,7 +405,7 @@

sleap.io.format.adaptor

-class sleap.io.format.adaptor.SleapObjectType(value)[source]#
+class sleap.io.format.adaptor.SleapObjectType(value)[source]#

Types of files that an adaptor could read/write.

diff --git a/develop/api/sleap.io.format.alphatracker.html b/develop/api/sleap.io.format.alphatracker.html index 56161a67f..b96b39528 100644 --- a/develop/api/sleap.io.format.alphatracker.html +++ b/develop/api/sleap.io.format.alphatracker.html @@ -9,7 +9,7 @@ - sleap.io.format.alphatracker — SLEAP (v1.4.1a1) + sleap.io.format.alphatracker — SLEAP (v1.4.1a2) @@ -34,7 +34,6 @@ - @@ -330,7 +329,7 @@

sleap.io.format.alphatracker

create a video object which wraps the individual frame images.

-class sleap.io.format.alphatracker.AlphaTrackerAdaptor[source]#
+class sleap.io.format.alphatracker.AlphaTrackerAdaptor[source]#

Reads AlphaTracker JSON file with annotations for both single and multiple animals.

@@ -340,7 +339,7 @@

sleap.io.format.alphatracker

-can_read_file(file: sleap.io.format.filehandle.FileHandle) bool[source]#
+can_read_file(file: sleap.io.format.filehandle.FileHandle) bool[source]#

Returns whether this adaptor can read this file.

Checks the format of the file at three different levels: first, the upper-level format of file.json must be a list of dictionaries. @@ -365,7 +364,7 @@

sleap.io.format.alphatracker

-can_write_filename(filename: str) bool[source]#
+can_write_filename(filename: str) bool[source]#

Returns whether this adaptor can write format of this filename.

@@ -377,19 +376,19 @@

sleap.io.format.alphatracker

-does_match_ext(filename: str) bool[source]#
+does_match_ext(filename: str) bool[source]#

Returns whether this adaptor can write format of this filename.

-does_read() bool[source]#
+does_read() bool[source]#

Returns whether this adaptor supports reading.

-does_write() bool[source]#
+does_write() bool[source]#

Returns whether this adaptor supports writing.

@@ -401,7 +400,7 @@

sleap.io.format.alphatracker

-get_alpha_tracker_frame_dict(filename: str = '')[source]#
+get_alpha_tracker_frame_dict(filename: str = '')[source]#

Returns a deep copy of the dictionary used for frames.

Parameters
@@ -419,7 +418,7 @@

sleap.io.format.alphatracker

-get_alpha_tracker_instance_dict(width: int = 200, x: float = 200.0, y: float = 200.0) dict[source]#
+get_alpha_tracker_instance_dict(width: int = 200, x: float = 200.0, y: float = 200.0) dict[source]#

Returns a deep copy of the dictionary used for instances.

Parameters
@@ -440,7 +439,7 @@

sleap.io.format.alphatracker

-get_alpha_tracker_point_dict(y: float = 200.0) dict[source]#
+get_alpha_tracker_point_dict(y: float = 200.0) dict[source]#

Returns a deep copy of the dictionary used for nodes.

Parameters
@@ -466,7 +465,7 @@

sleap.io.format.alphatracker

-make_video_for_image_list(image_dir: str, filenames: List[str]) sleap.io.video.Video[source]#
+make_video_for_image_list(image_dir: str, filenames: List[str]) sleap.io.video.Video[source]#

Creates a Video object from frame images.

Parameters
@@ -489,7 +488,7 @@

sleap.io.format.alphatracker

-read(file: sleap.io.format.filehandle.FileHandle, skeleton: Optional[sleap.skeleton.Skeleton] = None, full_video: Optional[sleap.io.video.Video] = None) sleap.io.dataset.Labels[source]#
+read(file: sleap.io.format.filehandle.FileHandle, skeleton: Optional[sleap.skeleton.Skeleton] = None, full_video: Optional[sleap.io.video.Video] = None) sleap.io.dataset.Labels[source]#

Reads the file and returns the appropriate deserialized object.

Parameters
@@ -508,7 +507,7 @@

sleap.io.format.alphatracker

-write(filename: str, source_object: sleap.io.dataset.Labels) List[dict][source]#
+write(filename: str, source_object: sleap.io.dataset.Labels) List[dict][source]#

Writes the object to an AlphaTracker JSON file.

Parameters
diff --git a/develop/api/sleap.io.format.coco.html b/develop/api/sleap.io.format.coco.html index eda5eee47..fbdfecd94 100644 --- a/develop/api/sleap.io.format.coco.html +++ b/develop/api/sleap.io.format.coco.html @@ -9,7 +9,7 @@ - sleap.io.format.coco — SLEAP (v1.4.1a1) + sleap.io.format.coco — SLEAP (v1.4.1a2) @@ -34,7 +34,6 @@ - @@ -324,7 +323,7 @@

sleap.io.format.coco

See http://cocodataset.org/#format-data for details about this format.

-class sleap.io.format.coco.LabelsCocoAdaptor[source]#
+class sleap.io.format.coco.LabelsCocoAdaptor[source]#
property all_exts#
@@ -333,13 +332,13 @@

sleap.io.format.coco

-can_read_file(file: sleap.io.format.filehandle.FileHandle)[source]#
+can_read_file(file: sleap.io.format.filehandle.FileHandle)[source]#

Returns whether this adaptor can read this file.

-can_write_filename(filename: str)[source]#
+can_write_filename(filename: str)[source]#

Returns whether this adaptor can write format of this filename.

@@ -351,13 +350,13 @@

sleap.io.format.coco

-does_read() bool[source]#
+does_read() bool[source]#

Returns whether this adaptor supports reading.

-does_write() bool[source]#
+does_write() bool[source]#

Returns whether this adaptor supports writing.

@@ -377,7 +376,7 @@

sleap.io.format.coco

-classmethod read(file: sleap.io.format.filehandle.FileHandle, img_dir: str, use_missing_gui: bool = False, *args, **kwargs) sleap.io.dataset.Labels[source]#
+classmethod read(file: sleap.io.format.filehandle.FileHandle, img_dir: str, use_missing_gui: bool = False, *args, **kwargs) sleap.io.dataset.Labels[source]#

Reads the file and returns the appropriate deserialized object.

diff --git a/develop/api/sleap.io.format.csv.html b/develop/api/sleap.io.format.csv.html index 0cf48a27b..4a3f510f0 100644 --- a/develop/api/sleap.io.format.csv.html +++ b/develop/api/sleap.io.format.csv.html @@ -9,7 +9,7 @@ - sleap.io.format.csv — SLEAP (v1.4.1a1) + sleap.io.format.csv — SLEAP (v1.4.1a2) @@ -34,7 +34,6 @@ - @@ -323,7 +322,7 @@

sleap.io.format.csv

Adaptor for writing SLEAP analysis as csv.

-class sleap.io.format.csv.CSVAdaptor[source]#
+class sleap.io.format.csv.CSVAdaptor[source]#
property all_exts#
@@ -332,13 +331,13 @@

sleap.io.format.csv

-can_read_file(file: sleap.io.format.filehandle.FileHandle)[source]#
+can_read_file(file: sleap.io.format.filehandle.FileHandle)[source]#

Returns whether this adaptor can read this file.

-can_write_filename(filename: str)[source]#
+can_write_filename(filename: str)[source]#

Returns whether this adaptor can write format of this filename.

@@ -350,13 +349,13 @@

sleap.io.format.csv

-does_read() bool[source]#
+does_read() bool[source]#

Returns whether this adaptor supports reading.

-does_write() bool[source]#
+does_write() bool[source]#

Returns whether this adaptor supports writing.

@@ -376,7 +375,7 @@

sleap.io.format.csv

-classmethod write(filename: str, source_object: sleap.io.dataset.Labels, source_path: Optional[str] = None, video: Optional[sleap.io.video.Video] = None)[source]#
+classmethod write(filename: str, source_object: sleap.io.dataset.Labels, source_path: Optional[str] = None, video: Optional[sleap.io.video.Video] = None)[source]#

Writes csv file for Labels source_object.

Parameters
diff --git a/develop/api/sleap.io.format.deeplabcut.html b/develop/api/sleap.io.format.deeplabcut.html index 294c1eba9..669dc6a49 100644 --- a/develop/api/sleap.io.format.deeplabcut.html +++ b/develop/api/sleap.io.format.deeplabcut.html @@ -9,7 +9,7 @@ - sleap.io.format.deeplabcut — SLEAP (v1.4.1a1) + sleap.io.format.deeplabcut — SLEAP (v1.4.1a2) @@ -34,7 +34,6 @@ - @@ -330,7 +329,7 @@

sleap.io.format.deeplabcut

create a video object which wraps the individual frame images.

-class sleap.io.format.deeplabcut.LabelsDeepLabCutCsvAdaptor[source]#
+class sleap.io.format.deeplabcut.LabelsDeepLabCutCsvAdaptor[source]#

Reads DeepLabCut csv file with labeled frames for single video.

@@ -340,13 +339,13 @@

sleap.io.format.deeplabcut

-can_read_file(file: sleap.io.format.filehandle.FileHandle)[source]#
+can_read_file(file: sleap.io.format.filehandle.FileHandle)[source]#

Returns whether this adaptor can read this file.

-can_write_filename(filename: str)[source]#
+can_write_filename(filename: str)[source]#

Returns whether this adaptor can write format of this filename.

@@ -358,13 +357,13 @@

sleap.io.format.deeplabcut

-does_read() bool[source]#
+does_read() bool[source]#

Returns whether this adaptor supports reading.

-does_write() bool[source]#
+does_write() bool[source]#

Returns whether this adaptor supports writing.

@@ -378,7 +377,7 @@

sleap.io.format.deeplabcut

-classmethod make_video_for_image_list(image_dir, filenames) sleap.io.video.Video[source]#
+classmethod make_video_for_image_list(image_dir, filenames) sleap.io.video.Video[source]#

Creates a Video object from frame images.

@@ -390,7 +389,7 @@

sleap.io.format.deeplabcut

-classmethod read(file: sleap.io.format.filehandle.FileHandle, full_video: Optional[sleap.io.video.Video] = None, *args, **kwargs) sleap.io.dataset.Labels[source]#
+classmethod read(file: sleap.io.format.filehandle.FileHandle, full_video: Optional[sleap.io.video.Video] = None, *args, **kwargs) sleap.io.dataset.Labels[source]#

Reads the file and returns the appropriate deserialized object.

@@ -398,7 +397,7 @@

sleap.io.format.deeplabcut

-class sleap.io.format.deeplabcut.LabelsDeepLabCutYamlAdaptor[source]#
+class sleap.io.format.deeplabcut.LabelsDeepLabCutYamlAdaptor[source]#
property all_exts#
@@ -407,13 +406,13 @@

sleap.io.format.deeplabcut

-can_read_file(file: sleap.io.format.filehandle.FileHandle)[source]#
+can_read_file(file: sleap.io.format.filehandle.FileHandle)[source]#

Returns whether this adaptor can read this file.

-can_write_filename(filename: str)[source]#
+can_write_filename(filename: str)[source]#

Returns whether this adaptor can write format of this filename.

@@ -425,13 +424,13 @@

sleap.io.format.deeplabcut

-does_read() bool[source]#
+does_read() bool[source]#

Returns whether this adaptor supports reading.

-does_write() bool[source]#
+does_write() bool[source]#

Returns whether this adaptor supports writing.

@@ -451,7 +450,7 @@

sleap.io.format.deeplabcut

-classmethod read(file: sleap.io.format.filehandle.FileHandle, *args, **kwargs) sleap.io.dataset.Labels[source]#
+classmethod read(file: sleap.io.format.filehandle.FileHandle, *args, **kwargs) sleap.io.dataset.Labels[source]#

Reads the file and returns the appropriate deserialized object.

diff --git a/develop/api/sleap.io.format.deepposekit.html b/develop/api/sleap.io.format.deepposekit.html index 26b69cfea..18120d48d 100644 --- a/develop/api/sleap.io.format.deepposekit.html +++ b/develop/api/sleap.io.format.deepposekit.html @@ -9,7 +9,7 @@ - sleap.io.format.deepposekit — SLEAP (v1.4.1a1) + sleap.io.format.deepposekit — SLEAP (v1.4.1a2) @@ -34,7 +34,6 @@ - @@ -323,7 +322,7 @@

sleap.io.format.deepposekit

Adaptor for reading DeepPoseKit datasets (HDF5).

-class sleap.io.format.deepposekit.LabelsDeepPoseKitAdaptor[source]#
+class sleap.io.format.deepposekit.LabelsDeepPoseKitAdaptor[source]#
property all_exts#
@@ -332,13 +331,13 @@

sleap.io.format.deepposekit

-can_read_file(file: sleap.io.format.filehandle.FileHandle)[source]#
+can_read_file(file: sleap.io.format.filehandle.FileHandle)[source]#

Returns whether this adaptor can read this file.

-can_write_filename(filename: str)[source]#
+can_write_filename(filename: str)[source]#

Returns whether this adaptor can write format of this filename.

@@ -350,13 +349,13 @@

sleap.io.format.deepposekit

-does_read() bool[source]#
+does_read() bool[source]#

Returns whether this adaptor supports reading.

-does_write() bool[source]#
+does_write() bool[source]#

Returns whether this adaptor supports writing.

@@ -376,7 +375,7 @@

sleap.io.format.deepposekit

-classmethod read(file: sleap.io.format.filehandle.FileHandle, video_path: str, skeleton_path: str, *args, **kwargs) sleap.io.dataset.Labels[source]#
+classmethod read(file: sleap.io.format.filehandle.FileHandle, video_path: str, skeleton_path: str, *args, **kwargs) sleap.io.dataset.Labels[source]#

Reads the file and returns the appropriate deserialized object.

diff --git a/develop/api/sleap.io.format.dispatch.html b/develop/api/sleap.io.format.dispatch.html index d60bb38bc..9fe6b8899 100644 --- a/develop/api/sleap.io.format.dispatch.html +++ b/develop/api/sleap.io.format.dispatch.html @@ -9,7 +9,7 @@ - sleap.io.format.dispatch — SLEAP (v1.4.1a1) + sleap.io.format.dispatch — SLEAP (v1.4.1a2) @@ -34,7 +34,6 @@ - diff --git a/develop/api/sleap.io.format.filehandle.html b/develop/api/sleap.io.format.filehandle.html index 89f9ebb09..0c0f656e2 100644 --- a/develop/api/sleap.io.format.filehandle.html +++ b/develop/api/sleap.io.format.filehandle.html @@ -9,7 +9,7 @@ - sleap.io.format.filehandle — SLEAP (v1.4.1a1) + sleap.io.format.filehandle — SLEAP (v1.4.1a2) @@ -34,7 +34,6 @@ - @@ -326,11 +325,11 @@

sleap.io.format.filehandle

to keep any results from previous reads.

-class sleap.io.format.filehandle.FileHandle(filename: str, is_hdf5: bool = False, is_json: Optional[bool] = None, is_open: bool = False, file: Optional[object] = None, text: Optional[str] = None, json: Optional[object] = None)[source]#
+class sleap.io.format.filehandle.FileHandle(filename: str, is_hdf5: bool = False, is_json: Optional[bool] = None, is_open: bool = False, file: Optional[object] = None, text: Optional[str] = None, json: Optional[object] = None)[source]#

Reference to a file; can hold loaded data so it needn’t be read twice.

-close()[source]#
+close()[source]#

Closes the file.

@@ -370,7 +369,7 @@

sleap.io.format.filehandle

-open()[source]#
+open()[source]#

Opens the file (if it’s not already open).

diff --git a/develop/api/sleap.io.format.genericjson.html b/develop/api/sleap.io.format.genericjson.html index 83b7718ff..afc81dfa8 100644 --- a/develop/api/sleap.io.format.genericjson.html +++ b/develop/api/sleap.io.format.genericjson.html @@ -9,7 +9,7 @@ - sleap.io.format.genericjson — SLEAP (v1.4.1a1) + sleap.io.format.genericjson — SLEAP (v1.4.1a2) @@ -34,7 +34,6 @@ - @@ -324,7 +323,7 @@

sleap.io.format.genericjson

This is a good example of a very simple adaptor class.

-class sleap.io.format.genericjson.GenericJsonAdaptor[source]#
+class sleap.io.format.genericjson.GenericJsonAdaptor[source]#
property all_exts#
@@ -333,13 +332,13 @@

sleap.io.format.genericjson

-can_read_file(file: sleap.io.format.filehandle.FileHandle)[source]#
+can_read_file(file: sleap.io.format.filehandle.FileHandle)[source]#

Returns whether this adaptor can read this file.

-can_write_filename(filename: str) bool[source]#
+can_write_filename(filename: str) bool[source]#

Returns whether this adaptor can write format of this filename.

@@ -351,13 +350,13 @@

sleap.io.format.genericjson

-does_read() bool[source]#
+does_read() bool[source]#

Returns whether this adaptor supports reading.

-does_write() bool[source]#
+does_write() bool[source]#

Returns whether this adaptor supports writing.

@@ -377,13 +376,13 @@

sleap.io.format.genericjson

-read(file: sleap.io.format.filehandle.FileHandle, *args, **kwargs)[source]#
+read(file: sleap.io.format.filehandle.FileHandle, *args, **kwargs)[source]#

Reads the file and returns the appropriate deserialized object.

-write(filename: str, source_object: dict)[source]#
+write(filename: str, source_object: dict)[source]#

Writes the object to a file.

diff --git a/develop/api/sleap.io.format.hdf5.html b/develop/api/sleap.io.format.hdf5.html index 5c24e6186..3620c342f 100644 --- a/develop/api/sleap.io.format.hdf5.html +++ b/develop/api/sleap.io.format.hdf5.html @@ -9,7 +9,7 @@ - sleap.io.format.hdf5 — SLEAP (v1.4.1a1) + sleap.io.format.hdf5 — SLEAP (v1.4.1a2) @@ -34,7 +34,6 @@ - @@ -325,7 +324,7 @@

sleap.io.format.hdf5

format.

-class sleap.io.format.hdf5.LabelsV1Adaptor[source]#
+class sleap.io.format.hdf5.LabelsV1Adaptor[source]#
property all_exts#
@@ -334,13 +333,13 @@

sleap.io.format.hdf5

-can_read_file(file: sleap.io.format.filehandle.FileHandle)[source]#
+can_read_file(file: sleap.io.format.filehandle.FileHandle)[source]#

Returns whether this adaptor can read this file.

-can_write_filename(filename: str)[source]#
+can_write_filename(filename: str)[source]#

Returns whether this adaptor can write format of this filename.

@@ -352,13 +351,13 @@

sleap.io.format.hdf5

-does_read() bool[source]#
+does_read() bool[source]#

Returns whether this adaptor supports reading.

-does_write() bool[source]#
+does_write() bool[source]#

Returns whether this adaptor supports writing.

@@ -378,13 +377,13 @@

sleap.io.format.hdf5

-classmethod read(file: sleap.io.format.filehandle.FileHandle, video_search: Optional[Union[Callable, List[str]]] = None, match_to: Optional[sleap.io.dataset.Labels] = None, *args, **kwargs)[source]#
+classmethod read(file: sleap.io.format.filehandle.FileHandle, video_search: Optional[Union[Callable, List[str]]] = None, match_to: Optional[sleap.io.dataset.Labels] = None, *args, **kwargs)[source]#

Reads the file and returns the appropriate deserialized object.

-classmethod write(filename: str, source_object: object, append: bool = False, save_frame_data: bool = False, frame_data_format: str = 'png', all_labeled: bool = False, suggested: bool = False, progress_callback: Optional[Callable[[int, int], None]] = None)[source]#
+classmethod write(filename: str, source_object: object, append: bool = False, save_frame_data: bool = False, frame_data_format: str = 'png', all_labeled: bool = False, suggested: bool = False, progress_callback: Optional[Callable[[int, int], None]] = None)[source]#

Writes the object to a file.
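A minimal sketch of round-tripping a labels file through this adaptor, with hypothetical paths; per the signatures above, read is a classmethod taking a FileHandle and write is a classmethod taking a filename.

```python
from sleap.io.format.filehandle import FileHandle
from sleap.io.format.hdf5 import LabelsV1Adaptor

# Read an existing SLEAP project file.
labels = LabelsV1Adaptor.read(FileHandle("labels.v001.slp"))

# Write it back out; save_frame_data=True would also embed frame images.
LabelsV1Adaptor.write("labels_copy.slp", labels, save_frame_data=False)
```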

diff --git a/develop/api/sleap.io.format.labels_json.html b/develop/api/sleap.io.format.labels_json.html index 00f087f6c..bb0f0a60b 100644 --- a/develop/api/sleap.io.format.labels_json.html +++ b/develop/api/sleap.io.format.labels_json.html @@ -9,7 +9,7 @@ - sleap.io.format.labels_json — SLEAP (v1.4.1a1) + sleap.io.format.labels_json — SLEAP (v1.4.1a2) @@ -34,7 +34,6 @@ - @@ -327,7 +326,7 @@

sleap.io.format.labels_json

also the videos/frames as HDF5 datasets.

-class sleap.io.format.labels_json.LabelsJsonAdaptor[source]#
+class sleap.io.format.labels_json.LabelsJsonAdaptor[source]#
property all_exts#
@@ -336,13 +335,13 @@

sleap.io.format.labels_json

-can_read_file(file: sleap.io.format.filehandle.FileHandle)[source]#
+can_read_file(file: sleap.io.format.filehandle.FileHandle)[source]#

Returns whether this adaptor can read this file.

-can_write_filename(filename: str)[source]#
+can_write_filename(filename: str)[source]#

Returns whether this adaptor can write format of this filename.

@@ -354,19 +353,19 @@

sleap.io.format.labels_json

-does_read() bool[source]#
+does_read() bool[source]#

Returns whether this adaptor supports reading.

-does_write() bool[source]#
+does_write() bool[source]#

Returns whether this adaptor supports writing.

-classmethod from_json_data(data: Union[str, dict], match_to: Optional[sleap.io.dataset.Labels] = None) sleap.io.dataset.Labels[source]#
+classmethod from_json_data(data: Union[str, dict], match_to: Optional[sleap.io.dataset.Labels] = None) sleap.io.dataset.Labels[source]#

Create instance of class from data in dictionary.

Method is used by other methods that load from JSON.

@@ -402,13 +401,13 @@

sleap.io.format.labels_json

-classmethod read(file: sleap.io.format.filehandle.FileHandle, video_search: Optional[Union[Callable, List[str]]] = None, match_to: Optional[sleap.io.dataset.Labels] = None, *args, **kwargs) sleap.io.dataset.Labels[source]#
+classmethod read(file: sleap.io.format.filehandle.FileHandle, video_search: Optional[Union[Callable, List[str]]] = None, match_to: Optional[sleap.io.dataset.Labels] = None, *args, **kwargs) sleap.io.dataset.Labels[source]#

Reads the file and returns the appropriate deserialized object.

-classmethod write(filename: str, source_object: str, compress: Optional[bool] = None, save_frame_data: bool = False, frame_data_format: str = 'png', all_labeled: bool = False, suggested: bool = False, progress_callback: Optional[Callable[[int, int], None]] = None)[source]#
+classmethod write(filename: str, source_object: str, compress: Optional[bool] = None, save_frame_data: bool = False, frame_data_format: str = 'png', all_labeled: bool = False, suggested: bool = False, progress_callback: Optional[Callable[[int, int], None]] = None)[source]#

Save a Labels instance to a JSON format.

Parameters
diff --git a/develop/api/sleap.io.format.leap_matlab.html b/develop/api/sleap.io.format.leap_matlab.html index b3d5562f2..24d9b1009 100644 --- a/develop/api/sleap.io.format.leap_matlab.html +++ b/develop/api/sleap.io.format.leap_matlab.html @@ -9,7 +9,7 @@ - sleap.io.format.leap_matlab — SLEAP (v1.4.1a1) + sleap.io.format.leap_matlab — SLEAP (v1.4.1a2) @@ -34,7 +34,6 @@ - @@ -325,7 +324,7 @@

sleap.io.format.leap_matlab

gui param is True, then the user will be prompted to find the videos.

-class sleap.io.format.leap_matlab.LabelsLeapMatlabAdaptor[source]#
+class sleap.io.format.leap_matlab.LabelsLeapMatlabAdaptor[source]#
property all_exts#
@@ -334,13 +333,13 @@

sleap.io.format.leap_matlab

-can_read_file(file: sleap.io.format.filehandle.FileHandle)[source]#
+can_read_file(file: sleap.io.format.filehandle.FileHandle)[source]#

Returns whether this adaptor can read this file.

-can_write_filename(filename: str)[source]#
+can_write_filename(filename: str)[source]#

Returns whether this adaptor can write format of this filename.

@@ -352,13 +351,13 @@

sleap.io.format.leap_matlab

-does_read() bool[source]#
+does_read() bool[source]#

Returns whether this adaptor supports reading.

-does_write() bool[source]#
+does_write() bool[source]#

Returns whether this adaptor supports writing.

@@ -378,7 +377,7 @@

sleap.io.format.leap_matlab

-classmethod read(file: sleap.io.format.filehandle.FileHandle, gui: bool = True, *args, **kwargs)[source]#
+classmethod read(file: sleap.io.format.filehandle.FileHandle, gui: bool = True, *args, **kwargs)[source]#

Reads the file and returns the appropriate deserialized object.

diff --git a/develop/api/sleap.io.format.main.html b/develop/api/sleap.io.format.main.html index ad507140a..78c891211 100644 --- a/develop/api/sleap.io.format.main.html +++ b/develop/api/sleap.io.format.main.html @@ -9,7 +9,7 @@ - sleap.io.format.main — SLEAP (v1.4.1a1) + sleap.io.format.main — SLEAP (v1.4.1a2) @@ -34,7 +34,6 @@ - @@ -349,7 +348,7 @@

sleap.io.format.main

also a non-default file extension for the LabelsV1Adaptor adaptor.

-sleap.io.format.main.read(filename: str, for_object: Union[str, object], as_format: Optional[str] = None, *args, **kwargs) object[source]#
+sleap.io.format.main.read(filename: str, for_object: Union[str, object], as_format: Optional[str] = None, *args, **kwargs) object[source]#

Reads file using the appropriate file format adaptor.

Parameters
@@ -374,7 +373,7 @@

sleap.io.format.main

-sleap.io.format.main.write(filename: str, source_object: object, as_format: Optional[str] = None, *args, **kwargs)[source]#
+sleap.io.format.main.write(filename: str, source_object: object, as_format: Optional[str] = None, *args, **kwargs)[source]#

Writes SLEAP dataset file using the appropriate file format adaptor.

Parameters
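In practice most callers go through this dispatch layer rather than a specific adaptor. A minimal sketch with a hypothetical path; the `for_object="labels"` string follows the read signature above, and format auto-detection is left to the dispatcher.

```python
from sleap.io.format import main as fmt

# Auto-detect the right adaptor from the file's extension/contents.
labels = fmt.read("labels.v001.slp", for_object="labels")

# Round-trip through the same dispatch layer.
fmt.write("labels_copy.slp", labels)
```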
diff --git a/develop/api/sleap.io.format.ndx_pose.html b/develop/api/sleap.io.format.ndx_pose.html index 8cc8c9dca..2498ba053 100644 --- a/develop/api/sleap.io.format.ndx_pose.html +++ b/develop/api/sleap.io.format.ndx_pose.html @@ -9,7 +9,7 @@ - sleap.io.format.ndx_pose — SLEAP (v1.4.1a1) + sleap.io.format.ndx_pose — SLEAP (v1.4.1a2) @@ -34,7 +34,6 @@ - @@ -323,7 +322,7 @@

sleap.io.format.ndx_pose

Adaptor to read and write ndx-pose files.

-class sleap.io.format.ndx_pose.NDXPoseAdaptor[source]#
+class sleap.io.format.ndx_pose.NDXPoseAdaptor[source]#

Adaptor to read and write ndx-pose files.

@@ -333,13 +332,13 @@

sleap.io.format.ndx_pose

-can_read_file(file: sleap.io.format.filehandle.FileHandle) bool[source]#
+can_read_file(file: sleap.io.format.filehandle.FileHandle) bool[source]#

Returns whether this adaptor can read this file.

-can_write_filename(filename: str) bool[source]#
+can_write_filename(filename: str) bool[source]#

Returns whether this adaptor can write format of this filename.

@@ -351,13 +350,13 @@

sleap.io.format.ndx_pose

-does_read() bool[source]#
+does_read() bool[source]#

Returns whether this adaptor supports reading.

-does_write() bool[source]#
+does_write() bool[source]#

Returns whether this adaptor supports writing.

@@ -377,7 +376,7 @@

sleap.io.format.ndx_pose

-read(file: sleap.io.format.filehandle.FileHandle) sleap.io.dataset.Labels[source]#
+read(file: sleap.io.format.filehandle.FileHandle) sleap.io.dataset.Labels[source]#

Read the NWB file and returns the appropriate deserialized Labels object.

Parameters
@@ -391,7 +390,7 @@

sleap.io.format.ndx_pose

-write(filename: str, labels: sleap.io.dataset.Labels, overwrite: bool = False, session_description: str = 'Processed SLEAP pose data', identifier: Optional[str] = None, session_start_time: Optional[datetime.datetime] = None)[source]#
+write(filename: str, labels: sleap.io.dataset.Labels, overwrite: bool = False, session_description: str = 'Processed SLEAP pose data', identifier: Optional[str] = None, session_start_time: Optional[datetime.datetime] = None)[source]#

Write all PredictedInstance objects in a Labels object to an NWB file.

Use Labels.numpy to create a pynwb.NWBFile with a separate diff --git a/develop/api/sleap.io.format.nix.html b/develop/api/sleap.io.format.nix.html index 4f5a12791..1e22e9e30 100644 --- a/develop/api/sleap.io.format.nix.html +++ b/develop/api/sleap.io.format.nix.html @@ -9,7 +9,7 @@ - sleap.io.format.nix — SLEAP (v1.4.1a1) + sleap.io.format.nix — SLEAP (v1.4.1a2) @@ -34,7 +34,6 @@ - @@ -322,7 +321,7 @@

sleap.io.format.nix

sleap.io.format.nix#

-class sleap.io.format.nix.NixAdaptor[source]#
+class sleap.io.format.nix.NixAdaptor[source]#

Adaptor class for export of tracking analysis results to the generic NIX (g-node/nix) format. NIX defines a generic data model for scientific data that combines data and data @@ -347,13 +346,13 @@

sleap.io.format.nix

-classmethod can_read_file(file: sleap.io.format.filehandle.FileHandle) bool[source]#
+classmethod can_read_file(file: sleap.io.format.filehandle.FileHandle) bool[source]#

Returns whether this adaptor can read this file.

-can_write_filename(filename: str) bool[source]#
+can_write_filename(filename: str) bool[source]#

Returns whether this adaptor can write format of this filename.

@@ -365,13 +364,13 @@

sleap.io.format.nix

-classmethod does_read() bool[source]#
+classmethod does_read() bool[source]#

Returns whether this adaptor supports reading.

-classmethod does_write() bool[source]#
+classmethod does_write() bool[source]#

Returns whether this adaptor supports writing.

@@ -391,13 +390,13 @@

sleap.io.format.nix

-classmethod read(file: sleap.io.format.filehandle.FileHandle) object[source]#
+classmethod read(file: sleap.io.format.filehandle.FileHandle) object[source]#

Reads the file and returns the appropriate deserialized object.

-classmethod write(filename: str, source_object: object, source_path: Optional[str] = None, video: Optional[sleap.io.video.Video] = None)[source]#
+classmethod write(filename: str, source_object: object, source_path: Optional[str] = None, video: Optional[sleap.io.video.Video] = None)[source]#

Writes the object to a file.
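A sketch of exporting tracking results to a NIX file, assuming a project loaded with sleap.load_file and hypothetical paths:

```python
import sleap
from sleap.io.format.nix import NixAdaptor

labels = sleap.load_file("labels.v001.slp")

# Export tracking analysis results for one video to a NIX file.
NixAdaptor.write("labels.nix", labels, video=labels.videos[0])
```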

diff --git a/develop/api/sleap.io.format.sleap_analysis.html b/develop/api/sleap.io.format.sleap_analysis.html index 6c43dd0d1..8610e375f 100644 --- a/develop/api/sleap.io.format.sleap_analysis.html +++ b/develop/api/sleap.io.format.sleap_analysis.html @@ -9,7 +9,7 @@ - sleap.io.format.sleap_analysis — SLEAP (v1.4.1a1) + sleap.io.format.sleap_analysis — SLEAP (v1.4.1a2) @@ -34,7 +34,6 @@ - @@ -329,7 +328,7 @@

sleap.io.format.sleap_analysis

with a track_occupancy dataset.

-class sleap.io.format.sleap_analysis.SleapAnalysisAdaptor[source]#
+class sleap.io.format.sleap_analysis.SleapAnalysisAdaptor[source]#
property all_exts#
@@ -338,13 +337,13 @@

sleap.io.format.sleap_analysis

-can_read_file(file: sleap.io.format.filehandle.FileHandle)[source]#
+can_read_file(file: sleap.io.format.filehandle.FileHandle)[source]#

Returns whether this adaptor can read this file.

-can_write_filename(filename: str)[source]#
+can_write_filename(filename: str)[source]#

Returns whether this adaptor can write format of this filename.

@@ -356,13 +355,13 @@

sleap.io.format.sleap_analysis

-does_read() bool[source]#
+does_read() bool[source]#

Returns whether this adaptor supports reading.

-does_write() bool[source]#
+does_write() bool[source]#

Returns whether this adaptor supports writing.

@@ -382,13 +381,13 @@

sleap.io.format.sleap_analysis

-classmethod read(file: sleap.io.format.filehandle.FileHandle, video: Union[sleap.io.video.Video, str], *args, **kwargs) sleap.io.dataset.Labels[source]#
+classmethod read(file: sleap.io.format.filehandle.FileHandle, video: Union[sleap.io.video.Video, str], *args, **kwargs) sleap.io.dataset.Labels[source]#

Reads the file and returns the appropriate deserialized object.

-classmethod write(filename: str, source_object: sleap.io.dataset.Labels, source_path: Optional[str] = None, video: Optional[sleap.io.video.Video] = None)[source]#
+classmethod write(filename: str, source_object: sleap.io.dataset.Labels, source_path: Optional[str] = None, video: Optional[sleap.io.video.Video] = None)[source]#

Writes analysis file for Labels source_object.

Parameters
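A short sketch of exporting an analysis HDF5 for one video, assuming labels loaded via sleap.load_file and hypothetical paths:

```python
import sleap
from sleap.io.format.sleap_analysis import SleapAnalysisAdaptor

labels = sleap.load_file("labels.v001.slp")

# Export tracking results for the first video to an analysis HDF5.
SleapAnalysisAdaptor.write(
    "labels.v001.analysis.h5",
    source_object=labels,
    video=labels.videos[0],
)
```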
diff --git a/develop/api/sleap.io.format.text.html b/develop/api/sleap.io.format.text.html index c555cb537..33ab19703 100644 --- a/develop/api/sleap.io.format.text.html +++ b/develop/api/sleap.io.format.text.html @@ -9,7 +9,7 @@ - sleap.io.format.text — SLEAP (v1.4.1a1) + sleap.io.format.text — SLEAP (v1.4.1a2) @@ -34,7 +34,6 @@ - @@ -324,7 +323,7 @@

sleap.io.format.text

This is a good example of a very simple adaptor class.

-class sleap.io.format.text.TextAdaptor[source]#
+class sleap.io.format.text.TextAdaptor[source]#
property all_exts#
@@ -333,13 +332,13 @@

sleap.io.format.text

-can_read_file(file: sleap.io.format.filehandle.FileHandle)[source]#
+can_read_file(file: sleap.io.format.filehandle.FileHandle)[source]#

Returns whether this adaptor can read this file.

-can_write_filename(filename: str) bool[source]#
+can_write_filename(filename: str) bool[source]#

Returns whether this adaptor can write format of this filename.

@@ -351,13 +350,13 @@

sleap.io.format.text

-does_read() bool[source]#
+does_read() bool[source]#

Returns whether this adaptor supports reading.

-does_write() bool[source]#
+does_write() bool[source]#

Returns whether this adaptor supports writing.

@@ -377,13 +376,13 @@

sleap.io.format.text

-read(file: sleap.io.format.filehandle.FileHandle, *args, **kwargs)[source]#
+read(file: sleap.io.format.filehandle.FileHandle, *args, **kwargs)[source]#

Reads the file and returns the appropriate deserialized object.

-write(filename: str, source_object: str)[source]#
+write(filename: str, source_object: str)[source]#

Writes the object to a file.

diff --git a/develop/api/sleap.io.legacy.html b/develop/api/sleap.io.legacy.html index 0318ba89b..37712a7c5 100644 --- a/develop/api/sleap.io.legacy.html +++ b/develop/api/sleap.io.legacy.html @@ -9,7 +9,7 @@ - sleap.io.legacy — SLEAP (v1.4.1a1) + sleap.io.legacy — SLEAP (v1.4.1a2) @@ -34,7 +34,6 @@ - @@ -323,7 +322,7 @@

sleap.io.legacy

Module for legacy LEAP dataset.

-sleap.io.legacy.load_labels_json_old(data_path: str, parsed_json: Optional[dict] = None, adjust_matlab_indexing: bool = True, fix_rel_paths: bool = True) List[sleap.instance.LabeledFrame][source]#
+sleap.io.legacy.load_labels_json_old(data_path: str, parsed_json: Optional[dict] = None, adjust_matlab_indexing: bool = True, fix_rel_paths: bool = True) List[sleap.instance.LabeledFrame][source]#

Load predicted instances from Talmo’s old JSON format.

Parameters
@@ -343,7 +342,7 @@

sleap.io.legacy

-sleap.io.legacy.load_predicted_labels_json_old(data_path: str, parsed_json: Optional[dict] = None, adjust_matlab_indexing: bool = True, fix_rel_paths: bool = True) List[sleap.instance.LabeledFrame][source]#
+sleap.io.legacy.load_predicted_labels_json_old(data_path: str, parsed_json: Optional[dict] = None, adjust_matlab_indexing: bool = True, fix_rel_paths: bool = True) List[sleap.instance.LabeledFrame][source]#

Load predicted instances from Talmo’s old JSON format.

Parameters
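A minimal sketch of converting a legacy file, with a hypothetical path:

```python
from sleap.io.legacy import load_labels_json_old

# Convert a legacy LEAP-era JSON file into a list of
# sleap.instance.LabeledFrame objects, adjusting MATLAB 1-based indexing.
frames = load_labels_json_old("old_dataset.json", adjust_matlab_indexing=True)
```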
diff --git a/develop/api/sleap.io.pathutils.html b/develop/api/sleap.io.pathutils.html index f1323bb99..824bcb535 100644 --- a/develop/api/sleap.io.pathutils.html +++ b/develop/api/sleap.io.pathutils.html @@ -9,7 +9,7 @@ - sleap.io.pathutils — SLEAP (v1.4.1a1) + sleap.io.pathutils — SLEAP (v1.4.1a2) @@ -34,7 +34,6 @@ - @@ -323,7 +322,7 @@

sleap.io.pathutils

Utilities for working with file paths.

-sleap.io.pathutils.filenames_prefix_change(filenames, old_prefix, new_prefix, missing: Optional[bool] = None, confirm_callback: Optional[Callable] = None)[source]#
+sleap.io.pathutils.filenames_prefix_change(filenames, old_prefix, new_prefix, missing: Optional[bool] = None, confirm_callback: Optional[Callable] = None)[source]#

Finds missing files by changing the initial part of paths.

Parameters
@@ -345,7 +344,7 @@

sleap.io.pathutils

-sleap.io.pathutils.find_changed_subpath(old_path: str, new_path: str) Tuple[str, str][source]#
+sleap.io.pathutils.find_changed_subpath(old_path: str, new_path: str) Tuple[str, str][source]#

Finds the smallest initial section of path that was changed.

Parameters
@@ -362,7 +361,7 @@

sleap.io.pathutils

-sleap.io.pathutils.list_file_missing(filenames)[source]#
+sleap.io.pathutils.list_file_missing(filenames)[source]#

Given a list of filenames, returns a list indicating whether each file exists.
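These helpers underpin SLEAP's repair of moved video paths. A minimal sketch with hypothetical paths; the exact prefix split returned by find_changed_subpath is an assumption here.

```python
from sleap.io.pathutils import find_changed_subpath, list_file_missing

old = "/data/experiments/session1/video.mp4"
new = "/mnt/server/experiments/session1/video.mp4"

# Smallest initial sections of the two paths that differ
# (likely ("/data", "/mnt/server") for the inputs above), which can
# then be applied to other filenames sharing the same prefix.
old_prefix, new_prefix = find_changed_subpath(old, new)

# One entry per filename, per the docstring above.
flags = list_file_missing([old, new])
```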

diff --git a/develop/api/sleap.io.video.html b/develop/api/sleap.io.video.html index 0e9731874..9b72e08c9 100644 --- a/develop/api/sleap.io.video.html +++ b/develop/api/sleap.io.video.html @@ -9,7 +9,7 @@ - sleap.io.video — SLEAP (v1.4.1a1) + sleap.io.video — SLEAP (v1.4.1a2) @@ -34,7 +34,6 @@ - @@ -323,7 +322,7 @@

sleap.io.video

Video reading and writing interfaces for different formats.

-class sleap.io.video.DummyVideo(filename: str = '', height: int = 2000, width: int = 2000, frames: int = 10000, channels: int = 1, dummy: bool = True)[source]#
+class sleap.io.video.DummyVideo(filename: str = '', height: int = 2000, width: int = 2000, frames: int = 10000, channels: int = 1, dummy: bool = True)[source]#

Fake video backend that returns frames with all zeros.

This can be useful when you want to look at labels for a dataset but don’t have access to the real video.

@@ -331,7 +330,7 @@

sleap.io.video

-class sleap.io.video.HDF5Video(filename: Optional[str] = None, dataset: Optional[str] = None, input_format: str = 'channels_last', convert_range: bool = True)[source]#
+class sleap.io.video.HDF5Video(filename: Optional[str] = None, dataset: Optional[str] = None, input_format: str = 'channels_last', convert_range: bool = True)[source]#

Video data stored as 4D datasets in HDF5 files.

Parameters
@@ -363,13 +362,13 @@

sleap.io.video

-check(attribute, value)[source]#
+check(attribute, value)[source]#

Called by attrs to validate the input format.

-close()[source]#
+close()[source]#

Close the HDF5 file object (if it’s open).

@@ -400,7 +399,7 @@

sleap.io.video

-get_frame(idx) numpy.ndarray[source]#
+get_frame(idx) numpy.ndarray[source]#

Get a frame from the underlying HDF5 video data.

Parameters
@@ -435,7 +434,7 @@

sleap.io.video

-matches(other: sleap.io.video.HDF5Video) bool[source]#
+matches(other: sleap.io.video.HDF5Video) bool[source]#

Check if attributes match those of another video.

Parameters
@@ -449,7 +448,7 @@

sleap.io.video

-reset()[source]#
+reset()[source]#

Reloads the video.

@@ -475,7 +474,7 @@

sleap.io.video

-class sleap.io.video.ImgStoreVideo(filename: Optional[str] = None, index_by_original: bool = True)[source]#
+class sleap.io.video.ImgStoreVideo(filename: Optional[str] = None, index_by_original: bool = True)[source]#

Video data stored as an ImgStore dataset.

See: loopbio/imgstore This class is just a lightweight wrapper for reading such datasets as @@ -503,7 +502,7 @@

sleap.io.video

-close()[source]#
+close()[source]#

Close the imgstore if it isn’t already closed.

Returns
@@ -526,7 +525,7 @@

sleap.io.video

-get_frame(frame_number: int) numpy.ndarray[source]#
+get_frame(frame_number: int) numpy.ndarray[source]#

Get a frame from the underlying ImgStore video data.

Parameters
@@ -570,7 +569,7 @@

sleap.io.video

-matches(other)[source]#
+matches(other)[source]#

Check if attributes match.

Parameters
@@ -584,7 +583,7 @@

sleap.io.video

-open()[source]#
+open()[source]#

Open the image store if it isn’t already open.

Returns
@@ -595,7 +594,7 @@

sleap.io.video

-reset()[source]#
+reset()[source]#

Reloads the video.

@@ -609,7 +608,7 @@

sleap.io.video

-class sleap.io.video.MediaVideo(filename: str, grayscale: bool = NOTHING, bgr: bool = True, dataset: str = '', input_format: str = '')[source]#
+class sleap.io.video.MediaVideo(filename: str, grayscale: bool = NOTHING, bgr: bool = True, dataset: str = '', input_format: str = '')[source]#

Video data stored in traditional media formats readable by FFMPEG.

This class provides bare minimum read only interface on top of OpenCV’s VideoCapture class.

@@ -649,7 +648,7 @@

sleap.io.video

-get_frame(idx: int, grayscale: Optional[bool] = None) numpy.ndarray[source]#
+get_frame(idx: int, grayscale: Optional[bool] = None) numpy.ndarray[source]#

See Video.

@@ -661,7 +660,7 @@

sleap.io.video

-matches(other: sleap.io.video.MediaVideo) bool[source]#
+matches(other: sleap.io.video.MediaVideo) bool[source]#

Check if attributes match those of another video.

Parameters
@@ -675,7 +674,7 @@

sleap.io.video

-reset(filename: Optional[str] = None, grayscale: Optional[bool] = None, bgr: Optional[bool] = None)[source]#
+reset(filename: Optional[str] = None, grayscale: Optional[bool] = None, bgr: Optional[bool] = None)[source]#

Reloads the video.

@@ -689,7 +688,7 @@

sleap.io.video

-class sleap.io.video.NumpyVideo(filename: Union[str, numpy.ndarray])[source]#
+class sleap.io.video.NumpyVideo(filename: Union[str, numpy.ndarray])[source]#

Video data stored as Numpy array.

Parameters
@@ -719,7 +718,7 @@

sleap.io.video

-get_frame(idx)[source]#
+get_frame(idx)[source]#

See Video.

@@ -737,7 +736,7 @@

sleap.io.video

-matches(other: sleap.io.video.NumpyVideo) numpy.ndarray[source]#
+matches(other: sleap.io.video.NumpyVideo) numpy.ndarray[source]#

Check if attributes match those of another video.

Parameters
@@ -751,7 +750,7 @@

sleap.io.video

-reset()[source]#
+reset()[source]#

Reload the video.

@@ -765,7 +764,7 @@

sleap.io.video

-class sleap.io.video.SingleImageVideo(filename: Optional[str] = None, filenames: Optional[List[str]] = NOTHING, height_: Optional[int] = None, width_: Optional[int] = None, channels_: Optional[int] = None, grayscale: Optional[bool] = NOTHING)[source]#
+class sleap.io.video.SingleImageVideo(filename: Optional[str] = None, filenames: Optional[List[str]] = NOTHING, height_: Optional[int] = None, width_: Optional[int] = None, channels_: Optional[int] = None, grayscale: Optional[bool] = NOTHING)[source]#

Video wrapper for individual image files.

Parameters
@@ -792,7 +791,7 @@

sleap.io.video

-get_frame(idx: int, grayscale: Optional[bool] = None) numpy.ndarray[source]#
+get_frame(idx: int, grayscale: Optional[bool] = None) numpy.ndarray[source]#

See Video.

@@ -804,7 +803,7 @@

sleap.io.video

-matches(other: sleap.io.video.SingleImageVideo) bool[source]#
+matches(other: sleap.io.video.SingleImageVideo) bool[source]#

Check if attributes match those of another video.

Parameters
@@ -818,7 +817,7 @@

sleap.io.video

-reset(filename: Optional[str] = None, filenames: Optional[List[str]] = None, height_: Optional[int] = None, width_: Optional[int] = None, channels_: Optional[int] = None, grayscale: Optional[bool] = None)[source]#
+reset(filename: Optional[str] = None, filenames: Optional[List[str]] = None, height_: Optional[int] = None, width_: Optional[int] = None, channels_: Optional[int] = None, grayscale: Optional[bool] = None)[source]#

Reloads the video.

@@ -832,7 +831,7 @@

sleap.io.video

-class sleap.io.video.Video(backend: Union[sleap.io.video.HDF5Video, sleap.io.video.NumpyVideo, sleap.io.video.MediaVideo, sleap.io.video.ImgStoreVideo, sleap.io.video.SingleImageVideo, sleap.io.video.DummyVideo])[source]#
+class sleap.io.video.Video(backend: Union[sleap.io.video.HDF5Video, sleap.io.video.NumpyVideo, sleap.io.video.MediaVideo, sleap.io.video.ImgStoreVideo, sleap.io.video.SingleImageVideo, sleap.io.video.DummyVideo])[source]#

The top-level interface to any Video data used by SLEAP.

This class provides a common interface for various supported video data backends. It provides the bare minimum of properties and methods that @@ -873,7 +872,7 @@

sleap.io.video

-static cattr()[source]#
+static cattr()[source]#

Return a cattr converter for serializing/deserializing Video objects.

Returns
@@ -884,7 +883,7 @@

sleap.io.video

-static fixup_path(path: str, raise_error: bool = False, raise_warning: bool = False) str[source]#
+static fixup_path(path: str, raise_error: bool = False, raise_warning: bool = False) str[source]#

Try to locate video if the given path doesn’t work.

Given a path to a video, try to find it. This is an attempt to make the paths serialized for different video objects portable across multiple @@ -916,7 +915,7 @@

sleap.io.video

-classmethod from_filename(filename: str, *args, **kwargs) sleap.io.video.Video[source]#
+classmethod from_filename(filename: str, *args, **kwargs) sleap.io.video.Video[source]#

Create an instance of a video object, auto-detecting the backend.

Parameters
@@ -946,7 +945,7 @@

sleap.io.video

-classmethod from_hdf5(dataset: Union[str, h5py._hl.dataset.Dataset], filename: Optional[Union[str, h5py._hl.files.File]] = None, input_format: str = 'channels_last', convert_range: bool = True) sleap.io.video.Video[source]#
+classmethod from_hdf5(dataset: Union[str, h5py._hl.dataset.Dataset], filename: Optional[Union[str, h5py._hl.files.File]] = None, input_format: str = 'channels_last', convert_range: bool = True) sleap.io.video.Video[source]#

Create an instance of a video object from an HDF5 file and dataset.

This is a helper method that invokes the HDF5Video backend.

@@ -968,13 +967,13 @@

sleap.io.video

-classmethod from_image_filenames(filenames: List[str], height: Optional[int] = None, width: Optional[int] = None, *args, **kwargs) sleap.io.video.Video[source]#
+classmethod from_image_filenames(filenames: List[str], height: Optional[int] = None, width: Optional[int] = None, *args, **kwargs) sleap.io.video.Video[source]#

Create an instance of a SingleImageVideo from individual image file(s).

-classmethod from_media(filename: str, *args, **kwargs) sleap.io.video.Video[source]#
+classmethod from_media(filename: str, *args, **kwargs) sleap.io.video.Video[source]#

Create an instance of a video object from a typical media file.

For example, mp4, avi, or other types readable by FFMPEG.

@@ -993,7 +992,7 @@

sleap.io.video

-classmethod from_numpy(filename: Union[str, numpy.ndarray], *args, **kwargs) sleap.io.video.Video[source]#
+classmethod from_numpy(filename: Union[str, numpy.ndarray], *args, **kwargs) sleap.io.video.Video[source]#

Create an instance of a video object from a numpy array.

Parameters
@@ -1011,7 +1010,7 @@

sleap.io.video

-get_frame(idx: int) numpy.ndarray[source]#
+get_frame(idx: int) numpy.ndarray[source]#

Return a single frame of video from the underlying video data.

Parameters
@@ -1025,7 +1024,7 @@

sleap.io.video

-get_frames(idxs: Union[int, Iterable[int]]) numpy.ndarray[source]#
+get_frames(idxs: Union[int, Iterable[int]]) numpy.ndarray[source]#

Return a collection of video frames from the underlying video data.

Parameters
@@ -1039,7 +1038,7 @@

sleap.io.video

-get_frames_safely(idxs: Iterable[int]) Tuple[List[int], numpy.ndarray][source]#
+get_frames_safely(idxs: Iterable[int]) Tuple[List[int], numpy.ndarray][source]#

Return a list of frame indices and frames which were successfully loaded. idxs: an iterable object that contains the indices of frames.

@@ -1054,7 +1053,7 @@

sleap.io.video

-classmethod imgstore_from_filenames(filenames: list, output_filename: str, *args, **kwargs) sleap.io.video.Video[source]#
+classmethod imgstore_from_filenames(filenames: list, output_filename: str, *args, **kwargs) sleap.io.video.Video[source]#

Create an imgstore from a list of image files.

Parameters
@@ -1095,7 +1094,7 @@

sleap.io.video

-to_hdf5(path: str, dataset: str, frame_numbers: Optional[List[int]] = None, format: str = '', index_by_original: bool = True)[source]#
+to_hdf5(path: str, dataset: str, frame_numbers: Optional[List[int]] = None, format: str = '', index_by_original: bool = True)[source]#

Convert frames from arbitrary video backend to HDF5Video.

Used for building an HDF5 that holds all data needed for training.

@@ -1124,7 +1123,7 @@

sleap.io.video

-to_imgstore(path: str, frame_numbers: Optional[List[int]] = None, format: str = 'png', index_by_original: bool = True) sleap.io.video.Video[source]#
+to_imgstore(path: str, frame_numbers: Optional[List[int]] = None, format: str = 'png', index_by_original: bool = True) sleap.io.video.Video[source]#

Convert frames from arbitrary video backend to ImgStoreVideo.

This should facilitate conversion of any video to a loopbio imgstore.

@@ -1154,7 +1153,7 @@

sleap.io.video

-to_pipeline(batch_size: Optional[int] = None, prefetch: bool = True, frame_indices: Optional[List[int]] = None) sleap.pipelines.Pipeline[source]#
+to_pipeline(batch_size: Optional[int] = None, prefetch: bool = True, frame_indices: Optional[List[int]] = None) sleap.pipelines.Pipeline[source]#

Create a pipeline for reading the video.

Parameters
@@ -1178,7 +1177,7 @@

sleap.io.video

-sleap.io.video.available_video_exts() Tuple[str][source]#
+sleap.io.video.available_video_exts() Tuple[str][source]#

Return tuple of supported video extensions.

Returns
@@ -1189,7 +1188,7 @@

sleap.io.video

-sleap.io.video.load_video(filename: str, grayscale: typing.Optional[bool] = None, dataset=<class 'NoneType'>, channels_first: bool = False, **kwargs) sleap.io.video.Video[source]#
+sleap.io.video.load_video(filename: str, grayscale: typing.Optional[bool] = None, dataset=<class 'NoneType'>, channels_first: bool = False, **kwargs) sleap.io.video.Video[source]#

Open a video from disk.

Parameters
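A minimal sketch of the Video interface described above, assuming a media file exists at the hypothetical path:

```python
from sleap.io.video import Video

# Auto-detect the backend (MediaVideo for an .mp4).
video = Video.from_filename("session1.mp4")

frame = video.get_frame(0)             # single frame as a numpy.ndarray
frames = video.get_frames([0, 5, 10])  # several frames stacked together

# Robust loading: reports which frame indices actually loaded.
idxs, imgs = video.get_frames_safely([0, 5, 10])
```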
diff --git a/develop/api/sleap.io.videowriter.html b/develop/api/sleap.io.videowriter.html index 06aa34f24..a71104fbc 100644 --- a/develop/api/sleap.io.videowriter.html +++ b/develop/api/sleap.io.videowriter.html @@ -9,7 +9,7 @@ - sleap.io.videowriter — SLEAP (v1.4.1a1) + sleap.io.videowriter — SLEAP (v1.4.1a2) @@ -34,7 +34,6 @@ - @@ -329,11 +328,11 @@

sleap.io.videowriter

-class sleap.io.videowriter.VideoWriter(filename, height, width, fps)[source]#
+class sleap.io.videowriter.VideoWriter(filename, height, width, fps)[source]#

Abstract base class for writing avi/mp4 videos.

-static safe_builder(filename, height, width, fps)[source]#
+static safe_builder(filename, height, width, fps)[source]#

Builds VideoWriter based on available dependencies.

@@ -341,13 +340,13 @@

sleap.io.videowriter

-class sleap.io.videowriter.VideoWriterOpenCV(filename, height, width, fps)[source]#
+class sleap.io.videowriter.VideoWriterOpenCV(filename, height, width, fps)[source]#

Writes video using OpenCV as a wrapper for ffmpeg.

-class sleap.io.videowriter.VideoWriterSkvideo(filename, height, width, fps, crf: int = 21, preset: str = 'superfast')[source]#
+class sleap.io.videowriter.VideoWriterSkvideo(filename, height, width, fps, crf: int = 21, preset: str = 'superfast')[source]#

Writes video using scikit-video as a wrapper for ffmpeg.
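A sketch of writing a short clip with whichever backend is available; the add_frame and close calls are assumptions about the writer's frame-appending and finalization methods, and the output path is hypothetical.

```python
import numpy as np
from sleap.io.videowriter import VideoWriter

# Picks VideoWriterSkvideo or VideoWriterOpenCV based on what's installed.
writer = VideoWriter.safe_builder("out.mp4", height=480, width=640, fps=30)

for i in range(30):
    frame = np.full((480, 640, 3), i * 8, dtype=np.uint8)  # fading gray
    writer.add_frame(frame)  # assumed frame-append method

writer.close()  # assumed finalization method
```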

diff --git a/develop/api/sleap.io.visuals.html b/develop/api/sleap.io.visuals.html index 63005ece6..29ce74cc9 100644 --- a/develop/api/sleap.io.visuals.html +++ b/develop/api/sleap.io.visuals.html @@ -9,7 +9,7 @@ - sleap.io.visuals — SLEAP (v1.4.1a1) + sleap.io.visuals — SLEAP (v1.4.1a2) @@ -34,7 +34,6 @@ - @@ -323,7 +322,7 @@

sleap.io.visuals

Module for generating videos with visual annotation overlays.

-class sleap.io.visuals.VideoMarkerThread(in_q: queue.Queue, out_q: queue.Queue, labels: sleap.io.dataset.Labels, video_idx: int, scale: float, show_edges: bool = True, edge_is_wedge: bool = False, marker_size: int = 4, crop_size_xy: Optional[Tuple[int, int]] = None, color_manager: Optional[sleap.gui.color.ColorManager] = None, palette: str = 'standard', distinctly_color: str = 'instances')[source]#
+class sleap.io.visuals.VideoMarkerThread(in_q: queue.Queue, out_q: queue.Queue, labels: sleap.io.dataset.Labels, video_idx: int, scale: float, show_edges: bool = True, edge_is_wedge: bool = False, marker_size: int = 4, crop_size_xy: Optional[Tuple[int, int]] = None, color_manager: Optional[sleap.gui.color.ColorManager] = None, palette: str = 'standard', distinctly_color: str = 'instances')[source]#

Annotate frame images (draw instances).

Parameters
@@ -342,7 +341,7 @@

sleap.io.visuals

-run()[source]#
+run()[source]#

Method representing the thread’s activity.

You may override this method in a subclass. The standard run() method invokes the callable object passed to the object’s constructor as the @@ -354,13 +353,13 @@

sleap.io.visuals

-sleap.io.visuals.img_to_cv(img: numpy.ndarray) numpy.ndarray[source]#
+sleap.io.visuals.img_to_cv(img: numpy.ndarray) numpy.ndarray[source]#

Prepares frame image as needed for OpenCV.

-sleap.io.visuals.reader(out_q: queue.Queue, video: sleap.io.video.Video, frames: List[int], scale: float = 1.0, background: str = 'original')[source]#
+sleap.io.visuals.reader(out_q: queue.Queue, video: sleap.io.video.Video, frames: List[int], scale: float = 1.0, background: str = 'original')[source]#

Read frame images from video and send them into queue.

Parameters
@@ -381,13 +380,13 @@

sleap.io.visuals

-sleap.io.visuals.resize_image(img: numpy.ndarray, scale: float) numpy.ndarray[source]#
+sleap.io.visuals.resize_image(img: numpy.ndarray, scale: float) numpy.ndarray[source]#

Resizes single image with shape (height, width, channels).

-sleap.io.visuals.save_labeled_video(filename: str, labels: sleap.io.dataset.Labels, video: sleap.io.video.Video, frames: List[int], fps: int = 15, scale: float = 1.0, crop_size_xy: Optional[Tuple[int, int]] = None, background: str = 'original', show_edges: bool = True, edge_is_wedge: bool = False, marker_size: int = 4, color_manager: Optional[sleap.gui.color.ColorManager] = None, palette: str = 'standard', distinctly_color: str = 'instances', gui_progress: bool = False)[source]#
+sleap.io.visuals.save_labeled_video(filename: str, labels: sleap.io.dataset.Labels, video: sleap.io.video.Video, frames: List[int], fps: int = 15, scale: float = 1.0, crop_size_xy: Optional[Tuple[int, int]] = None, background: str = 'original', show_edges: bool = True, edge_is_wedge: bool = False, marker_size: int = 4, color_manager: Optional[sleap.gui.color.ColorManager] = None, palette: str = 'standard', distinctly_color: str = 'instances', gui_progress: bool = False)[source]#

Function to generate and save video with annotations.

Parameters
@@ -420,7 +419,7 @@

sleap.io.visuals

-sleap.io.visuals.writer(in_q: queue.Queue, progress_queue: queue.Queue, filename: str, fps: float)[source]#
+sleap.io.visuals.writer(in_q: queue.Queue, progress_queue: queue.Queue, filename: str, fps: float)[source]#

Write annotated images to video.

Image size is determined by the first image received in queue.
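Most users only need save_labeled_video, which wires up the reader, marker, and writer threads above internally. A minimal sketch with hypothetical paths and frame range:

```python
import sleap
from sleap.io.visuals import save_labeled_video

labels = sleap.load_file("labels.v001.slp")

save_labeled_video(
    filename="annotated.mp4",
    labels=labels,
    video=labels.videos[0],
    frames=list(range(300)),  # first 300 frames
    fps=30,
    marker_size=4,
    show_edges=True,
)
```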

diff --git a/develop/api/sleap.message.html b/develop/api/sleap.message.html index 924c4375c..5715f3864 100644 --- a/develop/api/sleap.message.html +++ b/develop/api/sleap.message.html @@ -9,7 +9,7 @@ - sleap.message — SLEAP (v1.4.1a1) + sleap.message — SLEAP (v1.4.1a2) @@ -34,7 +34,6 @@ - @@ -328,23 +327,23 @@

sleap.message

Each message is either a dictionary or a dictionary plus a numpy ndarray.

-class sleap.message.BaseMessageParticipant(address: str = 'tcp://127.0.0.1:9001', context: Optional[zmq.sugar.context.Context] = None, socket: Optional[zmq.sugar.socket.Socket] = None)[source]#
+class sleap.message.BaseMessageParticipant(address: str = 'tcp://127.0.0.1:9001', context: Optional[zmq.sugar.context.Context] = None, socket: Optional[zmq.sugar.socket.Socket] = None)[source]#

Base class for simple Sender and Receiver.

-class sleap.message.Receiver(address: str = 'tcp://127.0.0.1:9001', context: Optional[zmq.sugar.context.Context] = None, socket: Optional[zmq.sugar.socket.Socket] = None, message_queue: List[Any] = NOTHING)[source]#
+class sleap.message.Receiver(address: str = 'tcp://127.0.0.1:9001', context: Optional[zmq.sugar.context.Context] = None, socket: Optional[zmq.sugar.socket.Socket] = None, message_queue: List[Any] = NOTHING)[source]#

Receives messages from corresponding Sender.

-check_message(timeout: int = 10, fresh: bool = False) Any[source]#
+check_message(timeout: int = 10, fresh: bool = False) Any[source]#

Attempt to receive a single message.

-check_messages(timeout: int = 10, times_to_check: int = 10) List[dict][source]#
+check_messages(timeout: int = 10, times_to_check: int = 10) List[dict][source]#

Attempt to receive multiple messages.

This method allows us to keep up with the messages by getting multiple messages that have been sent since the last check. @@ -354,7 +353,7 @@

sleap.message

-push_back_message(message)[source]#
+push_back_message(message)[source]#

Act like we didn’t receive this message yet.

@@ -362,17 +361,17 @@

sleap.message

-class sleap.message.Sender(address: str = 'tcp://127.0.0.1:9001', context: Optional[zmq.sugar.context.Context] = None, socket: Optional[zmq.sugar.socket.Socket] = None)[source]#
+class sleap.message.Sender(address: str = 'tcp://127.0.0.1:9001', context: Optional[zmq.sugar.context.Context] = None, socket: Optional[zmq.sugar.socket.Socket] = None)[source]#

Publishes messages to corresponding Receiver.

-send_array(header_data: dict, A: numpy.ndarray, flags=0, copy=True, track=False)[source]#
+send_array(header_data: dict, A: numpy.ndarray, flags=0, copy=True, track=False)[source]#

Sends dictionary + numpy ndarray.

-send_dict(data: dict)[source]#
+send_dict(data: dict)[source]#

Sends dictionary.
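A minimal sketch of pairing a Sender and Receiver over the default local address; in a real setup the two ends would typically live in different processes, and messages sent before the receiver is polling may not be delivered.

```python
import numpy as np
from sleap.message import Sender, Receiver

sender = Sender(address="tcp://127.0.0.1:9001")
receiver = Receiver(address="tcp://127.0.0.1:9001")

# Dictionary-only message.
sender.send_dict({"event": "epoch_end", "epoch": 3})

# Dictionary + ndarray message (the header travels with the array).
sender.send_array({"event": "frame", "shape": (4, 4)}, np.zeros((4, 4)))

# Poll for anything sent since the last check.
messages = receiver.check_messages(timeout=10)
```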

diff --git a/develop/api/sleap.nn.architectures.common.html b/develop/api/sleap.nn.architectures.common.html index 75d0a649e..bd2e2fe84 100644 --- a/develop/api/sleap.nn.architectures.common.html +++ b/develop/api/sleap.nn.architectures.common.html @@ -9,7 +9,7 @@ - sleap.nn.architectures.common — SLEAP (v1.4.1a1) + sleap.nn.architectures.common — SLEAP (v1.4.1a2) @@ -34,7 +34,6 @@ - @@ -323,7 +322,7 @@

sleap.nn.architectures.common

Common utilities for architecture and model building.

-class sleap.nn.architectures.common.IntermediateFeature(tensor: tensorflow.python.framework.ops.Tensor, stride: int)[source]#
+class sleap.nn.architectures.common.IntermediateFeature(tensor: tensorflow.python.framework.ops.Tensor, stride: int)[source]#

Intermediate feature tensor for use in skip connections.

This class is effectively a named tuple to store the stride (resolution) metadata.

diff --git a/develop/api/sleap.nn.architectures.encoder_decoder.html b/develop/api/sleap.nn.architectures.encoder_decoder.html index f22a15aaf..31e93ad67 100644 --- a/develop/api/sleap.nn.architectures.encoder_decoder.html +++ b/develop/api/sleap.nn.architectures.encoder_decoder.html @@ -9,7 +9,7 @@ - sleap.nn.architectures.encoder_decoder — SLEAP (v1.4.1a1) + sleap.nn.architectures.encoder_decoder — SLEAP (v1.4.1a2) @@ -34,7 +34,6 @@ - @@ -347,7 +346,7 @@

sleap.nn.architectures.encoder_decoder

See the EncoderDecoder base class for requirements for creating new architectures.

-class sleap.nn.architectures.encoder_decoder.DecoderBlock(upsampling_stride: int = 2)[source]#
+class sleap.nn.architectures.encoder_decoder.DecoderBlock(upsampling_stride: int = 2)[source]#

Base class for decoder blocks.

@@ -364,7 +363,7 @@

sleap.nn.architectures.encoder_decoder

-make_block(x: tensorflow.python.framework.ops.Tensor, current_stride: Optional[int], skip_source: Optional[tensorflow.python.framework.ops.Tensor] = None, prefix: str = 'upsample') tensorflow.python.framework.ops.Tensor[source]#
+make_block(x: tensorflow.python.framework.ops.Tensor, current_stride: Optional[int], skip_source: Optional[tensorflow.python.framework.ops.Tensor] = None, prefix: str = 'upsample') tensorflow.python.framework.ops.Tensor[source]#

Instantiate the decoder block from an input tensor.

Parameters
@@ -388,7 +387,7 @@

sleap.nn.architectures.encoder_decoder

-class sleap.nn.architectures.encoder_decoder.EncoderBlock(pool: bool = True, pooling_stride: int = 2)[source]#
+class sleap.nn.architectures.encoder_decoder.EncoderBlock(pool: bool = True, pooling_stride: int = 2)[source]#

Base class for encoder blocks.

@@ -415,7 +414,7 @@

sleap.nn.architectures.encoder_decoder

-make_block(x_in: tensorflow.python.framework.ops.Tensor) tensorflow.python.framework.ops.Tensor[source]#
+make_block(x_in: tensorflow.python.framework.ops.Tensor) tensorflow.python.framework.ops.Tensor[source]#

Instantiate the encoder block from an input tensor.

@@ -423,7 +422,7 @@

sleap.nn.architectures.encoder_decoder

-class sleap.nn.architectures.encoder_decoder.EncoderDecoder(stacks: int = 1)[source]#
+class sleap.nn.architectures.encoder_decoder.EncoderDecoder(stacks: int = 1)[source]#

General encoder-decoder base class.

New architectures that follow the encoder-decoder pattern can be defined by inheriting from this class and implementing the encoder_stack and decoder_stack @@ -469,7 +468,7 @@

sleap.nn.architectures.encoder_decoder

-make_backbone(x_in: tensorflow.python.framework.ops.Tensor, current_stride: int = 1) Union[Tuple[tensorflow.python.framework.ops.Tensor, List[sleap.nn.architectures.common.IntermediateFeature]], Tuple[List[tensorflow.python.framework.ops.Tensor], List[List[sleap.nn.architectures.common.IntermediateFeature]]]][source]#
+make_backbone(x_in: tensorflow.python.framework.ops.Tensor, current_stride: int = 1) Union[Tuple[tensorflow.python.framework.ops.Tensor, List[sleap.nn.architectures.common.IntermediateFeature]], Tuple[List[tensorflow.python.framework.ops.Tensor], List[List[sleap.nn.architectures.common.IntermediateFeature]]]][source]#

Instantiate the entire encoder-decoder backbone.

Parameters
@@ -496,7 +495,7 @@

sleap.nn.architectures.encoder_decoder

-make_decoder(x_in: tensorflow.python.framework.ops.Tensor, current_stride: int, skip_source_features: Optional[Sequence[sleap.nn.architectures.common.IntermediateFeature]] = None, prefix: str = 'dec') Tuple[tensorflow.python.framework.ops.Tensor, List[sleap.nn.architectures.common.IntermediateFeature]][source]#
+make_decoder(x_in: tensorflow.python.framework.ops.Tensor, current_stride: int, skip_source_features: Optional[Sequence[sleap.nn.architectures.common.IntermediateFeature]] = None, prefix: str = 'dec') Tuple[tensorflow.python.framework.ops.Tensor, List[sleap.nn.architectures.common.IntermediateFeature]][source]#

Instantiate the encoder layers defined by the decoder stack configuration.

Parameters
@@ -526,7 +525,7 @@

sleap.nn.architectures.encoder_decoder

-make_encoder(x_in: tensorflow.python.framework.ops.Tensor, current_stride: int, prefix: str = 'enc') Tuple[tensorflow.python.framework.ops.Tensor, List[sleap.nn.architectures.common.IntermediateFeature]][source]#
+make_encoder(x_in: tensorflow.python.framework.ops.Tensor, current_stride: int, prefix: str = 'enc') Tuple[tensorflow.python.framework.ops.Tensor, List[sleap.nn.architectures.common.IntermediateFeature]][source]#

Instantiate the encoder layers defined by the encoder stack configuration.

Parameters
@@ -551,7 +550,7 @@

sleap.nn.architectures.encoder_decoder

-make_stem(x_in: tensorflow.python.framework.ops.Tensor, prefix: str = 'stem') tensorflow.python.framework.ops.Tensor[source]#
+make_stem(x_in: tensorflow.python.framework.ops.Tensor, prefix: str = 'stem') tensorflow.python.framework.ops.Tensor[source]#

Instantiate the stem layers defined by the stem block configuration.

Unlike in the encoder, the stem layers do not get repeated in stacked models.

@@ -600,7 +599,7 @@

sleap.nn.architectures.encoder_decoder

-class sleap.nn.architectures.encoder_decoder.SimpleConvBlock(pool: bool = True, pooling_stride: int = 2, pool_before_convs: bool = False, num_convs: int = 2, filters: int = 32, kernel_size: int = 3, use_bias: bool = True, batch_norm: bool = False, batch_norm_before_activation: bool = True, activation: str = 'relu', block_prefix: str = '')[source]#
+class sleap.nn.architectures.encoder_decoder.SimpleConvBlock(pool: bool = True, pooling_stride: int = 2, pool_before_convs: bool = False, num_convs: int = 2, filters: int = 32, kernel_size: int = 3, use_bias: bool = True, batch_norm: bool = False, batch_norm_before_activation: bool = True, activation: str = 'relu', block_prefix: str = '')[source]#

Flexible block of convolutions and max pooling.

@@ -740,7 +739,7 @@

sleap.nn.architectures.encoder_decoder

-make_block(x_in: tensorflow.python.framework.ops.Tensor, prefix: str = 'conv_block') tensorflow.python.framework.ops.Tensor[source]#
+make_block(x_in: tensorflow.python.framework.ops.Tensor, prefix: str = 'conv_block') tensorflow.python.framework.ops.Tensor[source]#

Create the block from an input tensor.

Parameters
@@ -761,7 +760,7 @@

sleap.nn.architectures.encoder_decoder

-class sleap.nn.architectures.encoder_decoder.SimpleUpsamplingBlock(upsampling_stride: int = 2, transposed_conv: bool = False, transposed_conv_filters: int = 64, transposed_conv_kernel_size: int = 3, transposed_conv_use_bias: bool = True, transposed_conv_batch_norm: bool = True, transposed_conv_batch_norm_before_activation: bool = True, transposed_conv_activation: str = 'relu', interp_method: str = 'bilinear', skip_connection: bool = False, skip_add: bool = False, refine_convs: int = 2, refine_convs_first_filters: Optional[int] = None, refine_convs_filters: int = 64, refine_convs_use_bias: bool = True, refine_convs_kernel_size: int = 3, refine_convs_batch_norm: bool = True, refine_convs_batch_norm_before_activation: bool = True, refine_convs_activation: str = 'relu')[source]#
+class sleap.nn.architectures.encoder_decoder.SimpleUpsamplingBlock(upsampling_stride: int = 2, transposed_conv: bool = False, transposed_conv_filters: int = 64, transposed_conv_kernel_size: int = 3, transposed_conv_use_bias: bool = True, transposed_conv_batch_norm: bool = True, transposed_conv_batch_norm_before_activation: bool = True, transposed_conv_activation: str = 'relu', interp_method: str = 'bilinear', skip_connection: bool = False, skip_add: bool = False, refine_convs: int = 2, refine_convs_first_filters: Optional[int] = None, refine_convs_filters: int = 64, refine_convs_use_bias: bool = True, refine_convs_kernel_size: int = 3, refine_convs_batch_norm: bool = True, refine_convs_batch_norm_before_activation: bool = True, refine_convs_activation: str = 'relu')[source]#

Standard block of upsampling with optional refinement and skip connections.

@@ -1025,7 +1024,7 @@

sleap.nn.architectures.encoder_decoder

-make_block(x: tensorflow.python.framework.ops.Tensor, current_stride: Optional[int] = None, skip_source: Optional[tensorflow.python.framework.ops.Tensor] = None, prefix: str = 'upsample') tensorflow.python.framework.ops.Tensor[source]#
+make_block(x: tensorflow.python.framework.ops.Tensor, current_stride: Optional[int] = None, skip_source: Optional[tensorflow.python.framework.ops.Tensor] = None, prefix: str = 'upsample') tensorflow.python.framework.ops.Tensor[source]#

Instantiate the decoder block from an input tensor.

Parameters
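To make the block API concrete, a sketch that builds a single SimpleConvBlock and checks the output resolution; the shapes in the comments assume the stated defaults (pooling after the convolutions).

```python
import tensorflow as tf
from sleap.nn.architectures.encoder_decoder import SimpleConvBlock

block = SimpleConvBlock(filters=32, num_convs=2, pool=True, pooling_stride=2)

x_in = tf.keras.Input(shape=(256, 256, 1))
x_out = block.make_block(x_in, prefix="enc0")

# Two 3x3 convs at 32 filters, then 2x max pooling:
# (256, 256, 1) -> (128, 128, 32)
print(x_out.shape)
```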
diff --git a/develop/api/sleap.nn.architectures.hourglass.html b/develop/api/sleap.nn.architectures.hourglass.html index 92758b0f1..ff5726aca 100644 --- a/develop/api/sleap.nn.architectures.hourglass.html +++ b/develop/api/sleap.nn.architectures.hourglass.html @@ -9,7 +9,7 @@ - sleap.nn.architectures.hourglass — SLEAP (v1.4.1a1) + sleap.nn.architectures.hourglass — SLEAP (v1.4.1a2) @@ -34,7 +34,6 @@ - @@ -324,7 +323,7 @@

sleap.nn.architectures.hourglass

See the Hourglass class docstring for more information.

-class sleap.nn.architectures.hourglass.DownsamplingBlock(pool: bool = True, pooling_stride: int = 2, filters: int = 256)[source]#
+class sleap.nn.architectures.hourglass.DownsamplingBlock(pool: bool = True, pooling_stride: int = 2, filters: int = 256)[source]#

Convolutional downsampling block of the hourglass.

This block is the simplified convolution-only block described in the Associative Embedding paper, not the original residual @@ -347,7 +346,7 @@

sleap.nn.architectures.hourglass

-make_block(x_in: tensorflow.python.framework.ops.Tensor, prefix: str = 'downsample') tensorflow.python.framework.ops.Tensor[source]#
+make_block(x_in: tensorflow.python.framework.ops.Tensor, prefix: str = 'downsample') tensorflow.python.framework.ops.Tensor[source]#

Create the block from an input tensor.

Parameters
@@ -368,7 +367,7 @@

sleap.nn.architectures.hourglass

-class sleap.nn.architectures.hourglass.Hourglass(down_blocks: int = 4, up_blocks: int = 4, stem_filters: int = 128, stem_stride: int = 4, filters: int = 256, filter_increase: int = 128, interp_method: str = 'nearest', stacks: int = 3)[source]#
+class sleap.nn.architectures.hourglass.Hourglass(down_blocks: int = 4, up_blocks: int = 4, stem_filters: int = 128, stem_stride: int = 4, filters: int = 256, filter_increase: int = 128, interp_method: str = 'nearest', stacks: int = 3)[source]#

Encoder-decoder definition of the (stacked) hourglass network backbone.

This implements the architecture of the Associative Embedding paper, which improves upon the architecture in the original hourglass paper. The primary changes @@ -496,7 +495,7 @@

sleap.nn.architectures.hourglass

-classmethod from_config(config: sleap.nn.config.model.HourglassConfig) sleap.nn.architectures.hourglass.Hourglass[source]#
+classmethod from_config(config: sleap.nn.config.model.HourglassConfig) sleap.nn.architectures.hourglass.Hourglass[source]#

Create a model from a set of configuration parameters.

Parameters
@@ -518,7 +517,7 @@

sleap.nn.architectures.hourglass

-class sleap.nn.architectures.hourglass.StemBlock(pool: bool = True, pooling_stride: int = 4, filters: int = 128, output_filters: int = 256)[source]#
+class sleap.nn.architectures.hourglass.StemBlock(pool: bool = True, pooling_stride: int = 4, filters: int = 128, output_filters: int = 256)[source]#

Stem layers of the hourglass. These are not repeated with multiple stacks.

The default structure of this block is:

Conv(7 x 7 x filters, stride 2) -> Conv(3 x 3 x 2*filters) -> MaxPool(stride 2) @@ -581,7 +580,7 @@

sleap.nn.architectures.hourglass

-make_block(x_in: tensorflow.python.framework.ops.Tensor, prefix: str = 'stem') tensorflow.python.framework.ops.Tensor[source]#
+make_block(x_in: tensorflow.python.framework.ops.Tensor, prefix: str = 'stem') tensorflow.python.framework.ops.Tensor[source]#

Create the block from an input tensor.

Parameters
@@ -602,7 +601,7 @@

sleap.nn.architectures.hourglass

-class sleap.nn.architectures.hourglass.UpsamplingBlock(upsampling_stride: int = 2, filters: int = 256, interp_method: str = 'bilinear')[source]#
+class sleap.nn.architectures.hourglass.UpsamplingBlock(upsampling_stride: int = 2, filters: int = 256, interp_method: str = 'bilinear')[source]#

Upsampling block that integrates skip connections with refinement.

This block implements both the intermediate block after the skip connection from the downsampling path, as well as the upsampling block from the main network backbone @@ -639,7 +638,7 @@

sleap.nn.architectures.hourglass

-make_block(x: tensorflow.python.framework.ops.Tensor, current_stride: Optional[int] = None, skip_source: Optional[sleap.nn.architectures.common.IntermediateFeature] = None, prefix: str = 'upsample') tensorflow.python.framework.ops.Tensor[source]#
+make_block(x: tensorflow.python.framework.ops.Tensor, current_stride: Optional[int] = None, skip_source: Optional[sleap.nn.architectures.common.IntermediateFeature] = None, prefix: str = 'upsample') tensorflow.python.framework.ops.Tensor[source]#

Instantiate the upsampling block from an input tensor.

Parameters
@@ -663,7 +662,7 @@

sleap.nn.architectures.hourglass

-sleap.nn.architectures.hourglass.conv(x: tensorflow.python.framework.ops.Tensor, filters: int, kernel_size: int = 3, stride: int = 1, prefix: str = 'conv') tensorflow.python.framework.ops.Tensor[source]#
+sleap.nn.architectures.hourglass.conv(x: tensorflow.python.framework.ops.Tensor, filters: int, kernel_size: int = 3, stride: int = 1, prefix: str = 'conv') tensorflow.python.framework.ops.Tensor[source]#

Apply basic convolution with ReLU and batch normalization.

Parameters
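A sketch of instantiating the hourglass backbone directly; with stacks greater than 1, the outputs documented above come back as lists, one entry per stack.

```python
import tensorflow as tf
from sleap.nn.architectures.hourglass import Hourglass

backbone = Hourglass(stem_stride=4, filters=256, stacks=3)

x_in = tf.keras.Input(shape=(256, 256, 1))
outputs, intermediate_feats = backbone.make_backbone(x_in)

# With stacks=3, `outputs` is a list with one tensor per hourglass stack.
```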
diff --git a/develop/api/sleap.nn.architectures.hrnet.html b/develop/api/sleap.nn.architectures.hrnet.html index 18b712c5d..2acb8b123 100644 --- a/develop/api/sleap.nn.architectures.hrnet.html +++ b/develop/api/sleap.nn.architectures.hrnet.html @@ -9,7 +9,7 @@ - sleap.nn.architectures.hrnet — SLEAP (v1.4.1a1) + sleap.nn.architectures.hrnet — SLEAP (v1.4.1a2) @@ -34,7 +34,6 @@ - @@ -328,7 +327,7 @@

sleap.nn.architectures.hrnet

https://arxiv.org/pdf/1908.10357.pdf

-class sleap.nn.architectures.hrnet.HigherHRNet(C: int = 18, initial_downsampling_steps: int = 1, n_deconv_modules: int = 1, bottleneck: bool = False, deconv_filters: int = 256, bilinear_upsampling: bool = False, stem_filters: int = 64)[source]#
+class sleap.nn.architectures.hrnet.HigherHRNet(C: int = 18, initial_downsampling_steps: int = 1, n_deconv_modules: int = 1, bottleneck: bool = False, deconv_filters: int = 256, bilinear_upsampling: bool = False, stem_filters: int = 64)[source]#

HigherHRNet backbone.

@@ -412,7 +411,7 @@

sleap.nn.architectures.hrnet

-output(x_in, n_output_channels)[source]#
+output(x_in, n_output_channels)[source]#

Builds the layers for this backbone and return the output tensor.

Parameters
@@ -444,19 +443,19 @@

sleap.nn.architectures.hrnet

-sleap.nn.architectures.hrnet.adjust_prefix(name_prefix)[source]#
+sleap.nn.architectures.hrnet.adjust_prefix(name_prefix)[source]#

Adds a delimiter if the prefix is not empty.

-sleap.nn.architectures.hrnet.bottleneck_block(x_in, filters, expansion_rate=4, name_prefix=None)[source]#
+sleap.nn.architectures.hrnet.bottleneck_block(x_in, filters, expansion_rate=4, name_prefix=None)[source]#

Creates a convolutional block with bottleneck.

-sleap.nn.architectures.hrnet.simple_block(x_in, filters, stride=1, downsampling_layer=None, name_prefix=None)[source]#
+sleap.nn.architectures.hrnet.simple_block(x_in, filters, stride=1, downsampling_layer=None, name_prefix=None)[source]#

Creates a basic residual convolutional block.

diff --git a/develop/api/sleap.nn.architectures.leap.html b/develop/api/sleap.nn.architectures.leap.html index 9721a1a45..a5f4ded31 100644 --- a/develop/api/sleap.nn.architectures.leap.html +++ b/develop/api/sleap.nn.architectures.leap.html @@ -9,7 +9,7 @@ - sleap.nn.architectures.leap — SLEAP (v1.4.1a1) + sleap.nn.architectures.leap — SLEAP (v1.4.1a2) @@ -34,7 +34,6 @@ - @@ -324,7 +323,7 @@

sleap.nn.architectures.leap

See the LeapCNN class docstring for more information.

-class sleap.nn.architectures.leap.LeapCNN(stacks: int = 1, filters: int = 64, filters_rate: float = 2, down_blocks: int = 3, down_convs_per_block: int = 3, up_blocks: int = 3, up_interpolate: bool = False, up_convs_per_block: int = 2)[source]#
+class sleap.nn.architectures.leap.LeapCNN(stacks: int = 1, filters: int = 64, filters_rate: float = 2, down_blocks: int = 3, down_convs_per_block: int = 3, up_blocks: int = 3, up_interpolate: bool = False, up_convs_per_block: int = 2)[source]#

LEAP CNN from “Fast animal pose estimation using deep neural networks” (2019).

This is a simple encoder-decoder style architecture without skip connections.

This implementation is generalized from the original paper (Pereira et al., 2019) and code.

@@ -435,7 +434,7 @@

sleap.nn.architectures.leap

-classmethod from_config(config: sleap.nn.config.model.LEAPConfig) sleap.nn.architectures.leap.LeapCNN[source]#
+classmethod from_config(config: sleap.nn.config.model.LEAPConfig) sleap.nn.architectures.leap.LeapCNN[source]#

Create a model from a set of configuration parameters.

Parameters
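A sketch of building the backbone from its configuration class, assuming LEAPConfig's defaults are usable as-is (an assumption, since the config fields are not shown here):

```python
import tensorflow as tf
from sleap.nn.config.model import LEAPConfig
from sleap.nn.architectures.leap import LeapCNN

backbone = LeapCNN.from_config(LEAPConfig())  # default config assumed usable

x_in = tf.keras.Input(shape=(192, 192, 1))
x_out, intermediate_feats = backbone.make_backbone(x_in)
```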
diff --git a/develop/api/sleap.nn.architectures.pretrained_encoders.html b/develop/api/sleap.nn.architectures.pretrained_encoders.html index eeaad0e5c..575a20ec0 100644 --- a/develop/api/sleap.nn.architectures.pretrained_encoders.html +++ b/develop/api/sleap.nn.architectures.pretrained_encoders.html @@ -9,7 +9,7 @@ - sleap.nn.architectures.pretrained_encoders — SLEAP (v1.4.1a1) + sleap.nn.architectures.pretrained_encoders — SLEAP (v1.4.1a2) @@ -34,7 +34,6 @@ - @@ -355,7 +354,7 @@

sleap.nn.architectures.pretrained_encoders

THE SOFTWARE.

-class sleap.nn.architectures.pretrained_encoders.UnetPretrainedEncoder(encoder: str = 'efficientnetb0', decoder_filters: Tuple[int] = (256, 256, 128, 128), pretrained: bool = True)[source]#
+class sleap.nn.architectures.pretrained_encoders.UnetPretrainedEncoder(encoder: str = 'efficientnetb0', decoder_filters: Tuple[int] = (256, 256, 128, 128), pretrained: bool = True)[source]#

UNet with an (optionally) pretrained encoder model.

This backbone enables the use of a variety of popular neural network architectures for feature extraction in the backbone. These can be used with ImageNet-pretrained @@ -428,7 +427,7 @@

sleap.nn.architectures.pretrained_encoders

-classmethod from_config(config: sleap.nn.config.model.PretrainedEncoderConfig) sleap.nn.architectures.pretrained_encoders.UnetPretrainedEncoder[source]#
+classmethod from_config(config: sleap.nn.config.model.PretrainedEncoderConfig) sleap.nn.architectures.pretrained_encoders.UnetPretrainedEncoder[source]#

Create the backbone from a configuration.

Parameters
@@ -443,7 +442,7 @@

sleap.nn.architectures.pretrained_encoders

-make_backbone(x_in: tensorflow.python.framework.ops.Tensor) Tuple[tensorflow.python.framework.ops.Tensor, List[sleap.nn.architectures.common.IntermediateFeature]][source]#
+make_backbone(x_in: tensorflow.python.framework.ops.Tensor) Tuple[tensorflow.python.framework.ops.Tensor, List[sleap.nn.architectures.common.IntermediateFeature]][source]#

Create the backbone and return the output tensors for building a model.

Parameters
diff --git a/develop/api/sleap.nn.architectures.resnet.html b/develop/api/sleap.nn.architectures.resnet.html index 727ddc6dd..277dbba1e 100644 --- a/develop/api/sleap.nn.architectures.resnet.html +++ b/develop/api/sleap.nn.architectures.resnet.html @@ -9,7 +9,7 @@ - sleap.nn.architectures.resnet — SLEAP (v1.4.1a1) + sleap.nn.architectures.resnet — SLEAP (v1.4.1a2) @@ -34,7 +34,6 @@ - @@ -328,7 +327,7 @@

sleap.nn.architectures.resnet

tensorflow/tensorflow

-class sleap.nn.architectures.resnet.ResNet101(upsampling_stack: Optional[sleap.nn.architectures.upsampling.UpsamplingStack] = None, features_output_stride: int = 16, pretrained: bool = True, frozen: bool = False, skip_connections: bool = False, model_name=NOTHING, stack_configs=NOTHING)[source]#
+class sleap.nn.architectures.resnet.ResNet101(upsampling_stack: Optional[sleap.nn.architectures.upsampling.UpsamplingStack] = None, features_output_stride: int = 16, pretrained: bool = True, frozen: bool = False, skip_connections: bool = False, model_name=NOTHING, stack_configs=NOTHING)[source]#

ResNet101 backbone.

This model has a stack of 3, 4, 23 and 3 residual blocks.

@@ -407,7 +406,7 @@

sleap.nn.architectures.resnet

-class sleap.nn.architectures.resnet.ResNet152(upsampling_stack: Optional[sleap.nn.architectures.upsampling.UpsamplingStack] = None, features_output_stride: int = 16, pretrained: bool = True, frozen: bool = False, skip_connections: bool = False, model_name=NOTHING, stack_configs=NOTHING)[source]#
+class sleap.nn.architectures.resnet.ResNet152(upsampling_stack: Optional[sleap.nn.architectures.upsampling.UpsamplingStack] = None, features_output_stride: int = 16, pretrained: bool = True, frozen: bool = False, skip_connections: bool = False, model_name=NOTHING, stack_configs=NOTHING)[source]#

ResNet152 backbone.

This model has a stack of 3, 8, 36 and 3 residual blocks.

@@ -486,7 +485,7 @@

sleap.nn.architectures.resnet

-class sleap.nn.architectures.resnet.ResNet50(upsampling_stack: Optional[sleap.nn.architectures.upsampling.UpsamplingStack] = None, features_output_stride: int = 16, pretrained: bool = True, frozen: bool = False, skip_connections: bool = False, model_name=NOTHING, stack_configs=NOTHING)[source]#
+class sleap.nn.architectures.resnet.ResNet50(upsampling_stack: Optional[sleap.nn.architectures.upsampling.UpsamplingStack] = None, features_output_stride: int = 16, pretrained: bool = True, frozen: bool = False, skip_connections: bool = False, model_name=NOTHING, stack_configs=NOTHING)[source]#

ResNet50 backbone.

This model has a stack of 3, 4, 6 and 3 residual blocks.
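A rough usage sketch for the ResNet family, built from the documented constructor arguments (a transfer-learning style setup; the argument choices here are illustrative, not prescribed):

```python
# Sketch: a ResNet50 backbone with frozen ImageNet-pretrained weights.
from sleap.nn.architectures.resnet import ResNet50

backbone = ResNet50(
    features_output_stride=16,  # stride of the encoder feature output
    pretrained=True,            # load ImageNet-pretrained weights
    frozen=True,                # keep encoder weights fixed during training
    skip_connections=False,
)
```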

@@ -565,7 +564,7 @@

sleap.nn.architectures.resnet

-class sleap.nn.architectures.resnet.ResNetv1(model_name: str, stack_configs: Sequence[Mapping[str, Any]], upsampling_stack: Optional[sleap.nn.architectures.upsampling.UpsamplingStack] = None, features_output_stride: int = 16, pretrained: bool = True, frozen: bool = False, skip_connections: bool = False)[source]#
+class sleap.nn.architectures.resnet.ResNetv1(model_name: str, stack_configs: Sequence[Mapping[str, Any]], upsampling_stack: Optional[sleap.nn.architectures.upsampling.UpsamplingStack] = None, features_output_stride: int = 16, pretrained: bool = True, frozen: bool = False, skip_connections: bool = False)[source]#

ResNetv1 backbone with configurable output stride and pretrained weights.

@@ -675,7 +674,7 @@

sleap.nn.architectures.resnet

-classmethod from_config(config: sleap.nn.config.model.ResNetConfig) sleap.nn.architectures.resnet.ResNetv1[source]#
+classmethod from_config(config: sleap.nn.config.model.ResNetConfig) sleap.nn.architectures.resnet.ResNetv1[source]#

Create a model from a set of configuration parameters.

Parameters
@@ -689,7 +688,7 @@

sleap.nn.architectures.resnet

-make_backbone(x_in: tensorflow.python.framework.ops.Tensor) Tuple[tensorflow.python.framework.ops.Tensor, List[sleap.nn.architectures.common.IntermediateFeature]][source]#
+make_backbone(x_in: tensorflow.python.framework.ops.Tensor) Tuple[tensorflow.python.framework.ops.Tensor, List[sleap.nn.architectures.common.IntermediateFeature]][source]#

Create the full backbone starting with the specified input tensor.

Parameters
@@ -728,7 +727,7 @@

sleap.nn.architectures.resnet

-sleap.nn.architectures.resnet.block_v1(x: tensorflow.python.framework.ops.Tensor, filters: int, kernel_size: int = 3, stride: int = 1, dilation_rate: int = 1, conv_shortcut: bool = True, name: Optional[str] = None) tensorflow.python.framework.ops.Tensor[source]#
+sleap.nn.architectures.resnet.block_v1(x: tensorflow.python.framework.ops.Tensor, filters: int, kernel_size: int = 3, stride: int = 1, dilation_rate: int = 1, conv_shortcut: bool = True, name: Optional[str] = None) tensorflow.python.framework.ops.Tensor[source]#

Create a ResNetv1 residual block.

Parameters
@@ -751,7 +750,7 @@

sleap.nn.architectures.resnet

-sleap.nn.architectures.resnet.imagenet_preproc_v1(X: tensorflow.python.framework.ops.Tensor) tensorflow.python.framework.ops.Tensor[source]#
+sleap.nn.architectures.resnet.imagenet_preproc_v1(X: tensorflow.python.framework.ops.Tensor) tensorflow.python.framework.ops.Tensor[source]#

Preprocess images according to ImageNet/caffe/channels_last.

Parameters
@@ -769,7 +768,7 @@

sleap.nn.architectures.resnet

-sleap.nn.architectures.resnet.make_backbone_fn(stack_fn: Callable[[tensorflow.python.framework.ops.Tensor, Any], Tuple[tensorflow.python.framework.ops.Tensor, List[sleap.nn.architectures.common.IntermediateFeature]]], stack_configs: Sequence[Mapping[str, Any]], output_stride: int) Callable[[tensorflow.python.framework.ops.Tensor, int], tensorflow.python.framework.ops.Tensor][source]#
+sleap.nn.architectures.resnet.make_backbone_fn(stack_fn: Callable[[tensorflow.python.framework.ops.Tensor, Any], Tuple[tensorflow.python.framework.ops.Tensor, List[sleap.nn.architectures.common.IntermediateFeature]]], stack_configs: Sequence[Mapping[str, Any]], output_stride: int) Callable[[tensorflow.python.framework.ops.Tensor, int], tensorflow.python.framework.ops.Tensor][source]#

Return a function that creates a block stack with output stride adjustments.

Parameters
@@ -805,7 +804,7 @@

sleap.nn.architectures.resnet

-sleap.nn.architectures.resnet.make_resnet_model(backbone_fn: Callable[[tensorflow.python.framework.ops.Tensor, int], tensorflow.python.framework.ops.Tensor], preact: bool = False, use_bias: bool = True, model_name: str = 'resnet', weights: str = 'imagenet', input_tensor: Optional[tensorflow.python.framework.ops.Tensor] = None, input_shape: Optional[Tuple[int]] = None, stem_filters: int = 64, stem_stride1: int = 2, stem_stride2: int = 2) Tuple[keras.engine.training.Model, List[sleap.nn.architectures.common.IntermediateFeature]][source]#
+sleap.nn.architectures.resnet.make_resnet_model(backbone_fn: Callable[[tensorflow.python.framework.ops.Tensor, int], tensorflow.python.framework.ops.Tensor], preact: bool = False, use_bias: bool = True, model_name: str = 'resnet', weights: str = 'imagenet', input_tensor: Optional[tensorflow.python.framework.ops.Tensor] = None, input_shape: Optional[Tuple[int]] = None, stem_filters: int = 64, stem_stride1: int = 2, stem_stride2: int = 2) Tuple[keras.engine.training.Model, List[sleap.nn.architectures.common.IntermediateFeature]][source]#

Instantiate the ResNet, ResNetV2 (TODO), and ResNeXt (TODO) architecture.

Optionally loads weights pre-trained on ImageNet.

@@ -845,7 +844,7 @@

sleap.nn.architectures.resnet

-sleap.nn.architectures.resnet.stack_v1(x: tensorflow.python.framework.ops.Tensor, filters: int, blocks: int, stride1: int = 2, dilation_rate: int = 1, name: Optional[str] = None) tensorflow.python.framework.ops.Tensor[source]#
+sleap.nn.architectures.resnet.stack_v1(x: tensorflow.python.framework.ops.Tensor, filters: int, blocks: int, stride1: int = 2, dilation_rate: int = 1, name: Optional[str] = None) tensorflow.python.framework.ops.Tensor[source]#

Create a set of stacked ResNetv1 residual blocks.

Parameters
@@ -866,7 +865,7 @@

sleap.nn.architectures.resnet

-sleap.nn.architectures.resnet.tile_channels(X: tensorflow.python.framework.ops.Tensor) tensorflow.python.framework.ops.Tensor[source]#
+sleap.nn.architectures.resnet.tile_channels(X: tensorflow.python.framework.ops.Tensor) tensorflow.python.framework.ops.Tensor[source]#

Tile single channel to 3 channel tensor.

This function is useful for replicating grayscale single-channel images into 3-channel monochrome RGB images.
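The underlying operation is a channel tile; a minimal TensorFlow sketch of the idea (this illustrates the transform, not the exact source of the function):

```python
# Illustration of channel tiling with tf.tile.
import tensorflow as tf

gray = tf.random.uniform([1, 256, 256, 1])  # batch of single-channel images
rgb = tf.tile(gray, [1, 1, 1, 3])           # replicate channel -> monochrome RGB
assert rgb.shape == (1, 256, 256, 3)
```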

diff --git a/develop/api/sleap.nn.architectures.unet.html b/develop/api/sleap.nn.architectures.unet.html index 0d71f2791..146a40527 100644 --- a/develop/api/sleap.nn.architectures.unet.html +++ b/develop/api/sleap.nn.architectures.unet.html @@ -9,7 +9,7 @@ - sleap.nn.architectures.unet — SLEAP (v1.4.1a1) + sleap.nn.architectures.unet — SLEAP (v1.4.1a2) @@ -34,7 +34,6 @@ - @@ -324,7 +323,7 @@

sleap.nn.architectures.unet

See the UNet class docstring for more information.

-class sleap.nn.architectures.unet.PoolingBlock(pool: bool = True, pooling_stride: int = 2)[source]#
+class sleap.nn.architectures.unet.PoolingBlock(pool: bool = True, pooling_stride: int = 2)[source]#

Pooling-only encoder block.

Used to compensate for UNet having a skip source before the pooling, so the blocks need to end with a conv, not the pooling layer. This is added to the end of the @@ -355,7 +354,7 @@

sleap.nn.architectures.unet

-make_block(x_in: tensorflow.python.framework.ops.Tensor, prefix: str = 'conv_block') tensorflow.python.framework.ops.Tensor[source]#
+make_block(x_in: tensorflow.python.framework.ops.Tensor, prefix: str = 'conv_block') tensorflow.python.framework.ops.Tensor[source]#

Instantiate the encoder block from an input tensor.

@@ -363,7 +362,7 @@

sleap.nn.architectures.unet

-class sleap.nn.architectures.unet.UNet(stacks: int = 1, filters: int = 64, filters_rate: float = 2, kernel_size: int = 3, stem_kernel_size: int = 3, convs_per_block: int = 2, stem_blocks: int = 0, down_blocks: int = 4, middle_block: bool = True, up_blocks: int = 4, up_interpolate: bool = False, block_contraction: bool = False)[source]#
+class sleap.nn.architectures.unet.UNet(stacks: int = 1, filters: int = 64, filters_rate: float = 2, kernel_size: int = 3, stem_kernel_size: int = 3, convs_per_block: int = 2, stem_blocks: int = 0, down_blocks: int = 4, middle_block: bool = True, up_blocks: int = 4, up_interpolate: bool = False, block_contraction: bool = False)[source]#

UNet encoder-decoder architecture for fully convolutional networks.

This is the canonical architecture described in Ronneberger et al., 2015.

The default configuration with 4 down/up blocks and 64 base filters has ~34.5M @@ -527,7 +526,7 @@
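A minimal construction sketch using the documented defaults from the signature above (backbone only, no output heads attached):

```python
# Sketch using the documented UNet defaults.
from sleap.nn.architectures.unet import UNet

backbone = UNet(
    filters=64,           # base filters; doubles each block with filters_rate=2
    filters_rate=2,
    down_blocks=4,        # 4 pooling steps -> maximum stride of 16
    up_blocks=4,          # decode back up to the input stride
    convs_per_block=2,
    up_interpolate=False, # transposed convs rather than bilinear upsampling
)
```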

sleap.nn.architectures.unet

-classmethod from_config(config: sleap.nn.config.model.UNetConfig) sleap.nn.architectures.unet.UNet[source]#
+classmethod from_config(config: sleap.nn.config.model.UNetConfig) sleap.nn.architectures.unet.UNet[source]#

Create a model from a set of configuration parameters.

Parameters
diff --git a/develop/api/sleap.nn.architectures.upsampling.html b/develop/api/sleap.nn.architectures.upsampling.html index 4e76d375a..9cbb2d641 100644 --- a/develop/api/sleap.nn.architectures.upsampling.html +++ b/develop/api/sleap.nn.architectures.upsampling.html @@ -9,7 +9,7 @@ - sleap.nn.architectures.upsampling — SLEAP (v1.4.1a1) + sleap.nn.architectures.upsampling — SLEAP (v1.4.1a2) @@ -34,7 +34,6 @@ - @@ -334,7 +333,7 @@

sleap.nn.architectures.upsampling

simpler patterns like shallow or direct upsampling (e.g., DLC).

-class sleap.nn.architectures.upsampling.UpsamplingStack(output_stride: int, upsampling_stride: int = 2, transposed_conv: bool = True, transposed_conv_filters: int = 64, transposed_conv_filters_rate: float = 1, transposed_conv_kernel_size: int = 4, transposed_conv_batchnorm: bool = True, make_skip_connection: bool = True, skip_add: bool = False, refine_convs: int = 2, refine_convs_filters: int = 64, refine_convs_filters_rate: float = 1, refine_convs_batchnorm: bool = True)[source]#
+class sleap.nn.architectures.upsampling.UpsamplingStack(output_stride: int, upsampling_stride: int = 2, transposed_conv: bool = True, transposed_conv_filters: int = 64, transposed_conv_filters_rate: float = 1, transposed_conv_kernel_size: int = 4, transposed_conv_batchnorm: bool = True, make_skip_connection: bool = True, skip_add: bool = False, refine_convs: int = 2, refine_convs_filters: int = 64, refine_convs_filters_rate: float = 1, refine_convs_batchnorm: bool = True)[source]#

Standard stack of upsampling layers with refinement and skip connections.
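A sketch of a decoder built from the documented arguments, assuming stride-16 encoder features that should be upsampled to stride 4 (the stride values are illustrative):

```python
# Sketch: an upsampling stack that decodes stride-16 features to stride 4.
from sleap.nn.architectures.upsampling import UpsamplingStack

decoder = UpsamplingStack(
    output_stride=4,            # stop upsampling once this stride is reached
    upsampling_stride=2,        # 2x per step: 16 -> 8 -> 4
    transposed_conv=True,       # learnable upsampling
    make_skip_connection=True,  # fuse encoder features at matching strides
    refine_convs=2,             # conv refinement after each upsampling step
)
```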

@@ -510,7 +509,7 @@

sleap.nn.architectures.upsampling

-classmethod from_config(config: sleap.nn.config.model.UpsamplingConfig, output_stride: int) sleap.nn.architectures.upsampling.UpsamplingStack[source]#
+classmethod from_config(config: sleap.nn.config.model.UpsamplingConfig, output_stride: int) sleap.nn.architectures.upsampling.UpsamplingStack[source]#

Create a model from a set of configuration parameters.

Parameters
@@ -527,7 +526,7 @@

sleap.nn.architectures.upsampling

-make_stack(x: tensorflow.python.framework.ops.Tensor, current_stride: int, skip_sources: Optional[Sequence[sleap.nn.architectures.common.IntermediateFeature]] = None) Tuple[tensorflow.python.framework.ops.Tensor, List[sleap.nn.architectures.common.IntermediateFeature]][source]#
+make_stack(x: tensorflow.python.framework.ops.Tensor, current_stride: int, skip_sources: Optional[Sequence[sleap.nn.architectures.common.IntermediateFeature]] = None) Tuple[tensorflow.python.framework.ops.Tensor, List[sleap.nn.architectures.common.IntermediateFeature]][source]#

Create the stack of upsampling layers.

Parameters
diff --git a/develop/api/sleap.nn.callbacks.html b/develop/api/sleap.nn.callbacks.html index 48b533909..fe11d4eab 100644 --- a/develop/api/sleap.nn.callbacks.html +++ b/develop/api/sleap.nn.callbacks.html @@ -9,7 +9,7 @@ - sleap.nn.callbacks — SLEAP (v1.4.1a1) + sleap.nn.callbacks — SLEAP (v1.4.1a2) @@ -34,7 +34,6 @@ - @@ -323,7 +322,7 @@

sleap.nn.callbacks

Training-related tf.keras callbacks.

-class sleap.nn.callbacks.MatplotlibSaver(save_folder: str, plot_fn: Callable[[], matplotlib.figure.Figure], prefix: Optional[str] = None)[source]#
+class sleap.nn.callbacks.MatplotlibSaver(save_folder: str, plot_fn: Callable[[], matplotlib.figure.Figure], prefix: Optional[str] = None)[source]#

Callback for saving images rendered with matplotlib during training.

This is useful for saving visualizations of the training to disk. It will be called at the end of each epoch.
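A hedged usage sketch based on the documented constructor; `make_figure` below is a placeholder for any zero-argument callable returning a matplotlib `Figure`:

```python
# Sketch: saving a diagnostic figure at the end of each epoch.
import matplotlib.pyplot as plt
from sleap.nn.callbacks import MatplotlibSaver

def make_figure():
    # Placeholder plot; in practice this would render training visualizations.
    fig, ax = plt.subplots()
    ax.set_title("training viz")
    return fig

saver = MatplotlibSaver(save_folder="viz", plot_fn=make_figure, prefix="epoch")
# Pass `saver` in the callbacks list of model.fit(...).
```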

@@ -360,7 +359,7 @@

sleap.nn.callbacks

-on_epoch_end(epoch, logs=None)[source]#
+on_epoch_end(epoch, logs=None)[source]#

Save figure at the end of each epoch.

@@ -368,7 +367,7 @@

sleap.nn.callbacks

-class sleap.nn.callbacks.ModelCheckpointOnEvent(filepath: str, event: str = 'train_end')[source]#
+class sleap.nn.callbacks.ModelCheckpointOnEvent(filepath: str, event: str = 'train_end')[source]#

Callback for model checkpointing on a fixed event.

@@ -384,19 +383,19 @@

sleap.nn.callbacks

-on_epoch_end(epoch, logs=None)[source]#
+on_epoch_end(epoch, logs=None)[source]#

Called at the end of each epoch.

-on_train_begin(logs=None)[source]#
+on_train_begin(logs=None)[source]#

Called at the start of training.

-on_train_end(logs=None)[source]#
+on_train_end(logs=None)[source]#

Called at the end of training.

@@ -404,22 +403,22 @@

sleap.nn.callbacks

-class sleap.nn.callbacks.ProgressReporterZMQ(address='tcp://127.0.0.1:9001', what='not_set')[source]#
+class sleap.nn.callbacks.ProgressReporterZMQ(address='tcp://127.0.0.1:9001', what='not_set')[source]#
-on_batch_begin(batch, logs=None)[source]#
+on_batch_begin(batch, logs=None)[source]#

A backwards compatibility alias for on_train_batch_begin.

-on_batch_end(batch, logs=None)[source]#
+on_batch_end(batch, logs=None)[source]#

A backwards compatibility alias for on_train_batch_end.

-on_epoch_begin(epoch, logs=None)[source]#
+on_epoch_begin(epoch, logs=None)[source]#

Called at the start of an epoch. Subclasses should override for any actions to run. This function should only be called during train mode. @@ -435,7 +434,7 @@

sleap.nn.callbacks

-on_epoch_end(epoch, logs=None)[source]#
+on_epoch_end(epoch, logs=None)[source]#

Called at the end of an epoch. Subclasses should override for any actions to run. This function should only be called during train mode. @@ -452,7 +451,7 @@

sleap.nn.callbacks

-on_train_begin(logs=None)[source]#
+on_train_begin(logs=None)[source]#

Called at the beginning of training. Subclasses should override for any actions to run. # Arguments

@@ -466,7 +465,7 @@

sleap.nn.callbacks

-on_train_end(logs=None)[source]#
+on_train_end(logs=None)[source]#

Called at the end of training. Subclasses should override for any actions to run. # Arguments

@@ -482,7 +481,7 @@

sleap.nn.callbacks

-class sleap.nn.callbacks.TensorBoardMatplotlibWriter(log_dir: str, plot_fn: Callable[[], matplotlib.figure.Figure], tag: str = 'viz')[source]#
+class sleap.nn.callbacks.TensorBoardMatplotlibWriter(log_dir: str, plot_fn: Callable[[], matplotlib.figure.Figure], tag: str = 'viz')[source]#

Callback for writing image summaries with visualizations during training.

@@ -504,7 +503,7 @@

sleap.nn.callbacks

-on_epoch_end(epoch, logs=None)[source]#
+on_epoch_end(epoch, logs=None)[source]#

Called at the end of each epoch.

@@ -512,16 +511,16 @@

sleap.nn.callbacks

-class sleap.nn.callbacks.TrainingControllerZMQ(address='tcp://127.0.0.1:9000', topic='', poll_timeout=10)[source]#
+class sleap.nn.callbacks.TrainingControllerZMQ(address='tcp://127.0.0.1:9000', topic='', poll_timeout=10)[source]#
-on_batch_end(batch, logs=None)[source]#
+on_batch_end(batch, logs=None)[source]#

Called at the end of a training batch.

-set_lr(lr)[source]#
+set_lr(lr)[source]#

Adjust the model learning rate.

This is based on the implementation used in the native learning rate scheduling callbacks.

diff --git a/develop/api/sleap.nn.config.data.html b/develop/api/sleap.nn.config.data.html index ffa254ceb..29c25e2fc 100644 --- a/develop/api/sleap.nn.config.data.html +++ b/develop/api/sleap.nn.config.data.html @@ -9,7 +9,7 @@ - sleap.nn.config.data — SLEAP (v1.4.1a1) + sleap.nn.config.data — SLEAP (v1.4.1a2) @@ -34,7 +34,6 @@ - @@ -322,7 +321,7 @@

sleap.nn.config.data

sleap.nn.config.data#

-class sleap.nn.config.data.DataConfig(labels: sleap.nn.config.data.LabelsConfig = NOTHING, preprocessing: sleap.nn.config.data.PreprocessingConfig = NOTHING, instance_cropping: sleap.nn.config.data.InstanceCroppingConfig = NOTHING)[source]#
+class sleap.nn.config.data.DataConfig(labels: sleap.nn.config.data.LabelsConfig = NOTHING, preprocessing: sleap.nn.config.data.PreprocessingConfig = NOTHING, instance_cropping: sleap.nn.config.data.InstanceCroppingConfig = NOTHING)[source]#

Data configuration.

labels: Configuration options related to user labels for training or testing. preprocessing: Configuration options related to data preprocessing. @@ -334,7 +333,7 @@

sleap.nn.config.data

-class sleap.nn.config.data.InstanceCroppingConfig(center_on_part: Optional[str] = None, crop_size: Optional[int] = None, crop_size_detection_padding: int = 16)[source]#
+class sleap.nn.config.data.InstanceCroppingConfig(center_on_part: Optional[str] = None, crop_size: Optional[int] = None, crop_size_detection_padding: int = 16)[source]#

Instance cropping configuration.

These are only used in topdown or centroid models.

@@ -386,7 +385,7 @@

sleap.nn.config.data

-class sleap.nn.config.data.LabelsConfig(training_labels: Optional[str] = None, validation_labels: Optional[str] = None, validation_fraction: float = 0.1, test_labels: Optional[str] = None, split_by_inds: bool = False, training_inds: Optional[List[int]] = None, validation_inds: Optional[List[int]] = None, test_inds: Optional[List[int]] = None, search_path_hints: List[str] = NOTHING, skeletons: List[sleap.skeleton.Skeleton] = NOTHING)[source]#
+class sleap.nn.config.data.LabelsConfig(training_labels: Optional[str] = None, validation_labels: Optional[str] = None, validation_fraction: float = 0.1, test_labels: Optional[str] = None, split_by_inds: bool = False, training_inds: Optional[List[int]] = None, validation_inds: Optional[List[int]] = None, test_inds: Optional[List[int]] = None, search_path_hints: List[str] = NOTHING, skeletons: List[sleap.skeleton.Skeleton] = NOTHING)[source]#

Labels configuration.

@@ -527,7 +526,7 @@

sleap.nn.config.data

-class sleap.nn.config.data.PreprocessingConfig(ensure_rgb: bool = False, ensure_grayscale: bool = False, imagenet_mode: Optional[str] = None, input_scaling: float = 1.0, pad_to_stride: Optional[int] = None, resize_and_pad_to_target: bool = True, target_height: Optional[int] = None, target_width: Optional[int] = None)[source]#
+class sleap.nn.config.data.PreprocessingConfig(ensure_rgb: bool = False, ensure_grayscale: bool = False, imagenet_mode: Optional[str] = None, input_scaling: float = 1.0, pad_to_stride: Optional[int] = None, resize_and_pad_to_target: bool = True, target_height: Optional[int] = None, target_width: Optional[int] = None)[source]#

Preprocessing configuration.

diff --git a/develop/api/sleap.nn.config.model.html b/develop/api/sleap.nn.config.model.html index b43be50ad..e04b6b03d 100644 --- a/develop/api/sleap.nn.config.model.html +++ b/develop/api/sleap.nn.config.model.html @@ -9,7 +9,7 @@ - sleap.nn.config.model — SLEAP (v1.4.1a1) + sleap.nn.config.model — SLEAP (v1.4.1a2) @@ -34,7 +34,6 @@ - @@ -322,7 +321,7 @@

sleap.nn.config.model

sleap.nn.config.model#

-class sleap.nn.config.model.BackboneConfig(*args, **kwargs)[source]#
+class sleap.nn.config.model.BackboneConfig(*args, **kwargs)[source]#

Configurations related to the model backbone.

Only one field can be set and will determine which backbone architecture to use.

@@ -373,7 +372,7 @@

sleap.nn.config.model

-class sleap.nn.config.model.CenteredInstanceConfmapsHeadConfig(anchor_part: Optional[str] = None, part_names: Optional[List[str]] = None, sigma: float = 5.0, output_stride: int = 1, loss_weight: float = 1.0, offset_refinement: bool = False)[source]#
+class sleap.nn.config.model.CenteredInstanceConfmapsHeadConfig(anchor_part: Optional[str] = None, part_names: Optional[List[str]] = None, sigma: float = 5.0, output_stride: int = 1, loss_weight: float = 1.0, offset_refinement: bool = False)[source]#

Configurations for centered instance confidence map heads.

These heads are used in topdown multi-instance models that make the assumption that there is an instance reliably centered in the cropped input image. These heads are @@ -482,7 +481,7 @@

sleap.nn.config.model

-class sleap.nn.config.model.CentroidsHeadConfig(anchor_part: Optional[str] = None, sigma: float = 5.0, output_stride: int = 1, loss_weight: float = 1.0, offset_refinement: bool = False)[source]#
+class sleap.nn.config.model.CentroidsHeadConfig(anchor_part: Optional[str] = None, sigma: float = 5.0, output_stride: int = 1, loss_weight: float = 1.0, offset_refinement: bool = False)[source]#

Configurations for centroid confidence map heads.

These heads are used in topdown models that rely on centroid detection to detect instances for cropping before predicting the remaining body parts.

@@ -570,7 +569,7 @@

sleap.nn.config.model

-class sleap.nn.config.model.ClassMapsHeadConfig(classes: Optional[List[str]] = None, sigma: float = 5.0, output_stride: int = 1, loss_weight: float = 1.0)[source]#
+class sleap.nn.config.model.ClassMapsHeadConfig(classes: Optional[List[str]] = None, sigma: float = 5.0, output_stride: int = 1, loss_weight: float = 1.0)[source]#

Configurations for class map heads.

These heads are used in bottom-up multi-instance models that classify detected points using a fixed set of learned classes (e.g., animal identities).

@@ -631,7 +630,7 @@

sleap.nn.config.model

-class sleap.nn.config.model.ClassVectorsHeadConfig(classes: Optional[List[str]] = None, num_fc_layers: int = 1, num_fc_units: int = 64, global_pool: bool = True, output_stride: int = 1, loss_weight: float = 1.0)[source]#
+class sleap.nn.config.model.ClassVectorsHeadConfig(classes: Optional[List[str]] = None, num_fc_layers: int = 1, num_fc_units: int = 64, global_pool: bool = True, output_stride: int = 1, loss_weight: float = 1.0)[source]#

Configurations for class vectors heads.

These heads are used in top-down multi-instance models that classify detected points using a fixed set of learned classes (e.g., animal identities).

@@ -705,7 +704,7 @@

sleap.nn.config.model

-class sleap.nn.config.model.HeadsConfig(*args, **kwargs)[source]#
+class sleap.nn.config.model.HeadsConfig(*args, **kwargs)[source]#

Configurations related to the model output head type.

Only one attribute of this class can be set, which defines the model output type.

@@ -778,7 +777,7 @@

sleap.nn.config.model

-class sleap.nn.config.model.HourglassConfig(stem_stride: int = 4, max_stride: int = 64, output_stride: int = 4, stem_filters: int = 128, filters: int = 256, filter_increase: int = 128, stacks: int = 3)[source]#
+class sleap.nn.config.model.HourglassConfig(stem_stride: int = 4, max_stride: int = 64, output_stride: int = 4, stem_filters: int = 128, filters: int = 256, filter_increase: int = 128, stacks: int = 3)[source]#

Hourglass backbone configuration.

@@ -848,7 +847,7 @@

sleap.nn.config.model

-class sleap.nn.config.model.LEAPConfig(max_stride: int = 8, output_stride: int = 1, filters: int = 64, filters_rate: float = 2, up_interpolate: bool = False, stacks: int = 1)[source]#
+class sleap.nn.config.model.LEAPConfig(max_stride: int = 8, output_stride: int = 1, filters: int = 64, filters_rate: float = 2, up_interpolate: bool = False, stacks: int = 1)[source]#

LEAP backbone configuration.

@@ -923,7 +922,7 @@

sleap.nn.config.model

-class sleap.nn.config.model.ModelConfig(backbone: sleap.nn.config.model.BackboneConfig = NOTHING, heads: sleap.nn.config.model.HeadsConfig = NOTHING, base_checkpoint: Optional[str] = None)[source]#
+class sleap.nn.config.model.ModelConfig(backbone: sleap.nn.config.model.BackboneConfig = NOTHING, heads: sleap.nn.config.model.HeadsConfig = NOTHING, base_checkpoint: Optional[str] = None)[source]#

Configurations related to model architecture.

@@ -962,7 +961,7 @@

sleap.nn.config.model

-class sleap.nn.config.model.MultiClassBottomUpConfig(confmaps: sleap.nn.config.model.MultiInstanceConfmapsHeadConfig = NOTHING, class_maps: sleap.nn.config.model.ClassMapsHeadConfig = NOTHING)[source]#
+class sleap.nn.config.model.MultiClassBottomUpConfig(confmaps: sleap.nn.config.model.MultiInstanceConfmapsHeadConfig = NOTHING, class_maps: sleap.nn.config.model.ClassMapsHeadConfig = NOTHING)[source]#

Configuration for multi-instance confidence map and class map models.

This configuration specifies a multi-head model that outputs both multi-instance confidence maps and class maps, which together enable multi-instance pose tracking @@ -1000,7 +999,7 @@

sleap.nn.config.model

-class sleap.nn.config.model.MultiClassTopDownConfig(confmaps: sleap.nn.config.model.CenteredInstanceConfmapsHeadConfig = NOTHING, class_vectors: sleap.nn.config.model.ClassVectorsHeadConfig = NOTHING)[source]#
+class sleap.nn.config.model.MultiClassTopDownConfig(confmaps: sleap.nn.config.model.CenteredInstanceConfmapsHeadConfig = NOTHING, class_vectors: sleap.nn.config.model.ClassVectorsHeadConfig = NOTHING)[source]#

Configuration for centered-instance confidence map and class map models.

This configuration specifies a multi-head model that outputs both centered-instance confidence maps and class vectors, which together enable multi-instance pose @@ -1039,7 +1038,7 @@

sleap.nn.config.model

-class sleap.nn.config.model.MultiInstanceConfig(confmaps: sleap.nn.config.model.MultiInstanceConfmapsHeadConfig = NOTHING, pafs: sleap.nn.config.model.PartAffinityFieldsHeadConfig = NOTHING)[source]#
+class sleap.nn.config.model.MultiInstanceConfig(confmaps: sleap.nn.config.model.MultiInstanceConfmapsHeadConfig = NOTHING, pafs: sleap.nn.config.model.PartAffinityFieldsHeadConfig = NOTHING)[source]#

Configuration for combined multi-instance confidence map and PAF model heads.

This configuration specifies a multi-head model that outputs both multi-instance confidence maps and part affinity fields, which together enable multi-instance pose @@ -1073,7 +1072,7 @@

sleap.nn.config.model

-class sleap.nn.config.model.MultiInstanceConfmapsHeadConfig(part_names: Optional[List[str]] = None, sigma: float = 5.0, output_stride: int = 1, loss_weight: float = 1.0, offset_refinement: bool = False)[source]#
+class sleap.nn.config.model.MultiInstanceConfmapsHeadConfig(part_names: Optional[List[str]] = None, sigma: float = 5.0, output_stride: int = 1, loss_weight: float = 1.0, offset_refinement: bool = False)[source]#

Configurations for multi-instance confidence map heads.

These heads are used in bottom-up multi-instance models that do not make any assumption about the connectivity of the body parts. These heads will generate @@ -1167,7 +1166,7 @@

sleap.nn.config.model

-class sleap.nn.config.model.PartAffinityFieldsHeadConfig(edges: Optional[Sequence[Tuple[str, str]]] = None, sigma: float = 15.0, output_stride: int = 1, loss_weight: float = 1.0)[source]#
+class sleap.nn.config.model.PartAffinityFieldsHeadConfig(edges: Optional[Sequence[Tuple[str, str]]] = None, sigma: float = 15.0, output_stride: int = 1, loss_weight: float = 1.0)[source]#

Configurations for multi-instance part affinity field heads.

These heads are used in bottom-up multi-instance models that require information about body part connectivity in order to group multiple detections of each body part @@ -1247,7 +1246,7 @@

sleap.nn.config.model

-class sleap.nn.config.model.PretrainedEncoderConfig(encoder: str = 'efficientnetb0', pretrained: bool = True, decoder_filters: int = 256, decoder_filters_rate: float = 1.0, output_stride: int = 2, decoder_batchnorm: bool = True)[source]#
+class sleap.nn.config.model.PretrainedEncoderConfig(encoder: str = 'efficientnetb0', pretrained: bool = True, decoder_filters: int = 256, decoder_filters_rate: float = 1.0, output_stride: int = 2, decoder_batchnorm: bool = True)[source]#

Configuration for UNet backbone with pretrained encoder.

@@ -1338,7 +1337,7 @@

sleap.nn.config.model

-class sleap.nn.config.model.ResNetConfig(version: str = 'ResNet50', weights: str = 'frozen', upsampling: Optional[sleap.nn.config.model.UpsamplingConfig] = None, max_stride: int = 32, output_stride: int = 4)[source]#
+class sleap.nn.config.model.ResNetConfig(version: str = 'ResNet50', weights: str = 'frozen', upsampling: Optional[sleap.nn.config.model.UpsamplingConfig] = None, max_stride: int = 32, output_stride: int = 4)[source]#

ResNet backbone configuration.

@@ -1405,7 +1404,7 @@

sleap.nn.config.model

-class sleap.nn.config.model.SingleInstanceConfmapsHeadConfig(part_names: Optional[List[str]] = None, sigma: float = 5.0, output_stride: int = 1, loss_weight: float = 1.0, offset_refinement: bool = False)[source]#
+class sleap.nn.config.model.SingleInstanceConfmapsHeadConfig(part_names: Optional[List[str]] = None, sigma: float = 5.0, output_stride: int = 1, loss_weight: float = 1.0, offset_refinement: bool = False)[source]#

Configurations for single instance confidence map heads.

These heads are used in single instance models that make the assumption that only one of each body part is present in the image. These heads produce confidence maps @@ -1489,7 +1488,7 @@

sleap.nn.config.model

-class sleap.nn.config.model.UNetConfig(stem_stride: Optional[int] = None, max_stride: int = 16, output_stride: int = 1, filters: int = 64, filters_rate: float = 2, middle_block: bool = True, up_interpolate: bool = False, stacks: int = 1)[source]#
+class sleap.nn.config.model.UNetConfig(stem_stride: Optional[int] = None, max_stride: int = 16, output_stride: int = 1, filters: int = 64, filters_rate: float = 2, middle_block: bool = True, up_interpolate: bool = False, stacks: int = 1)[source]#

UNet backbone configuration.

@@ -1590,7 +1589,7 @@

sleap.nn.config.model

-class sleap.nn.config.model.UpsamplingConfig(method: str = 'interpolation', skip_connections: Optional[str] = None, block_stride: int = 2, filters: int = 64, filters_rate: float = 1, refine_convs: int = 2, batch_norm: bool = True, transposed_conv_kernel_size: int = 4)[source]#
+class sleap.nn.config.model.UpsamplingConfig(method: str = 'interpolation', skip_connections: Optional[str] = None, block_stride: int = 2, filters: int = 64, filters_rate: float = 1, refine_convs: int = 2, batch_norm: bool = True, transposed_conv_kernel_size: int = 4)[source]#

Upsampling stack configuration.

diff --git a/develop/api/sleap.nn.config.optimization.html b/develop/api/sleap.nn.config.optimization.html index 3be374858..2f721150d 100644 --- a/develop/api/sleap.nn.config.optimization.html +++ b/develop/api/sleap.nn.config.optimization.html @@ -9,7 +9,7 @@ - sleap.nn.config.optimization — SLEAP (v1.4.1a1) + sleap.nn.config.optimization — SLEAP (v1.4.1a2) @@ -34,7 +34,6 @@ - @@ -322,7 +321,7 @@

sleap.nn.config.optimization

sleap.nn.config.optimization#

-class sleap.nn.config.optimization.AugmentationConfig(rotate: bool = False, rotation_min_angle: float = - 180, rotation_max_angle: float = 180, translate: bool = False, translate_min: int = - 5, translate_max: int = 5, scale: bool = False, scale_min: float = 0.9, scale_max: float = 1.1, uniform_noise: bool = False, uniform_noise_min_val: float = 0.0, uniform_noise_max_val: float = 10.0, gaussian_noise: bool = False, gaussian_noise_mean: float = 5.0, gaussian_noise_stddev: float = 1.0, contrast: bool = False, contrast_min_gamma: float = 0.5, contrast_max_gamma: float = 2.0, brightness: bool = False, brightness_min_val: float = 0.0, brightness_max_val: float = 10.0, random_crop: bool = False, random_crop_height: int = 256, random_crop_width: int = 256, random_flip: bool = False, flip_horizontal: bool = True)[source]#
+class sleap.nn.config.optimization.AugmentationConfig(rotate: bool = False, rotation_min_angle: float = - 180, rotation_max_angle: float = 180, translate: bool = False, translate_min: int = - 5, translate_max: int = 5, scale: bool = False, scale_min: float = 0.9, scale_max: float = 1.1, uniform_noise: bool = False, uniform_noise_min_val: float = 0.0, uniform_noise_max_val: float = 10.0, gaussian_noise: bool = False, gaussian_noise_mean: float = 5.0, gaussian_noise_stddev: float = 1.0, contrast: bool = False, contrast_min_gamma: float = 0.5, contrast_max_gamma: float = 2.0, brightness: bool = False, brightness_min_val: float = 0.0, brightness_max_val: float = 10.0, random_crop: bool = False, random_crop_height: int = 256, random_crop_width: int = 256, random_flip: bool = False, flip_horizontal: bool = True)[source]#

Parameters for configuring an augmentation stack.

The augmentations will be applied in the order of the attributes.
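A sketch enabling a couple of augmentations via the documented fields; unset fields keep their documented defaults:

```python
# Sketch: rotation and scale augmentation with modest ranges.
from sleap.nn.config.optimization import AugmentationConfig

aug = AugmentationConfig(
    rotate=True,
    rotation_min_angle=-15.0,
    rotation_max_angle=15.0,
    scale=True,
    scale_min=0.9,
    scale_max=1.1,
)
```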

@@ -637,7 +636,7 @@

sleap.nn.config.optimization

-class sleap.nn.config.optimization.EarlyStoppingConfig(stop_training_on_plateau: bool = True, plateau_min_delta: float = 1e-06, plateau_patience: int = 10)[source]#
+class sleap.nn.config.optimization.EarlyStoppingConfig(stop_training_on_plateau: bool = True, plateau_min_delta: float = 1e-06, plateau_patience: int = 10)[source]#

Configuration for early stopping.

@@ -681,7 +680,7 @@

sleap.nn.config.optimization

-class sleap.nn.config.optimization.HardKeypointMiningConfig(online_mining: bool = False, hard_to_easy_ratio: float = 2.0, min_hard_keypoints: int = 2, max_hard_keypoints: Optional[int] = None, loss_scale: float = 5.0)[source]#
+class sleap.nn.config.optimization.HardKeypointMiningConfig(online_mining: bool = False, hard_to_easy_ratio: float = 2.0, min_hard_keypoints: int = 2, max_hard_keypoints: Optional[int] = None, loss_scale: float = 5.0)[source]#

Configuration for online hard keypoint mining.

@@ -754,7 +753,7 @@

sleap.nn.config.optimization

-class sleap.nn.config.optimization.LearningRateScheduleConfig(reduce_on_plateau: bool = True, reduction_factor: float = 0.5, plateau_min_delta: float = 1e-06, plateau_patience: int = 5, plateau_cooldown: int = 3, min_learning_rate: float = 1e-08)[source]#
+class sleap.nn.config.optimization.LearningRateScheduleConfig(reduce_on_plateau: bool = True, reduction_factor: float = 0.5, plateau_min_delta: float = 1e-06, plateau_patience: int = 5, plateau_cooldown: int = 3, min_learning_rate: float = 1e-08)[source]#

Configuration for learning rate scheduling.

@@ -834,7 +833,7 @@

sleap.nn.config.optimization

-class sleap.nn.config.optimization.OptimizationConfig(preload_data: bool = True, augmentation_config: sleap.nn.config.optimization.AugmentationConfig = NOTHING, online_shuffling: bool = True, shuffle_buffer_size: int = 128, prefetch: bool = True, batch_size: int = 8, batches_per_epoch: Optional[int] = None, min_batches_per_epoch: int = 200, val_batches_per_epoch: Optional[int] = None, min_val_batches_per_epoch: int = 10, epochs: int = 100, optimizer: str = 'adam', initial_learning_rate: float = 0.0001, learning_rate_schedule: sleap.nn.config.optimization.LearningRateScheduleConfig = NOTHING, hard_keypoint_mining: sleap.nn.config.optimization.HardKeypointMiningConfig = NOTHING, early_stopping: sleap.nn.config.optimization.EarlyStoppingConfig = NOTHING)[source]#
+class sleap.nn.config.optimization.OptimizationConfig(preload_data: bool = True, augmentation_config: sleap.nn.config.optimization.AugmentationConfig = NOTHING, online_shuffling: bool = True, shuffle_buffer_size: int = 128, prefetch: bool = True, batch_size: int = 8, batches_per_epoch: Optional[int] = None, min_batches_per_epoch: int = 200, val_batches_per_epoch: Optional[int] = None, min_val_batches_per_epoch: int = 10, epochs: int = 100, optimizer: str = 'adam', initial_learning_rate: float = 0.0001, learning_rate_schedule: sleap.nn.config.optimization.LearningRateScheduleConfig = NOTHING, hard_keypoint_mining: sleap.nn.config.optimization.HardKeypointMiningConfig = NOTHING, early_stopping: sleap.nn.config.optimization.EarlyStoppingConfig = NOTHING)[source]#

Optimization configuration.
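A minimal sketch overriding a few of the documented defaults; everything not set keeps its default (adam optimizer, plateau-based learning rate schedule, and so on):

```python
# Sketch: a small-batch, longer-schedule optimization config.
from sleap.nn.config.optimization import OptimizationConfig

opt = OptimizationConfig(
    batch_size=4,
    epochs=200,
    initial_learning_rate=1e-4,
)
```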

diff --git a/develop/api/sleap.nn.config.outputs.html b/develop/api/sleap.nn.config.outputs.html index 242f9fa6b..4c0d662ca 100644 --- a/develop/api/sleap.nn.config.outputs.html +++ b/develop/api/sleap.nn.config.outputs.html @@ -9,7 +9,7 @@ - sleap.nn.config.outputs — SLEAP (v1.4.1a1) + sleap.nn.config.outputs — SLEAP (v1.4.1a2) @@ -34,7 +34,6 @@ - @@ -322,7 +321,7 @@

sleap.nn.config.outputs

sleap.nn.config.outputs#

-class sleap.nn.config.outputs.CheckpointingConfig(initial_model: bool = False, best_model: bool = True, every_epoch: bool = False, latest_model: bool = False, final_model: bool = False)[source]#
+class sleap.nn.config.outputs.CheckpointingConfig(initial_model: bool = False, best_model: bool = True, every_epoch: bool = False, latest_model: bool = False, final_model: bool = False)[source]#

Configuration of model checkpointing.

@@ -408,7 +407,7 @@

sleap.nn.config.outputs

-class sleap.nn.config.outputs.OutputsConfig(save_outputs: bool = True, run_name: Optional[str] = None, run_name_prefix: str = '', run_name_suffix: Optional[str] = None, runs_folder: str = 'models', tags: List[str] = NOTHING, save_visualizations: bool = True, delete_viz_images: bool = True, zip_outputs: bool = False, log_to_csv: bool = True, checkpointing: sleap.nn.config.outputs.CheckpointingConfig = NOTHING, tensorboard: sleap.nn.config.outputs.TensorBoardConfig = NOTHING, zmq: sleap.nn.config.outputs.ZMQConfig = NOTHING)[source]#
+class sleap.nn.config.outputs.OutputsConfig(save_outputs: bool = True, run_name: Optional[str] = None, run_name_prefix: str = '', run_name_suffix: Optional[str] = None, runs_folder: str = 'models', tags: List[str] = NOTHING, save_visualizations: bool = True, delete_viz_images: bool = True, zip_outputs: bool = False, log_to_csv: bool = True, checkpointing: sleap.nn.config.outputs.CheckpointingConfig = NOTHING, tensorboard: sleap.nn.config.outputs.TensorBoardConfig = NOTHING, zmq: sleap.nn.config.outputs.ZMQConfig = NOTHING)[source]#

Configuration of training outputs.

@@ -618,7 +617,7 @@

sleap.nn.config.outputs

-class sleap.nn.config.outputs.TensorBoardConfig(write_logs: bool = False, loss_frequency: str = 'epoch', architecture_graph: bool = False, profile_graph: bool = False, visualizations: bool = True)[source]#
+class sleap.nn.config.outputs.TensorBoardConfig(write_logs: bool = False, loss_frequency: str = 'epoch', architecture_graph: bool = False, profile_graph: bool = False, visualizations: bool = True)[source]#

Configuration of TensorBoard-based monitoring of the training.

@@ -692,7 +691,7 @@

sleap.nn.config.outputs

-class sleap.nn.config.outputs.ZMQConfig(subscribe_to_controller: bool = False, controller_address: str = 'tcp://127.0.0.1:9000', controller_polling_timeout: int = 10, publish_updates: bool = False, publish_address: str = 'tcp://127.0.0.1:9001')[source]#
+class sleap.nn.config.outputs.ZMQConfig(subscribe_to_controller: bool = False, controller_address: str = 'tcp://127.0.0.1:9000', controller_polling_timeout: int = 10, publish_updates: bool = False, publish_address: str = 'tcp://127.0.0.1:9001')[source]#

Configuration of ZeroMQ-based monitoring of the training.

diff --git a/develop/api/sleap.nn.config.training_job.html b/develop/api/sleap.nn.config.training_job.html index 03b7a8acf..2b239db94 100644 --- a/develop/api/sleap.nn.config.training_job.html +++ b/develop/api/sleap.nn.config.training_job.html @@ -9,7 +9,7 @@ - sleap.nn.config.training_job — SLEAP (v1.4.1a1) + sleap.nn.config.training_job — SLEAP (v1.4.1a2) @@ -34,7 +34,6 @@ - @@ -350,7 +349,7 @@

sleap.nn.config.training_job

parameters are aggregated and documented for end users (as opposed to developers).

-class sleap.nn.config.training_job.TrainingJobConfig(data: sleap.nn.config.data.DataConfig = NOTHING, model: sleap.nn.config.model.ModelConfig = NOTHING, optimization: sleap.nn.config.optimization.OptimizationConfig = NOTHING, outputs: sleap.nn.config.outputs.OutputsConfig = NOTHING, name: Optional[str] = '', description: Optional[str] = '', sleap_version: Optional[str] = '1.4.1a1', filename: Optional[str] = '')[source]#
+class sleap.nn.config.training_job.TrainingJobConfig(data: sleap.nn.config.data.DataConfig = NOTHING, model: sleap.nn.config.model.ModelConfig = NOTHING, optimization: sleap.nn.config.optimization.OptimizationConfig = NOTHING, outputs: sleap.nn.config.outputs.OutputsConfig = NOTHING, name: Optional[str] = '', description: Optional[str] = '', sleap_version: Optional[str] = '1.4.1a2', filename: Optional[str] = '')[source]#

Configuration of a training job.
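A sketch of a JSON round trip using the (de)serialization methods documented below; the run name and filename are illustrative:

```python
# Sketch: save a config to JSON and load it back.
from sleap.nn.config.training_job import TrainingJobConfig

cfg = TrainingJobConfig(name="example_run", description="demo config")
cfg.save_json("training_config.json")  # serialize to disk
cfg2 = TrainingJobConfig.load_json("training_config.json")
assert cfg2.name == "example_run"
```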

@@ -442,7 +441,7 @@

sleap.nn.config.training_job

-classmethod from_json(json_data: str) sleap.nn.config.training_job.TrainingJobConfig[source]#
+classmethod from_json(json_data: str) sleap.nn.config.training_job.TrainingJobConfig[source]#

Create training job configuration from JSON text data.

Parameters
@@ -456,7 +455,7 @@

sleap.nn.config.training_job

-classmethod from_json_dicts(json_data_dicts: Dict[str, Any]) sleap.nn.config.training_job.TrainingJobConfig[source]#
+classmethod from_json_dicts(json_data_dicts: Dict[str, Any]) sleap.nn.config.training_job.TrainingJobConfig[source]#

Create training job configuration from dictionaries decoded from JSON.

Parameters
@@ -471,7 +470,7 @@

sleap.nn.config.training_job

-classmethod load_json(filename: str, load_training_config: bool = True) sleap.nn.config.training_job.TrainingJobConfig[source]#
+classmethod load_json(filename: str, load_training_config: bool = True) sleap.nn.config.training_job.TrainingJobConfig[source]#

Load a training job configuration from a file.

Parameters
@@ -490,7 +489,7 @@

sleap.nn.config.training_job

-save_json(filename: str)[source]#
+save_json(filename: str)[source]#

Save the configuration to a JSON file.

Parameters
@@ -501,7 +500,7 @@

sleap.nn.config.training_job

-to_json() str[source]#
+to_json() str[source]#

Serialize the configuration into JSON-encoded string format.

Returns
@@ -514,7 +513,7 @@

sleap.nn.config.training_job

-sleap.nn.config.training_job.load_config(filename: str, load_training_config: bool = True) sleap.nn.config.training_job.TrainingJobConfig[source]#
+sleap.nn.config.training_job.load_config(filename: str, load_training_config: bool = True) sleap.nn.config.training_job.TrainingJobConfig[source]#

Load a training job configuration for a model run.

Parameters
diff --git a/develop/api/sleap.nn.config.utils.html b/develop/api/sleap.nn.config.utils.html index 7ef40a2fd..25b66b63d 100644 --- a/develop/api/sleap.nn.config.utils.html +++ b/develop/api/sleap.nn.config.utils.html @@ -9,7 +9,7 @@ - sleap.nn.config.utils — SLEAP (v1.4.1a1) + sleap.nn.config.utils — SLEAP (v1.4.1a2) @@ -34,7 +34,6 @@ - @@ -323,7 +322,7 @@

sleap.nn.config.utils

Utilities for config building and validation.

-sleap.nn.config.utils.oneof(attrs_cls, must_be_set: bool = False)[source]#
+sleap.nn.config.utils.oneof(attrs_cls, must_be_set: bool = False)[source]#

Ensure that the decorated attrs class only has a single attribute set.

This decorator is inspired by the oneof protobuffer field behavior.
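A hypothetical illustration of the constraint; the class and its fields below are made up for the example, and only the decorator itself comes from SLEAP:

```python
# Hypothetical: an attrs class where at most one field may be set.
from typing import Optional

import attr
from sleap.nn.config.utils import oneof

@oneof
@attr.s(auto_attribs=True)
class ExampleBackbone:
    unet: Optional[dict] = None
    resnet: Optional[dict] = None

# Setting both `unet` and `resnet` at once would violate the oneof constraint.
```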

diff --git a/develop/api/sleap.nn.data.augmentation.html b/develop/api/sleap.nn.data.augmentation.html index 28db30544..60a572379 100644 --- a/develop/api/sleap.nn.data.augmentation.html +++ b/develop/api/sleap.nn.data.augmentation.html @@ -9,7 +9,7 @@ - sleap.nn.data.augmentation — SLEAP (v1.4.1a1) + sleap.nn.data.augmentation — SLEAP (v1.4.1a2) @@ -34,7 +34,6 @@ - @@ -323,7 +322,7 @@

sleap.nn.data.augmentation

Transformers for applying data augmentation.

-class sleap.nn.data.augmentation.AlbumentationsAugmenter(augmenter: albumentations.core.composition.Compose, image_key: str = 'image', instances_key: str = 'instances')[source]#
+class sleap.nn.data.augmentation.AlbumentationsAugmenter(augmenter: albumentations.core.composition.Compose, image_key: str = 'image', instances_key: str = 'instances')[source]#

Data transformer based on the albumentations library.

This class can generate a tf.data.Dataset from an existing one that generates image and instance data. Elements of the output dataset will have a set of @@ -366,7 +365,7 @@
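A sketch of wrapping an albumentations pipeline; the keypoint wiring shown here is an assumption for illustration (not the exact SLEAP internals), and `ds` stands in for an upstream tf.data.Dataset from earlier pipeline stages:

```python
# Sketch: small-angle rotation augmentation via albumentations.
import albumentations as A
from sleap.nn.data.augmentation import AlbumentationsAugmenter

augmenter = AlbumentationsAugmenter(
    augmenter=A.Compose(
        [A.Rotate(limit=15, p=1.0)],
        keypoint_params=A.KeypointParams(format="xy"),
    )
)
ds_aug = augmenter.transform_dataset(ds)  # `ds`: assumed upstream dataset
```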

sleap.nn.data.augmentation

-classmethod from_config(config: sleap.nn.config.optimization.AugmentationConfig, image_key: str = 'image', instances_key: str = 'instances') sleap.nn.data.augmentation.AlbumentationsAugmenter[source]#
+classmethod from_config(config: sleap.nn.config.optimization.AugmentationConfig, image_key: str = 'image', instances_key: str = 'instances') sleap.nn.data.augmentation.AlbumentationsAugmenter[source]#

Create an augmenter from a set of configuration parameters.

Parameters
@@ -399,7 +398,7 @@

sleap.nn.data.augmentation

-transform_dataset(input_ds: tensorflow.python.data.ops.dataset_ops.DatasetV2) tensorflow.python.data.ops.dataset_ops.DatasetV2[source]#
+transform_dataset(input_ds: tensorflow.python.data.ops.dataset_ops.DatasetV2) tensorflow.python.data.ops.dataset_ops.DatasetV2[source]#

Create a tf.data.Dataset with elements containing augmented data.

Parameters
@@ -422,7 +421,7 @@

sleap.nn.data.augmentation

-class sleap.nn.data.augmentation.RandomCropper(crop_height: int = 256, crop_width: int = 256)[source]#
+class sleap.nn.data.augmentation.RandomCropper(crop_height: int = 256, crop_width: int = 256)[source]#

Data transformer for applying random crops to input images.

This class can generate a tf.data.Dataset from an existing one that generates image and instance data. Elements of the output dataset will have random crops @@ -451,7 +450,7 @@

sleap.nn.data.augmentation

-transform_dataset(input_ds: tensorflow.python.data.ops.dataset_ops.DatasetV2)[source]#
+transform_dataset(input_ds: tensorflow.python.data.ops.dataset_ops.DatasetV2)[source]#

Create a tf.data.Dataset with elements containing augmented data.

Parameters
@@ -472,7 +471,7 @@

sleap.nn.data.augmentation

-class sleap.nn.data.augmentation.RandomFlipper(symmetric_inds: Optional[numpy.ndarray] = None, horizontal: bool = True, probability: float = 0.5)[source]#
+class sleap.nn.data.augmentation.RandomFlipper(symmetric_inds: Optional[numpy.ndarray] = None, horizontal: bool = True, probability: float = 0.5)[source]#

Data transformer for applying random flipping to input images.

This class can generate a tf.data.Dataset from an existing one that generates image and instance data. Elements of the output dataset will have random horizontal @@ -518,7 +517,7 @@
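A sketch using the `from_skeleton` constructor documented below, so that symmetric parts (e.g., left/right legs) are swapped when the image is mirrored; `skeleton` and `ds` are assumed to come from earlier pipeline stages:

```python
# Sketch: horizontal flipping with symmetry-aware node swapping.
from sleap.nn.data.augmentation import RandomFlipper

flipper = RandomFlipper.from_skeleton(skeleton, horizontal=True, probability=0.5)
ds_flipped = flipper.transform_dataset(ds)  # `ds`: assumed upstream dataset
```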

sleap.nn.data.augmentation

-classmethod from_skeleton(skeleton: sleap.skeleton.Skeleton, horizontal: bool = True, probability: float = 0.5) sleap.nn.data.augmentation.RandomFlipper[source]#
+classmethod from_skeleton(skeleton: sleap.skeleton.Skeleton, horizontal: bool = True, probability: float = 0.5) sleap.nn.data.augmentation.RandomFlipper[source]#

Create an instance of RandomFlipper from a skeleton.

Parameters
@@ -537,7 +536,7 @@

sleap.nn.data.augmentation

-transform_dataset(input_ds: tensorflow.python.data.ops.dataset_ops.DatasetV2)[source]#
+transform_dataset(input_ds: tensorflow.python.data.ops.dataset_ops.DatasetV2)[source]#

Create a tf.data.Dataset with elements containing augmented data.

Parameters
@@ -555,7 +554,7 @@

sleap.nn.data.augmentation

-sleap.nn.data.augmentation.flip_instances_lr(instances: tensorflow.python.framework.ops.Tensor, img_width: int, symmetric_inds: Optional[tensorflow.python.framework.ops.Tensor] = None) tensorflow.python.framework.ops.Tensor[source]#
+sleap.nn.data.augmentation.flip_instances_lr(instances: tensorflow.python.framework.ops.Tensor, img_width: int, symmetric_inds: Optional[tensorflow.python.framework.ops.Tensor] = None) tensorflow.python.framework.ops.Tensor[source]#

Flip a set of instance points horizontally with symmetric node adjustment.

Parameters
@@ -579,7 +578,7 @@

sleap.nn.data.augmentation

-sleap.nn.data.augmentation.flip_instances_ud(instances: tensorflow.python.framework.ops.Tensor, img_height: int, symmetric_inds: Optional[tensorflow.python.framework.ops.Tensor] = None) tensorflow.python.framework.ops.Tensor[source]#
+sleap.nn.data.augmentation.flip_instances_ud(instances: tensorflow.python.framework.ops.Tensor, img_height: int, symmetric_inds: Optional[tensorflow.python.framework.ops.Tensor] = None) tensorflow.python.framework.ops.Tensor[source]#

Flip a set of instance points vertically with symmetric node adjustment.

Parameters
diff --git a/develop/api/sleap.nn.data.confidence_maps.html b/develop/api/sleap.nn.data.confidence_maps.html index 95166724b..1c560b588 100644 --- a/develop/api/sleap.nn.data.confidence_maps.html +++ b/develop/api/sleap.nn.data.confidence_maps.html @@ -9,7 +9,7 @@ - sleap.nn.data.confidence_maps — SLEAP (v1.4.1a1) + sleap.nn.data.confidence_maps — SLEAP (v1.4.1a2) @@ -34,7 +34,6 @@ - @@ -323,7 +322,7 @@

sleap.nn.data.confidence_maps

Transformers for confidence map generation.

-class sleap.nn.data.confidence_maps.InstanceConfidenceMapGenerator(sigma: float = 1.0, output_stride: int = 1, all_instances: bool = False, with_offsets: bool = False, offsets_threshold: float = 0.2, flatten_offsets: bool = True)[source]#
+class sleap.nn.data.confidence_maps.InstanceConfidenceMapGenerator(sigma: float = 1.0, output_stride: int = 1, all_instances: bool = False, with_offsets: bool = False, offsets_threshold: float = 0.2, flatten_offsets: bool = True)[source]#

Transformer to generate instance-centered confidence maps.

@@ -412,7 +411,7 @@

sleap.nn.data.confidence_maps

-transform_dataset(input_ds: tensorflow.python.data.ops.dataset_ops.DatasetV2) tensorflow.python.data.ops.dataset_ops.DatasetV2[source]#
+transform_dataset(input_ds: tensorflow.python.data.ops.dataset_ops.DatasetV2) tensorflow.python.data.ops.dataset_ops.DatasetV2[source]#

Create a dataset that contains the generated confidence maps.

Parameters
@@ -444,7 +443,7 @@

sleap.nn.data.confidence_maps

-class sleap.nn.data.confidence_maps.MultiConfidenceMapGenerator(sigma: float = 1.0, output_stride: int = 1, centroids: bool = False, with_offsets: bool = False, offsets_threshold: float = 0.2, flatten_offsets: bool = True)[source]#
+class sleap.nn.data.confidence_maps.MultiConfidenceMapGenerator(sigma: float = 1.0, output_stride: int = 1, centroids: bool = False, with_offsets: bool = False, offsets_threshold: float = 0.2, flatten_offsets: bool = True)[source]#

Transformer to generate multi-instance confidence maps.

@@ -534,7 +533,7 @@

sleap.nn.data.confidence_maps

-transform_dataset(input_ds: tensorflow.python.data.ops.dataset_ops.DatasetV2) tensorflow.python.data.ops.dataset_ops.DatasetV2[source]#
+transform_dataset(input_ds: tensorflow.python.data.ops.dataset_ops.DatasetV2) tensorflow.python.data.ops.dataset_ops.DatasetV2[source]#

Create a dataset that contains the generated confidence maps.

Parameters
@@ -565,7 +564,7 @@

sleap.nn.data.confidence_maps

-class sleap.nn.data.confidence_maps.SingleInstanceConfidenceMapGenerator(sigma: float = 1.0, output_stride: int = 1, with_offsets: bool = False, offsets_threshold: float = 0.2, flatten_offsets: bool = True)[source]#
+class sleap.nn.data.confidence_maps.SingleInstanceConfidenceMapGenerator(sigma: float = 1.0, output_stride: int = 1, with_offsets: bool = False, offsets_threshold: float = 0.2, flatten_offsets: bool = True)[source]#

Transformer to generate single-instance confidence maps.

@@ -643,7 +642,7 @@

sleap.nn.data.confidence_maps

-transform_dataset(input_ds: tensorflow.python.data.ops.dataset_ops.DatasetV2) tensorflow.python.data.ops.dataset_ops.DatasetV2[source]#
+transform_dataset(input_ds: tensorflow.python.data.ops.dataset_ops.DatasetV2) tensorflow.python.data.ops.dataset_ops.DatasetV2[source]#

Create a dataset that contains the generated confidence maps.

Parameters
@@ -669,7 +668,7 @@

sleap.nn.data.confidence_maps

-sleap.nn.data.confidence_maps.make_confmaps(points: tensorflow.python.framework.ops.Tensor, xv: tensorflow.python.framework.ops.Tensor, yv: tensorflow.python.framework.ops.Tensor, sigma: float) tensorflow.python.framework.ops.Tensor[source]#
+sleap.nn.data.confidence_maps.make_confmaps(points: tensorflow.python.framework.ops.Tensor, xv: tensorflow.python.framework.ops.Tensor, yv: tensorflow.python.framework.ops.Tensor, sigma: float) tensorflow.python.framework.ops.Tensor[source]#

Make confidence maps from a set of points from a single instance.
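The per-point map is, in the form assumed here, an unnormalized 2D Gaussian peaking at 1.0 on the part location; a minimal sketch of that formula (illustrative, not a quote of the implementation):

```python
# Assumed functional form of a single-keypoint confidence map.
import tensorflow as tf

def gaussian_confmap(xv, yv, x0, y0, sigma):
    # xv, yv: sampling grids; (x0, y0): keypoint location; returns one map.
    return tf.exp(-((xv - x0) ** 2 + (yv - y0) ** 2) / (2 * sigma**2))
```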

Parameters
@@ -706,7 +705,7 @@

sleap.nn.data.confidence_maps

-sleap.nn.data.confidence_maps.make_multi_confmaps(instances: tensorflow.python.framework.ops.Tensor, xv: tensorflow.python.framework.ops.Tensor, yv: tensorflow.python.framework.ops.Tensor, sigma: float) tensorflow.python.framework.ops.Tensor[source]#
+sleap.nn.data.confidence_maps.make_multi_confmaps(instances: tensorflow.python.framework.ops.Tensor, xv: tensorflow.python.framework.ops.Tensor, yv: tensorflow.python.framework.ops.Tensor, sigma: float) tensorflow.python.framework.ops.Tensor[source]#

Make confidence maps for multiple instances through reduction.

Parameters
@@ -745,7 +744,7 @@

sleap.nn.data.confidence_maps

-sleap.nn.data.confidence_maps.make_multi_confmaps_with_offsets(instances: tensorflow.python.framework.ops.Tensor, xv: tensorflow.python.framework.ops.Tensor, yv: tensorflow.python.framework.ops.Tensor, stride: int, sigma: float, offsets_threshold: float) tensorflow.python.framework.ops.Tensor[source]#
+sleap.nn.data.confidence_maps.make_multi_confmaps_with_offsets(instances: tensorflow.python.framework.ops.Tensor, xv: tensorflow.python.framework.ops.Tensor, yv: tensorflow.python.framework.ops.Tensor, stride: int, sigma: float, offsets_threshold: float) tensorflow.python.framework.ops.Tensor[source]#

Make confidence maps and offsets for multiple instances through reduction.

Parameters
diff --git a/develop/api/sleap.nn.data.dataset_ops.html b/develop/api/sleap.nn.data.dataset_ops.html index bf89fda46..5b815327f 100644 --- a/develop/api/sleap.nn.data.dataset_ops.html +++ b/develop/api/sleap.nn.data.dataset_ops.html @@ -9,7 +9,7 @@ - sleap.nn.data.dataset_ops — SLEAP (v1.4.1a1) + sleap.nn.data.dataset_ops — SLEAP (v1.4.1a2) @@ -34,7 +34,6 @@ - @@ -324,7 +323,7 @@

sleap.nn.data.dataset_ops

These are mostly wrappers for standard tf.data.Dataset ops.

-class sleap.nn.data.dataset_ops.Batcher(batch_size: int = 8, drop_remainder: bool = False, unrag: bool = True)[source]#
+class sleap.nn.data.dataset_ops.Batcher(batch_size: int = 8, drop_remainder: bool = False, unrag: bool = True)[source]#

Batching transformer for use in pipelines.

This class enables variable-length example keys to be batched by converting them to ragged tensors prior to concatenation, then converting them back to dense tensors.
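A sketch of batching a pipeline stage via the documented constructor and `transform_dataset`; `ds` stands in for an upstream tf.data.Dataset of example dictionaries:

```python
# Sketch: batch examples, padding/densifying variable-length keys.
from sleap.nn.data.dataset_ops import Batcher

batcher = Batcher(batch_size=8, drop_remainder=False, unrag=True)
ds_batched = batcher.transform_dataset(ds)  # `ds`: assumed upstream dataset
```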

@@ -387,7 +386,7 @@

sleap.nn.data.dataset_ops

-transform_dataset(ds_input: tensorflow.python.data.ops.dataset_ops.DatasetV2) tensorflow.python.data.ops.dataset_ops.DatasetV2[source]#
+transform_dataset(ds_input: tensorflow.python.data.ops.dataset_ops.DatasetV2) tensorflow.python.data.ops.dataset_ops.DatasetV2[source]#

Create a dataset with batched elements.

Parameters
@@ -412,7 +411,7 @@

sleap.nn.data.dataset_ops

-class sleap.nn.data.dataset_ops.LambdaFilter(filter_fn: Callable[[Dict[str, tensorflow.python.framework.ops.Tensor]], bool])[source]#
+class sleap.nn.data.dataset_ops.LambdaFilter(filter_fn: Callable[[Dict[str, tensorflow.python.framework.ops.Tensor]], bool])[source]#

Transformer for filtering examples out of a dataset.

This class is useful for eliminating examples that fail to meet some criteria, e.g., when no peaks are found.
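For example, a plain `tf.data` filter that drops examples without peaks; the `"n_peaks"` key here is hypothetical:

```python
import tensorflow as tf

ds = tf.data.Dataset.from_tensor_slices({"n_peaks": [0, 2, 5]})
ds = ds.filter(lambda example: example["n_peaks"] > 0)
print([int(example["n_peaks"]) for example in ds])  # [2, 5]
```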

@@ -442,7 +441,7 @@

sleap.nn.data.dataset_ops

-transform_dataset(ds_input: tensorflow.python.data.ops.dataset_ops.DatasetV2) tensorflow.python.data.ops.dataset_ops.DatasetV2[source]#
+transform_dataset(ds_input: tensorflow.python.data.ops.dataset_ops.DatasetV2) tensorflow.python.data.ops.dataset_ops.DatasetV2[source]#

Create a dataset with filtering applied.

Parameters
@@ -460,7 +459,7 @@

sleap.nn.data.dataset_ops

-class sleap.nn.data.dataset_ops.Prefetcher(prefetch: bool = True, buffer_size: int = - 1)[source]#
+class sleap.nn.data.dataset_ops.Prefetcher(prefetch: bool = True, buffer_size: int = - 1)[source]#

Prefetching transformer for use in pipelines.

Prefetches elements from the input dataset to minimize the processing bottleneck as elements are requested since prefetching can occur in parallel.
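Under the hood this corresponds to the standard `tf.data` prefetch; note that the default `buffer_size=-1` is the value of `tf.data.AUTOTUNE`:

```python
import tensorflow as tf

ds = tf.data.Dataset.range(10).map(lambda x: x * 2)
# Overlap producer and consumer; AUTOTUNE (== -1) picks the buffer size.
ds = ds.prefetch(tf.data.AUTOTUNE)
```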

@@ -502,7 +501,7 @@

sleap.nn.data.dataset_ops

-transform_dataset(ds_input: tensorflow.python.data.ops.dataset_ops.DatasetV2) tensorflow.python.data.ops.dataset_ops.DatasetV2[source]#
+transform_dataset(ds_input: tensorflow.python.data.ops.dataset_ops.DatasetV2) tensorflow.python.data.ops.dataset_ops.DatasetV2[source]#

Create a dataset with prefetching to maintain a buffer during iteration.

Parameters
@@ -520,7 +519,7 @@

sleap.nn.data.dataset_ops

-class sleap.nn.data.dataset_ops.Preloader[source]#
+class sleap.nn.data.dataset_ops.Preloader[source]#

Preload elements of the underlying dataset to generate in-memory examples.

This transformer can lead to considerable performance improvements at the cost of memory consumption.

@@ -551,7 +550,7 @@

sleap.nn.data.dataset_ops

-transform_dataset(ds_input: tensorflow.python.data.ops.dataset_ops.DatasetV2) tensorflow.python.data.ops.dataset_ops.DatasetV2[source]#
+transform_dataset(ds_input: tensorflow.python.data.ops.dataset_ops.DatasetV2) tensorflow.python.data.ops.dataset_ops.DatasetV2[source]#

Create a dataset that generates preloaded elements.

Parameters
@@ -572,7 +571,7 @@

sleap.nn.data.dataset_ops

-class sleap.nn.data.dataset_ops.Repeater(repeat: bool = True, epochs: int = - 1)[source]#
+class sleap.nn.data.dataset_ops.Repeater(repeat: bool = True, epochs: int = - 1)[source]#

Repeating transformer for use in pipelines.

Repeats the underlying elements indefinitely or for a number of “iterations” or “epochs”.

@@ -621,7 +620,7 @@

sleap.nn.data.dataset_ops

-transform_dataset(ds_input: tensorflow.python.data.ops.dataset_ops.DatasetV2) tensorflow.python.data.ops.dataset_ops.DatasetV2[source]#
+transform_dataset(ds_input: tensorflow.python.data.ops.dataset_ops.DatasetV2) tensorflow.python.data.ops.dataset_ops.DatasetV2[source]#

Create a dataset with repeated loops over the input elements.

Parameters
@@ -638,7 +637,7 @@

sleap.nn.data.dataset_ops

-class sleap.nn.data.dataset_ops.Shuffler(shuffle: bool = True, buffer_size: int = 64, reshuffle_each_iteration: bool = True)[source]#
+class sleap.nn.data.dataset_ops.Shuffler(shuffle: bool = True, buffer_size: int = 64, reshuffle_each_iteration: bool = True)[source]#

Shuffling transformer for use in pipelines.

The input to this transformer should not be repeated or batched (though the latter would technically work). Repeating prevents the shuffling from going through “epoch” boundaries, and batching before shuffling would only shuffle at the batch level rather than the element level.

@@ -702,7 +701,7 @@

sleap.nn.data.dataset_ops

-transform_dataset(ds_input: tensorflow.python.data.ops.dataset_ops.DatasetV2) tensorflow.python.data.ops.dataset_ops.DatasetV2[source]#
+transform_dataset(ds_input: tensorflow.python.data.ops.dataset_ops.DatasetV2) tensorflow.python.data.ops.dataset_ops.DatasetV2[source]#

Create a dataset with shuffled element order.

Parameters
@@ -722,7 +721,7 @@

sleap.nn.data.dataset_ops

-class sleap.nn.data.dataset_ops.Unbatcher[source]#
+class sleap.nn.data.dataset_ops.Unbatcher[source]#

Unbatching transformer for use in pipelines.

@@ -738,7 +737,7 @@

sleap.nn.data.dataset_ops

-transform_dataset(ds_input: tensorflow.python.data.ops.dataset_ops.DatasetV2) tensorflow.python.data.ops.dataset_ops.DatasetV2[source]#
+transform_dataset(ds_input: tensorflow.python.data.ops.dataset_ops.DatasetV2) tensorflow.python.data.ops.dataset_ops.DatasetV2[source]#

Create a dataset with unbatched elements.

diff --git a/develop/api/sleap.nn.data.edge_maps.html b/develop/api/sleap.nn.data.edge_maps.html
index b24f55415..b567167ff 100644
--- a/develop/api/sleap.nn.data.edge_maps.html
+++ b/develop/api/sleap.nn.data.edge_maps.html
@@ -9,7 +9,7 @@
- sleap.nn.data.edge_maps — SLEAP (v1.4.1a1)
+ sleap.nn.data.edge_maps — SLEAP (v1.4.1a2)
@@ -34,7 +34,6 @@
-
@@ -323,7 +322,7 @@

sleap.nn.data.edge_maps

Transformers for generating edge confidence maps and part affinity fields.

-class sleap.nn.data.edge_maps.PartAffinityFieldsGenerator(sigma=1.0, output_stride=1, skeletons: Optional[Any] = None, flatten_channels: bool = False)[source]#
+class sleap.nn.data.edge_maps.PartAffinityFieldsGenerator(sigma=1.0, output_stride=1, skeletons: Optional[Any] = None, flatten_channels: bool = False)[source]#

Transformer to generate part affinity fields.

@@ -391,7 +390,7 @@

sleap.nn.data.edge_maps

-transform_dataset(input_ds: tensorflow.python.data.ops.dataset_ops.DatasetV2) tensorflow.python.data.ops.dataset_ops.DatasetV2[source]#
+transform_dataset(input_ds: tensorflow.python.data.ops.dataset_ops.DatasetV2) tensorflow.python.data.ops.dataset_ops.DatasetV2[source]#

Create a dataset that contains the generated confidence maps.

Parameters
@@ -426,7 +425,7 @@

sleap.nn.data.edge_maps

-sleap.nn.data.edge_maps.distance_to_edge(points: tensorflow.python.framework.ops.Tensor, edge_source: tensorflow.python.framework.ops.Tensor, edge_destination: tensorflow.python.framework.ops.Tensor) tensorflow.python.framework.ops.Tensor[source]#
+sleap.nn.data.edge_maps.distance_to_edge(points: tensorflow.python.framework.ops.Tensor, edge_source: tensorflow.python.framework.ops.Tensor, edge_destination: tensorflow.python.framework.ops.Tensor) tensorflow.python.framework.ops.Tensor[source]#

Compute pairwise distance between points and undirected edges.

Parameters
@@ -451,7 +450,7 @@

sleap.nn.data.edge_maps

-sleap.nn.data.edge_maps.get_edge_points(instances: tensorflow.python.framework.ops.Tensor, edge_inds: tensorflow.python.framework.ops.Tensor) Tuple[tensorflow.python.framework.ops.Tensor, tensorflow.python.framework.ops.Tensor][source]#
+sleap.nn.data.edge_maps.get_edge_points(instances: tensorflow.python.framework.ops.Tensor, edge_inds: tensorflow.python.framework.ops.Tensor) Tuple[tensorflow.python.framework.ops.Tensor, tensorflow.python.framework.ops.Tensor][source]#

Return the points in each instance that form a directed graph.

Parameters
@@ -475,7 +474,7 @@

sleap.nn.data.edge_maps

-sleap.nn.data.edge_maps.make_edge_maps(xv: tensorflow.python.framework.ops.Tensor, yv: tensorflow.python.framework.ops.Tensor, edge_source: tensorflow.python.framework.ops.Tensor, edge_destination: tensorflow.python.framework.ops.Tensor, sigma: float) tensorflow.python.framework.ops.Tensor[source]#
+sleap.nn.data.edge_maps.make_edge_maps(xv: tensorflow.python.framework.ops.Tensor, yv: tensorflow.python.framework.ops.Tensor, edge_source: tensorflow.python.framework.ops.Tensor, edge_destination: tensorflow.python.framework.ops.Tensor, sigma: float) tensorflow.python.framework.ops.Tensor[source]#

Generate confidence maps for a set of undirected edges.

Parameters
@@ -505,7 +504,7 @@

sleap.nn.data.edge_maps

-sleap.nn.data.edge_maps.make_multi_pafs(xv: tensorflow.python.framework.ops.Tensor, yv: tensorflow.python.framework.ops.Tensor, edge_sources: tensorflow.python.framework.ops.Tensor, edge_destinations: tensorflow.python.framework.ops.Tensor, sigma: float) tensorflow.python.framework.ops.Tensor[source]#
+sleap.nn.data.edge_maps.make_multi_pafs(xv: tensorflow.python.framework.ops.Tensor, yv: tensorflow.python.framework.ops.Tensor, edge_sources: tensorflow.python.framework.ops.Tensor, edge_destinations: tensorflow.python.framework.ops.Tensor, sigma: float) tensorflow.python.framework.ops.Tensor[source]#

Make multiple instance PAFs with max reduction.

Parameters
@@ -536,7 +535,7 @@

sleap.nn.data.edge_maps

-sleap.nn.data.edge_maps.make_pafs(xv: tensorflow.python.framework.ops.Tensor, yv: tensorflow.python.framework.ops.Tensor, edge_source: tensorflow.python.framework.ops.Tensor, edge_destination: tensorflow.python.framework.ops.Tensor, sigma: float) tensorflow.python.framework.ops.Tensor[source]#
+sleap.nn.data.edge_maps.make_pafs(xv: tensorflow.python.framework.ops.Tensor, yv: tensorflow.python.framework.ops.Tensor, edge_source: tensorflow.python.framework.ops.Tensor, edge_destination: tensorflow.python.framework.ops.Tensor, sigma: float) tensorflow.python.framework.ops.Tensor[source]#

Generate part affinity fields for a set of directed edges.

Parameters
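A simplified sketch of a single-edge PAF: each pixel near the edge stores the edge's unit direction vector, weighted by a Gaussian on the perpendicular distance to the (infinite) edge line. The real implementation also limits the field to the segment between source and destination:

```python
import tensorflow as tf

def make_paf_sketch(xv, yv, src, dst, sigma):
    # src, dst: (2,) points in (x, y).
    direction = (dst - src) / (tf.norm(dst - src) + 1e-8)
    xx, yy = tf.meshgrid(xv, yv)                      # (grid_height, grid_width)
    rel = tf.stack([xx - src[0], yy - src[1]], axis=-1)
    t = tf.tensordot(rel, direction, axes=1)          # projection along the edge
    dist = tf.norm(rel - t[..., None] * direction, axis=-1)
    weight = tf.exp(-(dist ** 2) / (2 * sigma ** 2))  # soft mask around the edge
    return weight[..., None] * direction              # (height, width, 2)
```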
diff --git a/develop/api/sleap.nn.data.general.html b/develop/api/sleap.nn.data.general.html
index 9f56fc0ef..488dc8cc3 100644
--- a/develop/api/sleap.nn.data.general.html
+++ b/develop/api/sleap.nn.data.general.html
@@ -9,7 +9,7 @@
- sleap.nn.data.general — SLEAP (v1.4.1a1)
+ sleap.nn.data.general — SLEAP (v1.4.1a2)
@@ -34,7 +34,6 @@
-
@@ -323,7 +322,7 @@

sleap.nn.data.general

General purpose transformers for common pipeline processing tasks.

-class sleap.nn.data.general.KeyDeviceMover(keys: List[str] = NOTHING, device_name: str = '/cpu:0')[source]#
+class sleap.nn.data.general.KeyDeviceMover(keys: List[str] = NOTHING, device_name: str = '/cpu:0')[source]#

Transformer for moving example keys to a device.

@@ -339,7 +338,7 @@

sleap.nn.data.general

-transform_dataset(input_ds: tensorflow.python.data.ops.dataset_ops.DatasetV2) tensorflow.python.data.ops.dataset_ops.DatasetV2[source]#
+transform_dataset(input_ds: tensorflow.python.data.ops.dataset_ops.DatasetV2) tensorflow.python.data.ops.dataset_ops.DatasetV2[source]#

Create a dataset whose selected keys are moved to the specified device (CPU by default).

@@ -347,7 +346,7 @@

sleap.nn.data.general

-class sleap.nn.data.general.KeyFilter(keep_keys: List[str] = NOTHING)[source]#
+class sleap.nn.data.general.KeyFilter(keep_keys: List[str] = NOTHING)[source]#

Transformer for filtering example keys.

@@ -363,7 +362,7 @@

sleap.nn.data.general

-transform_dataset(input_ds: tensorflow.python.data.ops.dataset_ops.DatasetV2) tensorflow.python.data.ops.dataset_ops.DatasetV2[source]#
+transform_dataset(input_ds: tensorflow.python.data.ops.dataset_ops.DatasetV2) tensorflow.python.data.ops.dataset_ops.DatasetV2[source]#

Create a dataset whose examples contain only the retained keys.

@@ -371,7 +370,7 @@

sleap.nn.data.general

-class sleap.nn.data.general.KeyRenamer(old_key_names: List[str] = NOTHING, new_key_names: List[str] = NOTHING, drop_old: bool = True)[source]#
+class sleap.nn.data.general.KeyRenamer(old_key_names: List[str] = NOTHING, new_key_names: List[str] = NOTHING, drop_old: bool = True)[source]#

Transformer for renaming example keys.

@@ -387,7 +386,7 @@

sleap.nn.data.general

-transform_dataset(input_ds: tensorflow.python.data.ops.dataset_ops.DatasetV2) tensorflow.python.data.ops.dataset_ops.DatasetV2[source]#
+transform_dataset(input_ds: tensorflow.python.data.ops.dataset_ops.DatasetV2) tensorflow.python.data.ops.dataset_ops.DatasetV2[source]#

Create a dataset that contains examples with renamed keys.

@@ -395,7 +394,7 @@

sleap.nn.data.general

-class sleap.nn.data.general.LambdaMap(func: Callable[[Dict[str, tensorflow.python.framework.ops.Tensor]], Dict[str, tensorflow.python.framework.ops.Tensor]], input_key_names: List[str] = NOTHING, output_key_names: List[str] = NOTHING)[source]#
+class sleap.nn.data.general.LambdaMap(func: Callable[[Dict[str, tensorflow.python.framework.ops.Tensor]], Dict[str, tensorflow.python.framework.ops.Tensor]], input_key_names: List[str] = NOTHING, output_key_names: List[str] = NOTHING)[source]#

Transformer for mapping an arbitrary function to the dataset.
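Functionally this wraps `tf.data.Dataset.map`; a minimal sketch with hypothetical key names:

```python
import tensorflow as tf

ds = tf.data.Dataset.from_tensor_slices({"image": tf.zeros([4, 8, 8, 1])})

def add_flipped(example):
    example["image_flipped"] = tf.image.flip_left_right(example["image"])
    return example

ds = ds.map(add_flipped)
```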

@@ -446,7 +445,7 @@

sleap.nn.data.general

-transform_dataset(input_ds: tensorflow.python.data.ops.dataset_ops.DatasetV2) tensorflow.python.data.ops.dataset_ops.DatasetV2[source]#
+transform_dataset(input_ds: tensorflow.python.data.ops.dataset_ops.DatasetV2) tensorflow.python.data.ops.dataset_ops.DatasetV2[source]#

Create a dataset that contains transformed data.

diff --git a/develop/api/sleap.nn.data.grouping.html b/develop/api/sleap.nn.data.grouping.html
index 64c0f32a4..5cfbc011a 100644
--- a/develop/api/sleap.nn.data.grouping.html
+++ b/develop/api/sleap.nn.data.grouping.html
@@ -9,7 +9,7 @@
- sleap.nn.data.grouping — SLEAP (v1.4.1a1)
+ sleap.nn.data.grouping — SLEAP (v1.4.1a2)
@@ -34,7 +34,6 @@
-
@@ -323,14 +322,14 @@

sleap.nn.data.grouping

Group inference results (“examples”) by frame.

-sleap.nn.data.grouping.group_examples(examples)[source]#
+sleap.nn.data.grouping.group_examples(examples)[source]#

Group examples into a dictionary.

The key is (video_ind, frame_ind); the value is the list of examples matching that key.

-sleap.nn.data.grouping.group_examples_iter(examples)[source]#
+sleap.nn.data.grouping.group_examples_iter(examples)[source]#

Iterator which groups examples.

Yields ((video_ind, frame_ind), list of examples matching vid/frame).
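A plain-Python sketch of the same grouping logic (the `"video_ind"`/`"frame_ind"` key names are taken from the description above):

```python
from collections import defaultdict

def group_examples_sketch(examples):
    # Bucket inference examples by their (video, frame) identity.
    grouped = defaultdict(list)
    for example in examples:
        key = (int(example["video_ind"]), int(example["frame_ind"]))
        grouped[key].append(example)
    return dict(grouped)
```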

diff --git a/develop/api/sleap.nn.data.identity.html b/develop/api/sleap.nn.data.identity.html
index c94c658b3..cf1e4dc79 100644
--- a/develop/api/sleap.nn.data.identity.html
+++ b/develop/api/sleap.nn.data.identity.html
@@ -9,7 +9,7 @@
- sleap.nn.data.identity — SLEAP (v1.4.1a1)
+ sleap.nn.data.identity — SLEAP (v1.4.1a2)
@@ -34,7 +34,6 @@
-
@@ -323,7 +322,7 @@

sleap.nn.data.identity

Utilities for generating data for track identity models.

-class sleap.nn.data.identity.ClassMapGenerator(sigma: float = 2.0, output_stride: int = 1, centroids: bool = False, class_map_threshold: float = 0.2)[source]#
+class sleap.nn.data.identity.ClassMapGenerator(sigma: float = 2.0, output_stride: int = 1, centroids: bool = False, class_map_threshold: float = 0.2)[source]#

Transformer to generate class maps from track indices.

@@ -390,7 +389,7 @@

sleap.nn.data.identity

-transform_dataset(input_ds: tensorflow.python.data.ops.dataset_ops.DatasetV2) tensorflow.python.data.ops.dataset_ops.DatasetV2[source]#
+transform_dataset(input_ds: tensorflow.python.data.ops.dataset_ops.DatasetV2) tensorflow.python.data.ops.dataset_ops.DatasetV2[source]#

Create a dataset that contains the generated class identity maps.

Parameters
@@ -409,7 +408,7 @@

sleap.nn.data.identity

-class sleap.nn.data.identity.ClassVectorGenerator[source]#
+class sleap.nn.data.identity.ClassVectorGenerator[source]#

Transformer to generate class probability vectors from track indices.

@@ -425,7 +424,7 @@

sleap.nn.data.identity

-transform_dataset(input_ds: tensorflow.python.data.ops.dataset_ops.DatasetV2) tensorflow.python.data.ops.dataset_ops.DatasetV2[source]#
+transform_dataset(input_ds: tensorflow.python.data.ops.dataset_ops.DatasetV2) tensorflow.python.data.ops.dataset_ops.DatasetV2[source]#

Create a dataset that contains the generated class identity vectors.

Parameters
@@ -443,7 +442,7 @@

sleap.nn.data.identity

-sleap.nn.data.identity.make_class_maps(confmaps: tensorflow.python.framework.ops.Tensor, class_inds: tensorflow.python.framework.ops.Tensor, n_classes: int, threshold: float = 0.2) tensorflow.python.framework.ops.Tensor[source]#
+sleap.nn.data.identity.make_class_maps(confmaps: tensorflow.python.framework.ops.Tensor, class_inds: tensorflow.python.framework.ops.Tensor, n_classes: int, threshold: float = 0.2) tensorflow.python.framework.ops.Tensor[source]#

Generate identity class maps using instance-wise confidence maps.

This is useful for making class maps defined on local neighborhoods around the peaks.

@@ -476,7 +475,7 @@

sleap.nn.data.identity

-sleap.nn.data.identity.make_class_vectors(class_inds: tensorflow.python.framework.ops.Tensor, n_classes: int) tensorflow.python.framework.ops.Tensor[source]#
+sleap.nn.data.identity.make_class_vectors(class_inds: tensorflow.python.framework.ops.Tensor, n_classes: int) tensorflow.python.framework.ops.Tensor[source]#

Make binary class vectors from class indices.

Parameters
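This amounts to one-hot encoding; a minimal sketch:

```python
import tensorflow as tf

class_inds = tf.constant([0, 2, 1])
vectors = tf.one_hot(class_inds, depth=3, dtype=tf.int32)
# [[1, 0, 0], [0, 0, 1], [0, 1, 0]]
```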
diff --git a/develop/api/sleap.nn.data.inference.html b/develop/api/sleap.nn.data.inference.html
index fcc3543f8..7103ef893 100644
--- a/develop/api/sleap.nn.data.inference.html
+++ b/develop/api/sleap.nn.data.inference.html
@@ -9,7 +9,7 @@
- sleap.nn.data.inference — SLEAP (v1.4.1a1)
+ sleap.nn.data.inference — SLEAP (v1.4.1a2)
@@ -34,7 +34,6 @@
-
@@ -323,31 +322,31 @@

sleap.nn.data.inference

Transformers for performing inference.

-class sleap.nn.data.inference.GlobalPeakFinder(confmaps_key: str = 'predicted_instance_confidence_maps', confmaps_stride: int = 1, peak_threshold: float = 0.2, peaks_key: str = 'predicted_center_instance_points', peak_vals_key: str = 'predicted_center_instance_confidences', keep_confmaps: bool = True, device_name: Optional[str] = None, integral: bool = True, integral_patch_size: int = 5)[source]#
+class sleap.nn.data.inference.GlobalPeakFinder(confmaps_key: str = 'predicted_instance_confidence_maps', confmaps_stride: int = 1, peak_threshold: float = 0.2, peaks_key: str = 'predicted_center_instance_points', peak_vals_key: str = 'predicted_center_instance_confidences', keep_confmaps: bool = True, device_name: Optional[str] = None, integral: bool = True, integral_patch_size: int = 5)[source]#

Global peak finding transformer.
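A bare-bones sketch of global peak finding (one peak per channel via spatial argmax); the transformer additionally supports integral refinement, which this omits:

```python
import tensorflow as tf

def find_global_peaks_sketch(confmaps, threshold=0.2):
    # confmaps: (height, width, n_nodes).
    width = tf.cast(tf.shape(confmaps)[1], tf.int64)
    flat = tf.reshape(confmaps, [-1, tf.shape(confmaps)[2]])
    inds = tf.argmax(flat, axis=0)
    vals = tf.reduce_max(flat, axis=0)
    peaks = tf.cast(tf.stack([inds % width, inds // width], axis=-1), tf.float32)
    # Replace sub-threshold peaks with NaN coordinates.
    return tf.where(vals[:, None] > threshold, peaks, float("nan")), vals
```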

-class sleap.nn.data.inference.KerasModelPredictor(keras_model: keras.engine.training.Model, model_input_keys: Any = 'instance_image', model_output_keys: Any = 'predicted_instance_confidence_maps', device_name: Optional[str] = None)[source]#
+class sleap.nn.data.inference.KerasModelPredictor(keras_model: keras.engine.training.Model, model_input_keys: Any = 'instance_image', model_output_keys: Any = 'predicted_instance_confidence_maps', device_name: Optional[str] = None)[source]#

Transformer for performing tf.keras model inference.

-class sleap.nn.data.inference.LocalPeakFinder(confmaps_key: str = 'centroid_confidence_maps', confmaps_stride: int = 1, peak_threshold: float = 0.2, peaks_key: str = 'predicted_centroids', peak_vals_key: str = 'predicted_centroid_confidences', peak_sample_inds_key: str = 'predicted_centroid_sample_inds', peak_channel_inds_key: str = 'predicted_centroid_channel_inds', keep_confmaps: bool = True, device_name: Optional[str] = None, integral: bool = True)[source]#
+class sleap.nn.data.inference.LocalPeakFinder(confmaps_key: str = 'centroid_confidence_maps', confmaps_stride: int = 1, peak_threshold: float = 0.2, peaks_key: str = 'predicted_centroids', peak_vals_key: str = 'predicted_centroid_confidences', peak_sample_inds_key: str = 'predicted_centroid_sample_inds', peak_channel_inds_key: str = 'predicted_centroid_channel_inds', keep_confmaps: bool = True, device_name: Optional[str] = None, integral: bool = True)[source]#

Local peak finding transformer.

-class sleap.nn.data.inference.MockGlobalPeakFinder(all_peaks_in_key: str = 'instances', peaks_out_key: str = 'predicted_center_instance_points', peak_vals_key: str = 'predicted_center_instance_confidences', keep_confmaps: bool = True, confmaps_in_key: str = 'instance_confidence_maps', confmaps_out_key: str = 'predicted_instance_confidence_maps')[source]#
+class sleap.nn.data.inference.MockGlobalPeakFinder(all_peaks_in_key: str = 'instances', peaks_out_key: str = 'predicted_center_instance_points', peak_vals_key: str = 'predicted_center_instance_confidences', keep_confmaps: bool = True, confmaps_in_key: str = 'instance_confidence_maps', confmaps_out_key: str = 'predicted_instance_confidence_maps')[source]#

Transformer that mimics GlobalPeakFinder but passes ground truth data.

-class sleap.nn.data.inference.PredictedCenterInstanceNormalizer(centroid_key: str = 'centroid', centroid_confidence_key: str = 'centroid_confidence', peaks_key: str = 'predicted_center_instance_points', peak_confidences_key: str = 'predicted_center_instance_confidences', new_centroid_key: str = 'predicted_centroid', new_centroid_confidence_key: str = 'predicted_centroid_confidence', new_peaks_key: str = 'predicted_instance', new_peak_confidences_key: str = 'predicted_instance_confidences')[source]#
+class sleap.nn.data.inference.PredictedCenterInstanceNormalizer(centroid_key: str = 'centroid', centroid_confidence_key: str = 'centroid_confidence', peaks_key: str = 'predicted_center_instance_points', peak_confidences_key: str = 'predicted_center_instance_confidences', new_centroid_key: str = 'predicted_centroid', new_centroid_confidence_key: str = 'predicted_centroid_confidence', new_peaks_key: str = 'predicted_instance', new_peak_confidences_key: str = 'predicted_instance_confidences')[source]#

Transformer for adjusting centered instance coordinates.

@@ -363,7 +362,7 @@

sleap.nn.data.inference

-transform_dataset(input_ds: tensorflow.python.data.ops.dataset_ops.DatasetV2) tensorflow.python.data.ops.dataset_ops.DatasetV2[source]#
+transform_dataset(input_ds: tensorflow.python.data.ops.dataset_ops.DatasetV2) tensorflow.python.data.ops.dataset_ops.DatasetV2[source]#

Create a dataset with centered instance coordinates adjusted back to full-image coordinates.

diff --git a/develop/api/sleap.nn.data.instance_centroids.html b/develop/api/sleap.nn.data.instance_centroids.html
index d426718d0..53fd190b2 100644
--- a/develop/api/sleap.nn.data.instance_centroids.html
+++ b/develop/api/sleap.nn.data.instance_centroids.html
@@ -9,7 +9,7 @@
- sleap.nn.data.instance_centroids — SLEAP (v1.4.1a1)
+ sleap.nn.data.instance_centroids — SLEAP (v1.4.1a2)
@@ -34,7 +34,6 @@
-
@@ -323,7 +322,7 @@

sleap.nn.data.instance_centroids

Transformers for finding instance centroids.

-class sleap.nn.data.instance_centroids.InstanceCentroidFinder(center_on_anchor_part: bool = False, anchor_part_names: Optional[Any] = None, skeletons: Optional[Any] = None, instances_key: str = 'instances')[source]#
+class sleap.nn.data.instance_centroids.InstanceCentroidFinder(center_on_anchor_part: bool = False, anchor_part_names: Optional[Any] = None, skeletons: Optional[Any] = None, instances_key: str = 'instances')[source]#

Data transformer to add centroid information to instances.

This is useful as a transformation to data streams that will be used in centroid networks or for instance cropping.

@@ -380,7 +379,7 @@

sleap.nn.data.instance_centroids

-classmethod from_config(config: sleap.nn.config.data.InstanceCroppingConfig, skeletons: Optional[Union[sleap.skeleton.Skeleton, List[sleap.skeleton.Skeleton]]] = None) sleap.nn.data.instance_centroids.InstanceCentroidFinder[source]#
+classmethod from_config(config: sleap.nn.config.data.InstanceCroppingConfig, skeletons: Optional[Union[sleap.skeleton.Skeleton, List[sleap.skeleton.Skeleton]]] = None) sleap.nn.data.instance_centroids.InstanceCentroidFinder[source]#

Build an instance of this class from its configuration options.

Parameters
@@ -417,7 +416,7 @@

sleap.nn.data.instance_centroids

-transform_dataset(ds_input: tensorflow.python.data.ops.dataset_ops.DatasetV2) tensorflow.python.data.ops.dataset_ops.DatasetV2[source]#
+transform_dataset(ds_input: tensorflow.python.data.ops.dataset_ops.DatasetV2) tensorflow.python.data.ops.dataset_ops.DatasetV2[source]#

Create a dataset that contains centroids computed from the inputs.

Parameters
@@ -439,7 +438,7 @@

sleap.nn.data.instance_centroids

-sleap.nn.data.instance_centroids.find_points_bbox_midpoint(points: tensorflow.python.framework.ops.Tensor) tensorflow.python.framework.ops.Tensor[source]#
+sleap.nn.data.instance_centroids.find_points_bbox_midpoint(points: tensorflow.python.framework.ops.Tensor) tensorflow.python.framework.ops.Tensor[source]#

Find the midpoint of the bounding box of a set of points.

Parameters
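The computation is just the center of the min/max corners; a sketch that tolerates NaNs for missing nodes (an assumption about the input convention):

```python
import tensorflow as tf

def bbox_midpoint_sketch(points):
    # points: (n_points, 2). NaNs must not win the min/max reductions.
    is_nan = tf.math.is_nan(points)
    lo = tf.where(is_nan, tf.fill(tf.shape(points), float("inf")), points)
    hi = tf.where(is_nan, tf.fill(tf.shape(points), float("-inf")), points)
    return (tf.reduce_min(lo, axis=0) + tf.reduce_max(hi, axis=0)) / 2
```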
@@ -468,7 +467,7 @@

sleap.nn.data.instance_centroids

-sleap.nn.data.instance_centroids.get_instance_anchors(instances: tensorflow.python.framework.ops.Tensor, anchor_inds: tensorflow.python.framework.ops.Tensor) tensorflow.python.framework.ops.Tensor[source]#
+sleap.nn.data.instance_centroids.get_instance_anchors(instances: tensorflow.python.framework.ops.Tensor, anchor_inds: tensorflow.python.framework.ops.Tensor) tensorflow.python.framework.ops.Tensor[source]#

Gather the anchor points of a set of instances.

Parameters
diff --git a/develop/api/sleap.nn.data.instance_cropping.html b/develop/api/sleap.nn.data.instance_cropping.html
index 6bc26609b..20f0fcb30 100644
--- a/develop/api/sleap.nn.data.instance_cropping.html
+++ b/develop/api/sleap.nn.data.instance_cropping.html
@@ -9,7 +9,7 @@
- sleap.nn.data.instance_cropping — SLEAP (v1.4.1a1)
+ sleap.nn.data.instance_cropping — SLEAP (v1.4.1a2)
@@ -34,7 +34,6 @@
-
@@ -323,7 +322,7 @@

sleap.nn.data.instance_cropping

Transformers for cropping instances for topdown processing.

-class sleap.nn.data.instance_cropping.InstanceCropper(crop_width: int, crop_height: int, keep_full_image: bool = False, mock_centroid_confidence: bool = False, unbatch: bool = True, image_key: str = 'image', instances_key: str = 'instances', centroids_key: str = 'centroids')[source]#
+class sleap.nn.data.instance_cropping.InstanceCropper(crop_width: int, crop_height: int, keep_full_image: bool = False, mock_centroid_confidence: bool = False, unbatch: bool = True, image_key: str = 'image', instances_key: str = 'instances', centroids_key: str = 'centroids')[source]#

Data transformer to crop and generate individual examples for instances.

This generates datasets that are instance cropped for topdown processing.

@@ -427,7 +426,7 @@

sleap.nn.data.instance_cropping

-classmethod from_config(config: sleap.nn.config.data.InstanceCroppingConfig, crop_size: Optional[int] = None) sleap.nn.data.instance_cropping.InstanceCropper[source]#
+classmethod from_config(config: sleap.nn.config.data.InstanceCroppingConfig, crop_size: Optional[int] = None) sleap.nn.data.instance_cropping.InstanceCropper[source]#

Build an instance of this class from its configuration options.

Parameters
@@ -462,7 +461,7 @@

sleap.nn.data.instance_cropping

-transform_dataset(input_ds: tensorflow.python.data.ops.dataset_ops.DatasetV2) tensorflow.python.data.ops.dataset_ops.DatasetV2[source]#
+transform_dataset(input_ds: tensorflow.python.data.ops.dataset_ops.DatasetV2) tensorflow.python.data.ops.dataset_ops.DatasetV2[source]#

Create a dataset that contains instance cropped data.

Parameters
@@ -535,7 +534,7 @@

sleap.nn.data.instance_cropping

-sleap.nn.data.instance_cropping.crop_bboxes(image: tensorflow.python.framework.ops.Tensor, bboxes: tensorflow.python.framework.ops.Tensor) tensorflow.python.framework.ops.Tensor[source]#
+sleap.nn.data.instance_cropping.crop_bboxes(image: tensorflow.python.framework.ops.Tensor, bboxes: tensorflow.python.framework.ops.Tensor) tensorflow.python.framework.ops.Tensor[source]#

Crop bounding boxes from an image.

This method serves as a convenience method for specifying the arguments of tf.image.crop_and_resize, becoming especially useful in the case of multiple bounding boxes per image.

@@ -569,7 +568,7 @@

sleap.nn.data.instance_cropping

-sleap.nn.data.instance_cropping.find_instance_crop_size(labels: sleap.io.dataset.Labels, padding: int = 0, maximum_stride: int = 2, input_scaling: float = 1.0, min_crop_size: Optional[int] = None) int[source]#
+sleap.nn.data.instance_cropping.find_instance_crop_size(labels: sleap.io.dataset.Labels, padding: int = 0, maximum_stride: int = 2, input_scaling: float = 1.0, min_crop_size: Optional[int] = None) int[source]#

Compute the size of the largest instance bounding box from labels.

Parameters
@@ -596,7 +595,7 @@

sleap.nn.data.instance_cropping

-sleap.nn.data.instance_cropping.make_centered_bboxes(centroids: tensorflow.python.framework.ops.Tensor, box_height: int, box_width: int) tensorflow.python.framework.ops.Tensor[source]#
+sleap.nn.data.instance_cropping.make_centered_bboxes(centroids: tensorflow.python.framework.ops.Tensor, box_height: int, box_width: int) tensorflow.python.framework.ops.Tensor[source]#

Generate bounding boxes centered on a set of centroid coordinates.

Parameters
@@ -642,7 +641,7 @@

sleap.nn.data.instance_cropping

-sleap.nn.data.instance_cropping.normalize_bboxes(bboxes: tensorflow.python.framework.ops.Tensor, image_height: int, image_width: int) tensorflow.python.framework.ops.Tensor[source]#
+sleap.nn.data.instance_cropping.normalize_bboxes(bboxes: tensorflow.python.framework.ops.Tensor, image_height: int, image_width: int) tensorflow.python.framework.ops.Tensor[source]#

Normalize bounding box coordinates to the range [0, 1].

This is useful for transforming points for TensorFlow operations that require normalized image coordinates.
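Putting the cropping helpers together, a hedged end-to-end sketch: build centered boxes as (y1, x1, y2, x2), normalize by (height - 1, width - 1), and feed them to tf.image.crop_and_resize. Exact corner conventions in SLEAP may differ by a pixel:

```python
import tensorflow as tf

image = tf.zeros([1, 64, 64, 1])                       # batch of one image
centroids = tf.constant([[10.0, 20.0], [40.0, 32.0]])  # (x, y) centers
h = w = 16.0
x, y = centroids[:, 0], centroids[:, 1]
boxes = tf.stack([y - h / 2, x - w / 2, y + h / 2, x + w / 2], axis=-1)
norm = boxes / (64.0 - 1.0)                            # normalize to [0, 1]
crops = tf.image.crop_and_resize(
    image, norm, box_indices=tf.zeros([2], tf.int32), crop_size=[16, 16]
)  # (2, 16, 16, 1)
```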

@@ -667,7 +666,7 @@

sleap.nn.data.instance_cropping

-sleap.nn.data.instance_cropping.unnormalize_bboxes(normalized_bboxes: tensorflow.python.framework.ops.Tensor, image_height: int, image_width: int) tensorflow.python.framework.ops.Tensor[source]#
+sleap.nn.data.instance_cropping.unnormalize_bboxes(normalized_bboxes: tensorflow.python.framework.ops.Tensor, image_height: int, image_width: int) tensorflow.python.framework.ops.Tensor[source]#

Convert bounding box coordinates in the range [0, 1] to absolute coordinates.

Parameters
diff --git a/develop/api/sleap.nn.data.normalization.html b/develop/api/sleap.nn.data.normalization.html
index 04feebc7c..d9a645c0a 100644
--- a/develop/api/sleap.nn.data.normalization.html
+++ b/develop/api/sleap.nn.data.normalization.html
@@ -9,7 +9,7 @@
- sleap.nn.data.normalization — SLEAP (v1.4.1a1)
+ sleap.nn.data.normalization — SLEAP (v1.4.1a2)
@@ -34,7 +34,6 @@
-
@@ -323,7 +322,7 @@

sleap.nn.data.normalization

Transformers for normalizing data formats.

-class sleap.nn.data.normalization.Normalizer(image_key: str = 'image', ensure_float: bool = True, ensure_rgb: bool = False, ensure_grayscale: bool = False, imagenet_mode: Optional[str] = None)[source]#
+class sleap.nn.data.normalization.Normalizer(image_key: str = 'image', ensure_float: bool = True, ensure_rgb: bool = False, ensure_grayscale: bool = False, imagenet_mode: Optional[str] = None)[source]#

Data transformer to normalize images.

This is useful as a transformation to data streams that require specific data ranges such as for pretrained models with specific preprocessing constraints.

@@ -395,7 +394,7 @@

sleap.nn.data.normalization

-classmethod from_config(config: sleap.nn.config.data.PreprocessingConfig, image_key: str = 'image') sleap.nn.data.normalization.Normalizer[source]#
+classmethod from_config(config: sleap.nn.config.data.PreprocessingConfig, image_key: str = 'image') sleap.nn.data.normalization.Normalizer[source]#

Build an instance of this class from its configuration options.

Parameters
@@ -424,7 +423,7 @@

sleap.nn.data.normalization

-transform_dataset(ds_input: tensorflow.python.data.ops.dataset_ops.DatasetV2) tensorflow.python.data.ops.dataset_ops.DatasetV2[source]#
+transform_dataset(ds_input: tensorflow.python.data.ops.dataset_ops.DatasetV2) tensorflow.python.data.ops.dataset_ops.DatasetV2[source]#

Create a dataset that contains normalized images.

Parameters
@@ -441,7 +440,7 @@

sleap.nn.data.normalization

-sleap.nn.data.normalization.convert_rgb_to_bgr(image: tensorflow.python.framework.ops.Tensor) tensorflow.python.framework.ops.Tensor[source]#
+sleap.nn.data.normalization.convert_rgb_to_bgr(image: tensorflow.python.framework.ops.Tensor) tensorflow.python.framework.ops.Tensor[source]#

Convert an RGB image to BGR format by reversing the channel order.

Parameters
@@ -456,7 +455,7 @@

sleap.nn.data.normalization

-sleap.nn.data.normalization.ensure_float(image: tensorflow.python.framework.ops.Tensor) tensorflow.python.framework.ops.Tensor[source]#
+sleap.nn.data.normalization.ensure_float(image: tensorflow.python.framework.ops.Tensor) tensorflow.python.framework.ops.Tensor[source]#

Convert the image to a tf.float32.

Parameters
@@ -475,7 +474,7 @@

sleap.nn.data.normalization

-sleap.nn.data.normalization.ensure_grayscale(image: tensorflow.python.framework.ops.Tensor) tensorflow.python.framework.ops.Tensor[source]#
+sleap.nn.data.normalization.ensure_grayscale(image: tensorflow.python.framework.ops.Tensor) tensorflow.python.framework.ops.Tensor[source]#

Convert image to grayscale if in RGB format.

Parameters
@@ -491,7 +490,7 @@

sleap.nn.data.normalization

-sleap.nn.data.normalization.ensure_int(image: tensorflow.python.framework.ops.Tensor) tensorflow.python.framework.ops.Tensor[source]#
+sleap.nn.data.normalization.ensure_int(image: tensorflow.python.framework.ops.Tensor) tensorflow.python.framework.ops.Tensor[source]#

Convert the image to a tf.uint8.

If the image is a floating dtype, then converts and scales data from [0, 1] to [0, 255] as needed. Otherwise, returns image as is.

@@ -510,7 +509,7 @@

sleap.nn.data.normalization

-sleap.nn.data.normalization.ensure_min_image_rank(image: tensorflow.python.framework.ops.Tensor) tensorflow.python.framework.ops.Tensor[source]#
+sleap.nn.data.normalization.ensure_min_image_rank(image: tensorflow.python.framework.ops.Tensor) tensorflow.python.framework.ops.Tensor[source]#

Expand the image to a minimum rank of 3 by adding single dimensions.

Parameters
@@ -530,7 +529,7 @@

sleap.nn.data.normalization

-sleap.nn.data.normalization.ensure_rgb(image: tensorflow.python.framework.ops.Tensor) tensorflow.python.framework.ops.Tensor[source]#
+sleap.nn.data.normalization.ensure_rgb(image: tensorflow.python.framework.ops.Tensor) tensorflow.python.framework.ops.Tensor[source]#

Convert image to RGB if in grayscale format.

Parameters
@@ -546,7 +545,7 @@

sleap.nn.data.normalization

-sleap.nn.data.normalization.scale_image_range(image: tensorflow.python.framework.ops.Tensor, min_val: float, max_val: float) tensorflow.python.framework.ops.Tensor[source]#
+sleap.nn.data.normalization.scale_image_range(image: tensorflow.python.framework.ops.Tensor, min_val: float, max_val: float) tensorflow.python.framework.ops.Tensor[source]#

Scale the range of image values.

Parameters
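Assuming an input already in [0, 1], the mapping is a single affine transform; a minimal sketch:

```python
import tensorflow as tf

def scale_image_range_sketch(image, min_val, max_val):
    # Linearly map values in [0, 1] to [min_val, max_val].
    return image * (max_val - min_val) + min_val

scaled = scale_image_range_sketch(tf.random.uniform([8, 8, 1]), -1.0, 1.0)
```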
@@ -566,7 +565,7 @@

sleap.nn.data.normalization

-sleap.nn.data.normalization.scale_to_imagenet_caffe_mode(image: tensorflow.python.framework.ops.Tensor) tensorflow.python.framework.ops.Tensor[source]#
+sleap.nn.data.normalization.scale_to_imagenet_caffe_mode(image: tensorflow.python.framework.ops.Tensor) tensorflow.python.framework.ops.Tensor[source]#

Scale images according to the “caffe” preprocessing mode.

This applies the preprocessing operations implemented in tf.keras.applications for models pretrained on ImageNet.

@@ -604,7 +603,7 @@

sleap.nn.data.normalization

-sleap.nn.data.normalization.scale_to_imagenet_tf_mode(image: tensorflow.python.framework.ops.Tensor) tensorflow.python.framework.ops.Tensor[source]#
+sleap.nn.data.normalization.scale_to_imagenet_tf_mode(image: tensorflow.python.framework.ops.Tensor) tensorflow.python.framework.ops.Tensor[source]#

Scale images according to the “tf” preprocessing mode.

This applies the preprocessing operations implemented in tf.keras.applications for models pretrained on ImageNet.

@@ -639,7 +638,7 @@

sleap.nn.data.normalization

-sleap.nn.data.normalization.scale_to_imagenet_torch_mode(image: tensorflow.python.framework.ops.Tensor) tensorflow.python.framework.ops.Tensor[source]#
+sleap.nn.data.normalization.scale_to_imagenet_torch_mode(image: tensorflow.python.framework.ops.Tensor) tensorflow.python.framework.ops.Tensor[source]#

Scale images according to the “torch” preprocessing mode.

This applies the preprocessing operations implemented in tf.keras.applications for models pretrained on ImageNet.
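For reference, a sketch of the three conventional ImageNet modes as implemented in tf.keras.applications (the constants are the standard ImageNet statistics; SLEAP's versions may assume a different input range):

```python
import tensorflow as tf

def imagenet_preprocess_sketch(image, mode):
    # image: float32 RGB in [0, 255].
    if mode == "tf":     # scale to [-1, 1]
        return image / 127.5 - 1.0
    if mode == "torch":  # scale to [0, 1], then standardize per channel
        mean = tf.constant([0.485, 0.456, 0.406])
        std = tf.constant([0.229, 0.224, 0.225])
        return (image / 255.0 - mean) / std
    if mode == "caffe":  # convert RGB -> BGR, subtract channel means
        return image[..., ::-1] - tf.constant([103.939, 116.779, 123.68])
    raise ValueError(f"unknown mode: {mode}")
```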

diff --git a/develop/api/sleap.nn.data.offset_regression.html b/develop/api/sleap.nn.data.offset_regression.html
index 95e2f6d7d..fa5d77f02 100644
--- a/develop/api/sleap.nn.data.offset_regression.html
+++ b/develop/api/sleap.nn.data.offset_regression.html
@@ -9,7 +9,7 @@
- sleap.nn.data.offset_regression — SLEAP (v1.4.1a1)
+ sleap.nn.data.offset_regression — SLEAP (v1.4.1a2)
@@ -34,7 +34,6 @@
-
@@ -323,7 +322,7 @@

sleap.nn.data.offset_regression

Utilities for creating offset regression maps.

-sleap.nn.data.offset_regression.make_offsets(points: tensorflow.python.framework.ops.Tensor, xv: tensorflow.python.framework.ops.Tensor, yv: tensorflow.python.framework.ops.Tensor, stride: int = 1) tensorflow.python.framework.ops.Tensor[source]#
+sleap.nn.data.offset_regression.make_offsets(points: tensorflow.python.framework.ops.Tensor, xv: tensorflow.python.framework.ops.Tensor, yv: tensorflow.python.framework.ops.Tensor, stride: int = 1) tensorflow.python.framework.ops.Tensor[source]#

Make point offset maps on a grid.

Parameters
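Conceptually, each grid cell stores the vector pointing from the cell to the point; a minimal sketch (ignoring the stride scaling and NaN handling of the real function):

```python
import tensorflow as tf

def make_offsets_sketch(points, xv, yv):
    # points: (n_points, 2) in (x, y); returns (height, width, n_points, 2).
    xx, yy = tf.meshgrid(xv, yv)            # (grid_height, grid_width)
    grid = tf.stack([xx, yy], axis=-1)      # (height, width, 2)
    return points - grid[:, :, None, :]     # vector from each cell to each point
```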
@@ -356,7 +355,7 @@

sleap.nn.data.offset_regression

-sleap.nn.data.offset_regression.mask_offsets(offsets: tensorflow.python.framework.ops.Tensor, confmaps: tensorflow.python.framework.ops.Tensor, threshold: float = 0.2) tensorflow.python.framework.ops.Tensor[source]#
+sleap.nn.data.offset_regression.mask_offsets(offsets: tensorflow.python.framework.ops.Tensor, confmaps: tensorflow.python.framework.ops.Tensor, threshold: float = 0.2) tensorflow.python.framework.ops.Tensor[source]#

Mask offset maps using a confidence map threshold.

This is useful for restricting offset maps to local neighborhoods around the peaks.
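The masking itself is one broadcasted multiply; a minimal sketch:

```python
import tensorflow as tf

def mask_offsets_sketch(offsets, confmaps, threshold=0.2):
    # offsets: (height, width, n_nodes, 2); confmaps: (height, width, n_nodes).
    mask = tf.cast(confmaps > threshold, offsets.dtype)
    return offsets * mask[..., None]  # zero out low-confidence locations
```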

diff --git a/develop/api/sleap.nn.data.pipelines.html b/develop/api/sleap.nn.data.pipelines.html
index 6b95b85a8..79143bd75 100644
--- a/develop/api/sleap.nn.data.pipelines.html
+++ b/develop/api/sleap.nn.data.pipelines.html
@@ -9,7 +9,7 @@
- sleap.nn.data.pipelines — SLEAP (v1.4.1a1)
+ sleap.nn.data.pipelines — SLEAP (v1.4.1a2)
@@ -34,7 +34,6 @@
-
@@ -327,7 +326,7 @@

sleap.nn.data.pipelines

well as to define training vs inference versions based on the same configurations.

-class sleap.nn.data.pipelines.BottomUpMultiClassPipeline(data_config: sleap.nn.config.data.DataConfig, optimization_config: sleap.nn.config.optimization.OptimizationConfig, confmaps_head: sleap.nn.heads.MultiInstanceConfmapsHead, class_maps_head: sleap.nn.heads.ClassMapsHead, offsets_head: Optional[sleap.nn.heads.OffsetRefinementHead] = None)[source]#
+class sleap.nn.data.pipelines.BottomUpMultiClassPipeline(data_config: sleap.nn.config.data.DataConfig, optimization_config: sleap.nn.config.optimization.OptimizationConfig, confmaps_head: sleap.nn.heads.MultiInstanceConfmapsHead, class_maps_head: sleap.nn.heads.ClassMapsHead, offsets_head: Optional[sleap.nn.heads.OffsetRefinementHead] = None)[source]#

Pipeline builder for confidence maps and class maps models.

@@ -386,7 +385,7 @@

sleap.nn.data.pipelines

-make_base_pipeline(data_provider: sleap.nn.data.pipelines.Provider) sleap.nn.data.pipelines.Pipeline[source]#
+make_base_pipeline(data_provider: sleap.nn.data.pipelines.Provider) sleap.nn.data.pipelines.Pipeline[source]#

Create base pipeline with input data only.

Parameters
@@ -401,7 +400,7 @@

sleap.nn.data.pipelines

-make_training_pipeline(data_provider: sleap.nn.data.pipelines.Provider) sleap.nn.data.pipelines.Pipeline[source]#
+make_training_pipeline(data_provider: sleap.nn.data.pipelines.Provider) sleap.nn.data.pipelines.Pipeline[source]#

Create full training pipeline.

Parameters
@@ -422,7 +421,7 @@

sleap.nn.data.pipelines

-make_viz_pipeline(data_provider: sleap.nn.data.pipelines.Provider, keras_model: keras.engine.training.Model) sleap.nn.data.pipelines.Pipeline[source]#
+make_viz_pipeline(data_provider: sleap.nn.data.pipelines.Provider, keras_model: keras.engine.training.Model) sleap.nn.data.pipelines.Pipeline[source]#

Create visualization pipeline.

Parameters
@@ -443,7 +442,7 @@

sleap.nn.data.pipelines

-class sleap.nn.data.pipelines.BottomUpPipeline(data_config: sleap.nn.config.data.DataConfig, optimization_config: sleap.nn.config.optimization.OptimizationConfig, confmaps_head: sleap.nn.heads.MultiInstanceConfmapsHead, pafs_head: sleap.nn.heads.PartAffinityFieldsHead, offsets_head: Optional[sleap.nn.heads.OffsetRefinementHead] = None)[source]#
+class sleap.nn.data.pipelines.BottomUpPipeline(data_config: sleap.nn.config.data.DataConfig, optimization_config: sleap.nn.config.optimization.OptimizationConfig, confmaps_head: sleap.nn.heads.MultiInstanceConfmapsHead, pafs_head: sleap.nn.heads.PartAffinityFieldsHead, offsets_head: Optional[sleap.nn.heads.OffsetRefinementHead] = None)[source]#

Pipeline builder for confidence maps + part affinity fields models.

@@ -502,7 +501,7 @@

sleap.nn.data.pipelines

-make_base_pipeline(data_provider: sleap.nn.data.pipelines.Provider) sleap.nn.data.pipelines.Pipeline[source]#
+make_base_pipeline(data_provider: sleap.nn.data.pipelines.Provider) sleap.nn.data.pipelines.Pipeline[source]#

Create base pipeline with input data only.

Parameters
@@ -517,7 +516,7 @@

sleap.nn.data.pipelines

-make_training_pipeline(data_provider: sleap.nn.data.pipelines.Provider) sleap.nn.data.pipelines.Pipeline[source]#
+make_training_pipeline(data_provider: sleap.nn.data.pipelines.Provider) sleap.nn.data.pipelines.Pipeline[source]#

Create full training pipeline.

Parameters
@@ -538,7 +537,7 @@

sleap.nn.data.pipelines

-make_viz_pipeline(data_provider: sleap.nn.data.pipelines.Provider, keras_model: keras.engine.training.Model) sleap.nn.data.pipelines.Pipeline[source]#
+make_viz_pipeline(data_provider: sleap.nn.data.pipelines.Provider, keras_model: keras.engine.training.Model) sleap.nn.data.pipelines.Pipeline[source]#

Create visualization pipeline.

Parameters
@@ -559,7 +558,7 @@

sleap.nn.data.pipelines

-class sleap.nn.data.pipelines.CentroidConfmapsPipeline(data_config: sleap.nn.config.data.DataConfig, optimization_config: sleap.nn.config.optimization.OptimizationConfig, centroid_confmap_head: sleap.nn.heads.CentroidConfmapsHead, offsets_head: Optional[sleap.nn.heads.OffsetRefinementHead] = None)[source]#
+class sleap.nn.data.pipelines.CentroidConfmapsPipeline(data_config: sleap.nn.config.data.DataConfig, optimization_config: sleap.nn.config.optimization.OptimizationConfig, centroid_confmap_head: sleap.nn.heads.CentroidConfmapsHead, offsets_head: Optional[sleap.nn.heads.OffsetRefinementHead] = None)[source]#

Pipeline builder for centroid confidence map models.

@@ -608,7 +607,7 @@

sleap.nn.data.pipelines

-make_base_pipeline(data_provider: sleap.nn.data.pipelines.Provider) sleap.nn.data.pipelines.Pipeline[source]#
+make_base_pipeline(data_provider: sleap.nn.data.pipelines.Provider) sleap.nn.data.pipelines.Pipeline[source]#

Create base pipeline with input data only.

Parameters
@@ -623,7 +622,7 @@

sleap.nn.data.pipelines

-make_training_pipeline(data_provider: sleap.nn.data.pipelines.Provider) sleap.nn.data.pipelines.Pipeline[source]#
+make_training_pipeline(data_provider: sleap.nn.data.pipelines.Provider) sleap.nn.data.pipelines.Pipeline[source]#

Create full training pipeline.

Parameters
@@ -644,7 +643,7 @@

sleap.nn.data.pipelines

-make_viz_pipeline(data_provider: sleap.nn.data.pipelines.Provider, keras_model: keras.engine.training.Model) sleap.nn.data.pipelines.Pipeline[source]#
+make_viz_pipeline(data_provider: sleap.nn.data.pipelines.Provider, keras_model: keras.engine.training.Model) sleap.nn.data.pipelines.Pipeline[source]#

Create visualization pipeline.

Parameters
@@ -665,7 +664,7 @@

sleap.nn.data.pipelines

-class sleap.nn.data.pipelines.Pipeline(providers: Any = NOTHING, transformers: Any = NOTHING)[source]#
+class sleap.nn.data.pipelines.Pipeline(providers: Any = NOTHING, transformers: Any = NOTHING)[source]#

Pipeline composed of providers and transformers.
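A sketch of assembling a pipeline from the blocks documented on this page ("labels.slp" is a placeholder path):

```python
from sleap.nn.data.dataset_ops import Batcher, Shuffler
from sleap.nn.data.normalization import Normalizer
from sleap.nn.data.pipelines import Pipeline
from sleap.nn.data.providers import LabelsReader

pipeline = Pipeline.from_blocks([
    LabelsReader.from_filename("labels.slp"),
    Normalizer(ensure_float=True),
    Shuffler(buffer_size=64),
    Batcher(batch_size=8),
])
ds = pipeline.make_dataset()  # a tf.data.Dataset of example dictionaries
```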

@@ -691,7 +690,7 @@

sleap.nn.data.pipelines

-append(other: Union[sleap.nn.data.pipelines.Pipeline, sleap.nn.data.pipelines.Transformer, List[sleap.nn.data.pipelines.Transformer]])[source]#
+append(other: Union[sleap.nn.data.pipelines.Pipeline, sleap.nn.data.pipelines.Transformer, List[sleap.nn.data.pipelines.Transformer]])[source]#

Append one or more blocks to this pipeline instance.

Parameters
@@ -707,7 +706,7 @@

sleap.nn.data.pipelines

-describe(return_description: bool = False) Optional[str][source]#
+describe(return_description: bool = False) Optional[str][source]#

Prints the keys in the examples generated by the pipeline.

Parameters
@@ -722,7 +721,7 @@

sleap.nn.data.pipelines

-classmethod from_blocks(blocks: Union[sleap.nn.data.pipelines.Provider, sleap.nn.data.pipelines.Transformer, Sequence[Union[sleap.nn.data.pipelines.Provider, sleap.nn.data.pipelines.Transformer]]]) sleap.nn.data.pipelines.Pipeline[source]#
+classmethod from_blocks(blocks: Union[sleap.nn.data.pipelines.Provider, sleap.nn.data.pipelines.Transformer, Sequence[Union[sleap.nn.data.pipelines.Provider, sleap.nn.data.pipelines.Transformer]]]) sleap.nn.data.pipelines.Pipeline[source]#

Create a pipeline from a sequence of providers and transformers.

Parameters
@@ -736,7 +735,7 @@

sleap.nn.data.pipelines

-classmethod from_pipelines(pipelines: Sequence[sleap.nn.data.pipelines.Pipeline]) sleap.nn.data.pipelines.Pipeline[source]#
+classmethod from_pipelines(pipelines: Sequence[sleap.nn.data.pipelines.Pipeline]) sleap.nn.data.pipelines.Pipeline[source]#

Create a new pipeline instance by chaining together multiple pipelines.

Parameters
@@ -750,7 +749,7 @@

sleap.nn.data.pipelines

-make_dataset() tensorflow.python.data.ops.dataset_ops.DatasetV2[source]#
+make_dataset() tensorflow.python.data.ops.dataset_ops.DatasetV2[source]#

Create a dataset instance that generates examples from the pipeline.

Returns
@@ -768,7 +767,7 @@

sleap.nn.data.pipelines

-peek(n: int = 1) Union[Dict[str, tensorflow.python.framework.ops.Tensor], List[Dict[str, tensorflow.python.framework.ops.Tensor]]][source]#
+peek(n: int = 1) Union[Dict[str, tensorflow.python.framework.ops.Tensor], List[Dict[str, tensorflow.python.framework.ops.Tensor]]][source]#

Build and return the first n examples from the pipeline.

This function is useful for quickly inspecting the output of a pipeline.
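Continuing the pipeline sketch above, a quick inspection of the first example:

```python
example = pipeline.peek()
for key, value in example.items():
    print(key, value.shape, value.dtype)
```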

@@ -783,7 +782,7 @@

sleap.nn.data.pipelines

-run() List[Dict[str, tensorflow.python.framework.ops.Tensor]][source]#
+run() List[Dict[str, tensorflow.python.framework.ops.Tensor]][source]#

Build and evaluate the pipeline.

Returns
@@ -794,7 +793,7 @@

sleap.nn.data.pipelines

-validate_pipeline() List[str][source]#
+validate_pipeline() List[str][source]#

Check that all pipeline blocks meet the data requirements.

Returns
@@ -811,7 +810,7 @@

sleap.nn.data.pipelines

-class sleap.nn.data.pipelines.SingleInstanceConfmapsPipeline(data_config: sleap.nn.config.data.DataConfig, optimization_config: sleap.nn.config.optimization.OptimizationConfig, single_instance_confmap_head: sleap.nn.heads.SingleInstanceConfmapsHead, offsets_head: Optional[sleap.nn.heads.OffsetRefinementHead] = None)[source]#
+class sleap.nn.data.pipelines.SingleInstanceConfmapsPipeline(data_config: sleap.nn.config.data.DataConfig, optimization_config: sleap.nn.config.optimization.OptimizationConfig, single_instance_confmap_head: sleap.nn.heads.SingleInstanceConfmapsHead, offsets_head: Optional[sleap.nn.heads.OffsetRefinementHead] = None)[source]#

Pipeline builder for single-instance confidence map models.

@@ -860,7 +859,7 @@

sleap.nn.data.pipelines

-make_base_pipeline(data_provider: sleap.nn.data.pipelines.Provider) sleap.nn.data.pipelines.Pipeline[source]#
+make_base_pipeline(data_provider: sleap.nn.data.pipelines.Provider) sleap.nn.data.pipelines.Pipeline[source]#

Create base pipeline with input data only.

Parameters
@@ -875,7 +874,7 @@

sleap.nn.data.pipelines

-make_training_pipeline(data_provider: sleap.nn.data.pipelines.Provider) sleap.nn.data.pipelines.Pipeline[source]#
+make_training_pipeline(data_provider: sleap.nn.data.pipelines.Provider) sleap.nn.data.pipelines.Pipeline[source]#

Create full training pipeline.

Parameters
@@ -896,7 +895,7 @@

sleap.nn.data.pipelines

-make_viz_pipeline(data_provider: sleap.nn.data.pipelines.Provider, keras_model: keras.engine.training.Model) sleap.nn.data.pipelines.Pipeline[source]#
+make_viz_pipeline(data_provider: sleap.nn.data.pipelines.Provider, keras_model: keras.engine.training.Model) sleap.nn.data.pipelines.Pipeline[source]#

Create visualization pipeline.

Parameters
@@ -917,7 +916,7 @@

sleap.nn.data.pipelines

-class sleap.nn.data.pipelines.TopDownMultiClassPipeline(data_config: sleap.nn.config.data.DataConfig, optimization_config: sleap.nn.config.optimization.OptimizationConfig, instance_confmap_head: sleap.nn.heads.CenteredInstanceConfmapsHead, class_vectors_head: sleap.nn.heads.ClassVectorsHead, offsets_head: Optional[sleap.nn.heads.OffsetRefinementHead] = None)[source]#
+class sleap.nn.data.pipelines.TopDownMultiClassPipeline(data_config: sleap.nn.config.data.DataConfig, optimization_config: sleap.nn.config.optimization.OptimizationConfig, instance_confmap_head: sleap.nn.heads.CenteredInstanceConfmapsHead, class_vectors_head: sleap.nn.heads.ClassVectorsHead, offsets_head: Optional[sleap.nn.heads.OffsetRefinementHead] = None)[source]#

Pipeline builder for confidence maps and class vectors models.

@@ -972,7 +971,7 @@

sleap.nn.data.pipelines

-make_base_pipeline(data_provider: sleap.nn.data.pipelines.Provider) sleap.nn.data.pipelines.Pipeline[source]#
+make_base_pipeline(data_provider: sleap.nn.data.pipelines.Provider) sleap.nn.data.pipelines.Pipeline[source]#

Create base pipeline with input data only.

Parameters
@@ -987,7 +986,7 @@

sleap.nn.data.pipelines

-make_training_pipeline(data_provider: sleap.nn.data.pipelines.Provider) sleap.nn.data.pipelines.Pipeline[source]#
+make_training_pipeline(data_provider: sleap.nn.data.pipelines.Provider) sleap.nn.data.pipelines.Pipeline[source]#

Create full training pipeline.

Parameters
@@ -1008,7 +1007,7 @@

sleap.nn.data.pipelines

-make_viz_pipeline(data_provider: sleap.nn.data.pipelines.Provider) sleap.nn.data.pipelines.Pipeline[source]#
+make_viz_pipeline(data_provider: sleap.nn.data.pipelines.Provider) sleap.nn.data.pipelines.Pipeline[source]#

Create visualization pipeline.

Parameters
@@ -1026,7 +1025,7 @@

sleap.nn.data.pipelines

-class sleap.nn.data.pipelines.TopdownConfmapsPipeline(data_config: sleap.nn.config.data.DataConfig, optimization_config: sleap.nn.config.optimization.OptimizationConfig, instance_confmap_head: sleap.nn.heads.CenteredInstanceConfmapsHead, offsets_head: Optional[sleap.nn.heads.OffsetRefinementHead] = None)[source]#
+class sleap.nn.data.pipelines.TopdownConfmapsPipeline(data_config: sleap.nn.config.data.DataConfig, optimization_config: sleap.nn.config.optimization.OptimizationConfig, instance_confmap_head: sleap.nn.heads.CenteredInstanceConfmapsHead, offsets_head: Optional[sleap.nn.heads.OffsetRefinementHead] = None)[source]#

Pipeline builder for instance-centered confidence map models.

@@ -1075,7 +1074,7 @@

sleap.nn.data.pipelines

-make_base_pipeline(data_provider: sleap.nn.data.pipelines.Provider) sleap.nn.data.pipelines.Pipeline[source]#
+make_base_pipeline(data_provider: sleap.nn.data.pipelines.Provider) sleap.nn.data.pipelines.Pipeline[source]#

Create base pipeline with input data only.

Parameters
@@ -1090,7 +1089,7 @@

sleap.nn.data.pipelines

-make_training_pipeline(data_provider: sleap.nn.data.pipelines.Provider) sleap.nn.data.pipelines.Pipeline[source]#
+make_training_pipeline(data_provider: sleap.nn.data.pipelines.Provider) sleap.nn.data.pipelines.Pipeline[source]#

Create full training pipeline.

Parameters
@@ -1111,7 +1110,7 @@

sleap.nn.data.pipelines

-make_viz_pipeline(data_provider: sleap.nn.data.pipelines.Provider) sleap.nn.data.pipelines.Pipeline[source]#
+make_viz_pipeline(data_provider: sleap.nn.data.pipelines.Provider) sleap.nn.data.pipelines.Pipeline[source]#

Create visualization pipeline.

Parameters
diff --git a/develop/api/sleap.nn.data.providers.html b/develop/api/sleap.nn.data.providers.html
index 2da4101b5..d684dbf3d 100644
--- a/develop/api/sleap.nn.data.providers.html
+++ b/develop/api/sleap.nn.data.providers.html
@@ -9,7 +9,7 @@
- sleap.nn.data.providers — SLEAP (v1.4.1a1)
+ sleap.nn.data.providers — SLEAP (v1.4.1a2)
@@ -34,7 +34,6 @@
-
@@ -323,7 +322,7 @@

sleap.nn.data.providers

Data providers for pipeline I/O.

-class sleap.nn.data.providers.LabelsReader(labels: sleap.io.dataset.Labels, example_indices: Optional[Union[Sequence[int], numpy.ndarray]] = None, user_instances_only: bool = False, with_track_only: bool = False)[source]#
+class sleap.nn.data.providers.LabelsReader(labels: sleap.io.dataset.Labels, example_indices: Optional[Union[Sequence[int], numpy.ndarray]] = None, user_instances_only: bool = False, with_track_only: bool = False)[source]#

Data provider from a sleap.Labels instance.

This class can generate `tf.data.Dataset`s from a set of labels for use in data pipelines. Each element in the dataset will contain the data contained in a single labeled frame.

@@ -381,7 +380,7 @@

sleap.nn.data.providers

-classmethod from_filename(filename: str, user_instances: bool = True) sleap.nn.data.providers.LabelsReader[source]#
+classmethod from_filename(filename: str, user_instances: bool = True) sleap.nn.data.providers.LabelsReader[source]#

Create a LabelsReader from a saved labels file.

Parameters
@@ -398,7 +397,7 @@

sleap.nn.data.providers

-classmethod from_unlabeled_suggestions(labels: sleap.io.dataset.Labels) sleap.nn.data.providers.LabelsReader[source]#
+classmethod from_unlabeled_suggestions(labels: sleap.io.dataset.Labels) sleap.nn.data.providers.LabelsReader[source]#

Create a LabelsReader using the unlabeled suggestions in a Labels set.

Parameters
labels: A sleap.Labels instance containing unlabeled suggestions.

@@ -410,7 +409,7 @@

sleap.nn.data.providers

-classmethod from_user_instances(labels: sleap.io.dataset.Labels, with_track_only: bool = False) sleap.nn.data.providers.LabelsReader[source]#
+classmethod from_user_instances(labels: sleap.io.dataset.Labels, with_track_only: bool = False) sleap.nn.data.providers.LabelsReader[source]#

Create a LabelsReader using the user instances in a Labels set.

Parameters
labels: A sleap.Labels instance containing user instances.
with_track_only: If True, load only instances that have a track assigned.

@@ -432,7 +431,7 @@

sleap.nn.data.providers

-classmethod from_user_labeled_frames(labels: sleap.io.dataset.Labels) sleap.nn.data.providers.LabelsReader[source]#
+classmethod from_user_labeled_frames(labels: sleap.io.dataset.Labels) sleap.nn.data.providers.LabelsReader[source]#

Create a LabelsReader using the user labeled frames in a Labels set.

Parameters
labels: A sleap.Labels instance containing user labeled frames.

@@ -453,7 +452,7 @@

sleap.nn.data.providers

-make_dataset(ds_index: Optional[tensorflow.python.data.ops.dataset_ops.DatasetV2] = None) tensorflow.python.data.ops.dataset_ops.DatasetV2[source]#
+make_dataset(ds_index: Optional[tensorflow.python.data.ops.dataset_ops.DatasetV2] = None) tensorflow.python.data.ops.dataset_ops.DatasetV2[source]#

Return a tf.data.Dataset whose elements are data from labeled frames.

Returns
A dataset whose elements are dictionaries with the loaded data associated with a single labeled frame.

@@ -519,7 +518,7 @@

sleap.nn.data.providers

-class sleap.nn.data.providers.VideoReader(video: sleap.io.video.Video, example_indices: Optional[Union[Sequence[int], numpy.ndarray]] = None)[source]#
+class sleap.nn.data.providers.VideoReader(video: sleap.io.video.Video, example_indices: Optional[Union[Sequence[int], numpy.ndarray]] = None)[source]#

Data provider from a sleap.Video instance.

This class can generate `tf.data.Dataset`s from a video for use in data pipelines. Each element in the dataset will contain the image data from a single frame.
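A usage sketch ("video.mp4" is a placeholder; example_indices restricts reading to the first 100 frames):

```python
from sleap.nn.data.providers import VideoReader

reader = VideoReader.from_filepath("video.mp4", example_indices=range(100))
ds = reader.make_dataset()  # elements contain the image data per frame
```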

@@ -556,7 +555,7 @@

sleap.nn.data.providers

-classmethod from_filepath(filename: str, example_indices: Optional[Union[Sequence[int], numpy.ndarray]] = None, **kwargs) sleap.nn.data.providers.VideoReader[source]#
+classmethod from_filepath(filename: str, example_indices: Optional[Union[Sequence[int], numpy.ndarray]] = None, **kwargs) sleap.nn.data.providers.VideoReader[source]#

Create a VideoReader from a video file.

Parameters
@@ -576,7 +575,7 @@

sleap.nn.data.providers

-make_dataset() tensorflow.python.data.ops.dataset_ops.DatasetV2[source]#
+make_dataset() tensorflow.python.data.ops.dataset_ops.DatasetV2[source]#

Return a tf.data.Dataset whose elements are data from video frames.

Returns
diff --git a/develop/api/sleap.nn.data.resizing.html b/develop/api/sleap.nn.data.resizing.html
index de7dfc812..eb5735f85 100644
--- a/develop/api/sleap.nn.data.resizing.html
+++ b/develop/api/sleap.nn.data.resizing.html
@@ -9,7 +9,7 @@
- sleap.nn.data.resizing — SLEAP (v1.4.1a1)
+ sleap.nn.data.resizing — SLEAP (v1.4.1a2)
@@ -34,7 +34,6 @@
-
@@ -323,7 +322,7 @@

sleap.nn.data.resizing

Transformers for image resizing and padding.

-class sleap.nn.data.resizing.PointsRescaler(points_key: str = 'predicted_instances', scale_key: str = 'scale', invert: bool = True)[source]#
+class sleap.nn.data.resizing.PointsRescaler(points_key: str = 'predicted_instances', scale_key: str = 'scale', invert: bool = True)[source]#

Transformer to apply or invert scaling operations on points.

@@ -339,7 +338,7 @@

sleap.nn.data.resizing

-transform_dataset(input_ds: tensorflow.python.data.ops.dataset_ops.DatasetV2) tensorflow.python.data.ops.dataset_ops.DatasetV2[source]#
+transform_dataset(input_ds: tensorflow.python.data.ops.dataset_ops.DatasetV2) tensorflow.python.data.ops.dataset_ops.DatasetV2[source]#

Create a dataset with the scaling transformation applied to, or inverted on, the points.

@@ -347,7 +346,7 @@

sleap.nn.data.resizing

-class sleap.nn.data.resizing.Resizer(image_key: str = 'image', scale_key: str = 'scale', points_key: Optional[str] = 'instances', scale: float = 1.0, pad_to_stride: int = 1, keep_full_image: bool = False, full_image_key: str = 'full_image')[source]#
+class sleap.nn.data.resizing.Resizer(image_key: str = 'image', scale_key: str = 'scale', points_key: Optional[str] = 'instances', scale: float = 1.0, pad_to_stride: int = 1, keep_full_image: bool = False, full_image_key: str = 'full_image')[source]#

Data transformer to resize or pad images.

This is useful as a transformation to data streams that require resizing or padding in order to be downsampled or meet divisibility criteria.
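
A hedged sketch of dropping a Resizer into a pipeline; the stand-in dataset and its required keys are assumptions for illustration:

```python
import tensorflow as tf
from sleap.nn.data.resizing import Resizer

# Stand-in input: one 100x75 grayscale image and a unit scale entry.
ds_in = tf.data.Dataset.from_tensors({
    "image": tf.zeros([100, 75, 1], tf.uint8),
    "scale": tf.ones([2], tf.float32),
})
resizer = Resizer(scale=0.5, pad_to_stride=32, points_key=None)
ds_out = resizer.transform_dataset(ds_in)
print(next(iter(ds_out))["image"].shape)  # halved, then padded to multiples of 32
```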

@@ -435,7 +434,7 @@

sleap.nn.data.resizing

-classmethod from_config(config: sleap.nn.config.data.PreprocessingConfig, image_key: str = 'image', scale_key: str = 'scale', pad_to_stride: Optional[int] = None, keep_full_image: bool = False, full_image_key: str = 'full_image', points_key: Optional[str] = 'instances') sleap.nn.data.resizing.Resizer[source]#
+classmethod from_config(config: sleap.nn.config.data.PreprocessingConfig, image_key: str = 'image', scale_key: str = 'scale', pad_to_stride: Optional[int] = None, keep_full_image: bool = False, full_image_key: str = 'full_image', points_key: Optional[str] = 'instances') sleap.nn.data.resizing.Resizer[source]#

Build an instance of this class from its configuration options.

Parameters
@@ -478,7 +477,7 @@

sleap.nn.data.resizing

-transform_dataset(ds_input: tensorflow.python.data.ops.dataset_ops.DatasetV2) tensorflow.python.data.ops.dataset_ops.DatasetV2[source]#
+transform_dataset(ds_input: tensorflow.python.data.ops.dataset_ops.DatasetV2) tensorflow.python.data.ops.dataset_ops.DatasetV2[source]#

Create a dataset that contains resized and/or padded images.

Parameters
@@ -503,7 +502,7 @@

sleap.nn.data.resizing

-class sleap.nn.data.resizing.SizeMatcher(image_key: str = 'image', scale_key: str = 'scale', points_key: Optional[str] = 'instances', keep_full_image: bool = False, full_image_key: str = 'full_image', max_image_height: Optional[int] = None, max_image_width: Optional[int] = None, center_pad: bool = False)[source]#
+class sleap.nn.data.resizing.SizeMatcher(image_key: str = 'image', scale_key: str = 'scale', points_key: Optional[str] = 'instances', keep_full_image: bool = False, full_image_key: str = 'full_image', max_image_height: Optional[int] = None, max_image_width: Optional[int] = None, center_pad: bool = False)[source]#

Data transformer that ensures output images have uniform shape by resizing/padding smaller images.

@@ -598,7 +597,7 @@

sleap.nn.data.resizing

-classmethod from_config(config: sleap.nn.config.data.PreprocessingConfig, provider: Optional[Provider] = None, update_config: bool = True, image_key: str = 'image', scale_key: str = 'scale', keep_full_image: bool = False, full_image_key: str = 'full_image', points_key: Optional[str] = 'instances') SizeMatcher[source]#
+classmethod from_config(config: sleap.nn.config.data.PreprocessingConfig, provider: Optional[Provider] = None, update_config: bool = True, image_key: str = 'image', scale_key: str = 'scale', keep_full_image: bool = False, full_image_key: str = 'full_image', points_key: Optional[str] = 'instances') SizeMatcher[source]#

Build an instance of this class from configuration.

Parameters
@@ -641,7 +640,7 @@

sleap.nn.data.resizing

-transform_dataset(ds_input: tensorflow.python.data.ops.dataset_ops.DatasetV2) tensorflow.python.data.ops.dataset_ops.DatasetV2[source]#
+transform_dataset(ds_input: tensorflow.python.data.ops.dataset_ops.DatasetV2) tensorflow.python.data.ops.dataset_ops.DatasetV2[source]#

Transform a dataset with variable size images into one with fixed sizes.

Parameters
@@ -664,7 +663,7 @@

sleap.nn.data.resizing

-sleap.nn.data.resizing.find_padding_for_stride(image_height: int, image_width: int, max_stride: int) Tuple[int, int][source]#
+sleap.nn.data.resizing.find_padding_for_stride(image_height: int, image_width: int, max_stride: int) Tuple[int, int][source]#

Compute padding required to ensure image is divisible by a stride.

This function is useful for determining how to pad images such that they will not have issues with divisibility after repeated pooling steps.
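
For example, assuming the returned tuple holds the height padding followed by the width padding:

```python
from sleap.nn.data.resizing import find_padding_for_stride

pad = find_padding_for_stride(image_height=517, image_width=413, max_stride=32)
print(pad)  # 517 -> 544 and 413 -> 416, i.e., (27, 3)
```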

@@ -710,7 +709,7 @@

sleap.nn.data.resizing

-sleap.nn.data.resizing.resize_image(image: tensorflow.python.framework.ops.Tensor, scale: tensorflow.python.framework.ops.Tensor) tensorflow.python.framework.ops.Tensor[source]#
+sleap.nn.data.resizing.resize_image(image: tensorflow.python.framework.ops.Tensor, scale: tensorflow.python.framework.ops.Tensor) tensorflow.python.framework.ops.Tensor[source]#

Rescale an image by a scale factor.

This function is primarily a convenience wrapper for tf.image.resize that calculates the new shape from the scale factor.
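
A tiny sketch, assuming a scalar scale factor is accepted and converted to a tensor internally:

```python
import tensorflow as tf
from sleap.nn.data.resizing import resize_image

img = tf.zeros([512, 512, 1], dtype=tf.uint8)
small = resize_image(img, scale=0.5)
print(small.shape)  # (256, 256, 1)
```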

diff --git a/develop/api/sleap.nn.data.training.html b/develop/api/sleap.nn.data.training.html index fa8efc4b3..d3fa355d5 100644 --- a/develop/api/sleap.nn.data.training.html +++ b/develop/api/sleap.nn.data.training.html @@ -9,7 +9,7 @@ - sleap.nn.data.training — SLEAP (v1.4.1a1) + sleap.nn.data.training — SLEAP (v1.4.1a2) @@ -34,7 +34,6 @@ - @@ -323,7 +322,7 @@

sleap.nn.data.training

Transformers and utilities for training-related operations.

-class sleap.nn.data.training.KeyMapper(key_maps: Optional[Any])[source]#
+class sleap.nn.data.training.KeyMapper(key_maps: Optional[Any])[source]#

Maps example keys to specified outputs.

This is useful for transforming examples into tuples that map onto specific layer names for training.

@@ -354,7 +353,7 @@

sleap.nn.data.training

-transform_dataset(ds_input: tensorflow.python.data.ops.dataset_ops.DatasetV2) tensorflow.python.data.ops.dataset_ops.DatasetV2[source]#
+transform_dataset(ds_input: tensorflow.python.data.ops.dataset_ops.DatasetV2) tensorflow.python.data.ops.dataset_ops.DatasetV2[source]#

Create a dataset with input keys mapped to new key names.

Parameters
@@ -372,7 +371,7 @@

sleap.nn.data.training

-sleap.nn.data.training.split_labels(labels: sleap.io.dataset.Labels, split_fractions: Sequence[float]) Tuple[sleap.io.dataset.Labels][source]#
+sleap.nn.data.training.split_labels(labels: sleap.io.dataset.Labels, split_fractions: Sequence[float]) Tuple[sleap.io.dataset.Labels][source]#

Split a Labels into multiple new ones with random subsets of the data.
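
A sketch of an 80/20 train/validation split (the labels filename is hypothetical):

```python
import sleap
from sleap.nn.data.training import split_labels

labels = sleap.load_file("labels.v001.slp")  # hypothetical labels file
labels_train, labels_val = split_labels(labels, split_fractions=[0.8, 0.2])
```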

Parameters
@@ -403,7 +402,7 @@

sleap.nn.data.training

-sleap.nn.data.training.split_labels_reader(labels_reader: sleap.nn.data.providers.LabelsReader, split_fractions: Sequence[float]) Tuple[sleap.nn.data.providers.LabelsReader][source]#
+sleap.nn.data.training.split_labels_reader(labels_reader: sleap.nn.data.providers.LabelsReader, split_fractions: Sequence[float]) Tuple[sleap.nn.data.providers.LabelsReader][source]#

Split a LabelsReader into multiple new ones with random subsets of the data.

Parameters
@@ -442,7 +441,7 @@

sleap.nn.data.training

-sleap.nn.data.training.split_labels_train_val(labels: sleap.io.dataset.Labels, validation_fraction: float) Tuple[sleap.io.dataset.Labels, List[int], sleap.io.dataset.Labels, List[int]][source]#
+sleap.nn.data.training.split_labels_train_val(labels: sleap.io.dataset.Labels, validation_fraction: float) Tuple[sleap.io.dataset.Labels, List[int], sleap.io.dataset.Labels, List[int]][source]#

Make a train/validation split from a labels dataset.

Parameters
diff --git a/develop/api/sleap.nn.data.utils.html b/develop/api/sleap.nn.data.utils.html index d26d00d63..3406b5295 100644 --- a/develop/api/sleap.nn.data.utils.html +++ b/develop/api/sleap.nn.data.utils.html @@ -9,7 +9,7 @@ - sleap.nn.data.utils — SLEAP (v1.4.1a1) + sleap.nn.data.utils — SLEAP (v1.4.1a2) @@ -34,7 +34,6 @@ - @@ -323,7 +322,7 @@

sleap.nn.data.utils

Miscellaneous utility functions for data processing.

-sleap.nn.data.utils.describe_tensors(example: Dict[str, tensorflow.python.framework.ops.Tensor], return_description: bool = False) Optional[str][source]#
+sleap.nn.data.utils.describe_tensors(example: Dict[str, tensorflow.python.framework.ops.Tensor], return_description: bool = False) Optional[str][source]#

Print the keys in an example.

Parameters
@@ -341,13 +340,13 @@

sleap.nn.data.utils

-sleap.nn.data.utils.ensure_list(x: Any) List[Any][source]#
+sleap.nn.data.utils.ensure_list(x: Any) List[Any][source]#

Convert the input into a list if it is not already.

-sleap.nn.data.utils.expand_to_rank(x: tensorflow.python.framework.ops.Tensor, target_rank: int, prepend: bool = True) tensorflow.python.framework.ops.Tensor[source]#
+sleap.nn.data.utils.expand_to_rank(x: tensorflow.python.framework.ops.Tensor, target_rank: int, prepend: bool = True) tensorflow.python.framework.ops.Tensor[source]#

Expand a tensor to a target rank by adding singleton dimensions.
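
For example, singleton dimensions are added on the leading side by default (prepend=True):

```python
import tensorflow as tf
from sleap.nn.data.utils import expand_to_rank

x = tf.zeros([3, 2])
print(expand_to_rank(x, target_rank=4).shape)                 # (1, 1, 3, 2)
print(expand_to_rank(x, target_rank=4, prepend=False).shape)  # (3, 2, 1, 1)
```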

Parameters
@@ -370,7 +369,7 @@

sleap.nn.data.utils

-sleap.nn.data.utils.gaussian_pdf(x: tensorflow.python.framework.ops.Tensor, sigma: float) tensorflow.python.framework.ops.Tensor[source]#
+sleap.nn.data.utils.gaussian_pdf(x: tensorflow.python.framework.ops.Tensor, sigma: float) tensorflow.python.framework.ops.Tensor[source]#

Compute the PDF of an unnormalized 0-centered Gaussian distribution.
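
Concretely, the unnormalized form is exp(-x**2 / (2 * sigma**2)), so the value at x = 0 is 1:

```python
import tensorflow as tf
from sleap.nn.data.utils import gaussian_pdf

x = tf.constant([0.0, 1.0, 2.0])
print(gaussian_pdf(x, sigma=1.0))  # approximately [1.0, 0.6065, 0.1353]
```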

Parameters
@@ -385,7 +384,7 @@

sleap.nn.data.utils

-sleap.nn.data.utils.make_grid_vectors(image_height: int, image_width: int, output_stride: int = 1) Tuple[tensorflow.python.framework.ops.Tensor, tensorflow.python.framework.ops.Tensor][source]#
+sleap.nn.data.utils.make_grid_vectors(image_height: int, image_width: int, output_stride: int = 1) Tuple[tensorflow.python.framework.ops.Tensor, tensorflow.python.framework.ops.Tensor][source]#

Make sampling grid vectors from image dimensions.
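
A short sketch; the (xv, yv) return order is an assumption, and combining the vectors with tf.meshgrid yields per-pixel coordinates:

```python
import tensorflow as tf
from sleap.nn.data.utils import make_grid_vectors

xv, yv = make_grid_vectors(image_height=4, image_width=3, output_stride=1)
print(xv.shape, yv.shape)  # (3,) and (4,)
xx, yy = tf.meshgrid(xv, yv)  # full sampling grid over the image
```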

This is a useful function for creating the x- and y-vectors that define a sampling grid over an image space. These vectors can be used to generate a full meshgrid or

@@ -419,7 +418,7 @@

sleap.nn.data.utils

-sleap.nn.data.utils.unrag_example(example: Dict[str, tensorflow.python.framework.ops.Tensor], numpy: bool = False) Dict[str, tensorflow.python.framework.ops.Tensor][source]#
+sleap.nn.data.utils.unrag_example(example: Dict[str, tensorflow.python.framework.ops.Tensor], numpy: bool = False) Dict[str, tensorflow.python.framework.ops.Tensor][source]#

Convert ragged tensors in an example into normal tensors with NaN padding.

Parameters
@@ -443,7 +442,7 @@

sleap.nn.data.utils

-sleap.nn.data.utils.unrag_tensor(x: tensorflow.python.ops.ragged.ragged_tensor.RaggedTensor, max_size: int, axis: int) tensorflow.python.framework.ops.Tensor[source]#
+sleap.nn.data.utils.unrag_tensor(x: tensorflow.python.ops.ragged.ragged_tensor.RaggedTensor, max_size: int, axis: int) tensorflow.python.framework.ops.Tensor[source]#

Converts a ragged tensor to a full tensor by padding to a maximum size.

This function is useful for converting ragged tensors to a fixed size when one or more of the dimensions are of variable length.
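
A sketch with a ragged batch of points padded to a fixed number of rows along axis 1:

```python
import tensorflow as tf
from sleap.nn.data.utils import unrag_tensor

x = tf.ragged.constant([[[0.0, 1.0]], [[2.0, 3.0], [4.0, 5.0]]], ragged_rank=1)
y = unrag_tensor(x, max_size=3, axis=1)
print(y.shape)  # (2, 3, 2); missing rows padded with NaN
```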

diff --git a/develop/api/sleap.nn.evals.html b/develop/api/sleap.nn.evals.html index 23d0e0105..275f9bca5 100644 --- a/develop/api/sleap.nn.evals.html +++ b/develop/api/sleap.nn.evals.html @@ -9,7 +9,7 @@ - sleap.nn.evals — SLEAP (v1.4.1a1) + sleap.nn.evals — SLEAP (v1.4.1a2) @@ -34,7 +34,6 @@ - @@ -343,7 +342,7 @@

sleap.nn.evals

-sleap.nn.evals.compute_dist_metrics(dists_dict: Dict[str, Union[numpy.ndarray, List[sleap.instance.Instance]]]) Dict[str, numpy.ndarray][source]#
+sleap.nn.evals.compute_dist_metrics(dists_dict: Dict[str, Union[numpy.ndarray, List[sleap.instance.Instance]]]) Dict[str, numpy.ndarray][source]#

Compute the Euclidean distance error at different percentiles.

Parameters
@@ -357,7 +356,7 @@

sleap.nn.evals

-sleap.nn.evals.compute_dists(positive_pairs: List[Tuple[sleap.instance.Instance, sleap.instance.PredictedInstance, Any]]) Dict[str, Union[numpy.ndarray, List[int], List[str]]][source]#
+sleap.nn.evals.compute_dists(positive_pairs: List[Tuple[sleap.instance.Instance, sleap.instance.PredictedInstance, Any]]) Dict[str, Union[numpy.ndarray, List[int], List[str]]][source]#

Compute Euclidean distances between matched pairs of instances.

Parameters
@@ -377,7 +376,7 @@

sleap.nn.evals

-sleap.nn.evals.compute_generalized_voc_metrics(positive_pairs: List[Tuple[sleap.instance.Instance, sleap.instance.PredictedInstance, Any]], false_negatives: List[sleap.instance.Instance], match_scores: List[float], match_score_thresholds: numpy.ndarray = array([0.5, 0.55, 0.6, 0.65, 0.7, 0.75, 0.8, 0.85, 0.9, 0.95]), recall_thresholds: numpy.ndarray = array([0.0, 0.01, 0.02, 0.03, 0.04, 0.05, 0.06, 0.07, 0.08, 0.09, 0.1, 0.11, 0.12, 0.13, 0.14, 0.15, 0.16, 0.17, 0.18, 0.19, 0.2, 0.21, 0.22, 0.23, 0.24, 0.25, 0.26, 0.27, 0.28, 0.29, 0.3, 0.31, 0.32, 0.33, 0.34, 0.35, 0.36, 0.37, 0.38, 0.39, 0.4, 0.41, 0.42, 0.43, 0.44, 0.45, 0.46, 0.47, 0.48, 0.49, 0.5, 0.51, 0.52, 0.53, 0.54, 0.55, 0.56, 0.57, 0.58, 0.59, 0.6, 0.61, 0.62, 0.63, 0.64, 0.65, 0.66, 0.67, 0.68, 0.69, 0.7, 0.71, 0.72, 0.73, 0.74, 0.75, 0.76, 0.77, 0.78, 0.79, 0.8, 0.81, 0.82, 0.83, 0.84, 0.85, 0.86, 0.87, 0.88, 0.89, 0.9, 0.91, 0.92, 0.93, 0.94, 0.95, 0.96, 0.97, 0.98, 0.99, 1.0]), name: str = 'gvoc') Dict[str, Any][source]#
+sleap.nn.evals.compute_generalized_voc_metrics(positive_pairs: List[Tuple[sleap.instance.Instance, sleap.instance.PredictedInstance, Any]], false_negatives: List[sleap.instance.Instance], match_scores: List[float], match_score_thresholds: numpy.ndarray = array([0.5, 0.55, 0.6, 0.65, 0.7, 0.75, 0.8, 0.85, 0.9, 0.95]), recall_thresholds: numpy.ndarray = array([0.0, 0.01, 0.02, 0.03, 0.04, 0.05, 0.06, 0.07, 0.08, 0.09, 0.1, 0.11, 0.12, 0.13, 0.14, 0.15, 0.16, 0.17, 0.18, 0.19, 0.2, 0.21, 0.22, 0.23, 0.24, 0.25, 0.26, 0.27, 0.28, 0.29, 0.3, 0.31, 0.32, 0.33, 0.34, 0.35, 0.36, 0.37, 0.38, 0.39, 0.4, 0.41, 0.42, 0.43, 0.44, 0.45, 0.46, 0.47, 0.48, 0.49, 0.5, 0.51, 0.52, 0.53, 0.54, 0.55, 0.56, 0.57, 0.58, 0.59, 0.6, 0.61, 0.62, 0.63, 0.64, 0.65, 0.66, 0.67, 0.68, 0.69, 0.7, 0.71, 0.72, 0.73, 0.74, 0.75, 0.76, 0.77, 0.78, 0.79, 0.8, 0.81, 0.82, 0.83, 0.84, 0.85, 0.86, 0.87, 0.88, 0.89, 0.9, 0.91, 0.92, 0.93, 0.94, 0.95, 0.96, 0.97, 0.98, 0.99, 1.0]), name: str = 'gvoc') Dict[str, Any][source]#

Compute VOC metrics given matched pairs of instances.

Parameters
@@ -401,7 +400,7 @@

sleap.nn.evals

-sleap.nn.evals.compute_instance_area(points: numpy.ndarray) numpy.ndarray[source]#
+sleap.nn.evals.compute_instance_area(points: numpy.ndarray) numpy.ndarray[source]#

Compute the area of the bounding box of a set of keypoints.
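
For instance, assuming points are passed as an (n_nodes, 2) array of x/y coordinates:

```python
import numpy as np
from sleap.nn.evals import compute_instance_area

pts = np.array([[0.0, 0.0], [4.0, 3.0]])  # bounding box spans 4 x 3
print(compute_instance_area(pts))  # 12.0
```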

Parameters
@@ -415,7 +414,7 @@

sleap.nn.evals

-sleap.nn.evals.compute_oks(points_gt: numpy.ndarray, points_pr: numpy.ndarray, scale: Optional[float] = None, stddev: float = 0.025, use_cocoeval: bool = True) numpy.ndarray[source]#
+sleap.nn.evals.compute_oks(points_gt: numpy.ndarray, points_pr: numpy.ndarray, scale: Optional[float] = None, stddev: float = 0.025, use_cocoeval: bool = True) numpy.ndarray[source]#

Compute the object keypoints similarity between sets of points.
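
A hedged sketch, assuming both inputs are (n_instances, n_nodes, 2) arrays; near-identical points should yield an OKS close to 1:

```python
import numpy as np
from sleap.nn.evals import compute_oks

gt = np.array([[[0.0, 0.0], [5.0, 5.0], [10.0, 10.0]]])
pr = gt + 0.1  # small localization error
print(compute_oks(gt, pr))  # close to 1.0
```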

Parameters
@@ -465,7 +464,7 @@

sleap.nn.evals

-sleap.nn.evals.compute_pck_metrics(dists: numpy.ndarray, thresholds: numpy.ndarray = array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 10.0])) Dict[str, numpy.ndarray][source]#
+sleap.nn.evals.compute_pck_metrics(dists: numpy.ndarray, thresholds: numpy.ndarray = array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0, 10.0])) Dict[str, numpy.ndarray][source]#

Compute PCK across a range of thresholds.

Parameters
@@ -482,7 +481,7 @@

sleap.nn.evals

-sleap.nn.evals.compute_visibility_conf(positive_pairs: List[Tuple[sleap.instance.Instance, sleap.instance.Instance, Any]]) Dict[str, float][source]#
+sleap.nn.evals.compute_visibility_conf(positive_pairs: List[Tuple[sleap.instance.Instance, sleap.instance.Instance, Any]]) Dict[str, float][source]#

Compute node visibility metrics.

Parameters
@@ -497,7 +496,7 @@

sleap.nn.evals

-sleap.nn.evals.evaluate(labels_gt: sleap.io.dataset.Labels, labels_pr: sleap.io.dataset.Labels, oks_stddev: float = 0.025, oks_scale: Optional[float] = None, match_threshold: float = 0, user_labels_only: bool = True) Dict[str, Union[float, numpy.ndarray]][source]#
+sleap.nn.evals.evaluate(labels_gt: sleap.io.dataset.Labels, labels_pr: sleap.io.dataset.Labels, oks_stddev: float = 0.025, oks_scale: Optional[float] = None, match_threshold: float = 0, user_labels_only: bool = True) Dict[str, Union[float, numpy.ndarray]][source]#

Calculate all metrics from ground truth and predicted labels.

Parameters
@@ -522,7 +521,7 @@

sleap.nn.evals

-sleap.nn.evals.evaluate_model(cfg: sleap.nn.config.training_job.TrainingJobConfig, labels_gt: Union[sleap.nn.data.providers.LabelsReader, sleap.io.dataset.Labels], model: sleap.nn.model.Model, save: bool = True, split_name: str = 'test') Tuple[sleap.io.dataset.Labels, Dict[str, Any]][source]#
+sleap.nn.evals.evaluate_model(cfg: sleap.nn.config.training_job.TrainingJobConfig, labels_gt: Union[sleap.nn.data.providers.LabelsReader, sleap.io.dataset.Labels], model: sleap.nn.model.Model, save: bool = True, split_name: str = 'test') Tuple[sleap.io.dataset.Labels, Dict[str, Any]][source]#

Evaluate a trained model and save metrics and predictions.

Parameters
@@ -547,7 +546,7 @@

sleap.nn.evals

-sleap.nn.evals.find_frame_pairs(labels_gt: sleap.io.dataset.Labels, labels_pr: sleap.io.dataset.Labels, user_labels_only: bool = True) List[Tuple[sleap.instance.LabeledFrame, sleap.instance.LabeledFrame]][source]#
+sleap.nn.evals.find_frame_pairs(labels_gt: sleap.io.dataset.Labels, labels_pr: sleap.io.dataset.Labels, user_labels_only: bool = True) List[Tuple[sleap.instance.LabeledFrame, sleap.instance.LabeledFrame]][source]#

Find corresponding frames across two sets of labels.

Parameters
@@ -566,7 +565,7 @@

sleap.nn.evals

-sleap.nn.evals.load_metrics(model_path: str, split: str = 'val') Dict[str, Any][source]#
+sleap.nn.evals.load_metrics(model_path: str, split: str = 'val') Dict[str, Any][source]#

Load metrics for a model.
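
A sketch of loading saved evaluation metrics; the model folder and the metric keys shown are assumptions:

```python
from sleap.nn.evals import load_metrics

metrics = load_metrics("models/my_model", split="val")  # hypothetical folder
print(metrics.get("dist.avg"))     # average localization error, if present
print(metrics.get("oks_voc.mAP"))  # OKS-based mAP, if present
```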

Parameters
@@ -611,7 +610,7 @@

sleap.nn.evals

-sleap.nn.evals.match_frame_pairs(frame_pairs: List[Tuple[sleap.instance.LabeledFrame, sleap.instance.LabeledFrame]], stddev: float = 0.025, scale: Optional[float] = None, threshold: float = 0, user_labels_only: bool = True) Tuple[List[Tuple[sleap.instance.Instance, sleap.instance.PredictedInstance, float]], List[sleap.instance.Instance]][source]#
+sleap.nn.evals.match_frame_pairs(frame_pairs: List[Tuple[sleap.instance.LabeledFrame, sleap.instance.LabeledFrame]], stddev: float = 0.025, scale: Optional[float] = None, threshold: float = 0, user_labels_only: bool = True) Tuple[List[Tuple[sleap.instance.Instance, sleap.instance.PredictedInstance, float]], List[sleap.instance.Instance]][source]#

Match all ground truth and predicted instances within each pair of frames.

This is a wrapper for match_instances() but operates on lists of frames.

@@ -642,7 +641,7 @@

sleap.nn.evals

-sleap.nn.evals.match_instances(frame_gt: sleap.instance.LabeledFrame, frame_pr: sleap.instance.LabeledFrame, stddev: float = 0.025, scale: Optional[float] = None, threshold: float = 0, user_labels_only: bool = True) Tuple[List[Tuple[sleap.instance.Instance, sleap.instance.PredictedInstance, float]], List[sleap.instance.Instance]][source]#
+sleap.nn.evals.match_instances(frame_gt: sleap.instance.LabeledFrame, frame_pr: sleap.instance.LabeledFrame, stddev: float = 0.025, scale: Optional[float] = None, threshold: float = 0, user_labels_only: bool = True) Tuple[List[Tuple[sleap.instance.Instance, sleap.instance.PredictedInstance, float]], List[sleap.instance.Instance]][source]#

Match pairs of instances between ground truth and predictions in a frame.

Parameters
@@ -680,7 +679,7 @@

sleap.nn.evals

-sleap.nn.evals.replace_path(video_list: List[dict], new_paths: List[str])[source]#
+sleap.nn.evals.replace_path(video_list: List[dict], new_paths: List[str])[source]#

Replace video paths in unstructured video objects.

diff --git a/develop/api/sleap.nn.heads.html b/develop/api/sleap.nn.heads.html index 95fbd9fff..31506e36a 100644 --- a/develop/api/sleap.nn.heads.html +++ b/develop/api/sleap.nn.heads.html @@ -9,7 +9,7 @@ - sleap.nn.heads — SLEAP (v1.4.1a1) + sleap.nn.heads — SLEAP (v1.4.1a2) @@ -34,7 +34,6 @@ - @@ -323,7 +322,7 @@

sleap.nn.heads

Model head definitions for defining model output types.

-class sleap.nn.heads.CenteredInstanceConfmapsHead(part_names: List[str], anchor_part: Optional[str] = None, sigma: float = 5.0, output_stride: int = 1, loss_weight: float = 1.0)[source]#
+class sleap.nn.heads.CenteredInstanceConfmapsHead(part_names: List[str], anchor_part: Optional[str] = None, sigma: float = 5.0, output_stride: int = 1, loss_weight: float = 1.0)[source]#

Head for specifying centered instance confidence maps.

@@ -390,7 +389,7 @@

sleap.nn.heads

-classmethod from_config(config: sleap.nn.config.model.CenteredInstanceConfmapsHeadConfig, part_names: Optional[List[str]] = None) sleap.nn.heads.CenteredInstanceConfmapsHead[source]#
+classmethod from_config(config: sleap.nn.config.model.CenteredInstanceConfmapsHeadConfig, part_names: Optional[List[str]] = None) sleap.nn.heads.CenteredInstanceConfmapsHead[source]#

Create this head from a set of configurations.

@@ -419,7 +418,7 @@

sleap.nn.heads

-class sleap.nn.heads.CentroidConfmapsHead(anchor_part: Optional[str] = None, sigma: float = 5.0, output_stride: int = 1, loss_weight: float = 1.0)[source]#
+class sleap.nn.heads.CentroidConfmapsHead(anchor_part: Optional[str] = None, sigma: float = 5.0, output_stride: int = 1, loss_weight: float = 1.0)[source]#

Head for specifying instance centroid confidence maps.

@@ -475,7 +474,7 @@

sleap.nn.heads

-classmethod from_config(config: sleap.nn.config.model.CentroidsHeadConfig) sleap.nn.heads.CentroidConfmapsHead[source]#
+classmethod from_config(config: sleap.nn.config.model.CentroidsHeadConfig) sleap.nn.heads.CentroidConfmapsHead[source]#

Create this head from a set of configurations.

@@ -494,7 +493,7 @@

sleap.nn.heads

-class sleap.nn.heads.ClassMapsHead(classes: List[str], sigma: float = 5.0, output_stride: int = 1, loss_weight: float = 1.0)[source]#
+class sleap.nn.heads.ClassMapsHead(classes: List[str], sigma: float = 5.0, output_stride: int = 1, loss_weight: float = 1.0)[source]#

Head for specifying class identity maps.

@@ -555,7 +554,7 @@

sleap.nn.heads

-classmethod from_config(config: sleap.nn.config.model.ClassMapsHeadConfig, classes: Optional[List[str]] = None) sleap.nn.heads.ClassMapsHead[source]#
+classmethod from_config(config: sleap.nn.config.model.ClassMapsHeadConfig, classes: Optional[List[str]] = None) sleap.nn.heads.ClassMapsHead[source]#

Create this head from a set of configurations.

@@ -582,7 +581,7 @@

sleap.nn.heads

-class sleap.nn.heads.ClassVectorsHead(classes: List[str], num_fc_layers: int = 1, num_fc_units: int = 64, global_pool: bool = True, output_stride: int = 1, loss_weight: float = 1.0)[source]#
+class sleap.nn.heads.ClassVectorsHead(classes: List[str], num_fc_layers: int = 1, num_fc_units: int = 64, global_pool: bool = True, output_stride: int = 1, loss_weight: float = 1.0)[source]#

Head for specifying classification vector outputs.

@@ -655,7 +654,7 @@

sleap.nn.heads

-classmethod from_config(config: sleap.nn.config.model.ClassVectorsHeadConfig, classes: Optional[List[str]] = None) sleap.nn.heads.ClassVectorsHead[source]#
+classmethod from_config(config: sleap.nn.config.model.ClassVectorsHeadConfig, classes: Optional[List[str]] = None) sleap.nn.heads.ClassVectorsHead[source]#

Create this head from a set of configurations.

@@ -686,7 +685,7 @@

sleap.nn.heads

-make_head(x_in: tensorflow.python.framework.ops.Tensor, name: Optional[str] = None) tensorflow.python.framework.ops.Tensor[source]#
+make_head(x_in: tensorflow.python.framework.ops.Tensor, name: Optional[str] = None) tensorflow.python.framework.ops.Tensor[source]#

Make head output tensor from input feature tensor.

Parameters
@@ -706,7 +705,7 @@

sleap.nn.heads

-class sleap.nn.heads.Head(output_stride: int = 1, loss_weight: float = 1.0)[source]#
+class sleap.nn.heads.Head(output_stride: int = 1, loss_weight: float = 1.0)[source]#

Base class for model output heads.

@@ -728,7 +727,7 @@

sleap.nn.heads

-make_head(x_in: tensorflow.python.framework.ops.Tensor, name: Optional[str] = None) tensorflow.python.framework.ops.Tensor[source]#
+make_head(x_in: tensorflow.python.framework.ops.Tensor, name: Optional[str] = None) tensorflow.python.framework.ops.Tensor[source]#

Make head output tensor from input feature tensor.

Parameters
@@ -748,7 +747,7 @@

sleap.nn.heads

-class sleap.nn.heads.MultiInstanceConfmapsHead(part_names: List[str], sigma: float = 5.0, output_stride: int = 1, loss_weight: float = 1.0)[source]#
+class sleap.nn.heads.MultiInstanceConfmapsHead(part_names: List[str], sigma: float = 5.0, output_stride: int = 1, loss_weight: float = 1.0)[source]#

Head for specifying multi-instance confidence maps.

@@ -803,7 +802,7 @@

sleap.nn.heads

-classmethod from_config(config: sleap.nn.config.model.MultiInstanceConfmapsHeadConfig, part_names: Optional[List[str]] = None) sleap.nn.heads.MultiInstanceConfmapsHead[source]#
+classmethod from_config(config: sleap.nn.config.model.MultiInstanceConfmapsHeadConfig, part_names: Optional[List[str]] = None) sleap.nn.heads.MultiInstanceConfmapsHead[source]#

Create this head from a set of configurations.

@@ -832,7 +831,7 @@

sleap.nn.heads

-class sleap.nn.heads.OffsetRefinementHead(part_names: List[str], output_stride: int = 1, sigma_threshold: float = 0.2, loss_weight: float = 1.0)[source]#
+class sleap.nn.heads.OffsetRefinementHead(part_names: List[str], output_stride: int = 1, sigma_threshold: float = 0.2, loss_weight: float = 1.0)[source]#

Head for specifying offset refinement maps.

@@ -888,7 +887,7 @@

sleap.nn.heads

-classmethod from_config(config: Union[sleap.nn.config.model.CentroidsHeadConfig, sleap.nn.config.model.SingleInstanceConfmapsHeadConfig, sleap.nn.config.model.CenteredInstanceConfmapsHeadConfig, sleap.nn.config.model.MultiInstanceConfmapsHeadConfig], part_names: Optional[List[str]] = None, sigma_threshold: float = 0.2, loss_weight: float = 1.0) sleap.nn.heads.OffsetRefinementHead[source]#
+classmethod from_config(config: Union[sleap.nn.config.model.CentroidsHeadConfig, sleap.nn.config.model.SingleInstanceConfmapsHeadConfig, sleap.nn.config.model.CenteredInstanceConfmapsHeadConfig, sleap.nn.config.model.MultiInstanceConfmapsHeadConfig], part_names: Optional[List[str]] = None, sigma_threshold: float = 0.2, loss_weight: float = 1.0) sleap.nn.heads.OffsetRefinementHead[source]#

Create this head from a set of configurations.

@@ -929,7 +928,7 @@

sleap.nn.heads

-class sleap.nn.heads.PartAffinityFieldsHead(edges: Sequence[Tuple[str, str]], sigma: float = 15.0, output_stride: int = 1, loss_weight: float = 1.0)[source]#
+class sleap.nn.heads.PartAffinityFieldsHead(edges: Sequence[Tuple[str, str]], sigma: float = 15.0, output_stride: int = 1, loss_weight: float = 1.0)[source]#

Head for specifying multi-instance part affinity fields.

@@ -984,7 +983,7 @@

sleap.nn.heads

-classmethod from_config(config: sleap.nn.config.model.PartAffinityFieldsHeadConfig, edges: Optional[Sequence[Tuple[str, str]]] = None) sleap.nn.heads.PartAffinityFieldsHead[source]#
+classmethod from_config(config: sleap.nn.config.model.PartAffinityFieldsHeadConfig, edges: Optional[Sequence[Tuple[str, str]]] = None) sleap.nn.heads.PartAffinityFieldsHead[source]#

Create this head from a set of configurations.

@@ -1012,7 +1011,7 @@

sleap.nn.heads

-class sleap.nn.heads.SingleInstanceConfmapsHead(part_names: List[str], sigma: float = 5.0, output_stride: int = 1, loss_weight: float = 1.0)[source]#
+class sleap.nn.heads.SingleInstanceConfmapsHead(part_names: List[str], sigma: float = 5.0, output_stride: int = 1, loss_weight: float = 1.0)[source]#

Head for specifying single instance confidence maps.

@@ -1067,7 +1066,7 @@

sleap.nn.heads

-classmethod from_config(config: sleap.nn.config.model.SingleInstanceConfmapsHeadConfig, part_names: Optional[List[str]] = None) sleap.nn.heads.SingleInstanceConfmapsHead[source]#
+classmethod from_config(config: sleap.nn.config.model.SingleInstanceConfmapsHeadConfig, part_names: Optional[List[str]] = None) sleap.nn.heads.SingleInstanceConfmapsHead[source]#

Create this head from a set of configurations.

diff --git a/develop/api/sleap.nn.identity.html b/develop/api/sleap.nn.identity.html index e0e1bbbf7..83ba35585 100644 --- a/develop/api/sleap.nn.identity.html +++ b/develop/api/sleap.nn.identity.html @@ -9,7 +9,7 @@ - sleap.nn.identity — SLEAP (v1.4.1a1) + sleap.nn.identity — SLEAP (v1.4.1a2) @@ -34,7 +34,6 @@ - @@ -325,7 +324,7 @@

sleap.nn.identity

Utilities for classifying peak identities from class maps or classification vectors.

-sleap.nn.identity.classify_peaks_from_maps(class_maps: tensorflow.python.framework.ops.Tensor, peak_points: tensorflow.python.framework.ops.Tensor, peak_vals: tensorflow.python.framework.ops.Tensor, peak_sample_inds: tensorflow.python.framework.ops.Tensor, peak_channel_inds: tensorflow.python.framework.ops.Tensor, n_channels: int) Tuple[tensorflow.python.framework.ops.Tensor, tensorflow.python.framework.ops.Tensor, tensorflow.python.framework.ops.Tensor][source]#
+sleap.nn.identity.classify_peaks_from_maps(class_maps: tensorflow.python.framework.ops.Tensor, peak_points: tensorflow.python.framework.ops.Tensor, peak_vals: tensorflow.python.framework.ops.Tensor, peak_sample_inds: tensorflow.python.framework.ops.Tensor, peak_channel_inds: tensorflow.python.framework.ops.Tensor, n_channels: int) Tuple[tensorflow.python.framework.ops.Tensor, tensorflow.python.framework.ops.Tensor, tensorflow.python.framework.ops.Tensor][source]#

Classify and group local peaks by their class map probability.

Parameters
@@ -362,7 +361,7 @@

sleap.nn.identity

-sleap.nn.identity.classify_peaks_from_vectors(peak_points: tensorflow.python.framework.ops.Tensor, peak_vals: tensorflow.python.framework.ops.Tensor, peak_class_probs: tensorflow.python.framework.ops.Tensor, crop_sample_inds: tensorflow.python.framework.ops.Tensor, n_samples: int) Tuple[tensorflow.python.framework.ops.Tensor, tensorflow.python.framework.ops.Tensor, tensorflow.python.framework.ops.Tensor][source]#
+sleap.nn.identity.classify_peaks_from_vectors(peak_points: tensorflow.python.framework.ops.Tensor, peak_vals: tensorflow.python.framework.ops.Tensor, peak_class_probs: tensorflow.python.framework.ops.Tensor, crop_sample_inds: tensorflow.python.framework.ops.Tensor, n_samples: int) Tuple[tensorflow.python.framework.ops.Tensor, tensorflow.python.framework.ops.Tensor, tensorflow.python.framework.ops.Tensor][source]#

Group peaks by classification probabilities.

This is used in top-down classification models.

@@ -393,7 +392,7 @@

sleap.nn.identity

-sleap.nn.identity.group_class_peaks(peak_class_probs: tensorflow.python.framework.ops.Tensor, peak_sample_inds: tensorflow.python.framework.ops.Tensor, peak_channel_inds: tensorflow.python.framework.ops.Tensor, n_samples: int, n_channels: int) Tuple[tensorflow.python.framework.ops.Tensor, tensorflow.python.framework.ops.Tensor][source]#
+sleap.nn.identity.group_class_peaks(peak_class_probs: tensorflow.python.framework.ops.Tensor, peak_sample_inds: tensorflow.python.framework.ops.Tensor, peak_channel_inds: tensorflow.python.framework.ops.Tensor, n_samples: int, n_channels: int) Tuple[tensorflow.python.framework.ops.Tensor, tensorflow.python.framework.ops.Tensor][source]#

Group local peaks using class probabilities.

This is useful for matching peaks that span multiple samples and channels into classes (e.g., instance identities) by their class probability.

diff --git a/develop/api/sleap.nn.inference.html b/develop/api/sleap.nn.inference.html index f27d99172..c01d889b3 100644 --- a/develop/api/sleap.nn.inference.html +++ b/develop/api/sleap.nn.inference.html @@ -9,7 +9,7 @@ - sleap.nn.inference — SLEAP (v1.4.1a1) + sleap.nn.inference — SLEAP (v1.4.1a2) @@ -34,7 +34,6 @@ - @@ -339,7 +338,7 @@

sleap.nn.inference

The high-level sleap.load_model function provides a simplified interface for creating `Predictor`s.

-class sleap.nn.inference.BottomUpInferenceLayer(*args, **kwargs)[source]#
+class sleap.nn.inference.BottomUpInferenceLayer(*args, **kwargs)[source]#

Keras layer that predicts instances from images using a trained model.

This layer encapsulates all of the inference operations required for generating predictions from a bottom-up model. This includes

@@ -462,7 +461,7 @@

sleap.nn.inference

-call(data)[source]#
+call(data)[source]#

Predict instances for one batch of images.

Parameters
@@ -496,13 +495,13 @@

sleap.nn.inference

-find_peaks(cms, offsets)[source]#
+find_peaks(cms, offsets)[source]#

Run peak finding on predicted confidence maps.

-forward_pass(data)[source]#
+forward_pass(data)[source]#

Run preprocessing and model inference on a batch.

@@ -510,7 +509,7 @@

sleap.nn.inference

-class sleap.nn.inference.BottomUpInferenceModel(*args, **kwargs)[source]#
+class sleap.nn.inference.BottomUpInferenceModel(*args, **kwargs)[source]#

Bottom-up instance prediction model.

This model encapsulates the bottom-up approach where points are first detected by local peak detection and then grouped into instances by connectivity scoring using part affinity fields.

@@ -524,7 +523,7 @@

sleap.nn.inference

-call(example)[source]#
+call(example)[source]#

Predict instances for one batch of images.

Parameters
@@ -557,7 +556,7 @@

sleap.nn.inference

-class sleap.nn.inference.BottomUpMultiClassInferenceLayer(*args, **kwargs)[source]#
+class sleap.nn.inference.BottomUpMultiClassInferenceLayer(*args, **kwargs)[source]#

Keras layer that predicts instances from images using a trained model.

This layer encapsulates all of the inference operations required for generating predictions from a bottom-up multi-class model. This includes

@@ -663,7 +662,7 @@

sleap.nn.inference

-call(data)[source]#
+call(data)[source]#

Predict instances for one batch of images.

Parameters
@@ -694,13 +693,13 @@

sleap.nn.inference

-find_peaks(cms, offsets)[source]#
+find_peaks(cms, offsets)[source]#

Run peak finding on predicted confidence maps.

-forward_pass(data)[source]#
+forward_pass(data)[source]#

Run preprocessing and model inference on a batch.

@@ -708,7 +707,7 @@

sleap.nn.inference

-class sleap.nn.inference.BottomUpMultiClassInferenceModel(*args, **kwargs)[source]#
+class sleap.nn.inference.BottomUpMultiClassInferenceModel(*args, **kwargs)[source]#

Bottom-up multi-class instance prediction model.

This model encapsulates the bottom-up multi-class approach where points are first detected by local peak finding and then grouped into instances by their identity

@@ -722,7 +721,7 @@

sleap.nn.inference

-call(example)[source]#
+call(example)[source]#

Predict instances for one batch of images.

Parameters
@@ -755,7 +754,7 @@

sleap.nn.inference

-class sleap.nn.inference.BottomUpMultiClassPredictor(config: sleap.nn.config.training_job.TrainingJobConfig, model: sleap.nn.model.Model, inference_model: Optional[sleap.nn.inference.BottomUpMultiClassInferenceModel] = None, peak_threshold: float = 0.2, batch_size: int = 4, integral_refinement: bool = True, integral_patch_size: int = 5, tracks: Optional[List[sleap.instance.Track]] = None, *, verbosity: str = 'rich', report_rate: float = 2.0, model_paths: List[str] = NOTHING)[source]#
+class sleap.nn.inference.BottomUpMultiClassPredictor(config: sleap.nn.config.training_job.TrainingJobConfig, model: sleap.nn.model.Model, inference_model: Optional[sleap.nn.inference.BottomUpMultiClassInferenceModel] = None, peak_threshold: float = 0.2, batch_size: int = 4, integral_refinement: bool = True, integral_patch_size: int = 5, tracks: Optional[List[sleap.instance.Track]] = None, *, verbosity: str = 'rich', report_rate: float = 2.0, model_paths: List[str] = NOTHING)[source]#

Bottom-up multi-class instance predictor.

This high-level class handles initialization, preprocessing and tracking using a trained bottom-up multi-class SLEAP model.

@@ -874,7 +873,7 @@

sleap.nn.inference

-classmethod from_trained_models(model_path: str, batch_size: int = 4, peak_threshold: float = 0.2, integral_refinement: bool = True, integral_patch_size: int = 5, resize_input_layer: bool = True) sleap.nn.inference.BottomUpMultiClassPredictor[source]#
+classmethod from_trained_models(model_path: str, batch_size: int = 4, peak_threshold: float = 0.2, integral_refinement: bool = True, integral_patch_size: int = 5, resize_input_layer: bool = True) sleap.nn.inference.BottomUpMultiClassPredictor[source]#

Create predictor from a saved model.

Parameters
@@ -913,7 +912,7 @@

sleap.nn.inference

-class sleap.nn.inference.BottomUpPredictor(bottomup_config: sleap.nn.config.training_job.TrainingJobConfig, bottomup_model: sleap.nn.model.Model, inference_model: Optional[sleap.nn.inference.BottomUpInferenceModel] = None, peak_threshold: float = 0.2, batch_size: int = 4, integral_refinement: bool = True, integral_patch_size: int = 5, max_edge_length_ratio: float = 0.25, dist_penalty_weight: float = 1.0, paf_line_points: int = 10, min_line_scores: float = 0.25, max_instances: Optional[int] = None, *, verbosity: str = 'rich', report_rate: float = 2.0, model_paths: List[str] = NOTHING)[source]#
+class sleap.nn.inference.BottomUpPredictor(bottomup_config: sleap.nn.config.training_job.TrainingJobConfig, bottomup_model: sleap.nn.model.Model, inference_model: Optional[sleap.nn.inference.BottomUpInferenceModel] = None, peak_threshold: float = 0.2, batch_size: int = 4, integral_refinement: bool = True, integral_patch_size: int = 5, max_edge_length_ratio: float = 0.25, dist_penalty_weight: float = 1.0, paf_line_points: int = 10, min_line_scores: float = 0.25, max_instances: Optional[int] = None, *, verbosity: str = 'rich', report_rate: float = 2.0, model_paths: List[str] = NOTHING)[source]#

Bottom-up multi-instance predictor.

This high-level class handles initialization, preprocessing and tracking using a trained bottom-up multi-instance SLEAP model.

@@ -1097,7 +1096,7 @@

sleap.nn.inference

-classmethod from_trained_models(model_path: str, batch_size: int = 4, peak_threshold: float = 0.2, integral_refinement: bool = True, integral_patch_size: int = 5, max_edge_length_ratio: float = 0.25, dist_penalty_weight: float = 1.0, paf_line_points: int = 10, min_line_scores: float = 0.25, resize_input_layer: bool = True, max_instances: Optional[int] = None) sleap.nn.inference.BottomUpPredictor[source]#
+classmethod from_trained_models(model_path: str, batch_size: int = 4, peak_threshold: float = 0.2, integral_refinement: bool = True, integral_patch_size: int = 5, max_edge_length_ratio: float = 0.25, dist_penalty_weight: float = 1.0, paf_line_points: int = 10, min_line_scores: float = 0.25, resize_input_layer: bool = True, max_instances: Optional[int] = None) sleap.nn.inference.BottomUpPredictor[source]#

Create predictor from a saved model.

Parameters
@@ -1150,7 +1149,7 @@

sleap.nn.inference

-class sleap.nn.inference.CentroidCrop(*args, **kwargs)[source]#
+class sleap.nn.inference.CentroidCrop(*args, **kwargs)[source]#

Inference layer for applying centroid crop-based models.

This layer encapsulates all of the inference operations required for generating predictions from a centroid confidence map model. This includes preprocessing,

@@ -1297,7 +1296,7 @@

sleap.nn.inference

-class sleap.nn.inference.CentroidCropGroundTruth(*args, **kwargs)[source]#
+class sleap.nn.inference.CentroidCropGroundTruth(*args, **kwargs)[source]#

Keras layer that simulates a centroid cropping model using ground truth.

This layer is useful for testing and evaluating centered instance models.

@@ -1308,7 +1307,7 @@

sleap.nn.inference

-call(example_gt: Dict[str, tensorflow.python.framework.ops.Tensor]) Dict[str, tensorflow.python.framework.ops.Tensor][source]#
+call(example_gt: Dict[str, tensorflow.python.framework.ops.Tensor]) Dict[str, tensorflow.python.framework.ops.Tensor][source]#

Return the ground truth instance crops.

Parameters
@@ -1344,7 +1343,7 @@

sleap.nn.inference

-class sleap.nn.inference.CentroidInferenceModel(*args, **kwargs)[source]#
+class sleap.nn.inference.CentroidInferenceModel(*args, **kwargs)[source]#

Centroid only instance prediction model.

This model encapsulates the first step in a top-down approach where instances are detected by local peak detection of an anchor point and then cropped.

@@ -1358,7 +1357,7 @@

sleap.nn.inference

-call(example: Union[Dict[str, tensorflow.python.framework.ops.Tensor], tensorflow.python.framework.ops.Tensor]) Dict[str, tensorflow.python.framework.ops.Tensor][source]#
+call(example: Union[Dict[str, tensorflow.python.framework.ops.Tensor], tensorflow.python.framework.ops.Tensor]) Dict[str, tensorflow.python.framework.ops.Tensor][source]#

Predict instances for one batch of images.

Parameters
@@ -1386,7 +1385,7 @@

sleap.nn.inference

-class sleap.nn.inference.FindInstancePeaks(*args, **kwargs)[source]#
+class sleap.nn.inference.FindInstancePeaks(*args, **kwargs)[source]#

Keras layer that predicts instance peaks from images using a trained model.

This layer encapsulates all of the inference operations required for generating predictions from a centered instance confidence map model. This includes

@@ -1467,7 +1466,7 @@

sleap.nn.inference

-call(inputs: Union[Dict[str, tensorflow.python.framework.ops.Tensor], tensorflow.python.framework.ops.Tensor]) Dict[str, tensorflow.python.framework.ops.Tensor][source]#
+call(inputs: Union[Dict[str, tensorflow.python.framework.ops.Tensor], tensorflow.python.framework.ops.Tensor]) Dict[str, tensorflow.python.framework.ops.Tensor][source]#

Predict confidence maps and infer peak coordinates.

This layer can be chained with a CentroidCrop layer to create a top-down inference function from full images.

@@ -1520,12 +1519,12 @@

sleap.nn.inference

-class sleap.nn.inference.FindInstancePeaksGroundTruth(*args, **kwargs)[source]#
+class sleap.nn.inference.FindInstancePeaksGroundTruth(*args, **kwargs)[source]#

Keras layer that simulates a centered instance peaks model.

This layer is useful for testing and evaluating centroid models.

-call(example_gt: Dict[str, tensorflow.python.framework.ops.Tensor], crop_output: Dict[str, tensorflow.python.framework.ops.Tensor]) Dict[str, tensorflow.python.framework.ops.Tensor][source]#
+call(example_gt: Dict[str, tensorflow.python.framework.ops.Tensor], crop_output: Dict[str, tensorflow.python.framework.ops.Tensor]) Dict[str, tensorflow.python.framework.ops.Tensor][source]#

Return the ground truth instance peaks given a set of crops.

Parameters
@@ -1573,7 +1572,7 @@

sleap.nn.inference

-class sleap.nn.inference.InferenceLayer(*args, **kwargs)[source]#
+class sleap.nn.inference.InferenceLayer(*args, **kwargs)[source]#

Base layer for wrapping a Keras model into a layer with preprocessing.

This layer is useful for wrapping input preprocessing operations that would otherwise be handled by a separate pipeline.

@@ -1618,7 +1617,7 @@

sleap.nn.inference

-call(data: tensorflow.python.framework.ops.Tensor) tensorflow.python.framework.ops.Tensor[source]#
+call(data: tensorflow.python.framework.ops.Tensor) tensorflow.python.framework.ops.Tensor[source]#

Call the model with preprocessed data.

Parameters
@@ -1632,7 +1631,7 @@

sleap.nn.inference

-preprocess(imgs: tensorflow.python.framework.ops.Tensor) tensorflow.python.framework.ops.Tensor[source]#
+preprocess(imgs: tensorflow.python.framework.ops.Tensor) tensorflow.python.framework.ops.Tensor[source]#

Apply all preprocessing operations configured for this layer.

Parameters
@@ -1650,14 +1649,14 @@

sleap.nn.inference

-class sleap.nn.inference.InferenceModel(*args, **kwargs)[source]#
+class sleap.nn.inference.InferenceModel(*args, **kwargs)[source]#

SLEAP inference model base class.

This class wraps the tf.keras.Model class to provide SLEAP-specific inference utilities such as handling different input data types, preprocessing and variable output shapes.

-export_model(save_path: str, signatures: str = 'serving_default', save_traces: bool = True, model_name: Optional[str] = None, tensors: Optional[Dict[str, str]] = None, unrag_outputs: bool = True)[source]#
+export_model(save_path: str, signatures: str = 'serving_default', save_traces: bool = True, model_name: Optional[str] = None, tensors: Optional[Dict[str, str]] = None, unrag_outputs: bool = True)[source]#

Save the frozen graph of a model.

Parameters
@@ -1686,7 +1685,7 @@

sleap.nn.inference

-predict(data: Union[numpy.ndarray, tensorflow.python.framework.ops.Tensor, Dict[str, tensorflow.python.framework.ops.Tensor], tensorflow.python.data.ops.dataset_ops.DatasetV2, sleap.nn.data.pipelines.Pipeline, sleap.io.video.Video], numpy: bool = True, batch_size: int = 4, **kwargs) Union[Dict[str, numpy.ndarray], Dict[str, Union[tensorflow.python.framework.ops.Tensor, tensorflow.python.ops.ragged.ragged_tensor.RaggedTensor]]][source]#
+predict(data: Union[numpy.ndarray, tensorflow.python.framework.ops.Tensor, Dict[str, tensorflow.python.framework.ops.Tensor], tensorflow.python.data.ops.dataset_ops.DatasetV2, sleap.nn.data.pipelines.Pipeline, sleap.io.video.Video], numpy: bool = True, batch_size: int = 4, **kwargs) Union[Dict[str, numpy.ndarray], Dict[str, Union[tensorflow.python.framework.ops.Tensor, tensorflow.python.ops.ragged.ragged_tensor.RaggedTensor]]][source]#

Predict instances in the data.
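
A sketch of batch prediction on raw frames; `inference_model` and the array shape are placeholders for illustration:

```python
import numpy as np

# `inference_model` stands for an InferenceModel obtained elsewhere.
frames = np.zeros([4, 384, 384, 1], dtype=np.uint8)  # batch of grayscale frames
preds = inference_model.predict(frames, batch_size=4)
print(list(preds.keys()))  # dictionary of numpy arrays when numpy=True
```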

Parameters
@@ -1730,7 +1729,7 @@

sleap.nn.inference

-predict_on_batch(data: Union[numpy.ndarray, tensorflow.python.framework.ops.Tensor, Dict[str, tensorflow.python.framework.ops.Tensor]], numpy: bool = False, **kwargs) Union[Dict[str, numpy.ndarray], Dict[str, Union[tensorflow.python.framework.ops.Tensor, tensorflow.python.ops.ragged.ragged_tensor.RaggedTensor]]][source]#
+predict_on_batch(data: Union[numpy.ndarray, tensorflow.python.framework.ops.Tensor, Dict[str, tensorflow.python.framework.ops.Tensor]], numpy: bool = False, **kwargs) Union[Dict[str, numpy.ndarray], Dict[str, Union[tensorflow.python.framework.ops.Tensor, tensorflow.python.ops.ragged.ragged_tensor.RaggedTensor]]][source]#

Predict a single batch of samples.

Parameters
@@ -1767,7 +1766,7 @@

sleap.nn.inference

-class sleap.nn.inference.MoveNetInferenceLayer(*args, **kwargs)[source]#
+class sleap.nn.inference.MoveNetInferenceLayer(*args, **kwargs)[source]#

Inference layer for applying MoveNet models.

This layer encapsulates all of the inference operations required for generating predictions from a MoveNet model. This includes

@@ -1817,7 +1816,7 @@

sleap.nn.inference

-call(ex)[source]#
+call(ex)[source]#

Call the model with preprocessed data.

Parameters
@@ -1833,7 +1832,7 @@

sleap.nn.inference

-class sleap.nn.inference.MoveNetInferenceModel(*args, **kwargs)[source]#
+class sleap.nn.inference.MoveNetInferenceModel(*args, **kwargs)[source]#

MoveNet prediction model.

This model encapsulates the basic MoveNet approach. The images are passed to a model which is trained to detect all body parts (17 joints in total).

@@ -1850,7 +1849,7 @@

sleap.nn.inference

-call(x)[source]#
+call(x)[source]#

Calls the model on new inputs and returns the outputs as tensors.

In this case call() just reapplies all ops in the graph to the new inputs (e.g. build a new computational graph from the provided inputs).

@@ -1884,7 +1883,7 @@

sleap.nn.inference

-class sleap.nn.inference.MoveNetPredictor(inference_model: Optional[sleap.nn.inference.MoveNetInferenceModel] = None, peak_threshold: float = 0.2, batch_size: int = 1, model_name: str = 'lightning', *, verbosity: str = 'rich', report_rate: float = 2.0, model_paths: List[str] = NOTHING)[source]#
+class sleap.nn.inference.MoveNetPredictor(inference_model: Optional[sleap.nn.inference.MoveNetInferenceModel] = None, peak_threshold: float = 0.2, batch_size: int = 1, model_name: str = 'lightning', *, verbosity: str = 'rich', report_rate: float = 2.0, model_paths: List[str] = NOTHING)[source]#

MoveNet predictor.

This high-level class handles initialization, preprocessing and tracking using a trained MoveNet model.

@@ -1951,7 +1950,7 @@

sleap.nn.inference

-classmethod from_trained_models(model_name: str, peak_threshold: float = 0.2) sleap.nn.inference.MoveNetPredictor[source]#
+classmethod from_trained_models(model_name: str, peak_threshold: float = 0.2) sleap.nn.inference.MoveNetPredictor[source]#

Create the predictor from a saved model.

Parameters
@@ -1978,11 +1977,11 @@

sleap.nn.inference

-class sleap.nn.inference.Predictor(*, verbosity: str = 'rich', report_rate: float = 2.0, model_paths: List[str] = NOTHING)[source]#
+class sleap.nn.inference.Predictor(*, verbosity: str = 'rich', report_rate: float = 2.0, model_paths: List[str] = NOTHING)[source]#

Base interface class for predictors.

-export_model(save_path: str, signatures: str = 'serving_default', save_traces: bool = True, model_name: Optional[str] = None, tensors: Optional[Dict[str, str]] = None, unrag_outputs: bool = True, max_instances: Optional[int] = None)[source]#
+export_model(save_path: str, signatures: str = 'serving_default', save_traces: bool = True, model_name: Optional[str] = None, tensors: Optional[Dict[str, str]] = None, unrag_outputs: bool = True, max_instances: Optional[int] = None)[source]#

Export a trained SLEAP model as a frozen graph. This initializes the model, creates a dummy tracing batch, and passes it through the model. The frozen graph is saved along with training meta info.
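
A minimal sketch (the output folder is hypothetical):

```python
# `predictor` stands for any trained Predictor (see from_model_paths below).
predictor.export_model("exported_model", unrag_outputs=True)
```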

@@ -2011,7 +2010,7 @@

sleap.nn.inference

-classmethod from_model_paths(model_paths: Union[str, List[str]], peak_threshold: float = 0.2, integral_refinement: bool = True, integral_patch_size: int = 5, batch_size: int = 4, resize_input_layer: bool = True, max_instances: Optional[int] = None) sleap.nn.inference.Predictor[source]#
+classmethod from_model_paths(model_paths: Union[str, List[str]], peak_threshold: float = 0.2, integral_refinement: bool = True, integral_patch_size: int = 5, batch_size: int = 4, resize_input_layer: bool = True, max_instances: Optional[int] = None) sleap.nn.inference.Predictor[source]#

Create the appropriate Predictor subclass from a list of model paths.
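
For example, a top-down setup pairs a centroid model with a centered-instance model (the folder names are hypothetical):

```python
from sleap.nn.inference import Predictor

predictor = Predictor.from_model_paths(
    ["models/centroid", "models/centered_instance"],  # hypothetical folders
    peak_threshold=0.2,
    batch_size=4,
)
```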

Parameters
@@ -2055,7 +2054,7 @@

sleap.nn.inference

-make_pipeline(data_provider: Optional[sleap.nn.data.pipelines.Provider] = None) sleap.nn.data.pipelines.Pipeline[source]#
+make_pipeline(data_provider: Optional[sleap.nn.data.pipelines.Provider] = None) sleap.nn.data.pipelines.Pipeline[source]#

Make a data loading pipeline.

Parameters
@@ -2075,7 +2074,7 @@

sleap.nn.inference

-predict(data: Union[sleap.nn.data.pipelines.Provider, sleap.io.dataset.Labels, sleap.io.video.Video], make_labels: bool = True) Union[List[Dict[str, numpy.ndarray]], sleap.io.dataset.Labels][source]#
+predict(data: Union[sleap.nn.data.pipelines.Provider, sleap.io.dataset.Labels, sleap.io.video.Video], make_labels: bool = True) Union[List[Dict[str, numpy.ndarray]], sleap.io.dataset.Labels][source]#

Run inference on a data source.
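
Continuing the sketch above, running inference on a video and saving the result (paths hypothetical):

```python
import sleap

video = sleap.load_video("session1.mp4")  # hypothetical video file
labels_pr = predictor.predict(video, make_labels=True)
labels_pr.save("predictions.slp")
```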

Parameters
@@ -2105,11 +2104,11 @@

sleap.nn.inference

-class sleap.nn.inference.RateColumn(table_column: Optional[rich.table.Column] = None)[source]#
+class sleap.nn.inference.RateColumn(table_column: Optional[rich.table.Column] = None)[source]#

Renders the progress rate.

-render(task: Task) rich.text.Text[source]#
+render(task: Task) rich.text.Text[source]#

Show progress rate.

@@ -2117,7 +2116,7 @@

sleap.nn.inference

-class sleap.nn.inference.SingleInstanceInferenceLayer(*args, **kwargs)[source]#
+class sleap.nn.inference.SingleInstanceInferenceLayer(*args, **kwargs)[source]#

Inference layer for applying single instance models.

This layer encapsulates all of the inference operations required for generating predictions from a single instance confidence map model. This includes

@@ -2206,7 +2205,7 @@

sleap.nn.inference

-call(data)[source]#
+call(data)[source]#

Predict instance confidence maps and find peaks.

Parameters
@@ -2234,7 +2233,7 @@

sleap.nn.inference

-class sleap.nn.inference.SingleInstanceInferenceModel(*args, **kwargs)[source]#
+class sleap.nn.inference.SingleInstanceInferenceModel(*args, **kwargs)[source]#

Single instance prediction model.

This model encapsulates the basic single instance approach where it is assumed that there is only one instance in the frame. The images are passed to a peak detector

@@ -2249,7 +2248,7 @@

sleap.nn.inference

-call(example)[source]#
+call(example)[source]#

Predict instances for one batch of images.

Parameters
@@ -2275,7 +2274,7 @@

sleap.nn.inference

-class sleap.nn.inference.SingleInstancePredictor(confmap_config: sleap.nn.config.training_job.TrainingJobConfig, confmap_model: sleap.nn.model.Model, inference_model: Optional[sleap.nn.inference.SingleInstanceInferenceModel] = None, peak_threshold: float = 0.2, integral_refinement: bool = True, integral_patch_size: int = 5, batch_size: int = 4, *, verbosity: str = 'rich', report_rate: float = 2.0, model_paths: List[str] = NOTHING)[source]#
+class sleap.nn.inference.SingleInstancePredictor(confmap_config: sleap.nn.config.training_job.TrainingJobConfig, confmap_model: sleap.nn.model.Model, inference_model: Optional[sleap.nn.inference.SingleInstanceInferenceModel] = None, peak_threshold: float = 0.2, integral_refinement: bool = True, integral_patch_size: int = 5, batch_size: int = 4, *, verbosity: str = 'rich', report_rate: float = 2.0, model_paths: List[str] = NOTHING)[source]#

Single instance predictor.

This high-level class handles initialization, preprocessing and tracking using a trained single instance SLEAP model.

@@ -2379,7 +2378,7 @@

sleap.nn.inference

-export_model(save_path: str, signatures: str = 'serving_default', save_traces: bool = True, model_name: Optional[str] = None, tensors: Optional[Dict[str, str]] = None, unrag_outputs: bool = True, max_instances: Optional[int] = None)[source]#
+export_model(save_path: str, signatures: str = 'serving_default', save_traces: bool = True, model_name: Optional[str] = None, tensors: Optional[Dict[str, str]] = None, unrag_outputs: bool = True, max_instances: Optional[int] = None)[source]#

Export a trained SLEAP model as a frozen graph. This initializes the model, creates a dummy tracing batch, and passes it through the model. The frozen graph is saved along with training meta info.

@@ -2408,7 +2407,7 @@

sleap.nn.inference

-classmethod from_trained_models(model_path: str, peak_threshold: float = 0.2, integral_refinement: bool = True, integral_patch_size: int = 5, batch_size: int = 4, resize_input_layer: bool = True) sleap.nn.inference.SingleInstancePredictor[source]#
+classmethod from_trained_models(model_path: str, peak_threshold: float = 0.2, integral_refinement: bool = True, integral_patch_size: int = 5, batch_size: int = 4, resize_input_layer: bool = True) sleap.nn.inference.SingleInstancePredictor[source]#

Create the predictor from a saved model.

Parameters
@@ -2447,7 +2446,7 @@

sleap.nn.inference

-class sleap.nn.inference.TopDownInferenceModel(*args, **kwargs)[source]#
+class sleap.nn.inference.TopDownInferenceModel(*args, **kwargs)[source]#

Top-down instance prediction model.

This model encapsulates the top-down approach where instances are first detected by local peak detection of an anchor point and then cropped. These instance-centered

@@ -2472,7 +2471,7 @@

sleap.nn.inference

-call(example: Union[Dict[str, tensorflow.python.framework.ops.Tensor], tensorflow.python.framework.ops.Tensor]) Dict[str, tensorflow.python.framework.ops.Tensor][source]#
+call(example: Union[Dict[str, tensorflow.python.framework.ops.Tensor], tensorflow.python.framework.ops.Tensor]) Dict[str, tensorflow.python.framework.ops.Tensor][source]#

Predict instances for one batch of images.

Parameters
@@ -2506,7 +2505,7 @@

sleap.nn.inference

-class sleap.nn.inference.TopDownMultiClassFindPeaks(*args, **kwargs)[source]#
+class sleap.nn.inference.TopDownMultiClassFindPeaks(*args, **kwargs)[source]#

Keras layer that predicts and classifies peaks from images using a trained model.

This layer encapsulates all of the inference operations required for generating predictions from a centered instance confidence map and multi-class model. This

@@ -2614,7 +2613,7 @@

sleap.nn.inference

-call(inputs: Union[Dict[str, tensorflow.python.framework.ops.Tensor], tensorflow.python.framework.ops.Tensor]) Dict[str, tensorflow.python.framework.ops.Tensor][source]#
+call(inputs: Union[Dict[str, tensorflow.python.framework.ops.Tensor], tensorflow.python.framework.ops.Tensor]) Dict[str, tensorflow.python.framework.ops.Tensor][source]#

Predict confidence maps and infer peak coordinates.

This layer can be chained with a CentroidCrop layer to create a top-down inference function from full images.

@@ -2675,7 +2674,7 @@

sleap.nn.inference

-class sleap.nn.inference.TopDownMultiClassInferenceModel(*args, **kwargs)[source]#
+class sleap.nn.inference.TopDownMultiClassInferenceModel(*args, **kwargs)[source]#

Top-down instance prediction model.

This model encapsulates the top-down approach where instances are first detected by local peak detection of an anchor point and then cropped. These instance-centered
@@ -2700,7 +2699,7 @@

sleap.nn.inference

-call(example: Union[Dict[str, tensorflow.python.framework.ops.Tensor], tensorflow.python.framework.ops.Tensor]) Dict[str, tensorflow.python.framework.ops.Tensor][source]#
+call(example: Union[Dict[str, tensorflow.python.framework.ops.Tensor], tensorflow.python.framework.ops.Tensor]) Dict[str, tensorflow.python.framework.ops.Tensor][source]#

Predict instances for one batch of images.

Parameters
@@ -2732,7 +2731,7 @@

sleap.nn.inference

-export_model(save_path: str, signatures: str = 'serving_default', save_traces: bool = True, model_name: Optional[str] = None, tensors: Optional[Dict[str, str]] = None, unrag_outputs: bool = True)[source]#
+export_model(save_path: str, signatures: str = 'serving_default', save_traces: bool = True, model_name: Optional[str] = None, tensors: Optional[Dict[str, str]] = None, unrag_outputs: bool = True)[source]#

Save the frozen graph of a model.

Parameters
@@ -2763,7 +2762,7 @@

sleap.nn.inference

-class sleap.nn.inference.TopDownMultiClassPredictor(centroid_config: Optional[sleap.nn.config.training_job.TrainingJobConfig] = None, centroid_model: Optional[sleap.nn.model.Model] = None, confmap_config: Optional[sleap.nn.config.training_job.TrainingJobConfig] = None, confmap_model: Optional[sleap.nn.model.Model] = None, inference_model: Optional[sleap.nn.inference.TopDownMultiClassInferenceModel] = None, batch_size: int = 4, peak_threshold: float = 0.2, integral_refinement: bool = True, integral_patch_size: int = 5, tracks: Optional[List[sleap.instance.Track]] = None, *, verbosity: str = 'rich', report_rate: float = 2.0, model_paths: List[str] = NOTHING)[source]#
+class sleap.nn.inference.TopDownMultiClassPredictor(centroid_config: Optional[sleap.nn.config.training_job.TrainingJobConfig] = None, centroid_model: Optional[sleap.nn.model.Model] = None, confmap_config: Optional[sleap.nn.config.training_job.TrainingJobConfig] = None, confmap_model: Optional[sleap.nn.model.Model] = None, inference_model: Optional[sleap.nn.inference.TopDownMultiClassInferenceModel] = None, batch_size: int = 4, peak_threshold: float = 0.2, integral_refinement: bool = True, integral_patch_size: int = 5, tracks: Optional[List[sleap.instance.Track]] = None, *, verbosity: str = 'rich', report_rate: float = 2.0, model_paths: List[str] = NOTHING)[source]#

Top-down multi-instance predictor with classification.

This high-level class handles initialization, preprocessing and tracking using a trained top-down multi-instance classification SLEAP model.

@@ -2922,7 +2921,7 @@

sleap.nn.inference

-export_model(save_path: str, signatures: str = 'serving_default', save_traces: bool = True, model_name: Optional[str] = None, tensors: Optional[Dict[str, str]] = None, unrag_outputs: bool = True, max_instances: Optional[int] = None)[source]#
+export_model(save_path: str, signatures: str = 'serving_default', save_traces: bool = True, model_name: Optional[str] = None, tensors: Optional[Dict[str, str]] = None, unrag_outputs: bool = True, max_instances: Optional[int] = None)[source]#

Export a trained SLEAP model as a frozen graph. Initializes the model, creates a dummy tracing batch, and passes it through the model. The frozen graph is saved along with the training meta info.

@@ -2951,7 +2950,7 @@

sleap.nn.inference

-classmethod from_trained_models(centroid_model_path: Optional[str] = None, confmap_model_path: Optional[str] = None, batch_size: int = 4, peak_threshold: float = 0.2, integral_refinement: bool = True, integral_patch_size: int = 5, resize_input_layer: bool = True) sleap.nn.inference.TopDownMultiClassPredictor[source]#
+classmethod from_trained_models(centroid_model_path: Optional[str] = None, confmap_model_path: Optional[str] = None, batch_size: int = 4, peak_threshold: float = 0.2, integral_refinement: bool = True, integral_patch_size: int = 5, resize_input_layer: bool = True) sleap.nn.inference.TopDownMultiClassPredictor[source]#

Create predictor from saved models.

Parameters
@@ -2994,7 +2993,7 @@

sleap.nn.inference

-make_pipeline(data_provider: Optional[sleap.nn.data.pipelines.Provider] = None) sleap.nn.data.pipelines.Pipeline[source]#
+make_pipeline(data_provider: Optional[sleap.nn.data.pipelines.Provider] = None) sleap.nn.data.pipelines.Pipeline[source]#

Make a data loading pipeline.

Parameters
@@ -3016,7 +3015,7 @@

sleap.nn.inference

-class sleap.nn.inference.TopDownPredictor(centroid_config: Optional[sleap.nn.config.training_job.TrainingJobConfig] = None, centroid_model: Optional[sleap.nn.model.Model] = None, confmap_config: Optional[sleap.nn.config.training_job.TrainingJobConfig] = None, confmap_model: Optional[sleap.nn.model.Model] = None, inference_model: Optional[sleap.nn.inference.TopDownInferenceModel] = None, batch_size: int = 4, peak_threshold: float = 0.2, integral_refinement: bool = True, integral_patch_size: int = 5, max_instances: Optional[int] = None, *, verbosity: str = 'rich', report_rate: float = 2.0, model_paths: List[str] = NOTHING)[source]#
+class sleap.nn.inference.TopDownPredictor(centroid_config: Optional[sleap.nn.config.training_job.TrainingJobConfig] = None, centroid_model: Optional[sleap.nn.model.Model] = None, confmap_config: Optional[sleap.nn.config.training_job.TrainingJobConfig] = None, confmap_model: Optional[sleap.nn.model.Model] = None, inference_model: Optional[sleap.nn.inference.TopDownInferenceModel] = None, batch_size: int = 4, peak_threshold: float = 0.2, integral_refinement: bool = True, integral_patch_size: int = 5, max_instances: Optional[int] = None, *, verbosity: str = 'rich', report_rate: float = 2.0, model_paths: List[str] = NOTHING)[source]#

Top-down multi-instance predictor.

This high-level class handles initialization, preprocessing and tracking using a trained top-down multi-instance SLEAP model.

@@ -3177,7 +3176,7 @@
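A minimal sketch of assembling the two top-down stages from (hypothetical) saved model directories:

```python
import sleap
from sleap.nn.inference import TopDownPredictor

predictor = TopDownPredictor.from_trained_models(
    centroid_model_path="models/centroid_model",
    confmap_model_path="models/centered_instance_model",
    batch_size=4,
    peak_threshold=0.2,
    max_instances=2,  # optionally cap the number of predicted instances per frame
)
labels = predictor.predict(sleap.load_video("video.mp4"))
```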

sleap.nn.inference

-export_model(save_path: str, signatures: str = 'serving_default', save_traces: bool = True, model_name: Optional[str] = None, tensors: Optional[Dict[str, str]] = None, unrag_outputs: bool = True, max_instances: Optional[int] = None)[source]#
+export_model(save_path: str, signatures: str = 'serving_default', save_traces: bool = True, model_name: Optional[str] = None, tensors: Optional[Dict[str, str]] = None, unrag_outputs: bool = True, max_instances: Optional[int] = None)[source]#

Export a trained SLEAP model as a frozen graph. Initializes the model, creates a dummy tracing batch, and passes it through the model. The frozen graph is saved along with the training meta info.

@@ -3206,7 +3205,7 @@

sleap.nn.inference

-classmethod from_trained_models(centroid_model_path: Optional[str] = None, confmap_model_path: Optional[str] = None, batch_size: int = 4, peak_threshold: float = 0.2, integral_refinement: bool = True, integral_patch_size: int = 5, resize_input_layer: bool = True, max_instances: Optional[int] = None) sleap.nn.inference.TopDownPredictor[source]#
+classmethod from_trained_models(centroid_model_path: Optional[str] = None, confmap_model_path: Optional[str] = None, batch_size: int = 4, peak_threshold: float = 0.2, integral_refinement: bool = True, integral_patch_size: int = 5, resize_input_layer: bool = True, max_instances: Optional[int] = None) sleap.nn.inference.TopDownPredictor[source]#

Create predictor from saved models.

Parameters
@@ -3253,7 +3252,7 @@

sleap.nn.inference

-make_pipeline(data_provider: Optional[sleap.nn.data.pipelines.Provider] = None) sleap.nn.data.pipelines.Pipeline[source]#
+make_pipeline(data_provider: Optional[sleap.nn.data.pipelines.Provider] = None) sleap.nn.data.pipelines.Pipeline[source]#

Make a data loading pipeline.

Parameters
@@ -3275,11 +3274,11 @@

sleap.nn.inference

-class sleap.nn.inference.VisualPredictor(config: sleap.nn.config.training_job.TrainingJobConfig, model: sleap.nn.model.Model, *, verbosity: str = 'rich', report_rate: float = 2.0, model_paths: List[str] = NOTHING)[source]#
+class sleap.nn.inference.VisualPredictor(config: sleap.nn.config.training_job.TrainingJobConfig, model: sleap.nn.model.Model, *, verbosity: str = 'rich', report_rate: float = 2.0, model_paths: List[str] = NOTHING)[source]#

Predictor class for generating the visual output of model.

-make_pipeline()[source]#
+make_pipeline()[source]#

Make a data loading pipeline.

Parameters
@@ -3299,7 +3298,7 @@

sleap.nn.inference

-predict(data_provider: sleap.nn.data.pipelines.Provider)[source]#
+predict(data_provider: sleap.nn.data.pipelines.Provider)[source]#

Run inference on a data source.

Parameters
@@ -3321,7 +3320,7 @@

sleap.nn.inference

-safely_generate(ds: tensorflow.python.data.ops.dataset_ops.DatasetV2, progress: bool = True)[source]#
+safely_generate(ds: tensorflow.python.data.ops.dataset_ops.DatasetV2, progress: bool = True)[source]#

Yields examples from the dataset, catching and logging exceptions.

@@ -3329,13 +3328,13 @@

sleap.nn.inference

-sleap.nn.inference.export_cli(args: Optional[list] = None)[source]#
+sleap.nn.inference.export_cli(args: Optional[list] = None)[source]#

CLI for sleap-export.

-sleap.nn.inference.export_model(model_path: Union[str, List[str]], save_path: str = 'exported_model', signatures: str = 'serving_default', save_traces: bool = True, model_name: Optional[str] = None, tensors: Optional[Dict[str, str]] = None, unrag_outputs: bool = True, max_instances: Optional[int] = None)[source]#
+sleap.nn.inference.export_model(model_path: Union[str, List[str]], save_path: str = 'exported_model', signatures: str = 'serving_default', save_traces: bool = True, model_name: Optional[str] = None, tensors: Optional[Dict[str, str]] = None, unrag_outputs: bool = True, max_instances: Optional[int] = None)[source]#

High-level export of a trained SLEAP model as a frozen graph.

Parameters
@@ -3363,7 +3362,7 @@
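A one-call sketch of the module-level function, assuming a top-down model whose two stage directories are placeholders:

```python
from sleap.nn.inference import export_model

export_model(
    model_path=["models/centroid_model", "models/centered_instance_model"],
    save_path="exported_model",
    unrag_outputs=True,  # pad ragged outputs into dense tensors
)
```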

sleap.nn.inference

-sleap.nn.inference.find_head(model: keras.engine.training.Model, name: str) Optional[int][source]#
+sleap.nn.inference.find_head(model: keras.engine.training.Model, name: str) Optional[int][source]#

Return the index of a head in a model’s outputs.

Parameters
@@ -3390,7 +3389,7 @@

sleap.nn.inference

-sleap.nn.inference.get_keras_model_path(path: str) str[source]#
+sleap.nn.inference.get_keras_model_path(path: str) str[source]#

Utility method for finding the path to a saved Keras model.

Parameters
@@ -3404,7 +3403,7 @@

sleap.nn.inference

-sleap.nn.inference.get_model_output_stride(model: keras.engine.training.Model, input_ind: int = 0, output_ind: int = - 1) int[source]#
+sleap.nn.inference.get_model_output_stride(model: keras.engine.training.Model, input_ind: int = 0, output_ind: int = - 1) int[source]#

Return the stride (1/scale) of the model outputs relative to the input.

Parameters
@@ -3430,7 +3429,7 @@

sleap.nn.inference

-sleap.nn.inference.load_model(model_path: Union[str, List[str]], batch_size: int = 4, peak_threshold: float = 0.2, refinement: str = 'integral', tracker: Optional[str] = None, tracker_window: int = 5, tracker_max_instances: Optional[int] = None, disable_gpu_preallocation: bool = True, progress_reporting: str = 'rich', resize_input_layer: bool = True, max_instances: Optional[int] = None) sleap.nn.inference.Predictor[source]#
+sleap.nn.inference.load_model(model_path: Union[str, List[str]], batch_size: int = 4, peak_threshold: float = 0.2, refinement: str = 'integral', tracker: Optional[str] = None, tracker_window: int = 5, tracker_max_instances: Optional[int] = None, disable_gpu_preallocation: bool = True, progress_reporting: str = 'rich', resize_input_layer: bool = True, max_instances: Optional[int] = None) sleap.nn.inference.Predictor[source]#

Load a trained SLEAP model.

Parameters
@@ -3489,7 +3488,7 @@
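A sketch of the typical entry point, assuming hypothetical model directories; load_model picks the appropriate Predictor subclass from the model metadata, and the tracker arguments attach identity tracking:

```python
import sleap
from sleap.nn.inference import load_model

predictor = load_model(
    ["models/centroid_model", "models/centered_instance_model"],
    batch_size=4,
    peak_threshold=0.2,
    tracker="simple",   # attach a tracker for linking identities over time
    tracker_window=5,
)
labels = predictor.predict(sleap.load_video("video.mp4"))
```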

sleap.nn.inference

-sleap.nn.inference.main(args: Optional[list] = None)[source]#
+sleap.nn.inference.main(args: Optional[list] = None)[source]#

Entrypoint for sleap-track CLI for running inference.

Parameters
@@ -3500,7 +3499,7 @@

sleap.nn.inference

-sleap.nn.inference.make_model_movenet(model_name: str) keras.engine.training.Model[source]#
+sleap.nn.inference.make_model_movenet(model_name: str) keras.engine.training.Model[source]#

Load a MoveNet model by name.

Parameters
diff --git a/develop/api/sleap.nn.losses.html b/develop/api/sleap.nn.losses.html index abab01b8a..2b5de6bab 100644 --- a/develop/api/sleap.nn.losses.html +++ b/develop/api/sleap.nn.losses.html @@ -9,7 +9,7 @@ - sleap.nn.losses — SLEAP (v1.4.1a1) + sleap.nn.losses — SLEAP (v1.4.1a2) @@ -34,7 +34,6 @@ - @@ -323,7 +322,7 @@

sleap.nn.losses

Custom loss functions and metrics.

-class sleap.nn.losses.OHKMLoss(hard_to_easy_ratio: float = 2.0, min_hard_keypoints: int = 2, max_hard_keypoints: int = - 1, loss_scale: float = 5.0, name='ohkm', **kwargs)[source]#
+class sleap.nn.losses.OHKMLoss(hard_to_easy_ratio: float = 2.0, min_hard_keypoints: int = 2, max_hard_keypoints: int = - 1, loss_scale: float = 5.0, name='ohkm', **kwargs)[source]#

Online hard keypoint mining loss.

This loss serves to dynamically reweight the MSE of the top-K worst channels in each batch. This is useful when fine-tuning a model to improve performance on a hard
@@ -361,7 +360,7 @@

sleap.nn.losses

-call(y_gt, y_pr, sample_weight=None)[source]#
+call(y_gt, y_pr, sample_weight=None)[source]#

Invokes the Loss instance.

Parameters
@@ -380,7 +379,7 @@

sleap.nn.losses

-classmethod from_config(config: sleap.nn.config.optimization.HardKeypointMiningConfig) sleap.nn.losses.OHKMLoss[source]#
+classmethod from_config(config: sleap.nn.config.optimization.HardKeypointMiningConfig) sleap.nn.losses.OHKMLoss[source]#

Instantiates a Loss from its config (output of get_config()).

Parameters
@@ -396,7 +395,7 @@

sleap.nn.losses

-class sleap.nn.losses.PartLoss(*args, **kwargs)[source]#
+class sleap.nn.losses.PartLoss(*args, **kwargs)[source]#

Compute channelwise loss.

Useful for monitoring the MSE for specific body parts (channels).

@@ -413,7 +412,7 @@

sleap.nn.losses

-result()[source]#
+result()[source]#

Computes and returns the metric value tensor.

Result computation is an idempotent operation that simply calculates the metric value using the state variables.

@@ -421,7 +420,7 @@

sleap.nn.losses

-update_state(y_gt, y_pr, sample_weight=None)[source]#
+update_state(y_gt, y_pr, sample_weight=None)[source]#

Accumulates statistics for the metric.

Note: This function is executed as a graph function in graph mode. This means:

@@ -450,7 +449,7 @@

sleap.nn.losses

-sleap.nn.losses.compute_ohkm_loss(y_gt: tensorflow.python.framework.ops.Tensor, y_pr: tensorflow.python.framework.ops.Tensor, hard_to_easy_ratio: float = 2.0, min_hard_keypoints: int = 2, max_hard_keypoints: int = - 1, loss_scale: float = 5.0) tensorflow.python.framework.ops.Tensor[source]#
+sleap.nn.losses.compute_ohkm_loss(y_gt: tensorflow.python.framework.ops.Tensor, y_pr: tensorflow.python.framework.ops.Tensor, hard_to_easy_ratio: float = 2.0, min_hard_keypoints: int = 2, max_hard_keypoints: int = - 1, loss_scale: float = 5.0) tensorflow.python.framework.ops.Tensor[source]#

Compute the online hard keypoint mining loss.
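A toy sketch of the functional form on random tensors (shapes are illustrative; channels are keypoint confidence maps):

```python
import tensorflow as tf
from sleap.nn.losses import compute_ohkm_loss

# 2 samples, 32x32 confidence maps, 4 keypoint channels.
y_gt = tf.random.uniform([2, 32, 32, 4])
y_pr = tf.random.uniform([2, 32, 32, 4])

# Channels whose MSE is more than hard_to_easy_ratio times the easiest
# channel's MSE are treated as "hard" and reweighted by loss_scale.
loss = compute_ohkm_loss(
    y_gt,
    y_pr,
    hard_to_easy_ratio=2.0,
    min_hard_keypoints=2,
    loss_scale=5.0,
)
```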

diff --git a/develop/api/sleap.nn.model.html b/develop/api/sleap.nn.model.html index ee25ff47a..b58293810 100644 --- a/develop/api/sleap.nn.model.html +++ b/develop/api/sleap.nn.model.html @@ -9,7 +9,7 @@ - sleap.nn.model — SLEAP (v1.4.1a1) + sleap.nn.model — SLEAP (v1.4.1a2) @@ -34,7 +34,6 @@ - @@ -326,7 +325,7 @@

sleap.nn.model

model configuration without actually instantiating the model itself.

-class sleap.nn.model.Model(backbone: sleap.nn.model.Architecture, heads: Any, keras_model: Optional[keras.engine.training.Model] = None)[source]#
+class sleap.nn.model.Model(backbone: sleap.nn.model.Architecture, heads: Any, keras_model: Optional[keras.engine.training.Model] = None)[source]#

SLEAP model that describes an architecture and output types.

@@ -364,7 +363,7 @@

sleap.nn.model

-classmethod from_config(config: sleap.nn.config.model.ModelConfig, skeleton: Optional[sleap.skeleton.Skeleton] = None, tracks: Optional[List[sleap.instance.Track]] = None, update_config: bool = False) sleap.nn.model.Model[source]#
+classmethod from_config(config: sleap.nn.config.model.ModelConfig, skeleton: Optional[sleap.skeleton.Skeleton] = None, tracks: Optional[List[sleap.instance.Track]] = None, update_config: bool = False) sleap.nn.model.Model[source]#

Create a SLEAP model from configurations.

Parameters
@@ -383,7 +382,7 @@

sleap.nn.model

-make_model(input_shape: Tuple[int, int, int]) keras.engine.training.Model[source]#
+make_model(input_shape: Tuple[int, int, int]) keras.engine.training.Model[source]#

Create a trainable model by connecting the backbone with the heads.

Parameters
diff --git a/develop/api/sleap.nn.paf_grouping.html b/develop/api/sleap.nn.paf_grouping.html index f7f1d52eb..04b0929c3 100644 --- a/develop/api/sleap.nn.paf_grouping.html +++ b/develop/api/sleap.nn.paf_grouping.html @@ -9,7 +9,7 @@ - sleap.nn.paf_grouping — SLEAP (v1.4.1a1) + sleap.nn.paf_grouping — SLEAP (v1.4.1a2) @@ -34,7 +34,6 @@ - @@ -343,7 +342,7 @@

sleap.nn.paf_grouping

-class sleap.nn.paf_grouping.EdgeConnection(src_peak_ind: int, dst_peak_ind: int, score: float)[source]#
+class sleap.nn.paf_grouping.EdgeConnection(src_peak_ind: int, dst_peak_ind: int, score: float)[source]#

Indices to specify a matched connection between two peaks.

This is a convenience named tuple for use in the matching pipeline.

@@ -383,7 +382,7 @@

sleap.nn.paf_grouping

-class sleap.nn.paf_grouping.EdgeType(src_node_ind: int, dst_node_ind: int)[source]#
+class sleap.nn.paf_grouping.EdgeType(src_node_ind: int, dst_node_ind: int)[source]#

Indices to uniquely identify a single edge type.

This is a convenience named tuple for use in the matching pipeline.

@@ -412,7 +411,7 @@

sleap.nn.paf_grouping

-class sleap.nn.paf_grouping.PAFScorer(part_names: List[str], edges: List[Tuple[str, str]], pafs_stride: int, max_edge_length_ratio: float = 0.25, dist_penalty_weight: float = 1.0, n_points: int = 10, min_instance_peaks: Union[int, float] = 0, min_line_scores: float = 0.25)[source]#
+class sleap.nn.paf_grouping.PAFScorer(part_names: List[str], edges: List[Tuple[str, str]], pafs_stride: int, max_edge_length_ratio: float = 0.25, dist_penalty_weight: float = 1.0, n_points: int = 10, min_instance_peaks: Union[int, float] = 0, min_line_scores: float = 0.25)[source]#

Scoring pipeline based on part affinity fields.

This class facilitates grouping of predicted peaks based on PAFs. It holds a set of common parameters that are used across different steps of the pipeline.

@@ -596,7 +595,7 @@
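A minimal construction sketch; the part names and edges describe a toy 3-node skeleton and are purely illustrative, and pafs_stride must match the stride of the model's PAF output:

```python
from sleap.nn.paf_grouping import PAFScorer

scorer = PAFScorer(
    part_names=["head", "thorax", "abdomen"],
    edges=[("head", "thorax"), ("thorax", "abdomen")],
    pafs_stride=4,
)

# scorer.predict(pafs, peaks, peak_vals, peak_channel_inds) then groups the
# detected peaks into per-instance points, point scores, and instance scores.
```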

sleap.nn.paf_grouping

-classmethod from_config(config: sleap.nn.config.model.MultiInstanceConfig, max_edge_length_ratio: float = 0.25, dist_penalty_weight: float = 1.0, n_points: int = 10, min_instance_peaks: Union[int, float] = 0, min_line_scores: float = 0.25) sleap.nn.paf_grouping.PAFScorer[source]#
+classmethod from_config(config: sleap.nn.config.model.MultiInstanceConfig, max_edge_length_ratio: float = 0.25, dist_penalty_weight: float = 1.0, n_points: int = 10, min_instance_peaks: Union[int, float] = 0, min_line_scores: float = 0.25) sleap.nn.paf_grouping.PAFScorer[source]#

Initialize the PAF scorer from a MultiInstanceConfig head config.

Parameters
@@ -626,7 +625,7 @@

sleap.nn.paf_grouping

-group_instances(peaks: tensorflow.python.ops.ragged.ragged_tensor.RaggedTensor, peak_vals: tensorflow.python.ops.ragged.ragged_tensor.RaggedTensor, peak_channel_inds: tensorflow.python.ops.ragged.ragged_tensor.RaggedTensor, match_edge_inds: tensorflow.python.ops.ragged.ragged_tensor.RaggedTensor, match_src_peak_inds: tensorflow.python.ops.ragged.ragged_tensor.RaggedTensor, match_dst_peak_inds: tensorflow.python.ops.ragged.ragged_tensor.RaggedTensor, match_line_scores: tensorflow.python.ops.ragged.ragged_tensor.RaggedTensor) Tuple[tensorflow.python.ops.ragged.ragged_tensor.RaggedTensor, tensorflow.python.ops.ragged.ragged_tensor.RaggedTensor, tensorflow.python.ops.ragged.ragged_tensor.RaggedTensor][source]#
+group_instances(peaks: tensorflow.python.ops.ragged.ragged_tensor.RaggedTensor, peak_vals: tensorflow.python.ops.ragged.ragged_tensor.RaggedTensor, peak_channel_inds: tensorflow.python.ops.ragged.ragged_tensor.RaggedTensor, match_edge_inds: tensorflow.python.ops.ragged.ragged_tensor.RaggedTensor, match_src_peak_inds: tensorflow.python.ops.ragged.ragged_tensor.RaggedTensor, match_dst_peak_inds: tensorflow.python.ops.ragged.ragged_tensor.RaggedTensor, match_line_scores: tensorflow.python.ops.ragged.ragged_tensor.RaggedTensor) Tuple[tensorflow.python.ops.ragged.ragged_tensor.RaggedTensor, tensorflow.python.ops.ragged.ragged_tensor.RaggedTensor, tensorflow.python.ops.ragged.ragged_tensor.RaggedTensor][source]#

Group matched connections into full instances for a batch.

Parameters
@@ -684,7 +683,7 @@

sleap.nn.paf_grouping

-match_candidates(edge_inds: tensorflow.python.ops.ragged.ragged_tensor.RaggedTensor, edge_peak_inds: tensorflow.python.ops.ragged.ragged_tensor.RaggedTensor, line_scores: tensorflow.python.ops.ragged.ragged_tensor.RaggedTensor) Tuple[tensorflow.python.ops.ragged.ragged_tensor.RaggedTensor, tensorflow.python.ops.ragged.ragged_tensor.RaggedTensor, tensorflow.python.ops.ragged.ragged_tensor.RaggedTensor, tensorflow.python.ops.ragged.ragged_tensor.RaggedTensor][source]#
+match_candidates(edge_inds: tensorflow.python.ops.ragged.ragged_tensor.RaggedTensor, edge_peak_inds: tensorflow.python.ops.ragged.ragged_tensor.RaggedTensor, line_scores: tensorflow.python.ops.ragged.ragged_tensor.RaggedTensor) Tuple[tensorflow.python.ops.ragged.ragged_tensor.RaggedTensor, tensorflow.python.ops.ragged.ragged_tensor.RaggedTensor, tensorflow.python.ops.ragged.ragged_tensor.RaggedTensor, tensorflow.python.ops.ragged.ragged_tensor.RaggedTensor][source]#

Match candidate connections for a batch based on PAF scores.

Parameters
@@ -733,7 +732,7 @@

sleap.nn.paf_grouping

-predict(pafs: tensorflow.python.framework.ops.Tensor, peaks: tensorflow.python.ops.ragged.ragged_tensor.RaggedTensor, peak_vals: tensorflow.python.ops.ragged.ragged_tensor.RaggedTensor, peak_channel_inds: tensorflow.python.ops.ragged.ragged_tensor.RaggedTensor) Tuple[tensorflow.python.ops.ragged.ragged_tensor.RaggedTensor, tensorflow.python.ops.ragged.ragged_tensor.RaggedTensor, tensorflow.python.ops.ragged.ragged_tensor.RaggedTensor][source]#
+predict(pafs: tensorflow.python.framework.ops.Tensor, peaks: tensorflow.python.ops.ragged.ragged_tensor.RaggedTensor, peak_vals: tensorflow.python.ops.ragged.ragged_tensor.RaggedTensor, peak_channel_inds: tensorflow.python.ops.ragged.ragged_tensor.RaggedTensor) Tuple[tensorflow.python.ops.ragged.ragged_tensor.RaggedTensor, tensorflow.python.ops.ragged.ragged_tensor.RaggedTensor, tensorflow.python.ops.ragged.ragged_tensor.RaggedTensor][source]#

Group a batch of predicted peaks into full instance predictions using PAFs.

Parameters
@@ -780,7 +779,7 @@

sleap.nn.paf_grouping

-score_paf_lines(pafs: tensorflow.python.framework.ops.Tensor, peaks: tensorflow.python.framework.ops.Tensor, peak_channel_inds: tensorflow.python.framework.ops.Tensor) Tuple[tensorflow.python.ops.ragged.ragged_tensor.RaggedTensor, tensorflow.python.ops.ragged.ragged_tensor.RaggedTensor, tensorflow.python.ops.ragged.ragged_tensor.RaggedTensor][source]#
+score_paf_lines(pafs: tensorflow.python.framework.ops.Tensor, peaks: tensorflow.python.framework.ops.Tensor, peak_channel_inds: tensorflow.python.framework.ops.Tensor) Tuple[tensorflow.python.ops.ragged.ragged_tensor.RaggedTensor, tensorflow.python.ops.ragged.ragged_tensor.RaggedTensor, tensorflow.python.ops.ragged.ragged_tensor.RaggedTensor][source]#

Create and score PAF lines formed between connection candidates.

Parameters
@@ -822,7 +821,7 @@

sleap.nn.paf_grouping

-class sleap.nn.paf_grouping.PeakID(node_ind: int, peak_ind: int)[source]#
+class sleap.nn.paf_grouping.PeakID(node_ind: int, peak_ind: int)[source]#

Indices to uniquely identify a single peak.

This is a convenience named tuple for use in the matching pipeline.

@@ -851,7 +850,7 @@

sleap.nn.paf_grouping

-sleap.nn.paf_grouping.assign_connections_to_instances(connections: Dict[sleap.nn.paf_grouping.EdgeType, List[sleap.nn.paf_grouping.EdgeConnection]], min_instance_peaks: Union[int, float] = 0, n_nodes: Optional[int] = None) Dict[sleap.nn.paf_grouping.PeakID, int][source]#
+sleap.nn.paf_grouping.assign_connections_to_instances(connections: Dict[sleap.nn.paf_grouping.EdgeType, List[sleap.nn.paf_grouping.EdgeConnection]], min_instance_peaks: Union[int, float] = 0, n_nodes: Optional[int] = None) Dict[sleap.nn.paf_grouping.PeakID, int][source]#

Assigns connected edges to instances via greedy graph partitioning.

Parameters
@@ -891,7 +890,7 @@

sleap.nn.paf_grouping

-sleap.nn.paf_grouping.compute_distance_penalty(spatial_vec_lengths: tensorflow.python.framework.ops.Tensor, max_edge_length: float, dist_penalty_weight: float = 1.0) tensorflow.python.framework.ops.Tensor[source]#
+sleap.nn.paf_grouping.compute_distance_penalty(spatial_vec_lengths: tensorflow.python.framework.ops.Tensor, max_edge_length: float, dist_penalty_weight: float = 1.0) tensorflow.python.framework.ops.Tensor[source]#

Compute the distance penalty component of the PAF line integral score.

Parameters
@@ -937,7 +936,7 @@

sleap.nn.paf_grouping

-sleap.nn.paf_grouping.get_connection_candidates(peak_channel_inds_sample: tensorflow.python.framework.ops.Tensor, skeleton_edges: tensorflow.python.framework.ops.Tensor, n_nodes: int) Tuple[tensorflow.python.framework.ops.Tensor, tensorflow.python.framework.ops.Tensor][source]#
+sleap.nn.paf_grouping.get_connection_candidates(peak_channel_inds_sample: tensorflow.python.framework.ops.Tensor, skeleton_edges: tensorflow.python.framework.ops.Tensor, n_nodes: int) Tuple[tensorflow.python.framework.ops.Tensor, tensorflow.python.framework.ops.Tensor][source]#

Find the indices of all the possible connections formed by the detected peaks.

Parameters
@@ -965,7 +964,7 @@

sleap.nn.paf_grouping

-sleap.nn.paf_grouping.get_paf_lines(pafs_sample: tensorflow.python.framework.ops.Tensor, peaks_sample: tensorflow.python.framework.ops.Tensor, edge_peak_inds: tensorflow.python.framework.ops.Tensor, edge_inds: tensorflow.python.framework.ops.Tensor, n_line_points: int, pafs_stride: int) tensorflow.python.framework.ops.Tensor[source]#
+sleap.nn.paf_grouping.get_paf_lines(pafs_sample: tensorflow.python.framework.ops.Tensor, peaks_sample: tensorflow.python.framework.ops.Tensor, edge_peak_inds: tensorflow.python.framework.ops.Tensor, edge_inds: tensorflow.python.framework.ops.Tensor, n_line_points: int, pafs_stride: int) tensorflow.python.framework.ops.Tensor[source]#

Gets the PAF values at the lines formed between all detected peaks in a sample.

Parameters
@@ -1011,7 +1010,7 @@

sleap.nn.paf_grouping

-sleap.nn.paf_grouping.group_instances_batch(peaks: tensorflow.python.ops.ragged.ragged_tensor.RaggedTensor, peak_vals: tensorflow.python.ops.ragged.ragged_tensor.RaggedTensor, peak_channel_inds: tensorflow.python.ops.ragged.ragged_tensor.RaggedTensor, match_edge_inds: tensorflow.python.ops.ragged.ragged_tensor.RaggedTensor, match_src_peak_inds: tensorflow.python.ops.ragged.ragged_tensor.RaggedTensor, match_dst_peak_inds: tensorflow.python.ops.ragged.ragged_tensor.RaggedTensor, match_line_scores: tensorflow.python.ops.ragged.ragged_tensor.RaggedTensor, n_nodes: int, sorted_edge_inds: Tuple[int], edge_types: List[sleap.nn.paf_grouping.EdgeType], min_instance_peaks: int, min_line_scores: float = 0.25) Tuple[tensorflow.python.ops.ragged.ragged_tensor.RaggedTensor, tensorflow.python.ops.ragged.ragged_tensor.RaggedTensor, tensorflow.python.ops.ragged.ragged_tensor.RaggedTensor][source]#
+sleap.nn.paf_grouping.group_instances_batch(peaks: tensorflow.python.ops.ragged.ragged_tensor.RaggedTensor, peak_vals: tensorflow.python.ops.ragged.ragged_tensor.RaggedTensor, peak_channel_inds: tensorflow.python.ops.ragged.ragged_tensor.RaggedTensor, match_edge_inds: tensorflow.python.ops.ragged.ragged_tensor.RaggedTensor, match_src_peak_inds: tensorflow.python.ops.ragged.ragged_tensor.RaggedTensor, match_dst_peak_inds: tensorflow.python.ops.ragged.ragged_tensor.RaggedTensor, match_line_scores: tensorflow.python.ops.ragged.ragged_tensor.RaggedTensor, n_nodes: int, sorted_edge_inds: Tuple[int], edge_types: List[sleap.nn.paf_grouping.EdgeType], min_instance_peaks: int, min_line_scores: float = 0.25) Tuple[tensorflow.python.ops.ragged.ragged_tensor.RaggedTensor, tensorflow.python.ops.ragged.ragged_tensor.RaggedTensor, tensorflow.python.ops.ragged.ragged_tensor.RaggedTensor][source]#

Group matched connections into full instances for a batch.

Parameters
@@ -1075,7 +1074,7 @@

sleap.nn.paf_grouping

-sleap.nn.paf_grouping.group_instances_sample(peaks_sample: tensorflow.python.framework.ops.Tensor, peak_scores_sample: tensorflow.python.framework.ops.Tensor, peak_channel_inds_sample: tensorflow.python.framework.ops.Tensor, match_edge_inds_sample: tensorflow.python.framework.ops.Tensor, match_src_peak_inds_sample: tensorflow.python.framework.ops.Tensor, match_dst_peak_inds_sample: tensorflow.python.framework.ops.Tensor, match_line_scores_sample: tensorflow.python.framework.ops.Tensor, n_nodes: int, sorted_edge_inds: Tuple[int], edge_types: List[sleap.nn.paf_grouping.EdgeType], min_instance_peaks: int, min_line_scores: float = 0.25) Tuple[numpy.ndarray, numpy.ndarray, numpy.ndarray][source]#
+sleap.nn.paf_grouping.group_instances_sample(peaks_sample: tensorflow.python.framework.ops.Tensor, peak_scores_sample: tensorflow.python.framework.ops.Tensor, peak_channel_inds_sample: tensorflow.python.framework.ops.Tensor, match_edge_inds_sample: tensorflow.python.framework.ops.Tensor, match_src_peak_inds_sample: tensorflow.python.framework.ops.Tensor, match_dst_peak_inds_sample: tensorflow.python.framework.ops.Tensor, match_line_scores_sample: tensorflow.python.framework.ops.Tensor, n_nodes: int, sorted_edge_inds: Tuple[int], edge_types: List[sleap.nn.paf_grouping.EdgeType], min_instance_peaks: int, min_line_scores: float = 0.25) Tuple[numpy.ndarray, numpy.ndarray, numpy.ndarray][source]#

Group matched connections into full instances for a single sample.

Parameters
@@ -1141,7 +1140,7 @@

sleap.nn.paf_grouping

-sleap.nn.paf_grouping.make_line_subs(peaks_sample: tensorflow.python.framework.ops.Tensor, edge_peak_inds: tensorflow.python.framework.ops.Tensor, edge_inds: tensorflow.python.framework.ops.Tensor, n_line_points: int, pafs_stride: int) tensorflow.python.framework.ops.Tensor[source]#
+sleap.nn.paf_grouping.make_line_subs(peaks_sample: tensorflow.python.framework.ops.Tensor, edge_peak_inds: tensorflow.python.framework.ops.Tensor, edge_inds: tensorflow.python.framework.ops.Tensor, n_line_points: int, pafs_stride: int) tensorflow.python.framework.ops.Tensor[source]#

Create the lines between candidate connections for evaluating the PAFs.

Parameters
@@ -1185,7 +1184,7 @@

sleap.nn.paf_grouping

-sleap.nn.paf_grouping.make_predicted_instances(peaks: numpy.array, peak_scores: numpy.array, connections: List[sleap.nn.paf_grouping.EdgeConnection], instance_assignments: Dict[sleap.nn.paf_grouping.PeakID, int]) Tuple[numpy.array, numpy.array, numpy.array][source]#
+sleap.nn.paf_grouping.make_predicted_instances(peaks: numpy.array, peak_scores: numpy.array, connections: List[sleap.nn.paf_grouping.EdgeConnection], instance_assignments: Dict[sleap.nn.paf_grouping.PeakID, int]) Tuple[numpy.array, numpy.array, numpy.array][source]#

Group peaks by assignments and accumulate scores.

Parameters
@@ -1208,7 +1207,7 @@

sleap.nn.paf_grouping

-sleap.nn.paf_grouping.match_candidates_batch(edge_inds: tensorflow.python.ops.ragged.ragged_tensor.RaggedTensor, edge_peak_inds: tensorflow.python.ops.ragged.ragged_tensor.RaggedTensor, line_scores: tensorflow.python.ops.ragged.ragged_tensor.RaggedTensor, n_edges: int) Tuple[tensorflow.python.ops.ragged.ragged_tensor.RaggedTensor, tensorflow.python.ops.ragged.ragged_tensor.RaggedTensor, tensorflow.python.ops.ragged.ragged_tensor.RaggedTensor, tensorflow.python.ops.ragged.ragged_tensor.RaggedTensor][source]#
+sleap.nn.paf_grouping.match_candidates_batch(edge_inds: tensorflow.python.ops.ragged.ragged_tensor.RaggedTensor, edge_peak_inds: tensorflow.python.ops.ragged.ragged_tensor.RaggedTensor, line_scores: tensorflow.python.ops.ragged.ragged_tensor.RaggedTensor, n_edges: int) Tuple[tensorflow.python.ops.ragged.ragged_tensor.RaggedTensor, tensorflow.python.ops.ragged.ragged_tensor.RaggedTensor, tensorflow.python.ops.ragged.ragged_tensor.RaggedTensor, tensorflow.python.ops.ragged.ragged_tensor.RaggedTensor][source]#

Match candidate connections for a batch based on PAF scores.

Parameters
@@ -1258,7 +1257,7 @@

sleap.nn.paf_grouping

-sleap.nn.paf_grouping.match_candidates_sample(edge_inds_sample: tensorflow.python.framework.ops.Tensor, edge_peak_inds_sample: tensorflow.python.framework.ops.Tensor, line_scores_sample: tensorflow.python.framework.ops.Tensor, n_edges: int) Tuple[tensorflow.python.framework.ops.Tensor, tensorflow.python.framework.ops.Tensor, tensorflow.python.framework.ops.Tensor, tensorflow.python.framework.ops.Tensor][source]#
+sleap.nn.paf_grouping.match_candidates_sample(edge_inds_sample: tensorflow.python.framework.ops.Tensor, edge_peak_inds_sample: tensorflow.python.framework.ops.Tensor, line_scores_sample: tensorflow.python.framework.ops.Tensor, n_edges: int) Tuple[tensorflow.python.framework.ops.Tensor, tensorflow.python.framework.ops.Tensor, tensorflow.python.framework.ops.Tensor, tensorflow.python.framework.ops.Tensor][source]#

Match candidate connections for a sample based on PAF scores.

Parameters
@@ -1306,7 +1305,7 @@

sleap.nn.paf_grouping

-sleap.nn.paf_grouping.score_paf_lines(paf_lines_sample: tensorflow.python.framework.ops.Tensor, peaks_sample: tensorflow.python.framework.ops.Tensor, edge_peak_inds_sample: tensorflow.python.framework.ops.Tensor, max_edge_length: float, dist_penalty_weight: float = 1.0) tensorflow.python.framework.ops.Tensor[source]#
+sleap.nn.paf_grouping.score_paf_lines(paf_lines_sample: tensorflow.python.framework.ops.Tensor, peaks_sample: tensorflow.python.framework.ops.Tensor, edge_peak_inds_sample: tensorflow.python.framework.ops.Tensor, max_edge_length: float, dist_penalty_weight: float = 1.0) tensorflow.python.framework.ops.Tensor[source]#

Compute the connectivity score for each PAF line in a sample.

Parameters
@@ -1351,7 +1350,7 @@

sleap.nn.paf_grouping

-sleap.nn.paf_grouping.score_paf_lines_batch(pafs: tensorflow.python.framework.ops.Tensor, peaks: tensorflow.python.framework.ops.Tensor, peak_channel_inds: tensorflow.python.ops.ragged.ragged_tensor.RaggedTensor, skeleton_edges: tensorflow.python.framework.ops.Tensor, n_line_points: int, pafs_stride: int, max_edge_length_ratio: float, dist_penalty_weight: float, n_nodes: int) Tuple[tensorflow.python.ops.ragged.ragged_tensor.RaggedTensor, tensorflow.python.ops.ragged.ragged_tensor.RaggedTensor, tensorflow.python.ops.ragged.ragged_tensor.RaggedTensor][source]#
+sleap.nn.paf_grouping.score_paf_lines_batch(pafs: tensorflow.python.framework.ops.Tensor, peaks: tensorflow.python.framework.ops.Tensor, peak_channel_inds: tensorflow.python.ops.ragged.ragged_tensor.RaggedTensor, skeleton_edges: tensorflow.python.framework.ops.Tensor, n_line_points: int, pafs_stride: int, max_edge_length_ratio: float, dist_penalty_weight: float, n_nodes: int) Tuple[tensorflow.python.ops.ragged.ragged_tensor.RaggedTensor, tensorflow.python.ops.ragged.ragged_tensor.RaggedTensor, tensorflow.python.ops.ragged.ragged_tensor.RaggedTensor][source]#

Create and score PAF lines formed between connection candidates.

Parameters
@@ -1410,7 +1409,7 @@

sleap.nn.paf_grouping

-sleap.nn.paf_grouping.toposort_edges(edge_types: List[sleap.nn.paf_grouping.EdgeType]) Tuple[int][source]#
+sleap.nn.paf_grouping.toposort_edges(edge_types: List[sleap.nn.paf_grouping.EdgeType]) Tuple[int][source]#

Find a topological ordering for a list of edge types.

Parameters
diff --git a/develop/api/sleap.nn.peak_finding.html b/develop/api/sleap.nn.peak_finding.html index e018d29e2..11c2100c9 100644 --- a/develop/api/sleap.nn.peak_finding.html +++ b/develop/api/sleap.nn.peak_finding.html @@ -9,7 +9,7 @@ - sleap.nn.peak_finding — SLEAP (v1.4.1a1) + sleap.nn.peak_finding — SLEAP (v1.4.1a2) @@ -34,7 +34,6 @@ - @@ -330,7 +329,7 @@

sleap.nn.peak_finding

Peak finding entails finding either the global or local maxima of these confidence maps.

-sleap.nn.peak_finding.crop_bboxes(images: tensorflow.python.framework.ops.Tensor, bboxes: tensorflow.python.framework.ops.Tensor, sample_inds: tensorflow.python.framework.ops.Tensor) tensorflow.python.framework.ops.Tensor[source]#
+sleap.nn.peak_finding.crop_bboxes(images: tensorflow.python.framework.ops.Tensor, bboxes: tensorflow.python.framework.ops.Tensor, sample_inds: tensorflow.python.framework.ops.Tensor) tensorflow.python.framework.ops.Tensor[source]#

Crop bounding boxes from a batch of images.

This method serves as a convenience method for specifying the arguments of tf.image.crop_and_resize.

@@ -393,7 +392,7 @@

sleap.nn.peak_finding

-sleap.nn.peak_finding.find_global_peaks_integral(cms: tensorflow.python.framework.ops.Tensor, crop_size: int = 5, threshold: float = 0.2) Tuple[tensorflow.python.framework.ops.Tensor, tensorflow.python.framework.ops.Tensor][source]#
+sleap.nn.peak_finding.find_global_peaks_integral(cms: tensorflow.python.framework.ops.Tensor, crop_size: int = 5, threshold: float = 0.2) Tuple[tensorflow.python.framework.ops.Tensor, tensorflow.python.framework.ops.Tensor][source]#

Find local peaks with integral refinement.

Integral regression refinement will be computed by taking the weighted average of the local neighborhood around each rough peak.

@@ -419,7 +418,7 @@

sleap.nn.peak_finding

-sleap.nn.peak_finding.find_global_peaks_rough(cms: tensorflow.python.framework.ops.Tensor, threshold: float = 0.1) Tuple[tensorflow.python.framework.ops.Tensor, tensorflow.python.framework.ops.Tensor][source]#
+sleap.nn.peak_finding.find_global_peaks_rough(cms: tensorflow.python.framework.ops.Tensor, threshold: float = 0.1) Tuple[tensorflow.python.framework.ops.Tensor, tensorflow.python.framework.ops.Tensor][source]#

Find the global maximum for each sample and channel.

Parameters
@@ -498,7 +497,7 @@
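A toy sketch on random confidence maps; the output shapes stated in the comments are our reading of the API and should be treated as an assumption:

```python
import tensorflow as tf
from sleap.nn.peak_finding import find_global_peaks_rough

# 2 samples, 64x64 confidence maps, 3 channels (one per body part).
cms = tf.random.uniform([2, 64, 64, 3])

# Assumed outputs: peaks of shape (samples, channels, 2) and peak values of
# shape (samples, channels); sub-threshold peaks are reported as NaN.
peaks, vals = find_global_peaks_rough(cms, threshold=0.1)
```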

sleap.nn.peak_finding

-sleap.nn.peak_finding.find_local_peaks_integral(cms: tensorflow.python.framework.ops.Tensor, crop_size: int = 5, threshold: float = 0.2) Tuple[tensorflow.python.framework.ops.Tensor, tensorflow.python.framework.ops.Tensor, tensorflow.python.framework.ops.Tensor, tensorflow.python.framework.ops.Tensor][source]#
+sleap.nn.peak_finding.find_local_peaks_integral(cms: tensorflow.python.framework.ops.Tensor, crop_size: int = 5, threshold: float = 0.2) Tuple[tensorflow.python.framework.ops.Tensor, tensorflow.python.framework.ops.Tensor, tensorflow.python.framework.ops.Tensor, tensorflow.python.framework.ops.Tensor][source]#

Find local peaks with integral refinement.

Parameters
@@ -526,7 +525,7 @@

sleap.nn.peak_finding

-sleap.nn.peak_finding.find_local_peaks_rough(cms: tensorflow.python.framework.ops.Tensor, threshold: float = 0.2) Tuple[tensorflow.python.framework.ops.Tensor, tensorflow.python.framework.ops.Tensor, tensorflow.python.framework.ops.Tensor, tensorflow.python.framework.ops.Tensor][source]#
+sleap.nn.peak_finding.find_local_peaks_rough(cms: tensorflow.python.framework.ops.Tensor, threshold: float = 0.2) Tuple[tensorflow.python.framework.ops.Tensor, tensorflow.python.framework.ops.Tensor, tensorflow.python.framework.ops.Tensor, tensorflow.python.framework.ops.Tensor][source]#

Find local maxima via non-maximum suppression.

Parameters
@@ -581,7 +580,7 @@

sleap.nn.peak_finding

-sleap.nn.peak_finding.find_offsets_local_direction(centered_patches: tensorflow.python.framework.ops.Tensor, delta: float = 0.25) tensorflow.python.framework.ops.Tensor[source]#
+sleap.nn.peak_finding.find_offsets_local_direction(centered_patches: tensorflow.python.framework.ops.Tensor, delta: float = 0.25) tensorflow.python.framework.ops.Tensor[source]#

Computes subpixel offsets from the direction of the pixels around the peak.

This function finds the delta-offset from the center pixel of peak-centered patches by finding the direction of the gradient around each center.

@@ -639,7 +638,7 @@

sleap.nn.peak_finding

-sleap.nn.peak_finding.integral_regression(cms: tensorflow.python.framework.ops.Tensor, xv: tensorflow.python.framework.ops.Tensor, yv: tensorflow.python.framework.ops.Tensor) Tuple[tensorflow.python.framework.ops.Tensor, tensorflow.python.framework.ops.Tensor][source]#
+sleap.nn.peak_finding.integral_regression(cms: tensorflow.python.framework.ops.Tensor, xv: tensorflow.python.framework.ops.Tensor, yv: tensorflow.python.framework.ops.Tensor) Tuple[tensorflow.python.framework.ops.Tensor, tensorflow.python.framework.ops.Tensor][source]#

Compute regression by integrating over the confidence maps on a grid.

Parameters
@@ -660,7 +659,7 @@

sleap.nn.peak_finding

-sleap.nn.peak_finding.make_gaussian_kernel(size: int, sigma: float) tensorflow.python.framework.ops.Tensor[source]#
+sleap.nn.peak_finding.make_gaussian_kernel(size: int, sigma: float) tensorflow.python.framework.ops.Tensor[source]#

Generates a square unnormalized 2D symmetric Gaussian kernel.

Parameters
@@ -684,7 +683,7 @@

sleap.nn.peak_finding

-sleap.nn.peak_finding.smooth_imgs(imgs: tensorflow.python.framework.ops.Tensor, kernel_size: int = 5, sigma: float = 1.0) tensorflow.python.framework.ops.Tensor[source]#
+sleap.nn.peak_finding.smooth_imgs(imgs: tensorflow.python.framework.ops.Tensor, kernel_size: int = 5, sigma: float = 1.0) tensorflow.python.framework.ops.Tensor[source]#

Smooths the input image by convolving it with a Gaussian kernel.

Parameters
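A toy smoothing sketch; make_gaussian_kernel is shown alongside it to build the same kind of kernel that the smoothing presumably applies:

```python
import tensorflow as tf
from sleap.nn.peak_finding import make_gaussian_kernel, smooth_imgs

# A 5x5 unnormalized symmetric Gaussian kernel.
kernel = make_gaussian_kernel(size=5, sigma=1.0)

# Smooth a toy batch of single-channel images.
imgs = tf.random.uniform([2, 64, 64, 1])
smoothed = smooth_imgs(imgs, kernel_size=5, sigma=1.0)
```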
diff --git a/develop/api/sleap.nn.system.html b/develop/api/sleap.nn.system.html index e31ef01bb..e903d4bb1 100644 --- a/develop/api/sleap.nn.system.html +++ b/develop/api/sleap.nn.system.html @@ -9,7 +9,7 @@ - sleap.nn.system — SLEAP (v1.4.1a1) + sleap.nn.system — SLEAP (v1.4.1a2) @@ -34,7 +34,6 @@ - @@ -325,7 +324,7 @@

sleap.nn.system

environment by wrapping tf.config module functions.

-sleap.nn.system.best_logical_device_name() str[source]#
+sleap.nn.system.best_logical_device_name() str[source]#

Return the name of the best logical device for performance.

This is particularly useful to use with tf.device() for explicit tensor placement.

@@ -341,7 +340,7 @@

sleap.nn.system

-sleap.nn.system.disable_preallocation()[source]#
+sleap.nn.system.disable_preallocation()[source]#

Disable preallocation of full GPU memory on all available GPUs.

This enables memory growth policy so that TensorFlow will not pre-allocate all available GPU memory.

@@ -352,7 +351,7 @@

sleap.nn.system

-sleap.nn.system.enable_preallocation()[source]#
+sleap.nn.system.enable_preallocation()[source]#

Enable preallocation of full GPU memory on all available GPUs.

This disables memory growth policy so that TensorFlow will pre-allocate all available GPU memory.

@@ -363,19 +362,19 @@

sleap.nn.system

-sleap.nn.system.get_all_gpus() List[tensorflow.python.eager.context.PhysicalDevice][source]#
+sleap.nn.system.get_all_gpus() List[tensorflow.python.eager.context.PhysicalDevice][source]#

Return a list of GPUs including unavailable devices.

-sleap.nn.system.get_available_gpus() List[tensorflow.python.eager.context.PhysicalDevice][source]#
+sleap.nn.system.get_available_gpus() List[tensorflow.python.eager.context.PhysicalDevice][source]#

Return a list of available GPUs.

-sleap.nn.system.get_current_gpu() tensorflow.python.eager.context.PhysicalDevice[source]#
+sleap.nn.system.get_current_gpu() tensorflow.python.eager.context.PhysicalDevice[source]#

Return the current (single) GPU device.

Returns
@@ -391,7 +390,7 @@

sleap.nn.system

-sleap.nn.system.get_gpu_memory() List[int][source]#
+sleap.nn.system.get_gpu_memory() List[int][source]#

Get the available memory on each GPU.

Returns
@@ -402,20 +401,20 @@

sleap.nn.system

-sleap.nn.system.initialize_devices()[source]#
+sleap.nn.system.initialize_devices()[source]#

Initialize available physical devices as logical devices.

If preallocation was enabled on the GPUs, this will trigger memory allocation.

-sleap.nn.system.is_gpu_system() bool[source]#
+sleap.nn.system.is_gpu_system() bool[source]#

Return True if the system has discoverable GPUs.

-sleap.nn.system.is_initialized(gpu: Optional[tensorflow.python.eager.context.PhysicalDevice] = None) bool[source]#
+sleap.nn.system.is_initialized(gpu: Optional[tensorflow.python.eager.context.PhysicalDevice] = None) bool[source]#

Check if a physical GPU has been initialized without triggering initialization.

Parameters
@@ -438,25 +437,25 @@

sleap.nn.system

-sleap.nn.system.summary()[source]#
+sleap.nn.system.summary()[source]#

Print a summary of the state of the system.

-sleap.nn.system.use_cpu_only()[source]#
+sleap.nn.system.use_cpu_only()[source]#

Hide GPUs from TensorFlow to ensure only the CPU is available.

-sleap.nn.system.use_first_gpu()[source]#
+sleap.nn.system.use_first_gpu()[source]#

Make only the first GPU available to TensorFlow.

-sleap.nn.system.use_gpu(device_ind: int)[source]#
+sleap.nn.system.use_gpu(device_ind: int)[source]#

Make a single GPU available to TensorFlow.

Parameters
@@ -467,7 +466,7 @@

sleap.nn.system

-sleap.nn.system.use_last_gpu()[source]#
+sleap.nn.system.use_last_gpu()[source]#

Make only the last GPU available to TensorFlow.
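Putting the module's helpers together, a small sketch of configuring device visibility before building any models:

```python
import sleap.nn.system as system

if system.is_gpu_system():
    system.use_gpu(0)               # same effect as use_first_gpu() here
    system.disable_preallocation()  # memory growth instead of full preallocation
else:
    system.use_cpu_only()

system.summary()  # print the resulting device state
```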

diff --git a/develop/api/sleap.nn.tracker.components.html b/develop/api/sleap.nn.tracker.components.html index 457dc79b6..d2c02fed3 100644 --- a/develop/api/sleap.nn.tracker.components.html +++ b/develop/api/sleap.nn.tracker.components.html @@ -9,7 +9,7 @@ - sleap.nn.tracker.components — SLEAP (v1.4.1a1) + sleap.nn.tracker.components — SLEAP (v1.4.1a2) @@ -34,7 +34,6 @@ - @@ -330,7 +329,7 @@

sleap.nn.tracker.components

-class sleap.nn.tracker.components.FrameMatches(matches: List[sleap.nn.tracker.components.Match], cost_matrix: numpy.ndarray, unmatched_instances: List[sleap.nn.tracker.components.InstanceType] = NOTHING)[source]#
+class sleap.nn.tracker.components.FrameMatches(matches: List[sleap.nn.tracker.components.Match], cost_matrix: numpy.ndarray, unmatched_instances: List[sleap.nn.tracker.components.InstanceType] = NOTHING)[source]#

Calculates (and stores) matches for a frame.

This class encapsulates the logic to generate matches (using a custom matching function) from a cost matrix. One key feature is that it retains
@@ -376,7 +375,7 @@

sleap.nn.tracker.components

-classmethod from_candidate_instances(untracked_instances: List[sleap.nn.tracker.components.InstanceType], candidate_instances: List[sleap.nn.tracker.components.InstanceType], similarity_function: Callable, matching_function: Callable, robust_best_instance: float = 1.0)[source]#
+classmethod from_candidate_instances(untracked_instances: List[sleap.nn.tracker.components.InstanceType], candidate_instances: List[sleap.nn.tracker.components.InstanceType], similarity_function: Callable, matching_function: Callable, robust_best_instance: float = 1.0)[source]#

Calculates (and stores) matches for a frame from candidate instances.

Parameters
@@ -408,13 +407,13 @@

sleap.nn.tracker.components

-class sleap.nn.tracker.components.Match(track: sleap.instance.Track, instance: sleap.instance.Instance, score: Optional[float] = None, is_first_choice: bool = False)[source]#
+class sleap.nn.tracker.components.Match(track: sleap.instance.Track, instance: sleap.instance.Instance, score: Optional[float] = None, is_first_choice: bool = False)[source]#

Stores a match between a specific instance and specific track.

-sleap.nn.tracker.components.centroid_distance(ref_instance: sleap.nn.tracker.components.InstanceType, query_instance: sleap.nn.tracker.components.InstanceType, cache: dict = {}) float[source]#
+sleap.nn.tracker.components.centroid_distance(ref_instance: sleap.nn.tracker.components.InstanceType, query_instance: sleap.nn.tracker.components.InstanceType, cache: dict = {}) float[source]#

Returns the negative distance between the centroids of two instances.

Uses a cache dictionary (created with the function so it persists between calls), since without the cache this method is significantly slower than the others.

@@ -422,7 +421,7 @@

sleap.nn.tracker.components

-sleap.nn.tracker.components.connect_single_track_breaks(frames: List[LabeledFrame], instance_count: int) List[LabeledFrame][source]#
+sleap.nn.tracker.components.connect_single_track_breaks(frames: List[LabeledFrame], instance_count: int) List[LabeledFrame][source]#

Merges breaks in tracks by connecting a single lost track with a single new track.

Parameters
@@ -439,7 +438,7 @@

sleap.nn.tracker.components

-sleap.nn.tracker.components.cull_frame_instances(instances_list: List[sleap.nn.tracker.components.InstanceType], instance_count: int, iou_threshold: Optional[float] = None) List[LabeledFrame][source]#
+sleap.nn.tracker.components.cull_frame_instances(instances_list: List[sleap.nn.tracker.components.InstanceType], instance_count: int, iou_threshold: Optional[float] = None) List[LabeledFrame][source]#

Removes instances (for single frame) over instance per frame threshold.

Parameters
@@ -459,7 +458,7 @@

sleap.nn.tracker.components

-sleap.nn.tracker.components.cull_instances(frames: List[LabeledFrame], instance_count: int, iou_threshold: Optional[float] = None)[source]#
+sleap.nn.tracker.components.cull_instances(frames: List[LabeledFrame], instance_count: int, iou_threshold: Optional[float] = None)[source]#

Removes instances from frames over instance per frame threshold.

Parameters
@@ -479,38 +478,38 @@

sleap.nn.tracker.components

-sleap.nn.tracker.components.first_choice_matching(cost_matrix: numpy.ndarray) List[Tuple[int, int]][source]#
+sleap.nn.tracker.components.first_choice_matching(cost_matrix: numpy.ndarray) List[Tuple[int, int]][source]#

Returns match indices where each row gets matched to the best column.

This means that multiple rows might be matched to the same column.

-sleap.nn.tracker.components.greedy_matching(cost_matrix: numpy.ndarray) List[Tuple[int, int]][source]#
+sleap.nn.tracker.components.greedy_matching(cost_matrix: numpy.ndarray) List[Tuple[int, int]][source]#

Performs greedy bipartite matching.

-sleap.nn.tracker.components.hungarian_matching(cost_matrix: numpy.ndarray) List[Tuple[int, int]][source]#
+sleap.nn.tracker.components.hungarian_matching(cost_matrix: numpy.ndarray) List[Tuple[int, int]][source]#

Wrapper for the Hungarian matching algorithm in scipy.
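A small sketch contrasting the two matchers on a toy cost matrix (rows are instances, columns are candidate tracks, lower cost is better):

```python
import numpy as np
from sleap.nn.tracker.components import greedy_matching, hungarian_matching

cost_matrix = np.array([
    [0.1, 0.9, 0.8],
    [0.7, 0.2, 0.9],
])

# Both return a list of (row, column) index pairs; for this matrix the
# optimal and greedy assignments coincide: [(0, 0), (1, 1)].
print(hungarian_matching(cost_matrix))
print(greedy_matching(cost_matrix))
```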

-sleap.nn.tracker.components.instance_iou(ref_instance: sleap.nn.tracker.components.InstanceType, query_instance: sleap.nn.tracker.components.InstanceType, cache: dict = {}) float[source]#
+sleap.nn.tracker.components.instance_iou(ref_instance: sleap.nn.tracker.components.InstanceType, query_instance: sleap.nn.tracker.components.InstanceType, cache: dict = {}) float[source]#

Computes IOU between bounding boxes of instances.

-sleap.nn.tracker.components.instance_similarity(ref_instance: sleap.nn.tracker.components.InstanceType, query_instance: sleap.nn.tracker.components.InstanceType) float[source]#
+sleap.nn.tracker.components.instance_similarity(ref_instance: sleap.nn.tracker.components.InstanceType, query_instance: sleap.nn.tracker.components.InstanceType) float[source]#

Computes similarity between instances.

-sleap.nn.tracker.components.nms_fast(boxes, scores, iou_threshold, target_count=None) List[int][source]#
+sleap.nn.tracker.components.nms_fast(boxes, scores, iou_threshold, target_count=None) List[int][source]#

Fast non-maximum suppression of bounding boxes; adapted from: https://www.pyimagesearch.com/2015/02/16/faster-non-maximum-suppression-python/
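A toy sketch with two heavily overlapping boxes and one distant box; the (x1, y1, x2, y2) box format follows the referenced post and should be treated as an assumption:

```python
import numpy as np
from sleap.nn.tracker.components import nms_fast

boxes = np.array([[10, 10, 50, 50], [12, 12, 52, 52], [100, 100, 140, 140]])
scores = np.array([0.9, 0.8, 0.7])

# Returns the indices of the boxes to keep after suppression.
keep = nms_fast(boxes, scores, iou_threshold=0.5)
```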

diff --git a/develop/api/sleap.nn.tracker.kalman.html b/develop/api/sleap.nn.tracker.kalman.html index d1b51de74..f757cfe0b 100644 --- a/develop/api/sleap.nn.tracker.kalman.html +++ b/develop/api/sleap.nn.tracker.kalman.html @@ -9,7 +9,7 @@ - sleap.nn.tracker.kalman — SLEAP (v1.4.1a1) + sleap.nn.tracker.kalman — SLEAP (v1.4.1a2) @@ -34,7 +34,6 @@ - @@ -329,7 +328,7 @@

sleap.nn.tracker.kalman

up the filters.

-sleap.nn.tracker.kalman.get_track_instance_matches(cost_matrix: numpy.ndarray, instances: List[sleap.nn.tracker.components.InstanceType], tracks: List[sleap.instance.Track], are_too_close_function: Callable) List[sleap.nn.tracker.components.Match][source]#
+sleap.nn.tracker.kalman.get_track_instance_matches(cost_matrix: numpy.ndarray, instances: List[sleap.nn.tracker.components.InstanceType], tracks: List[sleap.instance.Track], are_too_close_function: Callable) List[sleap.nn.tracker.components.Match][source]#

Matches track identities (from filters) to instances in frame.

The algorithm is a modified greedy matching.

Standard greedy matching:
@@ -361,7 +360,7 @@

sleap.nn.tracker.kalman

-sleap.nn.tracker.kalman.match_dict_from_match_function(cost_matrix: numpy.ndarray, row_items: List[Any], column_items: List[Any], match_function: Callable, key_by_column: bool = True) Dict[Any, Any][source]#
+sleap.nn.tracker.kalman.match_dict_from_match_function(cost_matrix: numpy.ndarray, row_items: List[Any], column_items: List[Any], match_function: Callable, key_by_column: bool = True) Dict[Any, Any][source]#

Dict keys are from column (tracks), values are from row (instances).

If multiple rows (instances) match on the same column (track), then the dict will contain only the best match.

@@ -369,7 +368,7 @@

sleap.nn.tracker.kalman

-sleap.nn.tracker.kalman.remove_second_bests_from_cost_matrix(cost_matrix: numpy.ndarray, thresh: float, invalid_val: float = nan) numpy.ndarray[source]#
+sleap.nn.tracker.kalman.remove_second_bests_from_cost_matrix(cost_matrix: numpy.ndarray, thresh: float, invalid_val: float = nan) numpy.ndarray[source]#

Removes unclear matches from the cost matrix.

If the best match for a given track is too close to the second best match, then this will clear all the matches for that track (and ensure that any
diff --git a/develop/api/sleap.nn.tracking.html index 5f1f0e179..e255a48f5 100644 --- a/develop/api/sleap.nn.tracking.html +++ b/develop/api/sleap.nn.tracking.html @@ -9,7 +9,7 @@ - sleap.nn.tracking — SLEAP (v1.4.1a1) + sleap.nn.tracking — SLEAP (v1.4.1a2) @@ -34,7 +34,6 @@ - @@ -323,13 +322,13 @@

sleap.nn.tracking

Tracking tools for linking grouped instances over time.

-class sleap.nn.tracking.BaseTracker[source]#
+class sleap.nn.tracking.BaseTracker[source]#

Abstract base class for tracker.

-class sleap.nn.tracking.FlowCandidateMaker(min_points: int = 0, img_scale: float = 1.0, of_window_size: int = 21, of_max_levels: int = 3, save_shifted_instances: bool = False, track_window: int = 5, shifted_instances: Dict[Tuple[int, int], List[sleap.nn.tracking.ShiftedInstance]] = NOTHING)[source]#
+class sleap.nn.tracking.FlowCandidateMaker(min_points: int = 0, img_scale: float = 1.0, of_window_size: int = 21, of_max_levels: int = 3, save_shifted_instances: bool = False, track_window: int = 5, shifted_instances: Dict[Tuple[int, int], List[sleap.nn.tracking.ShiftedInstance]] = NOTHING)[source]#

Class for producing optical flow shift matching candidates.

@@ -406,7 +405,7 @@

sleap.nn.tracking

-static flow_shift_instances(ref_instances: List[sleap.nn.tracker.components.InstanceType], ref_img: numpy.ndarray, new_img: numpy.ndarray, min_shifted_points: int = 0, scale: float = 1.0, window_size: int = 21, max_levels: int = 3) List[sleap.nn.tracking.ShiftedInstance][source]#
+static flow_shift_instances(ref_instances: List[sleap.nn.tracker.components.InstanceType], ref_img: numpy.ndarray, new_img: numpy.ndarray, min_shifted_points: int = 0, scale: float = 1.0, window_size: int = 21, max_levels: int = 3) List[sleap.nn.tracking.ShiftedInstance][source]#

Generates instances in a new frame by applying optical flow displacements.

Parameters
@@ -439,7 +438,7 @@

sleap.nn.tracking

-get_shifted_instances(ref_instances: List[sleap.nn.tracker.components.InstanceType], ref_img: numpy.ndarray, ref_t: int, img: numpy.ndarray, t: int) List[sleap.nn.tracking.ShiftedInstance][source]#
+get_shifted_instances(ref_instances: List[sleap.nn.tracker.components.InstanceType], ref_img: numpy.ndarray, ref_t: int, img: numpy.ndarray, t: int) List[sleap.nn.tracking.ShiftedInstance][source]#

Returns a list of shifted instances and saves shifted instances if needed.

Parameters
@@ -456,7 +455,7 @@


-get_shifted_instances_from_earlier_time(ref_t: int, ref_img: numpy.ndarray, ref_instances: typing.List[sleap.nn.tracker.components.InstanceType], t: int) -> (<class 'numpy.ndarray'>, typing.List[~InstanceType])[source]#
+get_shifted_instances_from_earlier_time(ref_t: int, ref_img: numpy.ndarray, ref_instances: typing.List[sleap.nn.tracker.components.InstanceType], t: int) -> (<class 'numpy.ndarray'>, typing.List[~InstanceType])[source]#

Generate shifted instances and corresponding image from earlier time.

Parameters
@@ -472,7 +471,7 @@


-prune_shifted_instances(t: int)[source]#
+prune_shifted_instances(t: int)[source]#

Prune the shifted instances older than self.track_window.

If self.save_shifted_instances is False, do nothing.

@@ -488,17 +487,17 @@


-class sleap.nn.tracking.FlowMaxTracker(max_tracks: typing.Optional[int] = None, track_window: int = 5, similarity_function: typing.Optional[typing.Callable] = <function instance_similarity>, matching_function: typing.Callable = <function greedy_matching>, candidate_maker: object = NOTHING, max_tracking: bool = False, cleaner: typing.Optional[typing.Callable] = None, target_instance_count: int = 0, pre_cull_function: typing.Optional[typing.Callable] = None, post_connect_single_breaks: bool = False, robust_best_instance: float = 1.0, min_new_track_points: int = 0, track_matching_queue: typing.Deque[sleap.nn.tracking.MatchedFrameInstances] = NOTHING, track_matching_queue_dict: typing.Dict[sleap.instance.Track, typing.Deque[sleap.nn.tracking.MatchedFrameInstance]] = NOTHING, spawned_tracks: typing.List[sleap.instance.Track] = NOTHING, save_tracked_instances: bool = False, tracked_instances: typing.Dict[int, typing.List[sleap.nn.tracker.components.InstanceType]] = NOTHING, last_matches: typing.Optional[sleap.nn.tracker.components.FrameMatches] = None)[source]#
+class sleap.nn.tracking.FlowMaxTracker(max_tracks: typing.Optional[int] = None, track_window: int = 5, similarity_function: typing.Optional[typing.Callable] = <function instance_similarity>, matching_function: typing.Callable = <function greedy_matching>, candidate_maker: object = NOTHING, max_tracking: bool = False, cleaner: typing.Optional[typing.Callable] = None, target_instance_count: int = 0, pre_cull_function: typing.Optional[typing.Callable] = None, post_connect_single_breaks: bool = False, robust_best_instance: float = 1.0, min_new_track_points: int = 0, track_matching_queue: typing.Deque[sleap.nn.tracking.MatchedFrameInstances] = NOTHING, track_matching_queue_dict: typing.Dict[sleap.instance.Track, typing.Deque[sleap.nn.tracking.MatchedFrameInstance]] = NOTHING, spawned_tracks: typing.List[sleap.instance.Track] = NOTHING, save_tracked_instances: bool = False, tracked_instances: typing.Dict[int, typing.List[sleap.nn.tracker.components.InstanceType]] = NOTHING, last_matches: typing.Optional[sleap.nn.tracker.components.FrameMatches] = None)[source]#

Pre-configured tracker to use optical flow shifted candidates with max tracks.

-matching_function() List[Tuple[int, int]][source]#
+matching_function() List[Tuple[int, int]][source]#

Performs greedy bipartite matching.

-similarity_function(query_instance: sleap.nn.tracker.components.InstanceType) float[source]#
+similarity_function(query_instance: sleap.nn.tracker.components.InstanceType) float[source]#

Computes similarity between instances.

@@ -506,7 +505,7 @@


-class sleap.nn.tracking.FlowMaxTracksCandidateMaker(min_points: int = 0, img_scale: float = 1.0, of_window_size: int = 21, of_max_levels: int = 3, save_shifted_instances: bool = False, track_window: int = 5, shifted_instances: Dict[Tuple[int, int], List[sleap.nn.tracking.ShiftedInstance]] = NOTHING, max_tracks: Optional[int] = None)[source]#
+class sleap.nn.tracking.FlowMaxTracksCandidateMaker(min_points: int = 0, img_scale: float = 1.0, of_window_size: int = 21, of_max_levels: int = 3, save_shifted_instances: bool = False, track_window: int = 5, shifted_instances: Dict[Tuple[int, int], List[sleap.nn.tracking.ShiftedInstance]] = NOTHING, max_tracks: Optional[int] = None)[source]#

Class for producing optical flow shift matching candidates with maximum tracks.

@@ -521,7 +520,7 @@


-static get_ref_instances(ref_t: int, ref_img: numpy.ndarray, track_matching_queue_dict: Dict[sleap.instance.Track, Deque[sleap.nn.tracking.MatchedFrameInstance]]) List[sleap.nn.tracker.components.InstanceType][source]#
+static get_ref_instances(ref_t: int, ref_img: numpy.ndarray, track_matching_queue_dict: Dict[sleap.instance.Track, Deque[sleap.nn.tracking.MatchedFrameInstance]]) List[sleap.nn.tracker.components.InstanceType][source]#

Generates a list of instances based on the reference time and image.

Parameters
@@ -539,13 +538,13 @@


-class sleap.nn.tracking.FlowTracker(max_tracks: typing.Optional[int] = None, track_window: int = 5, max_tracking: bool = False, cleaner: typing.Optional[typing.Callable] = None, target_instance_count: int = 0, pre_cull_function: typing.Optional[typing.Callable] = None, post_connect_single_breaks: bool = False, robust_best_instance: float = 1.0, min_new_track_points: int = 0, track_matching_queue: typing.Deque[sleap.nn.tracking.MatchedFrameInstances] = NOTHING, track_matching_queue_dict: typing.Dict[sleap.instance.Track, typing.Deque[sleap.nn.tracking.MatchedFrameInstance]] = NOTHING, spawned_tracks: typing.List[sleap.instance.Track] = NOTHING, save_tracked_instances: bool = False, tracked_instances: typing.Dict[int, typing.List[sleap.nn.tracker.components.InstanceType]] = NOTHING, last_matches: typing.Optional[sleap.nn.tracker.components.FrameMatches] = None, similarity_function: typing.Callable = <function instance_similarity>, matching_function: typing.Callable = <function greedy_matching>, candidate_maker: object = NOTHING)[source]#
+class sleap.nn.tracking.FlowTracker(max_tracks: typing.Optional[int] = None, track_window: int = 5, max_tracking: bool = False, cleaner: typing.Optional[typing.Callable] = None, target_instance_count: int = 0, pre_cull_function: typing.Optional[typing.Callable] = None, post_connect_single_breaks: bool = False, robust_best_instance: float = 1.0, min_new_track_points: int = 0, track_matching_queue: typing.Deque[sleap.nn.tracking.MatchedFrameInstances] = NOTHING, track_matching_queue_dict: typing.Dict[sleap.instance.Track, typing.Deque[sleap.nn.tracking.MatchedFrameInstance]] = NOTHING, spawned_tracks: typing.List[sleap.instance.Track] = NOTHING, save_tracked_instances: bool = False, tracked_instances: typing.Dict[int, typing.List[sleap.nn.tracker.components.InstanceType]] = NOTHING, last_matches: typing.Optional[sleap.nn.tracker.components.FrameMatches] = None, similarity_function: typing.Callable = <function instance_similarity>, matching_function: typing.Callable = <function greedy_matching>, candidate_maker: object = NOTHING)[source]#

A Tracker pre-configured to use optical flow shifted candidates.

-class sleap.nn.tracking.KalmanTracker(init_tracker: Optional[sleap.nn.tracking.Tracker], init_set: sleap.nn.tracking.KalmanInitSet, kalman_tracker: sleap.nn.tracker.kalman.BareKalmanTracker, cull_function: Optional[Callable] = None, init_frame_count: int = 10, re_init_cooldown: int = 100, re_init_after: int = 20, init_done: bool = False, pre_tracked: bool = False, last_t: int = 0, last_init_t: int = 0)[source]#
+class sleap.nn.tracking.KalmanTracker(init_tracker: Optional[sleap.nn.tracking.Tracker], init_set: sleap.nn.tracking.KalmanInitSet, kalman_tracker: sleap.nn.tracker.kalman.BareKalmanTracker, cull_function: Optional[Callable] = None, init_frame_count: int = 10, re_init_cooldown: int = 100, re_init_after: int = 20, init_done: bool = False, pre_tracked: bool = False, last_t: int = 0, last_init_t: int = 0)[source]#

Class for Kalman filter-based tracking pipeline.

Kalman filters need to be initialized with a certain number of already tracked instances.

@@ -586,7 +585,7 @@


-classmethod make_tracker(init_tracker: Optional[sleap.nn.tracking.Tracker], node_indices: List[int], instance_count: int, instance_iou_threshold: float = 0.8, init_frame_count: int = 10)[source]#
+classmethod make_tracker(init_tracker: Optional[sleap.nn.tracking.Tracker], node_indices: List[int], instance_count: int, instance_iou_threshold: float = 0.8, init_frame_count: int = 10)[source]#

Creates a KalmanTracker object.

Parameters
@@ -611,7 +610,7 @@
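A hedged sketch of building the pipeline (Tracker.make_tracker_by_name is assumed here as the usual way to construct the initial tracker; all values are illustrative):

```python
from sleap.nn.tracking import KalmanTracker, Tracker

init_tracker = Tracker.make_tracker_by_name(tracker="flow")  # assumed helper
kalman_pipeline = KalmanTracker.make_tracker(
    init_tracker=init_tracker,
    node_indices=[0, 1, 2],      # skeleton nodes fed to the Kalman filters
    instance_count=2,            # expected number of animals per frame
    instance_iou_threshold=0.8,
    init_frame_count=10,         # frames tracked conventionally before filters start
)
```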


-track(untracked_instances: List[sleap.nn.tracker.components.InstanceType], img: Optional[numpy.ndarray] = None, t: Optional[int] = None) List[sleap.nn.tracker.components.InstanceType][source]#
+track(untracked_instances: List[sleap.nn.tracker.components.InstanceType], img: Optional[numpy.ndarray] = None, t: Optional[int] = None) List[sleap.nn.tracker.components.InstanceType][source]#

Tracks an individual frame, using Kalman filters if possible.

@@ -619,31 +618,31 @@


-class sleap.nn.tracking.SimpleCandidateMaker(min_points: int = 0)[source]#
+class sleap.nn.tracking.SimpleCandidateMaker(min_points: int = 0)[source]#

Class for producing a list of matching candidates from prior frames.

-class sleap.nn.tracking.SimpleMaxTracker(track_window: int = 5, cleaner: typing.Optional[typing.Callable] = None, target_instance_count: int = 0, pre_cull_function: typing.Optional[typing.Callable] = None, post_connect_single_breaks: bool = False, robust_best_instance: float = 1.0, min_new_track_points: int = 0, track_matching_queue: typing.Deque[sleap.nn.tracking.MatchedFrameInstances] = NOTHING, track_matching_queue_dict: typing.Dict[sleap.instance.Track, typing.Deque[sleap.nn.tracking.MatchedFrameInstance]] = NOTHING, spawned_tracks: typing.List[sleap.instance.Track] = NOTHING, save_tracked_instances: bool = False, tracked_instances: typing.Dict[int, typing.List[sleap.nn.tracker.components.InstanceType]] = NOTHING, last_matches: typing.Optional[sleap.nn.tracker.components.FrameMatches] = None, similarity_function: typing.Callable = <function instance_iou>, matching_function: typing.Callable = <function hungarian_matching>, candidate_maker: object = NOTHING, max_tracking: bool = True, *, max_tracks: int)[source]#
+class sleap.nn.tracking.SimpleMaxTracker(track_window: int = 5, cleaner: typing.Optional[typing.Callable] = None, target_instance_count: int = 0, pre_cull_function: typing.Optional[typing.Callable] = None, post_connect_single_breaks: bool = False, robust_best_instance: float = 1.0, min_new_track_points: int = 0, track_matching_queue: typing.Deque[sleap.nn.tracking.MatchedFrameInstances] = NOTHING, track_matching_queue_dict: typing.Dict[sleap.instance.Track, typing.Deque[sleap.nn.tracking.MatchedFrameInstance]] = NOTHING, spawned_tracks: typing.List[sleap.instance.Track] = NOTHING, save_tracked_instances: bool = False, tracked_instances: typing.Dict[int, typing.List[sleap.nn.tracker.components.InstanceType]] = NOTHING, last_matches: typing.Optional[sleap.nn.tracker.components.FrameMatches] = None, similarity_function: typing.Callable = <function instance_iou>, matching_function: typing.Callable = <function hungarian_matching>, candidate_maker: object = NOTHING, max_tracking: bool = True, *, max_tracks: int)[source]#

Pre-configured tracker to use simple, non-image-based candidates with max tracks.

-class sleap.nn.tracking.SimpleMaxTracksCandidateMaker(min_points: int = 0, max_tracks: Optional[int] = None)[source]#
+class sleap.nn.tracking.SimpleMaxTracksCandidateMaker(min_points: int = 0, max_tracks: Optional[int] = None)[source]#

Class to generate instances with maximum number of tracks from prior frames.

-class sleap.nn.tracking.SimpleTracker(max_tracks: typing.Optional[int] = None, track_window: int = 5, max_tracking: bool = False, cleaner: typing.Optional[typing.Callable] = None, target_instance_count: int = 0, pre_cull_function: typing.Optional[typing.Callable] = None, post_connect_single_breaks: bool = False, robust_best_instance: float = 1.0, min_new_track_points: int = 0, track_matching_queue: typing.Deque[sleap.nn.tracking.MatchedFrameInstances] = NOTHING, track_matching_queue_dict: typing.Dict[sleap.instance.Track, typing.Deque[sleap.nn.tracking.MatchedFrameInstance]] = NOTHING, spawned_tracks: typing.List[sleap.instance.Track] = NOTHING, save_tracked_instances: bool = False, tracked_instances: typing.Dict[int, typing.List[sleap.nn.tracker.components.InstanceType]] = NOTHING, last_matches: typing.Optional[sleap.nn.tracker.components.FrameMatches] = None, similarity_function: typing.Callable = <function instance_iou>, matching_function: typing.Callable = <function hungarian_matching>, candidate_maker: object = NOTHING)[source]#
+class sleap.nn.tracking.SimpleTracker(max_tracks: typing.Optional[int] = None, track_window: int = 5, max_tracking: bool = False, cleaner: typing.Optional[typing.Callable] = None, target_instance_count: int = 0, pre_cull_function: typing.Optional[typing.Callable] = None, post_connect_single_breaks: bool = False, robust_best_instance: float = 1.0, min_new_track_points: int = 0, track_matching_queue: typing.Deque[sleap.nn.tracking.MatchedFrameInstances] = NOTHING, track_matching_queue_dict: typing.Dict[sleap.instance.Track, typing.Deque[sleap.nn.tracking.MatchedFrameInstance]] = NOTHING, spawned_tracks: typing.List[sleap.instance.Track] = NOTHING, save_tracked_instances: bool = False, tracked_instances: typing.Dict[int, typing.List[sleap.nn.tracker.components.InstanceType]] = NOTHING, last_matches: typing.Optional[sleap.nn.tracker.components.FrameMatches] = None, similarity_function: typing.Callable = <function instance_iou>, matching_function: typing.Callable = <function hungarian_matching>, candidate_maker: object = NOTHING)[source]#

A Tracker pre-configured to use simple, non-image-based candidates.

-class sleap.nn.tracking.TrackCleaner(instance_count: int, iou_threshold: Optional[float] = None)[source]#
+class sleap.nn.tracking.TrackCleaner(instance_count: int, iou_threshold: Optional[float] = None)[source]#

Class for merging breaks in the predicted tracks.

Method:

1. You specify how many instances there should be in each frame.

@@ -682,7 +681,7 @@


-class sleap.nn.tracking.Tracker(max_tracks: typing.Optional[int] = None, track_window: int = 5, similarity_function: typing.Optional[typing.Callable] = <function instance_similarity>, matching_function: typing.Callable = <function greedy_matching>, candidate_maker: object = NOTHING, max_tracking: bool = False, cleaner: typing.Optional[typing.Callable] = None, target_instance_count: int = 0, pre_cull_function: typing.Optional[typing.Callable] = None, post_connect_single_breaks: bool = False, robust_best_instance: float = 1.0, min_new_track_points: int = 0, track_matching_queue: typing.Deque[sleap.nn.tracking.MatchedFrameInstances] = NOTHING, track_matching_queue_dict: typing.Dict[sleap.instance.Track, typing.Deque[sleap.nn.tracking.MatchedFrameInstance]] = NOTHING, spawned_tracks: typing.List[sleap.instance.Track] = NOTHING, save_tracked_instances: bool = False, tracked_instances: typing.Dict[int, typing.List[sleap.nn.tracker.components.InstanceType]] = NOTHING, last_matches: typing.Optional[sleap.nn.tracker.components.FrameMatches] = None)[source]#
+class sleap.nn.tracking.Tracker(max_tracks: typing.Optional[int] = None, track_window: int = 5, similarity_function: typing.Optional[typing.Callable] = <function instance_similarity>, matching_function: typing.Callable = <function greedy_matching>, candidate_maker: object = NOTHING, max_tracking: bool = False, cleaner: typing.Optional[typing.Callable] = None, target_instance_count: int = 0, pre_cull_function: typing.Optional[typing.Callable] = None, post_connect_single_breaks: bool = False, robust_best_instance: float = 1.0, min_new_track_points: int = 0, track_matching_queue: typing.Deque[sleap.nn.tracking.MatchedFrameInstances] = NOTHING, track_matching_queue_dict: typing.Dict[sleap.instance.Track, typing.Deque[sleap.nn.tracking.MatchedFrameInstance]] = NOTHING, spawned_tracks: typing.List[sleap.instance.Track] = NOTHING, save_tracked_instances: bool = False, tracked_instances: typing.Dict[int, typing.List[sleap.nn.tracker.components.InstanceType]] = NOTHING, last_matches: typing.Optional[sleap.nn.tracker.components.FrameMatches] = None)[source]#

Instance pose tracker.

Use by instantiating with the desired parameters and then calling the track method for each frame.
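A minimal sketch of that loop (make_tracker_by_name and its option values are assumptions; predicted_instances_by_frame is a stand-in for per-frame lists of untracked instances):

```python
from sleap.nn.tracking import Tracker

tracker = Tracker.make_tracker_by_name(
    tracker="simple",        # non-image-based (IOU) candidates
    similarity="instance",
    match="hungarian",
    track_window=5,
)

predicted_instances_by_frame = []  # stand-in: one list of untracked instances per frame
tracked_frames = []
for t, instances in enumerate(predicted_instances_by_frame):
    tracked_frames.append(tracker.track(untracked_instances=instances, t=t))

# run_tracker() below wraps the same loop (plus final_pass) for labeled frames.
```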

@@ -786,13 +785,13 @@


-final_pass(frames: List[sleap.instance.LabeledFrame])[source]#
+final_pass(frames: List[sleap.instance.LabeledFrame])[source]#

Called after tracking has run on all frames to do any post-processing.

-track(untracked_instances: List[sleap.nn.tracker.components.InstanceType], img: Optional[numpy.ndarray] = None, t: Optional[int] = None) List[sleap.nn.tracker.components.InstanceType][source]#
+track(untracked_instances: List[sleap.nn.tracker.components.InstanceType], img: Optional[numpy.ndarray] = None, t: Optional[int] = None) List[sleap.nn.tracker.components.InstanceType][source]#

Performs a single step of tracking.

Parameters
@@ -818,7 +817,7 @@


-sleap.nn.tracking.run_tracker(frames: List[sleap.instance.LabeledFrame], tracker: sleap.nn.tracking.BaseTracker) List[sleap.instance.LabeledFrame][source]#
+sleap.nn.tracking.run_tracker(frames: List[sleap.instance.LabeledFrame], tracker: sleap.nn.tracking.BaseTracker) List[sleap.instance.LabeledFrame][source]#

Run a tracker on a set of labeled frames.

Parameters
diff --git a/develop/api/sleap.nn.training.html b/develop/api/sleap.nn.training.html
index 3fdb8d99d..72e7bf6c0 100644
--- a/develop/api/sleap.nn.training.html
+++ b/develop/api/sleap.nn.training.html
@@ -9,7 +9,7 @@
-    sleap.nn.training — SLEAP (v1.4.1a1)
+    sleap.nn.training — SLEAP (v1.4.1a2)
@@ -34,7 +34,6 @@
-
@@ -323,7 +322,7 @@


Training functionality and high-level APIs.

-class sleap.nn.training.BottomUpModelTrainer(data_readers: sleap.nn.training.DataReaders, model: sleap.nn.model.Model, config: sleap.nn.config.training_job.TrainingJobConfig, initial_config: Optional[sleap.nn.config.training_job.TrainingJobConfig] = None)[source]#
+class sleap.nn.training.BottomUpModelTrainer(data_readers: sleap.nn.training.DataReaders, model: sleap.nn.model.Model, config: sleap.nn.config.training_job.TrainingJobConfig, initial_config: Optional[sleap.nn.config.training_job.TrainingJobConfig] = None)[source]#

Trainer for models that output multi-instance confidence maps and PAFs.

@@ -347,7 +346,7 @@


-class sleap.nn.training.BottomUpMultiClassModelTrainer(data_readers: sleap.nn.training.DataReaders, model: sleap.nn.model.Model, config: sleap.nn.config.training_job.TrainingJobConfig, initial_config: Optional[sleap.nn.config.training_job.TrainingJobConfig] = None)[source]#
+class sleap.nn.training.BottomUpMultiClassModelTrainer(data_readers: sleap.nn.training.DataReaders, model: sleap.nn.model.Model, config: sleap.nn.config.training_job.TrainingJobConfig, initial_config: Optional[sleap.nn.config.training_job.TrainingJobConfig] = None)[source]#

Trainer for models that output multi-instance confidence maps and class maps.

@@ -371,7 +370,7 @@


-class sleap.nn.training.CentroidConfmapsModelTrainer(data_readers: sleap.nn.training.DataReaders, model: sleap.nn.model.Model, config: sleap.nn.config.training_job.TrainingJobConfig, initial_config: Optional[sleap.nn.config.training_job.TrainingJobConfig] = None)[source]#
+class sleap.nn.training.CentroidConfmapsModelTrainer(data_readers: sleap.nn.training.DataReaders, model: sleap.nn.model.Model, config: sleap.nn.config.training_job.TrainingJobConfig, initial_config: Optional[sleap.nn.config.training_job.TrainingJobConfig] = None)[source]#

Trainer for models that output centroid confidence maps.

@@ -395,7 +394,7 @@


-class sleap.nn.training.DataReaders(training_labels_reader: sleap.nn.data.providers.LabelsReader, validation_labels_reader: sleap.nn.data.providers.LabelsReader, test_labels_reader: Optional[sleap.nn.data.providers.LabelsReader] = None)[source]#
+class sleap.nn.training.DataReaders(training_labels_reader: sleap.nn.data.providers.LabelsReader, validation_labels_reader: sleap.nn.data.providers.LabelsReader, test_labels_reader: Optional[sleap.nn.data.providers.LabelsReader] = None)[source]#

Container class for SLEAP labels that serve as training data sources.

@@ -435,13 +434,13 @@


-classmethod from_config(labels_config: sleap.nn.config.data.LabelsConfig, training: Union[str, sleap.io.dataset.Labels], validation: Union[str, sleap.io.dataset.Labels, float], test: Optional[Union[str, sleap.io.dataset.Labels]] = None, video_search_paths: Optional[List[str]] = None, update_config: bool = False, with_track_only: bool = False) sleap.nn.training.DataReaders[source]#
+classmethod from_config(labels_config: sleap.nn.config.data.LabelsConfig, training: Union[str, sleap.io.dataset.Labels], validation: Union[str, sleap.io.dataset.Labels, float], test: Optional[Union[str, sleap.io.dataset.Labels]] = None, video_search_paths: Optional[List[str]] = None, update_config: bool = False, with_track_only: bool = False) sleap.nn.training.DataReaders[source]#

Create data readers from a (possibly incomplete) configuration.

-classmethod from_labels(training: Union[str, sleap.io.dataset.Labels], validation: Union[str, sleap.io.dataset.Labels, float], test: Optional[Union[str, sleap.io.dataset.Labels]] = None, video_search_paths: Optional[List[str]] = None, labels_config: Optional[sleap.nn.config.data.LabelsConfig] = None, update_config: bool = False, with_track_only: bool = False) sleap.nn.training.DataReaders[source]#
+classmethod from_labels(training: Union[str, sleap.io.dataset.Labels], validation: Union[str, sleap.io.dataset.Labels, float], test: Optional[Union[str, sleap.io.dataset.Labels]] = None, video_search_paths: Optional[List[str]] = None, labels_config: Optional[sleap.nn.config.data.LabelsConfig] = None, update_config: bool = False, with_track_only: bool = False) sleap.nn.training.DataReaders[source]#

Create data readers from Labels datasets as data providers.
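For example, a hedged sketch of building the splits (file names are placeholders):

```python
from sleap.nn.training import DataReaders

data_readers = DataReaders.from_labels(
    training="labels.train.slp",  # path or a loaded Labels object
    validation=0.1,               # a float holds out a fraction of the training labels
    test="labels.test.slp",       # optional held-out split
)
```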

@@ -467,7 +466,7 @@


-class sleap.nn.training.SingleInstanceModelTrainer(data_readers: sleap.nn.training.DataReaders, model: sleap.nn.model.Model, config: sleap.nn.config.training_job.TrainingJobConfig, initial_config: Optional[sleap.nn.config.training_job.TrainingJobConfig] = None)[source]#
+class sleap.nn.training.SingleInstanceModelTrainer(data_readers: sleap.nn.training.DataReaders, model: sleap.nn.model.Model, config: sleap.nn.config.training_job.TrainingJobConfig, initial_config: Optional[sleap.nn.config.training_job.TrainingJobConfig] = None)[source]#

Trainer for models that output single-instance confidence maps.

@@ -491,7 +490,7 @@


-class sleap.nn.training.TopDownMultiClassModelTrainer(data_readers: sleap.nn.training.DataReaders, model: sleap.nn.model.Model, config: sleap.nn.config.training_job.TrainingJobConfig, initial_config: Optional[sleap.nn.config.training_job.TrainingJobConfig] = None)[source]#
+class sleap.nn.training.TopDownMultiClassModelTrainer(data_readers: sleap.nn.training.DataReaders, model: sleap.nn.model.Model, config: sleap.nn.config.training_job.TrainingJobConfig, initial_config: Optional[sleap.nn.config.training_job.TrainingJobConfig] = None)[source]#

Trainer for models that output multi-instance confidence maps and class maps.

@@ -515,7 +514,7 @@


-class sleap.nn.training.TopdownConfmapsModelTrainer(data_readers: sleap.nn.training.DataReaders, model: sleap.nn.model.Model, config: sleap.nn.config.training_job.TrainingJobConfig, initial_config: Optional[sleap.nn.config.training_job.TrainingJobConfig] = None)[source]#
+class sleap.nn.training.TopdownConfmapsModelTrainer(data_readers: sleap.nn.training.DataReaders, model: sleap.nn.model.Model, config: sleap.nn.config.training_job.TrainingJobConfig, initial_config: Optional[sleap.nn.config.training_job.TrainingJobConfig] = None)[source]#

Trainer for models that output instance-centered confidence maps.

@@ -539,7 +538,7 @@


-class sleap.nn.training.Trainer(data_readers: sleap.nn.training.DataReaders, model: sleap.nn.model.Model, config: sleap.nn.config.training_job.TrainingJobConfig, initial_config: Optional[sleap.nn.config.training_job.TrainingJobConfig] = None)[source]#
+class sleap.nn.training.Trainer(data_readers: sleap.nn.training.DataReaders, model: sleap.nn.model.Model, config: sleap.nn.config.training_job.TrainingJobConfig, initial_config: Optional[sleap.nn.config.training_job.TrainingJobConfig] = None)[source]#

Base trainer class that provides general model training functionality.

This class is intended to be instantiated using the from_config() class method, which will return the appropriate subclass based on the input configuration.
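A minimal sketch of that workflow (the profile path is a placeholder; TrainingJobConfig.load_json is assumed available from sleap.nn.config):

```python
from sleap.nn.config import TrainingJobConfig
from sleap.nn.training import Trainer

config = TrainingJobConfig.load_json("training_profile.json")  # placeholder path
trainer = Trainer.from_config(
    config,
    training_labels="labels.train.slp",
    validation_labels=0.1,  # hold out 10% of the training labels
)
trainer.setup()  # build the data pipeline and model
trainer.train()  # run the optimization loop
```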

@@ -703,19 +702,19 @@


-cleanup()[source]#
+cleanup()[source]#

Delete visualization images subdirectory.

-evaluate()[source]#
+evaluate()[source]#

Compute evaluation metrics on data splits and save them.

-classmethod from_config(config: sleap.nn.config.training_job.TrainingJobConfig, training_labels: Optional[Union[str, sleap.io.dataset.Labels]] = None, validation_labels: Optional[Union[str, sleap.io.dataset.Labels, float]] = None, test_labels: Optional[Union[str, sleap.io.dataset.Labels]] = None, video_search_paths: Optional[List[str]] = None) sleap.nn.training.Trainer[source]#
+classmethod from_config(config: sleap.nn.config.training_job.TrainingJobConfig, training_labels: Optional[Union[str, sleap.io.dataset.Labels]] = None, validation_labels: Optional[Union[str, sleap.io.dataset.Labels, float]] = None, test_labels: Optional[Union[str, sleap.io.dataset.Labels]] = None, video_search_paths: Optional[List[str]] = None) sleap.nn.training.Trainer[source]#

Initialize the trainer from a training job configuration.

Parameters
@@ -753,19 +752,19 @@


-package()[source]#
+package()[source]#

Package model folder into a zip file for portability.

-setup()[source]#
+setup()[source]#

Set up data pipeline and model for training.

-train()[source]#
+train()[source]#

Execute the optimization loop to train the model.

@@ -773,85 +772,85 @@


-sleap.nn.training.create_trainer_using_cli(args: Optional[List] = None)[source]#
+sleap.nn.training.create_trainer_using_cli(args: Optional[List] = None)[source]#

Create CLI for training.

-sleap.nn.training.get_timestamp() str[source]#
+sleap.nn.training.get_timestamp() str[source]#

Return the date and time as a string.

-sleap.nn.training.main(args: Optional[List] = None)[source]#
+sleap.nn.training.main(args: Optional[List] = None)[source]#

Create CLI for training and run.

-sleap.nn.training.sanitize_scope_name(name: str) str[source]#
+sleap.nn.training.sanitize_scope_name(name: str) str[source]#

Sanitizes a string which will be used as a TensorFlow scope name.

-sleap.nn.training.setup_checkpointing(config: sleap.nn.config.outputs.CheckpointingConfig, run_path: str) List[keras.callbacks.Callback][source]#
+sleap.nn.training.setup_checkpointing(config: sleap.nn.config.outputs.CheckpointingConfig, run_path: str) List[keras.callbacks.Callback][source]#

Set up model checkpointing callbacks from config.

-sleap.nn.training.setup_losses(config: sleap.nn.config.optimization.OptimizationConfig) Callable[[tensorflow.python.framework.ops.Tensor, tensorflow.python.framework.ops.Tensor], tensorflow.python.framework.ops.Tensor][source]#
+sleap.nn.training.setup_losses(config: sleap.nn.config.optimization.OptimizationConfig) Callable[[tensorflow.python.framework.ops.Tensor, tensorflow.python.framework.ops.Tensor], tensorflow.python.framework.ops.Tensor][source]#

Set up model loss function from config.

-sleap.nn.training.setup_metrics(config: sleap.nn.config.optimization.OptimizationConfig, part_names: Optional[List[str]] = None) List[Union[keras.losses.Loss, keras.metrics.Metric]][source]#
+sleap.nn.training.setup_metrics(config: sleap.nn.config.optimization.OptimizationConfig, part_names: Optional[List[str]] = None) List[Union[keras.losses.Loss, keras.metrics.Metric]][source]#

Set up training metrics from config.

-sleap.nn.training.setup_new_run_folder(config: sleap.nn.config.outputs.OutputsConfig, base_run_name: Optional[str] = None) str[source]#
+sleap.nn.training.setup_new_run_folder(config: sleap.nn.config.outputs.OutputsConfig, base_run_name: Optional[str] = None) str[source]#

Create a new run folder from config.

-sleap.nn.training.setup_optimization_callbacks(config: sleap.nn.config.optimization.OptimizationConfig) List[keras.callbacks.Callback][source]#
+sleap.nn.training.setup_optimization_callbacks(config: sleap.nn.config.optimization.OptimizationConfig) List[keras.callbacks.Callback][source]#

Set up optimization callbacks from config.

-sleap.nn.training.setup_optimizer(config: sleap.nn.config.optimization.OptimizationConfig) keras.optimizer_v2.optimizer_v2.OptimizerV2[source]#
+sleap.nn.training.setup_optimizer(config: sleap.nn.config.optimization.OptimizationConfig) keras.optimizer_v2.optimizer_v2.OptimizerV2[source]#

Set up model optimizer from config.

-sleap.nn.training.setup_output_callbacks(config: sleap.nn.config.outputs.OutputsConfig, run_path: Optional[str] = None) List[keras.callbacks.Callback][source]#
+sleap.nn.training.setup_output_callbacks(config: sleap.nn.config.outputs.OutputsConfig, run_path: Optional[str] = None) List[keras.callbacks.Callback][source]#

Set up training outputs callbacks from config.

-sleap.nn.training.setup_tensorboard(config: sleap.nn.config.outputs.TensorBoardConfig, run_path: str) List[keras.callbacks.Callback][source]#
+sleap.nn.training.setup_tensorboard(config: sleap.nn.config.outputs.TensorBoardConfig, run_path: str) List[keras.callbacks.Callback][source]#

Set up TensorBoard callbacks from config.

-sleap.nn.training.setup_visualization(config: sleap.nn.config.outputs.OutputsConfig, run_path: str, viz_fn: Callable[[], matplotlib.figure.Figure], name: str) List[keras.callbacks.Callback][source]#
+sleap.nn.training.setup_visualization(config: sleap.nn.config.outputs.OutputsConfig, run_path: str, viz_fn: Callable[[], matplotlib.figure.Figure], name: str) List[keras.callbacks.Callback][source]#

Set up visualization callbacks from config.

-sleap.nn.training.setup_zmq_callbacks(zmq_config: sleap.nn.config.outputs.ZMQConfig) List[keras.callbacks.Callback][source]#
+sleap.nn.training.setup_zmq_callbacks(zmq_config: sleap.nn.config.outputs.ZMQConfig) List[keras.callbacks.Callback][source]#

Set up ZeroMQ callbacks from config.

diff --git a/develop/api/sleap.nn.utils.html b/develop/api/sleap.nn.utils.html
index 69bdb8ee0..563f1fd53 100644
--- a/develop/api/sleap.nn.utils.html
+++ b/develop/api/sleap.nn.utils.html
@@ -9,7 +9,7 @@
-    sleap.nn.utils — SLEAP (v1.4.1a1)
+    sleap.nn.utils — SLEAP (v1.4.1a2)
@@ -34,7 +34,6 @@
-
@@ -323,7 +322,7 @@


This module contains generic utilities used for training and inference.

-sleap.nn.utils.compute_iou(bbox1: numpy.ndarray, bbox2: numpy.ndarray) float[source]#
+sleap.nn.utils.compute_iou(bbox1: numpy.ndarray, bbox2: numpy.ndarray) float[source]#

Computes the intersection over union for a pair of bounding boxes.

Parameters
@@ -341,7 +340,7 @@
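A quick numeric check (the [x1, y1, x2, y2] corner convention below is an assumption; the values are chosen so the result is the same either way):

```python
import numpy as np

from sleap.nn.utils import compute_iou

bbox1 = np.array([0, 0, 10, 10])  # assumed [x1, y1, x2, y2]
bbox2 = np.array([5, 5, 15, 15])
iou = compute_iou(bbox1, bbox2)
# intersection = 25, union = 100 + 100 - 25 = 175, so iou ≈ 0.143
```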


-sleap.nn.utils.group_array(X: numpy.ndarray, groups: numpy.ndarray, axis: int = 0) Dict[numpy.ndarray, numpy.ndarray][source]#
+sleap.nn.utils.group_array(X: numpy.ndarray, groups: numpy.ndarray, axis: int = 0) Dict[numpy.ndarray, numpy.ndarray][source]#

Groups an array into a dictionary keyed by a grouping vector.

Parameters
@@ -370,7 +369,7 @@
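For example, a small sketch of grouping rows by a parallel label vector:

```python
import numpy as np

from sleap.nn.utils import group_array

X = np.arange(8).reshape(4, 2)
groups = np.array([0, 1, 0, 1])
grouped = group_array(X, groups, axis=0)
# grouped[0] gathers rows 0 and 2; grouped[1] gathers rows 1 and 3
```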


-sleap.nn.utils.match_points(points1: tensorflow.python.framework.ops.Tensor, points2: tensorflow.python.framework.ops.Tensor) Tuple[tensorflow.python.framework.ops.Tensor, tensorflow.python.framework.ops.Tensor][source]#
+sleap.nn.utils.match_points(points1: tensorflow.python.framework.ops.Tensor, points2: tensorflow.python.framework.ops.Tensor) Tuple[tensorflow.python.framework.ops.Tensor, tensorflow.python.framework.ops.Tensor][source]#

Match closest points across two sets.

Parameters
@@ -398,7 +397,7 @@


-sleap.nn.utils.reset_input_layer(keras_model: keras.engine.training.Model, new_shape: Optional[Tuple[Optional[int], Optional[int], Optional[int], int]] = None)[source]#
+sleap.nn.utils.reset_input_layer(keras_model: keras.engine.training.Model, new_shape: Optional[Tuple[Optional[int], Optional[int], Optional[int], int]] = None)[source]#

Returns a copy of keras_model with input shape reset to new_shape.

This method was modified from https://stackoverflow.com/a/58485055.

diff --git a/develop/api/sleap.nn.viz.html b/develop/api/sleap.nn.viz.html
index aeefa1459..6a582a968 100644
--- a/develop/api/sleap.nn.viz.html
+++ b/develop/api/sleap.nn.viz.html
@@ -9,7 +9,7 @@
-    sleap.nn.viz — SLEAP (v1.4.1a1)
+    sleap.nn.viz — SLEAP (v1.4.1a2)
@@ -34,7 +34,6 @@
-
@@ -323,7 +322,7 @@


Visualization and plotting utilities.

-sleap.nn.viz.generate_skeleton_preview_image(instance: sleap.instance.Instance, square_bb: bool = True, thumbnail_size=(128, 128)) bytes[source]#
+sleap.nn.viz.generate_skeleton_preview_image(instance: sleap.instance.Instance, square_bb: bool = True, thumbnail_size=(128, 128)) bytes[source]#

Generate preview image for skeleton based on given instance.

Parameters
@@ -341,7 +340,7 @@


-sleap.nn.viz.imgfig(size: Union[float, Tuple] = 6, dpi: int = 72, scale: float = 1.0) matplotlib.figure.Figure[source]#
+sleap.nn.viz.imgfig(size: Union[float, Tuple] = 6, dpi: int = 72, scale: float = 1.0) matplotlib.figure.Figure[source]#

Create a tight figure for image plotting.

Parameters
@@ -361,37 +360,37 @@


-sleap.nn.viz.plot_confmaps(confmaps: numpy.ndarray, output_scale: float = 1.0)[source]#
+sleap.nn.viz.plot_confmaps(confmaps: numpy.ndarray, output_scale: float = 1.0)[source]#

Plot confidence maps reduced over channels.

-sleap.nn.viz.plot_img(img: numpy.ndarray, dpi: int = 72, scale: float = 1.0) matplotlib.figure.Figure[source]#
+sleap.nn.viz.plot_img(img: numpy.ndarray, dpi: int = 72, scale: float = 1.0) matplotlib.figure.Figure[source]#

Plot an image in a tight figure.

-sleap.nn.viz.plot_instance(instance, skeleton=None, cmap=None, color_by_node=False, lw=2, ms=10, bbox=None, scale=1.0, **kwargs)[source]#
+sleap.nn.viz.plot_instance(instance, skeleton=None, cmap=None, color_by_node=False, lw=2, ms=10, bbox=None, scale=1.0, **kwargs)[source]#

Plot a single instance with edge coloring.

-sleap.nn.viz.plot_instances(instances, skeleton=None, cmap=None, color_by_track=False, tracks=None, **kwargs)[source]#
+sleap.nn.viz.plot_instances(instances, skeleton=None, cmap=None, color_by_track=False, tracks=None, **kwargs)[source]#

Plot a list of instances with identity coloring.

-sleap.nn.viz.plot_pafs(pafs: numpy.ndarray, output_scale: float = 1.0, stride: int = 1, scale: float = 4.0, width: float = 1.0, cmap: Optional[str] = None)[source]#
+sleap.nn.viz.plot_pafs(pafs: numpy.ndarray, output_scale: float = 1.0, stride: int = 1, scale: float = 4.0, width: float = 1.0, cmap: Optional[str] = None)[source]#

Quiver plot for a single frame of pafs.

-sleap.nn.viz.plot_peaks(pts_gt: numpy.ndarray, pts_pr: Optional[numpy.ndarray] = None, paired: bool = False)[source]#
+sleap.nn.viz.plot_peaks(pts_gt: numpy.ndarray, pts_pr: Optional[numpy.ndarray] = None, paired: bool = False)[source]#

Plot ground truth and detected peaks.
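A hedged sketch tying a few of these helpers together for one frame (the image array and instance list are stand-ins):

```python
import matplotlib.pyplot as plt
import numpy as np

from sleap.nn.viz import plot_img, plot_instances

frame_image = np.zeros((384, 384, 1), dtype=np.uint8)  # stand-in frame
frame_instances = []  # would hold predicted instances for this frame

fig = plot_img(frame_image, dpi=72, scale=1.0)
plot_instances(frame_instances, color_by_track=True)
plt.savefig("frame_overlay.png")
```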

diff --git a/develop/api/sleap.skeleton.html b/develop/api/sleap.skeleton.html
index 5cd6ac5e1..d936ae981 100644
--- a/develop/api/sleap.skeleton.html
+++ b/develop/api/sleap.skeleton.html
@@ -9,7 +9,7 @@
-    sleap.skeleton — SLEAP (v1.4.1a1)
+    sleap.skeleton — SLEAP (v1.4.1a2)
@@ -34,7 +34,6 @@
-
@@ -326,7 +325,7 @@


their connection to each other, and needed meta-data.

-class sleap.skeleton.EdgeType(value)[source]#
+class sleap.skeleton.EdgeType(value)[source]#

Type of edge in the skeleton graph.

The skeleton graph can store different types of edges to represent different things. All edges must specify one or more of the
@@ -343,7 +342,7 @@


-class sleap.skeleton.Node(name: str, weight: float = 1.0)[source]#
+class sleap.skeleton.Node(name: str, weight: float = 1.0)[source]#

This class represents a node in the skeleton graph, i.e., a body part.

Note: Nodes can exist without being part of a skeleton.

@@ -370,19 +369,19 @@


-classmethod as_node(node: Union[str, sleap.skeleton.Node]) sleap.skeleton.Node[source]#
+classmethod as_node(node: Union[str, sleap.skeleton.Node]) sleap.skeleton.Node[source]#

Convert given node to Node object (if not already).

-static from_names(name_list: str) List[sleap.skeleton.Node][source]#
+static from_names(name_list: str) List[sleap.skeleton.Node][source]#

Convert a list of node names to a list of Node objects.

-matches(other: sleap.skeleton.Node) bool[source]#
+matches(other: sleap.skeleton.Node) bool[source]#

Check whether all attributes match between two nodes.

Parameters
@@ -398,7 +397,7 @@


-class sleap.skeleton.Skeleton(name: Optional[str] = None)[source]#
+class sleap.skeleton.Skeleton(name: Optional[str] = None)[source]#

The main object for representing animal skeletons.

The skeleton represents the constituent parts of the animal whose pose is being estimated.
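A short sketch of assembling a skeleton with the methods documented below (node names are arbitrary examples):

```python
from sleap.skeleton import Skeleton

skeleton = Skeleton(name="fly")
skeleton.add_nodes(["head", "thorax", "abdomen", "wingL", "wingR"])
skeleton.add_edge("head", "thorax")
skeleton.add_edge("thorax", "abdomen")
skeleton.add_symmetry("wingL", "wingR")
assert skeleton.has_edge("head", "thorax")
```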

@@ -445,7 +444,7 @@


-add_edge(source: str, destination: str)[source]#
+add_edge(source: str, destination: str)[source]#

Add an edge between two nodes.

Parameters
@@ -466,7 +465,7 @@


-add_node(name: str)[source]#
+add_node(name: str)[source]#

Add a node representing an animal part to the skeleton.

Parameters
@@ -481,7 +480,7 @@


-add_nodes(name_list: List[str])[source]#
+add_nodes(name_list: List[str])[source]#

Add a list of nodes representing animal parts to the skeleton.

Parameters
@@ -492,7 +491,7 @@


-add_symmetry(node1: str, node2: str)[source]#
+add_symmetry(node1: str, node2: str)[source]#

Specify that two parts (nodes) in skeleton are symmetrical.

Certain parts of an animal body can be related as symmetrical parts in a pair. For example, left and right hands of a person.

@@ -515,13 +514,13 @@


-clear_edges()[source]#
+clear_edges()[source]#

Delete all edges in skeleton.

-delete_edge(source: str, destination: str)[source]#
+delete_edge(source: str, destination: str)[source]#

Delete an edge between two nodes.

Parameters
@@ -542,7 +541,7 @@


-delete_node(name: str)[source]#
+delete_node(name: str)[source]#

Remove a node from the skeleton.

The method removes a node from the skeleton and any edge that is connected to it.

@@ -561,7 +560,7 @@


-delete_symmetry(node1: Union[str, sleap.skeleton.Node], node2: Union[str, sleap.skeleton.Node])[source]#
+delete_symmetry(node1: Union[str, sleap.skeleton.Node], node2: Union[str, sleap.skeleton.Node])[source]#

Delete a previously established symmetry between two nodes.

Parameters
@@ -604,7 +603,7 @@


-edge_to_index(source: Union[str, sleap.skeleton.Node], destination: Union[str, sleap.skeleton.Node]) int[source]#
+edge_to_index(source: Union[str, sleap.skeleton.Node], destination: Union[str, sleap.skeleton.Node]) int[source]#

Return the index of edge from source to destination.

@@ -632,7 +631,7 @@


-find_neighbors(node: Union[str, sleap.skeleton.Node]) List[sleap.skeleton.Node][source]#
+find_neighbors(node: Union[str, sleap.skeleton.Node]) List[sleap.skeleton.Node][source]#

Find nodes that are predecessors or successors of a node.

Parameters
@@ -646,7 +645,7 @@


-find_node(name: Union[str, sleap.skeleton.Node]) sleap.skeleton.Node[source]#
+find_node(name: Union[str, sleap.skeleton.Node]) sleap.skeleton.Node[source]#

Find a node in the skeleton by its name.

Parameters
@@ -660,7 +659,7 @@


-static find_unique_nodes(skeletons: List[sleap.skeleton.Skeleton]) List[sleap.skeleton.Node][source]#
+static find_unique_nodes(skeletons: List[sleap.skeleton.Skeleton]) List[sleap.skeleton.Node][source]#

Find all unique nodes from a list of skeletons.

Parameters
@@ -674,7 +673,7 @@


-classmethod from_dict(d: Dict, node_to_idx: Optional[Dict[sleap.skeleton.Node, int]] = None) sleap.skeleton.Skeleton[source]#
+classmethod from_dict(d: Dict, node_to_idx: Optional[Dict[sleap.skeleton.Node, int]] = None) sleap.skeleton.Skeleton[source]#

Create skeleton from dict; used for loading from JSON.

Parameters
@@ -698,7 +697,7 @@


-classmethod from_json(json_str: str, idx_to_node: Optional[Dict[int, sleap.skeleton.Node]] = None) sleap.skeleton.Skeleton[source]#
+classmethod from_json(json_str: str, idx_to_node: Optional[Dict[int, sleap.skeleton.Node]] = None) sleap.skeleton.Skeleton[source]#

Instantiate Skeleton from JSON string.

Parameters
@@ -720,7 +719,7 @@


-classmethod from_names_and_edge_inds(node_names: List[str], edge_inds: Optional[List[Tuple[int, int]]] = None) sleap.skeleton.Skeleton[source]#
+classmethod from_names_and_edge_inds(node_names: List[str], edge_inds: Optional[List[Tuple[int, int]]] = None) sleap.skeleton.Skeleton[source]#

Create skeleton from a list of node names and edge indices.

Parameters
@@ -738,7 +737,7 @@
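For example, a compact version of a similar skeleton via this constructor:

```python
from sleap.skeleton import Skeleton

skel = Skeleton.from_names_and_edge_inds(
    node_names=["head", "thorax", "abdomen"],
    edge_inds=[(0, 1), (1, 2)],  # head-thorax and thorax-abdomen edges
)
```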


-get_symmetry(node: Union[str, sleap.skeleton.Node]) Optional[sleap.skeleton.Node][source]#
+get_symmetry(node: Union[str, sleap.skeleton.Node]) Optional[sleap.skeleton.Node][source]#

Return the node symmetric with the specified node.

Parameters
@@ -755,7 +754,7 @@


-get_symmetry_name(node: Union[str, sleap.skeleton.Node]) Optional[str][source]#
+get_symmetry_name(node: Union[str, sleap.skeleton.Node]) Optional[str][source]#

Return the name of the node symmetric with the specified node.

Parameters
@@ -781,7 +780,7 @@


-has_edge(source_name: str, dest_name: str) bool[source]#
+has_edge(source_name: str, dest_name: str) bool[source]#

Check whether the skeleton has an edge.

Parameters
@@ -798,7 +797,7 @@


-has_node(name: str) bool[source]#
+has_node(name: str) bool[source]#

Check whether the skeleton has a node.

Parameters
@@ -812,7 +811,7 @@


-has_nodes(names: Iterable[str]) bool[source]#
+has_nodes(names: Iterable[str]) bool[source]#

Check whether the skeleton has a list of nodes.

Parameters
@@ -842,7 +841,7 @@


-classmethod load_all_hdf5(file: Union[str, h5py._hl.files.File], return_dict: bool = False) Union[List[sleap.skeleton.Skeleton], Dict[str, sleap.skeleton.Skeleton]][source]#
+classmethod load_all_hdf5(file: Union[str, h5py._hl.files.File], return_dict: bool = False) Union[List[sleap.skeleton.Skeleton], Dict[str, sleap.skeleton.Skeleton]][source]#

Load all skeletons found in the HDF5 file.

Parameters
@@ -863,7 +862,7 @@


-classmethod load_hdf5(file: Union[str, h5py._hl.files.File], name: str) List[sleap.skeleton.Skeleton][source]#
+classmethod load_hdf5(file: Union[str, h5py._hl.files.File], name: str) List[sleap.skeleton.Skeleton][source]#

Load a specific skeleton (by name) from the HDF5 file.

Parameters
@@ -880,7 +879,7 @@


-classmethod load_json(filename: str, idx_to_node: Optional[Dict[int, sleap.skeleton.Node]] = None) sleap.skeleton.Skeleton[source]#
+classmethod load_json(filename: str, idx_to_node: Optional[Dict[int, sleap.skeleton.Node]] = None) sleap.skeleton.Skeleton[source]#

Load a skeleton from a JSON file.

This method will load the Skeleton from a JSON file saved with save_json().

@@ -904,7 +903,7 @@


-classmethod load_mat(filename: str) sleap.skeleton.Skeleton[source]#
+classmethod load_mat(filename: str) sleap.skeleton.Skeleton[source]#

Load the skeleton from a Matlab MAT file.

This is to support backwards compatibility with old LEAP MATLAB code and datasets.

@@ -920,7 +919,7 @@


-static make_cattr(idx_to_node: Optional[Dict[int, sleap.skeleton.Node]] = None) cattr.converters.Converter[source]#
+static make_cattr(idx_to_node: Optional[Dict[int, sleap.skeleton.Node]] = None) cattr.converters.Converter[source]#

Make a cattr.Converter() for Skeleton.

Make a cattr.Converter() that registers structure/unstructure hooks for Skeleton objects to handle serialization of skeletons.

@@ -937,7 +936,7 @@


-matches(other: sleap.skeleton.Skeleton) bool[source]#
+matches(other: sleap.skeleton.Skeleton) bool[source]#

Compare this Skeleton to another, ignoring name and node identities.

Parameters
@@ -973,7 +972,7 @@


-node_to_index(node: Union[str, sleap.skeleton.Node]) int[source]#
+node_to_index(node: Union[str, sleap.skeleton.Node]) int[source]#

Return the index of the node; accepts either a Node or a name.

Parameters
@@ -1001,7 +1000,7 @@


-relabel_node(old_name: str, new_name: str)[source]#
+relabel_node(old_name: str, new_name: str)[source]#

Relabel a single node to a new name.

Parameters
@@ -1018,7 +1017,7 @@


-relabel_nodes(mapping: Dict[str, str])[source]#
+relabel_nodes(mapping: Dict[str, str])[source]#

Relabel the nodes of the skeleton.

Parameters
@@ -1036,7 +1035,7 @@
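Continuing the from_names_and_edge_inds sketch above, several parts can be renamed at once (mapping keys are old names, values are new names):

```python
skel.relabel_nodes({"head": "nose", "abdomen": "tail_base"})
```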


-classmethod rename_skeleton(skeleton: sleap.skeleton.Skeleton, name: str) sleap.skeleton.Skeleton[source]#
+classmethod rename_skeleton(skeleton: sleap.skeleton.Skeleton, name: str) sleap.skeleton.Skeleton[source]#

Make copy of skeleton with new name.

This property is immutable because it is used to hash skeletons. If you want to rename a Skeleton you must use this class method.

@@ -1059,7 +1058,7 @@


-classmethod save_all_hdf5(file: Union[str, h5py._hl.files.File], skeletons: List[sleap.skeleton.Skeleton])[source]#
+classmethod save_all_hdf5(file: Union[str, h5py._hl.files.File], skeletons: List[sleap.skeleton.Skeleton])[source]#

Convenience method to save a list of skeletons to HDF5 file.

Skeletons are saved as attributes of a /skeleton group in the file.

@@ -1081,7 +1080,7 @@


-save_hdf5(file: Union[str, h5py._hl.files.File])[source]#
+save_hdf5(file: Union[str, h5py._hl.files.File])[source]#

Wrapper for HDF5 saving which takes either filename or h5py.File.

Parameters
@@ -1095,7 +1094,7 @@


-save_json(filename: str, node_to_idx: Optional[Dict[sleap.skeleton.Node, int]] = None)[source]#
+save_json(filename: str, node_to_idx: Optional[Dict[sleap.skeleton.Node, int]] = None)[source]#

Save the Skeleton as JSON file.

Output the complete skeleton to a file in JSON format.

@@ -1156,7 +1155,7 @@


-static to_dict(obj: sleap.skeleton.Skeleton, node_to_idx: Optional[Dict[sleap.skeleton.Node, int]] = None) Dict[source]#
+static to_dict(obj: sleap.skeleton.Skeleton, node_to_idx: Optional[Dict[sleap.skeleton.Node, int]] = None) Dict[source]#

Convert skeleton to dict; used for saving as JSON.

Parameters
@@ -1180,7 +1179,7 @@


-to_json(node_to_idx: Optional[Dict[sleap.skeleton.Node, int]] = None) str[source]#
+to_json(node_to_idx: Optional[Dict[sleap.skeleton.Node, int]] = None) str[source]#

Convert the Skeleton to a JSON representation.

Parameters
diff --git a/develop/api/sleap.util.html b/develop/api/sleap.util.html
index 031c81292..33ad9bfce 100644
--- a/develop/api/sleap.util.html
+++ b/develop/api/sleap.util.html
@@ -9,7 +9,7 @@
-    sleap.util — SLEAP (v1.4.1a1)
+    sleap.util — SLEAP (v1.4.1a2)
@@ -34,7 +34,6 @@
-
@@ -324,7 +323,7 @@


Try not to put things in here unless they really have no other place.

-sleap.util.attr_to_dtype(cls: Any)[source]#
+sleap.util.attr_to_dtype(cls: Any)[source]#

Converts classes with basic types to numpy composite dtypes.

Parameters
@@ -338,7 +337,7 @@


-sleap.util.decode_preview_image(img_b64: bytes) <module 'PIL.Image' from '/usr/share/miniconda/envs/sleap_ci/lib/python3.7/site-packages/PIL/Image.py'>[source]#
+sleap.util.decode_preview_image(img_b64: bytes) <module 'PIL.Image' from '/usr/share/miniconda/envs/sleap_ci/lib/python3.7/site-packages/PIL/Image.py'>[source]#

Decode a skeleton preview image byte string representation to a PIL.Image.

Parameters
@@ -352,7 +351,7 @@


-sleap.util.dict_cut(d: Dict, a: int, b: int) Dict[source]#
+sleap.util.dict_cut(d: Dict, a: int, b: int) Dict[source]#

Helper function for creating a subdictionary by numeric indexing of items.

Assumes that dict.items() will have a fixed order.

@@ -371,7 +370,7 @@


-sleap.util.find_files_by_suffix(root_dir: str, suffix: str, prefix: str = '', depth: int = 0) List[posix.DirEntry][source]#
+sleap.util.find_files_by_suffix(root_dir: str, suffix: str, prefix: str = '', depth: int = 0) List[posix.DirEntry][source]#

Returns list of files matching suffix, optionally searching in subdirs.

Parameters
@@ -390,7 +389,7 @@


-sleap.util.frame_list(frame_str: str) Optional[List[int]][source]#
+sleap.util.frame_list(frame_str: str) Optional[List[int]][source]#

Converts ‘n-m’ string to list of ints.

Parameters
@@ -404,7 +403,7 @@
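For example (the inclusive range and comma-list behaviors are assumptions based on the description above):

```python
from sleap.util import frame_list

frame_list("3-5")    # -> [3, 4, 5], assuming the range is inclusive
frame_list("1,3,7")  # a comma-separated list is assumed to parse to [1, 3, 7]
```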


-sleap.util.get_config_file(shortname: str, ignore_file_not_found: bool = False, get_defaults: bool = False) str[source]#
+sleap.util.get_config_file(shortname: str, ignore_file_not_found: bool = False, get_defaults: bool = False) str[source]#

Returns the full path to the specified config file.

The config file will be at ~/.sleap/<version>/<shortname>

If that file doesn’t yet exist, we’ll look for a <shortname> file inside
@@ -430,13 +429,13 @@


-sleap.util.get_package_file(filename: str) str[source]#
+sleap.util.get_package_file(filename: str) str[source]#

Returns the full path to the specified file within the sleap package.

-sleap.util.json_dumps(d: Dict, filename: Optional[str] = None)[source]#
+sleap.util.json_dumps(d: Dict, filename: Optional[str] = None)[source]#

A simple wrapper around the JSON encoder we are using.

Parameters
@@ -453,7 +452,7 @@


-sleap.util.json_loads(json_str: str) Dict[source]#
+sleap.util.json_loads(json_str: str) Dict[source]#

A simple wrapper around the JSON decoder we are using.

Parameters
@@ -467,7 +466,7 @@


-sleap.util.make_scoped_dictionary(flat_dict: Dict[str, Any], exclude_nones: bool = True) Dict[str, Dict[str, Any]][source]#
+sleap.util.make_scoped_dictionary(flat_dict: Dict[str, Any], exclude_nones: bool = True) Dict[str, Dict[str, Any]][source]#

Converts dictionary with scoped keys to dictionary of dictionaries.

Parameters
@@ -489,13 +488,13 @@
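A hedged sketch (dot-delimited scoping is an assumption inferred from the description above):

```python
from sleap.util import make_scoped_dictionary

flat = {"model.backbone": "unet", "model.filters": 32, "run.note": None}
scoped = make_scoped_dictionary(flat, exclude_nones=True)
# e.g. {"model": {"backbone": "unet", "filters": 32}, "run": {}}
```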


-sleap.util.parse_uri_path(uri: str) str[source]#
+sleap.util.parse_uri_path(uri: str) str[source]#

Parse a URI starting with ‘file:///’ to a posix path.

-sleap.util.save_dict_to_hdf5(h5file: h5py._hl.files.File, path: str, dic: dict)[source]#
+sleap.util.save_dict_to_hdf5(h5file: h5py._hl.files.File, path: str, dic: dict)[source]#

Saves dictionary to an HDF5 file.

Calls itself recursively if items in the dictionary are not np.ndarray, np.int64, np.float64, str, or bytes.
@@ -520,7 +519,7 @@


-sleap.util.uniquify(seq: Iterable[Hashable]) List[source]#
+sleap.util.uniquify(seq: Iterable[Hashable]) List[source]#

Returns unique elements from list, preserving order.

Note: This will not work on Python 3.5 or lower since dicts don’t preserve order.

@@ -537,7 +536,7 @@
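For example:

```python
from sleap.util import uniquify

uniquify([3, 1, 3, 2, 1])  # -> [3, 1, 2]
```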


-sleap.util.usable_cpu_count() int[source]#
+sleap.util.usable_cpu_count() int[source]#

Gets number of CPUs usable by the current process.

Takes cpuset restrictions into consideration.

@@ -549,7 +548,7 @@


-sleap.util.weak_filename_match(filename_a: str, filename_b: str) bool[source]#
+sleap.util.weak_filename_match(filename_a: str, filename_b: str) bool[source]#

Check if paths probably point to the same file.

Compares the filename and the names of the two parent directories above it.

diff --git a/develop/datasets.html b/develop/datasets.html
index 2eee32abe..ed5176b4a 100644
--- a/develop/datasets.html
+++ b/develop/datasets.html
@@ -9,7 +9,7 @@
-    Datasets — SLEAP (v1.4.1a1)
+    Datasets — SLEAP (v1.4.1a2)
@@ -34,7 +34,6 @@
-
diff --git a/develop/docs/_static/css/tabs.css b/develop/docs/_static/css/tabs.css
deleted file mode 100644
index 10914e8a4..000000000
--- a/develop/docs/_static/css/tabs.css
+++ /dev/null
@@ -1,93 +0,0 @@
-.sphinx-tabs {
-  margin-bottom: 1rem;
-}
-
-[role="tablist"] {
-  border-bottom: 1px solid #a0b3bf;
-}
-
-.sphinx-tabs-tab {
-  position: relative;
-  font-family: Lato,'Helvetica Neue',Arial,Helvetica,sans-serif;
-  color: var(--pst-color-link);
-  line-height: 24px;
-  margin: 3px;
-  font-size: 16px;
-  font-weight: 400;
-  background-color: var(--bs-body-color);
-  border-radius: 5px 5px 0 0;
-  border: 0;
-  padding: 1rem 1.5rem;
-  margin-bottom: 0;
-}
-
-.sphinx-tabs-tab[aria-selected="true"] {
-  font-weight: 700;
-  border: 1px solid #a0b3bf;
-  border-bottom: 1px solid rgb(50, 50, 50);
-  margin: -1px;
-  background-color: rgb(50, 50, 50);
-}
-
-.admonition .sphinx-tabs-tab[aria-selected="true"]:last-child {
-  margin-bottom: -1px;
-}
-
-.sphinx-tabs-tab:focus {
-  z-index: 1;
-  outline-offset: 1px;
-}
-
-.sphinx-tabs-panel {
-  position: relative;
-  padding: 1rem;
-  border: 1px solid #a0b3bf;
-  margin: 0px -1px -1px -1px;
-  border-radius: 0 0 5px 5px;
-  border-top: 0;
-  background: rgb(50, 50, 50);
-}
-
-.sphinx-tabs-panel.code-tab {
-  padding: 0.4rem;
-}
-
-.sphinx-tab img {
-  margin-bottom: 24 px;
-}
-
-/* Dark theme preference styling */
-
-@media (prefers-color-scheme: dark) {
-  body[data-theme="auto"] .sphinx-tabs-panel {
-    color: white;
-    background-color: rgb(50, 50, 50);
-  }
-
-  body[data-theme="auto"] .sphinx-tabs-tab {
-    color: white;
-    background-color: rgba(255, 255, 255, 0.05);
-  }
-
-  body[data-theme="auto"] .sphinx-tabs-tab[aria-selected="true"] {
-    border-bottom: 1px solid rgb(50, 50, 50);
-    background-color: rgb(50, 50, 50);
-  }
-}
-
-/* Explicit dark theme styling */
-
-body[data-theme="dark"] .sphinx-tabs-panel {
-  color: white;
-  background-color: rgb(50, 50, 50);
-}
-
-body[data-theme="dark"] .sphinx-tabs-tab {
-  color: white;
-  background-color: rgba(255, 255, 255, 0.05);
-}
-
-body[data-theme="dark"] .sphinx-tabs-tab[aria-selected="true"] {
-  border-bottom: 2px solid rgb(50, 50, 50);
-  background-color: rgb(50, 50, 50);
-}
\ No newline at end of file
diff --git a/develop/genindex.html b/develop/genindex.html
index 80bfb6e70..75674f9d6 100644
--- a/develop/genindex.html
+++ b/develop/genindex.html
@@ -8,7 +8,7 @@
-    Index — SLEAP (v1.4.1a1)
+    Index — SLEAP (v1.4.1a2)
@@ -33,7 +33,6 @@
-
@@ -45,7 +44,6 @@
-
@@ -465,9 +463,11 @@

A

  • anchor_part_names (sleap.nn.data.instance_centroids.InstanceCentroidFinder attribute)
-  • append() (sleap.io.dataset.Labels method)
+  • append() (sleap.instance.InstancesList method)

    C

  • CheckpointingConfig (class in sleap.nn.config.outputs)
  • export_nwb() (sleap.io.dataset.Labels method)
+  • extend() (sleap.instance.InstancesList method)
  • extend_from() (sleap.io.dataset.Labels method)

    I

  • instances_to_show (sleap.instance.LabeledFrame property)
+  • InstancesList (class in sleap.instance)

    K

    L

  • PoolingBlock (class in sleap.nn.architectures.unet)
+  • pop() (sleap.instance.InstancesList method)

    R

  • relabel_nodes() (sleap.skeleton.Skeleton method)
-  • remove() (sleap.io.dataset.Labels method)
+  • remove() (sleap.instance.InstancesList method)
  • remove_all_tracks() (sleap.io.dataset.Labels method)
  • remove_empty_frames() (sleap.io.dataset.Labels method) @@ -4100,10 +4120,10 @@

    R

  • (sleap.io.dataset.LabelsDataCache method)
-  • remove_predictions() (sleap.io.dataset.Labels method)
+    • remove_predictions() (sleap.io.dataset.Labels method)
    • remove_second_bests_from_cost_matrix() (in module sleap.nn.tracker.kalman)
  • remove_suggestion() (sleap.io.dataset.Labels method)

diff --git a/develop/guides/choosing-models.html b/develop/guides/choosing-models.html
index 52e71c179..3324e8fa4 100644
--- a/develop/guides/choosing-models.html
+++ b/develop/guides/choosing-models.html
@@ -9,7 +9,7 @@
-    Configuring models — SLEAP (v1.4.1a1)
+    Configuring models — SLEAP (v1.4.1a2)
@@ -34,7 +34,6 @@
-
diff --git a/develop/guides/cli.html b/develop/guides/cli.html
index 135a70f6b..c7092f7a8 100644
--- a/develop/guides/cli.html
+++ b/develop/guides/cli.html
@@ -9,7 +9,7 @@
-    Command line interfaces — SLEAP (v1.4.1a1)
+    Command line interfaces — SLEAP (v1.4.1a2)
@@ -34,7 +34,6 @@
-
diff --git a/develop/guides/colab.html b/develop/guides/colab.html
index b476acf4f..07287a7de 100644
--- a/develop/guides/colab.html
+++ b/develop/guides/colab.html
@@ -9,7 +9,7 @@
-    Run training and inference on Colab — SLEAP (v1.4.1a1)
+    Run training and inference on Colab — SLEAP (v1.4.1a2)
@@ -34,7 +34,6 @@
-
diff --git a/develop/guides/custom-training.html b/develop/guides/custom-training.html
index e50c4fe0d..84f7ccc14 100644
--- a/develop/guides/custom-training.html
+++ b/develop/guides/custom-training.html
@@ -9,7 +9,7 @@
-    Creating a custom training profile — SLEAP (v1.4.1a1)
+    Creating a custom training profile — SLEAP (v1.4.1a2)
@@ -34,7 +34,6 @@
-
diff --git a/develop/guides/gui.html b/develop/guides/gui.html
index 874b3ea79..37d8c3f28 100644
--- a/develop/guides/gui.html
+++ b/develop/guides/gui.html
@@ -9,7 +9,7 @@
-    GUI — SLEAP (v1.4.1a1)
+    GUI — SLEAP (v1.4.1a2)
@@ -34,7 +34,6 @@
-
diff --git a/develop/guides/index.html b/develop/guides/index.html
index a4db31b5c..08ac1376d 100644
--- a/develop/guides/index.html
+++ b/develop/guides/index.html
@@ -9,7 +9,7 @@
-    Guides — SLEAP (v1.4.1a1)
+    Guides — SLEAP (v1.4.1a2)
@@ -34,7 +34,6 @@
-
diff --git a/develop/guides/merging.html b/develop/guides/merging.html
index b376bd0fa..78a8a0ce1 100644
--- a/develop/guides/merging.html
+++ b/develop/guides/merging.html
@@ -9,7 +9,7 @@
-    Importing predictions for labeling — SLEAP (v1.4.1a1)
+    Importing predictions for labeling — SLEAP (v1.4.1a2)
@@ -34,7 +34,6 @@
-
diff --git a/develop/guides/proofreading.html b/develop/guides/proofreading.html
index 13c5be62d..75c60bd54 100644
--- a/develop/guides/proofreading.html
+++ b/develop/guides/proofreading.html
@@ -9,7 +9,7 @@
-    Tracking and proofreading — SLEAP (v1.4.1a1)
+    Tracking and proofreading — SLEAP (v1.4.1a2)
@@ -34,7 +34,6 @@
-
diff --git a/develop/guides/remote.html b/develop/guides/remote.html
index 90ebed303..1c7ef2c76 100644
--- a/develop/guides/remote.html
+++ b/develop/guides/remote.html
@@ -9,7 +9,7 @@
-    Running SLEAP remotely — SLEAP (v1.4.1a1)
+    Running SLEAP remotely — SLEAP (v1.4.1a2)
@@ -34,7 +34,6 @@
-
diff --git a/develop/guides/skeletons.html b/develop/guides/skeletons.html
index 482b26fb7..58ec966e0 100644
--- a/develop/guides/skeletons.html
+++ b/develop/guides/skeletons.html
@@ -9,7 +9,7 @@
-    Skeleton design — SLEAP (v1.4.1a1)
+    Skeleton design — SLEAP (v1.4.1a2)
@@ -34,7 +34,6 @@
-
diff --git a/develop/guides/training.html b/develop/guides/training.html
index 767ad1382..30132fdf3 100644
--- a/develop/guides/training.html
+++ b/develop/guides/training.html
@@ -9,7 +9,7 @@
-    Training with GUI — SLEAP (v1.4.1a1)
+    Training with GUI — SLEAP (v1.4.1a2)
@@ -34,7 +34,6 @@
-
diff --git a/develop/guides/troubleshooting-workflows.html b/develop/guides/troubleshooting-workflows.html
index 41691ef7f..3ba2c5148 100644
--- a/develop/guides/troubleshooting-workflows.html
+++ b/develop/guides/troubleshooting-workflows.html
@@ -9,7 +9,7 @@
-    Troubleshooting workflows — SLEAP (v1.4.1a1)
+    Troubleshooting workflows — SLEAP (v1.4.1a2)
@@ -34,7 +34,6 @@
-
diff --git a/develop/help.html b/develop/help.html
index b37b05aed..808b324e9 100644
--- a/develop/help.html
+++ b/develop/help.html
@@ -9,7 +9,7 @@
-    Help — SLEAP (v1.4.1a1)
+    Help — SLEAP (v1.4.1a2)
@@ -34,7 +34,6 @@
-
@@ -368,7 +367,7 @@

      Can I install it on a computer without a GPU?

      What if I already have CUDA set up on my system?#

-      You can use the system CUDA installation by simply using the installation method.
+      You can use the system CUDA installation by simply using the pip package installation method.

      Note that you will need to use a version compatible with TensorFlow 2.6+ (CUDA Toolkit v11.3 and cuDNN v8.2).

diff --git a/develop/index.html b/develop/index.html
index 4db95e4e0..a5596a12d 100644
--- a/develop/index.html
+++ b/develop/index.html
@@ -9,7 +9,7 @@
-    Social LEAP Estimates Animal Poses (SLEAP) — SLEAP (v1.4.1a1)
+    Social LEAP Estimates Animal Poses (SLEAP) — SLEAP (v1.4.1a2)
@@ -34,7 +34,6 @@
-
diff --git a/develop/installation.html b/develop/installation.html
index 2e64bd9ed..e7d3f181f 100644
--- a/develop/installation.html
+++ b/develop/installation.html
@@ -9,7 +9,7 @@
-    Installation — SLEAP (v1.4.1a1)
+    Installation — SLEAP (v1.4.1a2)
@@ -34,7 +34,6 @@
-
@@ -46,7 +45,6 @@
-
@@ -319,8 +317,14 @@

      Contents