sleap.util
A miscellaneous set of utility functions.
sleap.info.align
Functions to align instances.
sleap.info.feature_suggestions
Module for generating lists of frames using frame features, PCA, k-means, etc.
sleap.info.labels
Command line utility which prints data about labels file.
sleap.info.metrics
Module for producing prediction metrics for SLEAP datasets.
sleap.info.summary
Module for getting a series which gives some statistic based on labeling data for each frame of some labeled video.
sleap.info.trackcleaner
CLI for TrackCleaner (mostly deprecated).
sleap.info.write_tracking_h5
Generate an HDF5 or CSV file with track occupancy and point location data.
sleap.io.asyncvideo
Support for loading video frames (by chunk) in background process.
sleap.io.convert
Command line utility for converting between various dataset formats.
sleap.io.dataset
A SLEAP dataset collects labeled video frames, together with required metadata.
sleap.io.legacy
Module for legacy LEAP dataset.
sleap.io.pathutils
Utilities for working with file paths.
sleap.io.video
Video reading and writing interfaces for different formats.
sleap.io.videowriter
Module for writing avi/mp4 videos.
sleap.io.visuals
Module for generating videos with visual annotation overlays.
sleap.io.format.adaptor
File format adaptor base class.
sleap.io.format.alphatracker
Adaptor for reading AlphaTracker datasets.
sleap.io.format.coco
Adaptor for reading COCO keypoint detection datasets.
sleap.io.format.csv
Adaptor for writing SLEAP analysis as CSV.
sleap.io.format.deeplabcut
Adaptor for reading DeepLabCut datasets.
sleap.io.format.deepposekit
Adaptor for reading DeepPoseKit datasets (HDF5).
sleap.io.format.dispatch
Dispatcher for dynamically supporting multiple dataset file formats.
sleap.io.format.filehandle
File object which can be passed to adaptors.
sleap.io.format.genericjson
Adaptor for reading and writing any generic JSON file.
sleap.io.format.hdf5
Adaptor for reading/writing SLEAP datasets as HDF5 (including slp).
sleap.io.format.labels_json
Adaptor for reading/writing the old JSON dataset format (mostly deprecated).
sleap.io.format.leap_matlab
Adaptor to read (not write) LEAP MATLAB data files.
sleap.io.format.main
Read/write for multiple dataset formats.
sleap.io.format.ndx_pose
Adaptor to read and write ndx-pose files.
sleap.io.format.nix
sleap.io.format.sleap_analysis
Adaptor to read and write analysis HDF5 files.
sleap.io.format.text
Adaptor for reading and writing any generic text file.
sleap.nn.callbacks
Training-related tf.keras callbacks.
sleap.nn.viz
Visualization and plotting utilities.
sleap.nn.config.data
sleap.nn.config.model
sleap.nn.config.optimization
sleap.nn.config.outputs
sleap.nn.config.training_job
Serializable configuration classes for specifying all training job parameters.
sleap.nn.config.utils
Utilities for config building and validation.
sleap.nn.architectures.common
Common utilities for architecture and model building.
sleap.nn.data.utils
Miscellaneous utility functions for data processing.
Transforms source for best fit onto target.
Rotates every instance so that line from node_a to node_b aligns.
Gets most stable pair of nodes and aligned instances along these nodes.
Returns single (instance, node, 2) matrix with points for all instances.
Returns mean and standard deviation for every node given aligned points.
Returns pair of nodes which are at stable distance (over min threshold).
Returns sorted list of node pairs with mean and standard dev distance.
Returns mean of aligned points for instances.
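As a rough illustration of the data shapes described above (plain NumPy, not the module's actual functions), per-node statistics and node-pair stability over a set of aligned instances can be computed like this:

```python
import numpy as np

# Assume `aligned` is an (instances, nodes, 2) matrix of aligned instance points,
# matching the shape described above; random data stands in for real instances.
rng = np.random.default_rng(0)
aligned = rng.normal(size=(100, 5, 2))

# Mean and standard deviation for every node given the aligned points.
node_mean = np.nanmean(aligned, axis=0)  # (nodes, 2)
node_std = np.nanstd(aligned, axis=0)    # (nodes, 2)

# Pairwise node distances within each instance; a "stable" node pair is one
# whose distance has low standard deviation across instances.
dists = np.linalg.norm(aligned[:, :, None, :] - aligned[:, None, :, :], axis=-1)
pair_std = np.nanstd(dists, axis=0)      # (nodes, nodes)
```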
Class for a set of groups of FrameItem objects.
Each item can have at most one group; each group is represented as an int.
Adds item to group.
Adds all items in list to group.
Returns the group that contains the item.
Returns new FrameGroupSet with groups sampled from current groups.
Note that the order of items in the new groups will not match the order of items in the groups from which samples are drawn.
Just a simple wrapper for (video, frame_idx), plus method to get image.
Container for items; each item can “own” one or more rows of data.
Transforms data into a bag-of-features representation based on BRISK features.
Extends an ownership list with the number of rows owned by the next item.
Flattens each row of data to a 1-D array.
Sets items for Stack to all items from current GroupSet.
Returns rows of data which belong to item.
Returns indexes of rows in data which belong to item.
Sets data to raw image for each FrameItem.
Transforms data into a bag-of-features vector of HOG descriptors.
Adds GroupSet using k-means clustering on data.
Adds GroupSet by sampling frames from each video.
Transforms data by applying PCA.
Adds GroupSet by sampling items from current GroupSet.
Enables easy per-video pipeline parallelization for feature suggestions.
Create a FeatureSuggestionPipeline with the desired parameters, and then call ParallelFeaturePipeline.run() with the pipeline and the list of videos.
FeatureSuggestionPipeline
ParallelFeaturePipeline.run()
SuggestionFrame
Apply pipeline to single video by idx. Can be called in process.
Make class object from pipeline and list of videos.
Runs pipeline on all videos in parallel and returns suggestions.
Converts serialized data from processes back into SuggestionFrames.
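A minimal sketch of that flow is shown below; the constructor keyword names are illustrative assumptions (the real FeatureSuggestionPipeline signature may differ), while ParallelFeaturePipeline.run() is used as described above.

```python
from sleap.info.feature_suggestions import (
    FeatureSuggestionPipeline,
    ParallelFeaturePipeline,
)
from sleap.io.video import Video

# Keyword names below are illustrative assumptions, not a verbatim signature.
pipeline = FeatureSuggestionPipeline(
    per_video=20,        # frames sampled from each video
    feature_type="raw",  # raw images vs. BRISK/HOG bags of features
    n_components=5,      # PCA components
    n_clusters=5,        # k-means clusters
    per_cluster=5,       # suggestions drawn per cluster
)

videos = [Video.from_filename("session1.mp4"), Video.from_filename("session2.mp4")]

# Runs the pipeline on all videos in parallel and returns SuggestionFrame items.
suggestions = ParallelFeaturePipeline.run(pipeline, videos)
```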
Calculate (a * b) matrix of pairwise costs using cost function.
Given two lists of corresponding Instances, returns (instances * nodes) matrix of distances between corresponding nodes.
Given list of Instances, returns (instances * nodes * 2) matrix.
Sorts two lists of Instances to find best overall correspondence for a given cost function (e.g., total distance between points).
For each node for each instance in the first list, pairs it with the closest corresponding node from any instance in the second list.
Distances between ground truth and predicted nodes over a set of frames.
Given two instances, returns array of distances for closest points ignoring node identities.
Given two instances, returns array of distances for corresponding nodes.
Given an array of distances, returns number which are <= threshold.
Given an array of distances, returns number which are not <= threshold.
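To make the distance semantics above concrete, here is a plain-NumPy sketch (not the module's actual functions) of corresponding-node distances and thresholded counts:

```python
import numpy as np

# Two corresponding instances as (nodes, 2) point arrays: ground truth vs. predicted.
gt = np.array([[10.0, 10.0], [20.0, 22.0], [35.0, 30.0]])
pred = np.array([[11.0, 10.5], [np.nan, np.nan], [36.0, 29.0]])

# Distances for corresponding nodes; NaN where a predicted node is missing.
node_dists = np.linalg.norm(gt - pred, axis=1)

threshold = 2.0
n_close = int(np.sum(node_dists <= threshold))      # distances <= threshold
n_missed = int(np.sum(~(node_dists <= threshold)))  # not <= threshold (NaNs count here)
```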
Class to calculate various statistical series for labeled frames.
Each method returns a series, which is a dictionary in which keys are frame indices and values are numerical values for the frame.
Get series with statistic of instance scores in each frame.
Get series with total number of labeled points in each frame.
Get series with statistic of point displacement in each frame.
Point displacement is the distance between the point location in a frame and the location of the corresponding point (same node, same track) in the closest earlier frame.
Get series with statistic of point scores in each frame.
Get sum of displacement for single node of each instance per frame.
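Since each series is a plain dict of {frame_idx: value}, downstream code can rank frames directly. The class and method names below (StatisticSeries, get_point_score_series) and the reduction keyword are assumptions based on the descriptions above:

```python
from sleap.info.summary import StatisticSeries  # class name assumed from the description above
from sleap.io.dataset import load_file

labels = load_file("labels.v001.slp")
series = StatisticSeries(labels)

# Hypothetical call: returns {frame_idx: value} for one video.
scores = series.get_point_score_series(labels.videos[0], reduction="sum")

# Frames with the lowest summed point score are good candidates for review.
worst_frames = sorted(scores, key=scores.get)[:10]
```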
sleap-track
Wraps TrackCleaner for an easier CLI API.
TrackCleaner
Note: the datasets are stored column-major as expected by MATLAB.
Get list of edge names as np.string_.
Get list of node names as np.string_.
Builds numpy matrices with track occupancy and point location data.
Note: This function assumes either all instances have tracks or no instances have tracks.
Get list of track names as np.string_.
Writes HDF5 file with matrices of track occupancy and coordinates.
Removes matrix rows/columns for unoccupied tracks.
Write CSV file with data from given dictionary.
Write HDF5 file with data from given dictionary.
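A minimal sketch of reading the resulting analysis HDF5 with h5py; the dataset names ("tracks", "track_occupancy", "node_names", "track_names") follow the conventions suggested by the descriptions above, and the transpose undoes the column-major (MATLAB-style) storage noted above.

```python
import h5py

with h5py.File("session1.analysis.h5", "r") as f:
    # Dataset names are assumptions based on the descriptions above.
    node_names = [n.decode() for n in f["node_names"][:]]
    track_names = [n.decode() for n in f["track_names"][:]]
    occupancy = f["track_occupancy"][:].T  # transposed so frames come first
    tracks = f["tracks"][:].T              # stored column-major for MATLAB

print(len(track_names), "tracks,", tracks.shape[0], "frames")
```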
This class represents a labeled instance.
Add points for skeleton nodes that are missing in the instance.
This is useful when modifying the skeleton so the nodes appear in the GUI.
Create an instance from a numpy array.
Create an instance from an array of points.
Return the instance’s points in array form.
Whether two instances match by value.
Checks the types, points, track, and frame index.
Return the instance node coordinates as a numpy array.
Alias for points_array.
Apply affine transformation matrix to points in the instance.
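A small sketch of the array round-trip described above; the exact keyword names and the numpy() accessor (listed above as an alias for points_array) are assumed here.

```python
import numpy as np
from sleap.skeleton import Skeleton
from sleap.instance import Instance

# A two-node skeleton for illustration.
skeleton = Skeleton()
skeleton.add_node("head")
skeleton.add_node("tail")

points = np.array([[10.0, 20.0], [30.0, 40.0]])  # (nodes, 2)
inst = Instance.from_numpy(points, skeleton=skeleton)

# Back to an array of node coordinates; rows of NaN indicate missing points.
coords = inst.numpy()
print(coords.shape)  # (2, 2)
```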
A list of Instances associated with a LabeledFrame.
This class should only be used for the LabeledFrame.instances attribute.
Append an Instance or PredictedInstance to the list, setting the frame.
item – The Instance or PredictedInstance to append to the list.
Remove all instances from list, setting instance.frame to None.
Return a shallow copy of the list of instances as a list.
Note: This will not return an InstancesList object, but a normal list.
Extend the list with a list of Instances or PredictedInstances.
instances – A list of Instance or PredictedInstance objects to add to the list.
Insert object before index.
Return the LabeledFrame associated with this list of instances.
Remove and return instance at index, setting instance.frame to None.
Remove instance from list, setting instance.frame to None.
Holds labeled data for a single frame of a video.
Merge two frames, return conflicts if any.
A conflict occurs when each frame has instances which don't perfectly match those in the other frame.
Merge data from new frames into a Labels object.
Everything that can be merged cleanly is merged, any conflicts are returned.
Video
Dictionary in which keys are frame index (int) and values are lists of Instances.
Retrieve instances (if any) matching specifications.
track – The Track to match. Note that None will only match instances where Instance.track is None. If track is -1, then we’ll match any track.
user – Whether to only match user (non-predicted) instances.
Return index of given Instance.
Add instance to frame.
Return merged LabeledFrames for same video and frame index.
The merged list of LabeledFrames.
Return the instances as an array of shape (instances, nodes, 2).
Plot the frame with all instances.
Plot the frame with all predicted instances.
Remove instances with no visible nodes from the labeled frame.
Removes any instances without a track assignment.
A labelled point and any metadata associated with it.
Whether either of the coordinates is a NaN value.
Return the point as a numpy array.
PointArray is a sub-class of numpy recarray which stores Point objects as records.
Converts a PointArray (or child) to a new instance.
This will convert an object to the same type as itself, so a PredictedPointArray will result in the same.
Construct a point array where points are all set to default.
The constructed PointArray will have the specified size, and each value in the array is assigned the default values for a Point.
A predicted instance is an output of the inference procedure.
Create a predicted instance from data arrays.
Create a PredictedInstance from an Instance.
The fields are copied in a shallow manner with the exception of points. For each point in the instance, a PredictedPoint is created with its score set to a default value.
PredictedPoint
A predicted point is an output of the inference procedure.
It has all the properties of a labeled point, plus a score.
Create a PredictedPoint from a Point.
PredictedPointArray is analogous to PointArray except for predicted points.
Convert a PredictedPointArray to a normal PointArray.
A track object is associated with a set of animal/object instances across multiple frames of video. This allows tracking of unique entities in the video over time and space.
Check if two tracks match by value.
Create a cattr converter for Lists of Instances/PredictedInstances.
This is required because cattrs doesn’t automatically detect the class when the attributes of one class are a subset of another.
Supports fetching chunks from video in background process.
Close the async video server and communication ports.
Create object and start loading frames in background process.
Sends request for loading video in background process.
Class which loads video frames in background on request.
All interactions with the video server should go through AsyncVideo, which runs in the local thread.
Method to be run in a sub-process; can be overridden in a sub-class.
Entrypoint for sleap-convert CLI for converting .slp to different formats.
The Labels class collects the data for a SLEAP project.
This class is the front-end for all interactions with loading, writing, and modifying these labels. The actual storage backend for the data is abstracted away from the main interface.
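A minimal sketch of typical interactions with a Labels project (file names are placeholders; the individual methods are summarized below):

```python
from sleap.io.dataset import load_file

labels = load_file("labels.v001.slp")
print(len(labels.videos), "videos,", len(labels), "labeled frames")

# Labeled frames for the first video (see find() below).
frames = labels.find(labels.videos[0])

# Point coordinates as an array (see numpy() below for the options).
tracks = labels.numpy(video=labels.videos[0], return_confidence=False)

labels.save("labels_copy.slp")
```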
Add instance to frame, updating track occupancy.
Add a suggested frame to the labels.
Add track to labels, updating occupancy.
Add a video to the labels if it is not already in it.
Video instances are added automatically when adding labeled frames, but this function allows for adding videos to the labels before any labeled frames are added.
Add labeled frame to list of labeled frames.
Append the suggested frames.
Delete all suggestions.
Merge frames and other data from one dataset into another.
Anything that can be merged cleanly is merged into base_labels.
Frames conflict just in case each labels object has a matching frame with instances that don't perfectly match those in the other.
Return a full deep copy of the labels.
Additional Labels methods:
extend_from(new_frames, unify=False)
Merge data from another Labels object or LabeledFrame list.
extract(inds, copy=False)
Extract labeled frames from indices and return a new Labels object.
find(video, frame_idx=None, return_new=False)
Search for labeled frames given video and/or frame index.
find_first(video, frame_idx=None, use_cache=False)
Find the first occurrence of a matching labeled frame.
find_last(video, frame_idx=None)
Find the last occurrence of a matching labeled frame.
find_suggestion(video, frame_idx)
Find SuggestionFrame by video and frame index.
find_track_occupancy(video, track, frame_range=None)
Get instances for a given video, track, and range of frames.
finish_complex_merge(base_labels, resolved_frames)
Finish conflicted merge from complex_merge_between.
frames(video, from_frame_idx=-1, reverse=False)
Return an iterator over all labeled frames in a video.
get(key, *secondary_key, use_cache=False, raise_errors=False)
Return labeled frames matching key, or None if not found. This is a safe version of labels[...] that will not raise an exception if the item is not found.
get_next_suggestion(video, frame_idx, seek_direction=1)
Return a (video, frame_idx) tuple seeking from given frame.
get_suggestions()
Return all suggestions as a list of SuggestionFrame items.
get_track_count(video)
Return the number of occupied tracks for a given video.
get_track_occupancy(video)
Return track occupancy list for given video.
get_unlabeled_suggestion_inds()
Find labeled frames for unlabeled suggestions and return their indices. This is useful for generating a list of example indices for inference on unlabeled suggestions.
get_video_suggestions(video, user_labeled=True)
Return a list of suggested frame indices.
has_frame(lf=None, video=None, frame_idx=None, use_cache=True)
Check if the labels contain a specified frame.
index(value)
Return index of labeled frame in list of labeled frames.
insert(index, value)
Insert labeled frame at given index.
instance_count(video, frame_idx)
Return number of instances matching video/frame index.
instances(video=None, skeleton=None)
Iterate over instances in the labels, optionally with filters.
load_file(filename, video_search=None, *args, **kwargs)
Load file, detecting format from filename.
make_video_callback(search_paths=None, use_gui=False, context=None)
Create a callback for finding missing videos. The callback can be used while loading a saved project and allows the user to find videos which have been moved (or have different paths).
merge_container_dicts(dict_a, dict_b)
Merge data from dict_b into dict_a.
merge_matching_frames(video=None)
Merge LabeledFrame objects that are for the same video frame.
merge_nodes(base_node, merge_node)
Merge two nodes and update data accordingly.
numpy(video=None, all_frames=True, untracked=False, return_confidence=False)
Construct a numpy array from instance points.
remove(value)
Remove given labeled frame.
remove_all_tracks()
Remove all tracks from labels, updating (but not removing) instances.
remove_empty_frames()
Remove frames with no instances.
remove_empty_instances(keep_empty_frames=True)
Remove instances with no visible points.
remove_frame(lf, update_cache=True)
Remove a given labeled frame.
remove_frames(lfs)
Remove a list of frames from the labels.
remove_instance(frame, instance, in_transaction=False)
Remove instance from frame, updating track occupancy.
remove_predictions(new_labels=None)
Clear predicted instances from the labels. Useful prior to merging operations to prevent overlapping instances from new predictions.
remove_suggestion(video, frame_idx)
Remove a suggestion from the list by video and frame index.
remove_track(track)
Remove a track from the labels, updating (but not removing) instances.
remove_untracked_instances(remove_empty_frames=True)
Remove instances that do not have a track assignment.
remove_unused_tracks()
Remove tracks that are not used by any instances.
remove_user_instances(new_labels=None)
Clear user instances from the labels. Useful prior to merging operations to prevent overlapping instances from new labels.
remove_video(video)
Remove a video from the labels and all associated labeled frames.
save(filename, with_images=False, embed_all_labeled=False, embed_suggested=False)
Save the labels to a file.
save_file(labels, filename, default_suffix='', *args, **kwargs)
Save file, detecting format from filename.
save_frame_data_hdf5(output_path, format='png', user_labeled=True, all_labeled=False, suggested=False, progress_callback=None)
Write images for labeled frames from all videos to an HDF5 file. Note that this will make an HDF5 video, not an HDF5 labels dataset.
save_frame_data_imgstore(output_dir='./', format='png', all_labeled=False, suggested=False, progress_callback=None)
Write images for labeled frames from all videos to imgstore datasets. This only writes frames that have been labeled; videos without any labeled frames will be included as empty imgstores.
set_suggestions(suggestions)
Set the suggested frames.
split(n, copy=True)
Split labels randomly.
to_dict(skip_labels=False)
Serialize all labels to dicts. Serializes the labels in the underlying list of LabeledFrames to a dict structure.
to_json()
Serialize all labels in the underlying list of LabeledFrames to JSON.
to_pipeline(batch_size=None, prefetch=True, frame_indices=None, user_labeled_only=True)
Create a pipeline for reading the dataset.
track_set_instance(frame, instance, new_track)
Set track on given instance, updating occupancy.
track_swap(video, new_track, old_track, frame_range)
Swap track assignment for instances in two tracks. If you need to change the track to or from None, you'll need to use track_set_instance() for each specific instance.
with_user_labels_only(user_instances_only=True, with_track_only=False, copy=True)
Return a new Labels containing only user labels. This is useful as a preprocessing step to train on only user-labeled data.
sleap.io.dataset.LabelsDataCache(labels)
Class for maintaining cache of data in labels dataset.
add_instance(frame, instance)
Add an instance to the labels.
add_track(video, track)
Add a track to the labels.
find_fancy_frame_idxs(video, from_frame_idx, reverse)
Return a list of frame idxs, with optional start position/order.
find_frames(video, frame_idx=None)
Return list of LabeledFrames matching video/frame_idx, or None.
get_filtered_frame_idxs(video=None, filter='')
Return list of (video_idx, frame_idx) tuples matching video/filter.
get_frame_count(video=None, filter='')
Return (possibly cached) count of frames matching video/filter.
get_track_occupancy(video, track)
Access track occupancy cache that adds video/track as needed.
get_video_track_occupancy(video)
Return track occupancy information for specified video.
remove_frame(frame)
Remove frame and update cache as needed.
remove_instance(frame, instance)
Remove an instance and update the cache as needed.
remove_video(video)
Remove video and update cache as needed.
track_swap(video, new_track, old_track, frame_range)
Swap tracks and update cache as needed.
update(new_frame=None)
Build (or rebuild) various caches.
update_counts_for_frame(frame)
Update the cached count. Should be called after frame is modified.
sleap.io.dataset.find_path_using_paths(missing_path, search_paths)
Find a path to a missing file given a set of paths to search in.
sleap.io.dataset.load_file(filename, detect_videos=True, search_paths=None, match_to=None)
Load a SLEAP labels file. SLEAP labels files (slp) contain all the metadata for a labeling project or the predicted labels from a video, including the skeleton, videos, and labeled frames.
Delete suggestions for specified video.
Print basic statistics about the labels dataset.
Export labels to analysis HDF5 format.
This expects the labels to contain data for a single video (e.g., predictions).
Export labels to CSV format.
filename – Output path for the CSV format file.
Note: This will write the contents of the labels out as a CSV file.
Export all PredictedInstance objects in a Labels object to an NWB file. Use Labels.numpy to create a pynwb.NWBFile with a separate processing module for each video.
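A short sketch of the export paths described above (file names are placeholders; the analysis HDF5 export expects labels for a single video):

```python
from sleap.io.dataset import load_file

labels = load_file("predictions.slp")

labels.export("predictions.analysis.h5")               # analysis HDF5
labels.export_csv("predictions.analysis.csv")          # CSV
labels.export_nwb("predictions.nwb", overwrite=True)   # NWB pose data
```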
sleap.io.format.adaptor.Adaptor
File format adaptor base class. An adaptor handles reading and/or writing a specific file format. To add support for a new file format, you’ll create a new class which inherits from this base class.
can_read_file(file)
Returns whether this adaptor can read this file.
can_write_filename(filename)
Returns whether this adaptor can write format of this filename.
does_match_ext(filename)
Returns whether this adaptor can write format of this filename.
does_read()
Returns whether this adaptor supports reading.
does_write()
Returns whether this adaptor supports writing.
read(file)
Reads the file and returns the appropriate deserialized object.
write(filename, source_object)
Writes the object to a file.
sleap.io.format.adaptor.SleapObjectType(value)
Types of files that an adaptor could read/write.
sleap.io.format.alphatracker.AlphaTrackerAdaptor
Reads AlphaTracker JSON file with annotations for both single and multiple animals.
can_read_file(file)
Returns whether this adaptor can read this file. Checks the format of the file at three different levels; first, the upper-level format of file.json must be a list of dictionaries.
get_alpha_tracker_frame_dict(filename='')
Returns a deep copy of the dictionary used for frames.
diff --git a/develop/api/sleap.io.format.alphatracker.html b/develop/api/sleap.io.format.alphatracker.html index b96b39528..623ef5bdc 100644 --- a/develop/api/sleap.io.format.alphatracker.html +++ b/develop/api/sleap.io.format.alphatracker.html @@ -9,7 +9,7 @@ - sleap.io.format.alphatracker — SLEAP (v1.4.1a2) + sleap.io.format.alphatracker — SLEAP (v1.3.4) @@ -329,7 +329,7 @@ sleap.io.format.alphatracker create a video object which wraps the individual frame images. -class sleap.io.format.alphatracker.AlphaTrackerAdaptor[source]# +class sleap.io.format.alphatracker.AlphaTrackerAdaptor[source]# Reads AlphaTracker JSON file with annotations for both single and multiple animals. @@ -339,7 +339,7 @@ sleap.io.format.alphatracker -can_read_file(file: sleap.io.format.filehandle.FileHandle) → bool[source]# +can_read_file(file: sleap.io.format.filehandle.FileHandle) → bool[source]# Returns whether this adaptor can read this file. Checks the format of the file at three different levels: - First, the upper-level format of file.json must be a list of dictionaries. @@ -364,7 +364,7 @@ sleap.io.format.alphatracker -can_write_filename(filename: str) → bool[source]# +can_write_filename(filename: str) → bool[source]# Returns whether this adaptor can write format of this filename. @@ -376,19 +376,19 @@ sleap.io.format.alphatracker -does_match_ext(filename: str) → bool[source]# +does_match_ext(filename: str) → bool[source]# Returns whether this adaptor can write format of this filename. -does_read() → bool[source]# +does_read() → bool[source]# Returns whether this adaptor supports reading. -does_write() → bool[source]# +does_write() → bool[source]# Returns whether this adaptor supports writing. @@ -400,7 +400,7 @@ sleap.io.format.alphatracker -get_alpha_tracker_frame_dict(filename: str = '')[source]# +get_alpha_tracker_frame_dict(filename: str = '')[source]# Returns a deep copy of the dictionary used for frames. Parameters @@ -418,7 +418,7 @@ sleap.io.format.alphatracker -
Use Labels.numpy to create a pynwb.NWBFile with a separate @@ -685,7 +670,7 @@
Labels.numpy
pynwb.NWBFile
Merge data from another Labels object or LabeledFrame list.
new_frames: the object from which to copy data @@ -704,7 +689,7 @@
Extract labeled frames from indices and return a new Labels object. :param inds: Any valid indexing keys, e.g., a range, slice, list of label indices,
@@ -737,7 +722,7 @@ sleap.io.dataset -find(video: sleap.io.video.Video, frame_idx: Optional[Union[int, Iterable[int]]] = None, return_new: bool = False) → List[sleap.instance.LabeledFrame][source]# +find(video: sleap.io.video.Video, frame_idx: Optional[Union[int, Iterable[int]]] = None, return_new: bool = False) → List[sleap.instance.LabeledFrame][source]# Search for labeled frames given video and/or frame index. Parameters @@ -762,7 +747,7 @@ sleap.io.dataset -find_first(video: sleap.io.video.Video, frame_idx: Optional[int] = None, use_cache: bool = False) → Optional[sleap.instance.LabeledFrame][source]# +find_first(video: sleap.io.video.Video, frame_idx: Optional[int] = None, use_cache: bool = False) → Optional[sleap.instance.LabeledFrame][source]# Find the first occurrence of a matching labeled frame. Matches on frames for the given video and/or frame index. @@ -786,7 +771,7 @@ sleap.io.dataset -find_last(video: sleap.io.video.Video, frame_idx: Optional[int] = None) → Optional[sleap.instance.LabeledFrame][source]# +find_last(video: sleap.io.video.Video, frame_idx: Optional[int] = None) → Optional[sleap.instance.LabeledFrame][source]# Find the last occurrence of a matching labeled frame. Matches on frames for the given video and/or frame index. @@ -807,13 +792,13 @@ sleap.io.dataset -find_suggestion(video, frame_idx)[source]# +find_suggestion(video, frame_idx)[source]# Find SuggestionFrame by video and frame index. -find_track_occupancy(video: sleap.io.video.Video, track: Union[sleap.instance.Track, int], frame_range=None) → List[sleap.instance.Instance][source]# +find_track_occupancy(video: sleap.io.video.Video, track: Union[sleap.instance.Track, int], frame_range=None) → List[sleap.instance.Instance][source]# Get instances for a given video, track, and range of frames. Parameters @@ -832,7 +817,7 @@ sleap.io.dataset -static finish_complex_merge(base_labels: sleap.io.dataset.Labels, resolved_frames: List[sleap.instance.LabeledFrame])[source]# +static finish_complex_merge(base_labels: sleap.io.dataset.Labels, resolved_frames: List[sleap.instance.LabeledFrame])[source]# Finish conflicted merge from complex_merge_between. Parameters @@ -846,7 +831,7 @@ sleap.io.dataset -frames(video: sleap.io.video.Video, from_frame_idx: int = - 1, reverse=False)[source]# +frames(video: sleap.io.video.Video, from_frame_idx: int = - 1, reverse=False)[source]# Return an iterator over all labeled frames in a video. Parameters @@ -865,7 +850,7 @@ sleap.io.dataset -get(key: Union[int, slice, numpy.integer, numpy.ndarray, list, range, sleap.io.video.Video, Tuple[sleap.io.video.Video, Union[numpy.integer, numpy.ndarray, int, list, range]]], *secondary_key: Union[int, slice, numpy.integer, numpy.ndarray, list, range], use_cache: bool = False, raise_errors: bool = False) → Union[sleap.instance.LabeledFrame, List[sleap.instance.LabeledFrame]][source]# +get(key: Union[int, slice, numpy.integer, numpy.ndarray, list, range, sleap.io.video.Video, Tuple[sleap.io.video.Video, Union[numpy.integer, numpy.ndarray, int, list, range]]], *secondary_key: Union[int, slice, numpy.integer, numpy.ndarray, list, range], use_cache: bool = False, raise_errors: bool = False) → Union[sleap.instance.LabeledFrame, List[sleap.instance.LabeledFrame]][source]# Return labeled frames matching key or return None if not found. This is a safe version of labels[...] that will not raise an exception if the item is not found. 
@@ -898,31 +883,31 @@ sleap.io.dataset -get_next_suggestion(video, frame_idx, seek_direction=1)[source]# +get_next_suggestion(video, frame_idx, seek_direction=1)[source]# Return a (video, frame_idx) tuple seeking from given frame. -get_suggestions() → List[sleap.gui.suggestions.SuggestionFrame][source]# +get_suggestions() → List[sleap.gui.suggestions.SuggestionFrame][source]# Return all suggestions as a list of SuggestionFrame items. -get_track_count(video: sleap.io.video.Video) → int[source]# +get_track_count(video: sleap.io.video.Video) → int[source]# Return the number of occupied tracks for a given video. -get_track_occupancy(video: sleap.io.video.Video) → List[source]# +get_track_occupancy(video: sleap.io.video.Video) → List[source]# Return track occupancy list for given video. -get_unlabeled_suggestion_inds() → List[int][source]# +get_unlabeled_suggestion_inds() → List[int][source]# Find labeled frames for unlabeled suggestions and return their indices. This is useful for generating a list of example indices for inference on unlabeled suggestions. @@ -940,7 +925,7 @@ sleap.io.dataset -get_video_suggestions(video: sleap.io.video.Video, user_labeled: bool = True) → List[int][source]# +get_video_suggestions(video: sleap.io.video.Video, user_labeled: bool = True) → List[int][source]# Return a list of suggested frame indices. Parameters @@ -959,7 +944,7 @@ sleap.io.dataset -has_frame(lf: Optional[sleap.instance.LabeledFrame] = None, video: Optional[sleap.io.video.Video] = None, frame_idx: Optional[int] = None, use_cache: bool = True) → bool[source]# +has_frame(lf: Optional[sleap.instance.LabeledFrame] = None, video: Optional[sleap.io.video.Video] = None, frame_idx: Optional[int] = None, use_cache: bool = True) → bool[source]# Check if the labels contain a specified frame. Parameters @@ -996,25 +981,25 @@ sleap.io.dataset -index(value) → int[source]# +index(value) → int[source]# Return index of labeled frame in list of labeled frames. -insert(index, value: sleap.instance.LabeledFrame)[source]# +insert(index, value: sleap.instance.LabeledFrame)[source]# Insert labeled frame at given index. -instance_count(video: sleap.io.video.Video, frame_idx: int) → int[source]# +instance_count(video: sleap.io.video.Video, frame_idx: int) → int[source]# Return number of instances matching video/frame index. -instances(video: Optional[sleap.io.video.Video] = None, skeleton: Optional[sleap.skeleton.Skeleton] = None)[source]# +instances(video: Optional[sleap.io.video.Video] = None, skeleton: Optional[sleap.skeleton.Skeleton] = None)[source]# Iterate over instances in the labels, optionally with filters. Parameters @@ -1043,13 +1028,13 @@ sleap.io.dataset -classmethod load_file(filename: str, video_search: Optional[Union[Callable, List[str]]] = None, *args, **kwargs)[source]# +classmethod load_file(filename: str, video_search: Optional[Union[Callable, List[str]]] = None, *args, **kwargs)[source]# Load file, detecting format from filename. -classmethod make_video_callback(search_paths: Optional[List] = None, use_gui: bool = False, context: Optional[Dict[str, bool]] = None) → Callable[source]# +classmethod make_video_callback(search_paths: Optional[List] = None, use_gui: bool = False, context: Optional[Dict[str, bool]] = None) → Callable[source]# Create a callback for finding missing videos. 
The callback can be used while loading a saved project and allows the user to find videos which have been moved (or have @@ -1072,13 +1057,13 @@ sleap.io.dataset -static merge_container_dicts(dict_a: Dict, dict_b: Dict) → Dict[source]# +static merge_container_dicts(dict_a: Dict, dict_b: Dict) → Dict[source]# Merge data from dict_b into dict_a. -merge_matching_frames(video: Optional[sleap.io.video.Video] = None)[source]# +merge_matching_frames(video: Optional[sleap.io.video.Video] = None)[source]# Merge LabeledFrame objects that are for the same video frame. Parameters @@ -1089,7 +1074,7 @@ sleap.io.dataset -merge_nodes(base_node: str, merge_node: str)[source]# +merge_nodes(base_node: str, merge_node: str)[source]# Merge two nodes and update data accordingly. Parameters @@ -1113,7 +1098,7 @@ sleap.io.dataset -numpy(video: Optional[Union[sleap.io.video.Video, int]] = None, all_frames: bool = True, untracked: bool = False, return_confidence: bool = False) → numpy.ndarray[source]# +numpy(video: Optional[Union[sleap.io.video.Video, int]] = None, all_frames: bool = True, untracked: bool = False, return_confidence: bool = False) → numpy.ndarray[source]# Construct a numpy array from instance points. Parameters @@ -1155,25 +1140,25 @@ sleap.io.dataset -remove(value: sleap.instance.LabeledFrame)[source]# +remove(value: sleap.instance.LabeledFrame)[source]# Remove given labeled frame. -remove_all_tracks()[source]# +remove_all_tracks()[source]# Remove all tracks from labels, updating (but not removing) instances. -remove_empty_frames()[source]# +remove_empty_frames()[source]# Remove frames with no instances. -remove_empty_instances(keep_empty_frames: bool = True)[source]# +remove_empty_instances(keep_empty_frames: bool = True)[source]# Remove instances with no visible points. Parameters @@ -1190,7 +1175,7 @@ sleap.io.dataset -remove_frame(lf: sleap.instance.LabeledFrame, update_cache: bool = True)[source]# +remove_frame(lf: sleap.instance.LabeledFrame, update_cache: bool = True)[source]# Remove a given labeled frame. Parameters @@ -1205,7 +1190,7 @@ sleap.io.dataset -remove_frames(lfs: List[sleap.instance.LabeledFrame])[source]# +remove_frames(lfs: List[sleap.instance.LabeledFrame])[source]# Remove a list of frames from the labels. Parameters @@ -1216,13 +1201,13 @@ sleap.io.dataset -remove_instance(frame: sleap.instance.LabeledFrame, instance: sleap.instance.Instance, in_transaction: bool = False)[source]# +remove_instance(frame: sleap.instance.LabeledFrame, instance: sleap.instance.Instance, in_transaction: bool = False)[source]# Remove instance from frame, updating track occupancy. -remove_predictions(new_labels: Optional[sleap.io.dataset.Labels] = None)[source]# +remove_predictions(new_labels: Optional[sleap.io.dataset.Labels] = None)[source]# Clear predicted instances from the labels. Useful prior to merging operations to prevent overlapping instances from new predictions. @@ -1245,7 +1230,7 @@ sleap.io.dataset -remove_suggestion(video: sleap.io.video.Video, frame_idx: int)[source]# +remove_suggestion(video: sleap.io.video.Video, frame_idx: int)[source]# Remove a suggestion from the list by video and frame index. Parameters @@ -1259,13 +1244,13 @@ sleap.io.dataset -remove_track(track: sleap.instance.Track)[source]# +remove_track(track: sleap.instance.Track)[source]# Remove a track from the labels, updating (but not removing) instances. 
-remove_untracked_instances(remove_empty_frames: bool = True)[source]# +remove_untracked_instances(remove_empty_frames: bool = True)[source]# Remove instances that do not have a track assignment. Parameters @@ -1277,13 +1262,13 @@ sleap.io.dataset -remove_unused_tracks()[source]# +remove_unused_tracks()[source]# Remove tracks that are not used by any instances. -remove_user_instances(new_labels: Optional[sleap.io.dataset.Labels] = None)[source]# +remove_user_instances(new_labels: Optional[sleap.io.dataset.Labels] = None)[source]# Clear user instances from the labels. Useful prior to merging operations to prevent overlapping instances from new labels. @@ -1306,7 +1291,7 @@ sleap.io.dataset -remove_video(video: sleap.io.video.Video)[source]# +remove_video(video: sleap.io.video.Video)[source]# Remove a video from the labels and all associated labeled frames. Parameters @@ -1317,7 +1302,7 @@ sleap.io.dataset -save(filename: str, with_images: bool = False, embed_all_labeled: bool = False, embed_suggested: bool = False)[source]# +save(filename: str, with_images: bool = False, embed_all_labeled: bool = False, embed_suggested: bool = False)[source]# Save the labels to a file. Parameters @@ -1345,7 +1330,7 @@ sleap.io.dataset -classmethod save_file(labels: sleap.io.dataset.Labels, filename: str, default_suffix: str = '', *args, **kwargs)[source]# +classmethod save_file(labels: sleap.io.dataset.Labels, filename: str, default_suffix: str = '', *args, **kwargs)[source]# Save file, detecting format from filename. Parameters @@ -1366,7 +1351,7 @@ sleap.io.dataset -save_frame_data_hdf5(output_path: str, format: str = 'png', user_labeled: bool = True, all_labeled: bool = False, suggested: bool = False, progress_callback: Optional[Callable[[int, int], None]] = None) → List[sleap.io.video.HDF5Video][source]# +save_frame_data_hdf5(output_path: str, format: str = 'png', user_labeled: bool = True, all_labeled: bool = False, suggested: bool = False, progress_callback: Optional[Callable[[int, int], None]] = None) → List[sleap.io.video.HDF5Video][source]# Write images for labeled frames from all videos to hdf5 file. Note that this will make an HDF5 video, not an HDF5 labels dataset. @@ -1398,7 +1383,7 @@ sleap.io.dataset -save_frame_data_imgstore(output_dir: str = './', format: str = 'png', all_labeled: bool = False, suggested: bool = False, progress_callback: Optional[Callable[[int, int], None]] = None) → List[sleap.io.video.ImgStoreVideo][source]# +save_frame_data_imgstore(output_dir: str = './', format: str = 'png', all_labeled: bool = False, suggested: bool = False, progress_callback: Optional[Callable[[int, int], None]] = None) → List[sleap.io.video.ImgStoreVideo][source]# Write images for labeled frames from all videos to imgstore datasets. This only writes frames that have been labeled. Videos without any labeled frames will be included as empty imgstores. @@ -1432,7 +1417,7 @@ sleap.io.dataset -set_suggestions(suggestions: List[sleap.gui.suggestions.SuggestionFrame])[source]# +set_suggestions(suggestions: List[sleap.gui.suggestions.SuggestionFrame])[source]# Set the suggested frames. @@ -1444,7 +1429,7 @@ sleap.io.dataset -split(n: Union[float, int], copy: bool = True) → Tuple[sleap.io.dataset.Labels, sleap.io.dataset.Labels][source]# +split(n: Union[float, int], copy: bool = True) → Tuple[sleap.io.dataset.Labels, sleap.io.dataset.Labels][source]# Split labels randomly. 
Parameters @@ -1475,7 +1460,7 @@ sleap.io.dataset -to_dict(skip_labels: bool = False) → Dict[str, Any][source]# +to_dict(skip_labels: bool = False) → Dict[str, Any][source]# Serialize all labels to dicts. Serializes the labels in the underling list of LabeledFrames to a dict structure. This function returns a nested dict structure composed entirely of @@ -1506,7 +1491,7 @@ sleap.io.dataset -to_json()[source]# +to_json()[source]# Serialize all labels in the underling list of LabeledFrame(s) to JSON. Returns @@ -1517,7 +1502,7 @@ sleap.io.dataset -to_pipeline(batch_size: Optional[int] = None, prefetch: bool = True, frame_indices: Optional[List[int]] = None, user_labeled_only: bool = True) → sleap.pipelines.Pipeline[source]# +to_pipeline(batch_size: Optional[int] = None, prefetch: bool = True, frame_indices: Optional[List[int]] = None, user_labeled_only: bool = True) → sleap.pipelines.Pipeline[source]# Create a pipeline for reading the dataset. Parameters @@ -1542,13 +1527,13 @@ sleap.io.dataset -track_set_instance(frame: sleap.instance.LabeledFrame, instance: sleap.instance.Instance, new_track: sleap.instance.Track)[source]# +track_set_instance(frame: sleap.instance.LabeledFrame, instance: sleap.instance.Instance, new_track: sleap.instance.Track)[source]# Set track on given instance, updating occupancy. -track_swap(video: sleap.io.video.Video, new_track: sleap.instance.Track, old_track: Optional[sleap.instance.Track], frame_range: tuple)[source]# +track_swap(video: sleap.io.video.Video, new_track: sleap.instance.Track, old_track: Optional[sleap.instance.Track], frame_range: tuple)[source]# Swap track assignment for instances in two tracks. If you need to change the track to or from None, you’ll need to use track_set_instance() for each specific @@ -1600,7 +1585,7 @@ sleap.io.dataset -with_user_labels_only(user_instances_only: bool = True, with_track_only: bool = False, copy: bool = True) → sleap.io.dataset.Labels[source]# +with_user_labels_only(user_instances_only: bool = True, with_track_only: bool = False, copy: bool = True) → sleap.io.dataset.Labels[source]# Return a new Labels containing only user labels. This is useful as a preprocessing step to train on only user-labeled data. @@ -1626,89 +1611,89 @@ sleap.io.dataset -class sleap.io.dataset.LabelsDataCache(labels: Labels)[source]# +class sleap.io.dataset.LabelsDataCache(labels: Labels)[source]# Class for maintaining cache of data in labels dataset. -add_instance(frame: sleap.instance.LabeledFrame, instance: sleap.instance.Instance)[source]# +add_instance(frame: sleap.instance.LabeledFrame, instance: sleap.instance.Instance)[source]# Add an instance to the labels. -add_track(video: sleap.io.video.Video, track: sleap.instance.Track)[source]# +add_track(video: sleap.io.video.Video, track: sleap.instance.Track)[source]# Add a track to the labels. -find_fancy_frame_idxs(video, from_frame_idx, reverse)[source]# +find_fancy_frame_idxs(video, from_frame_idx, reverse)[source]# Return a list of frame idxs, with optional start position/order. -find_frames(video: sleap.io.video.Video, frame_idx: Optional[Union[int, Iterable[int]]] = None) → Optional[List[sleap.instance.LabeledFrame]][source]# +find_frames(video: sleap.io.video.Video, frame_idx: Optional[Union[int, Iterable[int]]] = None) → Optional[List[sleap.instance.LabeledFrame]][source]# Return list of LabeledFrames matching video/frame_idx, or None. 
-get_filtered_frame_idxs(video: Optional[sleap.io.video.Video] = None, filter: str = '') → Set[Tuple[int, int]][source]# +get_filtered_frame_idxs(video: Optional[sleap.io.video.Video] = None, filter: str = '') → Set[Tuple[int, int]][source]# Return list of (video_idx, frame_idx) tuples matching video/filter. -get_frame_count(video: Optional[sleap.io.video.Video] = None, filter: str = '') → int[source]# +get_frame_count(video: Optional[sleap.io.video.Video] = None, filter: str = '') → int[source]# Return (possibly cached) count of frames matching video/filter. -get_track_occupancy(video: sleap.io.video.Video, track: sleap.instance.Track) → sleap.rangelist.RangeList[source]# +get_track_occupancy(video: sleap.io.video.Video, track: sleap.instance.Track) → sleap.rangelist.RangeList[source]# Access track occupancy cache that adds video/track as needed. -get_video_track_occupancy(video: sleap.io.video.Video) → Dict[sleap.instance.Track, sleap.rangelist.RangeList][source]# +get_video_track_occupancy(video: sleap.io.video.Video) → Dict[sleap.instance.Track, sleap.rangelist.RangeList][source]# Return track occupancy information for specified video. -remove_frame(frame: sleap.instance.LabeledFrame)[source]# +remove_frame(frame: sleap.instance.LabeledFrame)[source]# Remove frame and update cache as needed. -remove_instance(frame: sleap.instance.LabeledFrame, instance: sleap.instance.Instance)[source]# +remove_instance(frame: sleap.instance.LabeledFrame, instance: sleap.instance.Instance)[source]# Remove an instance and update the cache as needed. -remove_video(video: sleap.io.video.Video)[source]# +remove_video(video: sleap.io.video.Video)[source]# Remove video and update cache as needed. -track_swap(video: sleap.io.video.Video, new_track: sleap.instance.Track, old_track: Optional[sleap.instance.Track], frame_range: tuple)[source]# +track_swap(video: sleap.io.video.Video, new_track: sleap.instance.Track, old_track: Optional[sleap.instance.Track], frame_range: tuple)[source]# Swap tracks and update cache as needed. -update(new_frame: Optional[sleap.instance.LabeledFrame] = None)[source]# +update(new_frame: Optional[sleap.instance.LabeledFrame] = None)[source]# Build (or rebuilds) various caches. -update_counts_for_frame(frame: sleap.instance.LabeledFrame)[source]# +update_counts_for_frame(frame: sleap.instance.LabeledFrame)[source]# Updated the cached count. Should be called after frame is modified. @@ -1716,7 +1701,7 @@ sleap.io.dataset -sleap.io.dataset.find_path_using_paths(missing_path: str, search_paths: List[str]) → str[source]# +sleap.io.dataset.find_path_using_paths(missing_path: str, search_paths: List[str]) → str[source]# Find a path to a missing file given a set of paths to search in. Parameters @@ -1733,7 +1718,7 @@ sleap.io.dataset -sleap.io.dataset.load_file(filename: str, detect_videos: bool = True, search_paths: Optional[Union[List[str], str]] = None, match_to: Optional[sleap.io.dataset.Labels] = None) → sleap.io.dataset.Labels[source]# +sleap.io.dataset.load_file(filename: str, detect_videos: bool = True, search_paths: Optional[Union[List[str], str]] = None, match_to: Optional[sleap.io.dataset.Labels] = None) → sleap.io.dataset.Labels[source]# Load a SLEAP labels file. SLEAP labels files (slp) contain all the metadata for a labeling project or the predicted labels from a video. 
This includes the skeleton, videos, labeled frames, diff --git a/develop/api/sleap.io.format.adaptor.html b/develop/api/sleap.io.format.adaptor.html index aeb787e8e..8f58998c1 100644 --- a/develop/api/sleap.io.format.adaptor.html +++ b/develop/api/sleap.io.format.adaptor.html @@ -9,7 +9,7 @@ - sleap.io.format.adaptor — SLEAP (v1.4.1a2) + sleap.io.format.adaptor — SLEAP (v1.3.4) @@ -322,7 +322,7 @@ sleap.io.format.adaptor File format adaptor base class. -class sleap.io.format.adaptor.Adaptor[source]# +class sleap.io.format.adaptor.Adaptor[source]# File format adaptor base class. An adaptor handles reading and/or writing a specific file format. To add support for a new file format, you’ll create a new class which inherits from @@ -335,13 +335,13 @@ sleap.io.format.adaptor -can_read_file(file: sleap.io.format.filehandle.FileHandle) → bool[source]# +can_read_file(file: sleap.io.format.filehandle.FileHandle) → bool[source]# Returns whether this adaptor can read this file. -can_write_filename(filename: str) → bool[source]# +can_write_filename(filename: str) → bool[source]# Returns whether this adaptor can write format of this filename. @@ -353,19 +353,19 @@ sleap.io.format.adaptor -does_match_ext(filename: str) → bool[source]# +does_match_ext(filename: str) → bool[source]# Returns whether this adaptor can write format of this filename. -does_read() → bool[source]# +does_read() → bool[source]# Returns whether this adaptor supports reading. -does_write() → bool[source]# +does_write() → bool[source]# Returns whether this adaptor supports writing. @@ -391,13 +391,13 @@ sleap.io.format.adaptor -read(file: sleap.io.format.filehandle.FileHandle) → object[source]# +read(file: sleap.io.format.filehandle.FileHandle) → object[source]# Reads the file and returns the appropriate deserialized object. -write(filename: str, source_object: object)[source]# +write(filename: str, source_object: object)[source]# Writes the object to a file. @@ -405,7 +405,7 @@ sleap.io.format.adaptor -class sleap.io.format.adaptor.SleapObjectType(value)[source]# +class sleap.io.format.adaptor.SleapObjectType(value)[source]# Types of files that an adaptor could read/write. diff --git a/develop/api/sleap.io.format.alphatracker.html b/develop/api/sleap.io.format.alphatracker.html index b96b39528..623ef5bdc 100644 --- a/develop/api/sleap.io.format.alphatracker.html +++ b/develop/api/sleap.io.format.alphatracker.html @@ -9,7 +9,7 @@ - sleap.io.format.alphatracker — SLEAP (v1.4.1a2) + sleap.io.format.alphatracker — SLEAP (v1.3.4) @@ -329,7 +329,7 @@ sleap.io.format.alphatracker create a video object which wraps the individual frame images. -class sleap.io.format.alphatracker.AlphaTrackerAdaptor[source]# +class sleap.io.format.alphatracker.AlphaTrackerAdaptor[source]# Reads AlphaTracker JSON file with annotations for both single and multiple animals. @@ -339,7 +339,7 @@ sleap.io.format.alphatracker -can_read_file(file: sleap.io.format.filehandle.FileHandle) → bool[source]# +can_read_file(file: sleap.io.format.filehandle.FileHandle) → bool[source]# Returns whether this adaptor can read this file. Checks the format of the file at three different levels: - First, the upper-level format of file.json must be a list of dictionaries. @@ -364,7 +364,7 @@ sleap.io.format.alphatracker -can_write_filename(filename: str) → bool[source]# +can_write_filename(filename: str) → bool[source]# Returns whether this adaptor can write format of this filename. 
@@ -376,19 +376,19 @@ sleap.io.format.alphatracker -does_match_ext(filename: str) → bool[source]# +does_match_ext(filename: str) → bool[source]# Returns whether this adaptor can write format of this filename. -does_read() → bool[source]# +does_read() → bool[source]# Returns whether this adaptor supports reading. -does_write() → bool[source]# +does_write() → bool[source]# Returns whether this adaptor supports writing. @@ -400,7 +400,7 @@ sleap.io.format.alphatracker -get_alpha_tracker_frame_dict(filename: str = '')[source]# +get_alpha_tracker_frame_dict(filename: str = '')[source]# Returns a deep copy of the dictionary used for frames. Parameters @@ -418,7 +418,7 @@ sleap.io.format.alphatracker -
Search for labeled frames given video and/or frame index.
Find the first occurrence of a matching labeled frame.
Matches on frames for the given video and/or frame index.
Find the last occurrence of a matching labeled frame.
Find SuggestionFrame by video and frame index.
Get instances for a given video, track, and range of frames.
Finish conflicted merge from complex_merge_between.
Return an iterator over all labeled frames in a video.
Return labeled frames matching key or return None if not found.
This is a safe version of labels[...] that will not raise an exception if the item is not found.
labels[...]
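To illustrate the safe-lookup behavior described above, a minimal sketch; the file path, video choice, and frame index are placeholders, and the exact return type depends on the key as noted in the signature.

from sleap.io.dataset import Labels

labels = Labels.load_file("labels.slp")  # placeholder path
video = labels.videos[0]

# Unlike labels[...], get() does not raise when nothing matches the key;
# with the defaults it returns None (or an empty result) instead.
lf = labels.get((video, 0))
if lf is None:
    print("Frame 0 of this video has not been labeled yet.")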
Return a (video, frame_idx) tuple seeking from given frame.
Return all suggestions as a list of SuggestionFrame items.
Return the number of occupied tracks for a given video.
Return track occupancy list for given video.
Find labeled frames for unlabeled suggestions and return their indices.
This is useful for generating a list of example indices for inference on unlabeled suggestions.
Return a list of suggested frame indices.
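A small hedged sketch of the suggestion queries just listed; the file path is a placeholder and the comments restate the descriptions above rather than verified behavior.

from sleap.io.dataset import Labels

labels = Labels.load_file("labels.slp")  # placeholder path
video = labels.videos[0]

# Suggested frame indices for this video.
suggested = labels.get_video_suggestions(video)

# Indices of labeled frames for suggestions without user labels, useful
# as example indices for running inference on unlabeled suggestions.
unlabeled_inds = labels.get_unlabeled_suggestion_inds()
print(len(suggested), "suggestions;", len(unlabeled_inds), "unlabeled")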
Check if the labels contain a specified frame.
Return index of labeled frame in list of labeled frames.
Insert labeled frame at given index.
Return number of instances matching video/frame index.
Iterate over instances in the labels, optionally with filters.
Load file, detecting format from filename.
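For example, a minimal sketch of loading a project with the classmethod above; the filename and search directory are placeholders.

from sleap.io.dataset import Labels

# Format is detected from the extension; a list of directories (or a
# callback) can be passed as video_search to help relocate videos that
# have moved since the project was saved.
labels = Labels.load_file("session1.slp", video_search=["/data/videos"])
print(len(labels), "labeled frames,", len(labels.videos), "videos")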
Create a callback for finding missing videos.
The callback can be used while loading a saved project and allows the user to find videos which have been moved (or have @@ -1072,13 +1057,13 @@
Merge data from dict_b into dict_a.
Merge LabeledFrame objects that are for the same video frame.
Merge two nodes and update data accordingly.
Construct a numpy array from instance points.
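A short sketch of pulling point coordinates into an array; the path is a placeholder, and the indexing described in the comment (frames, tracks, nodes, xy with scores appended when requested, missing points as NaN) is an assumption rather than something stated on this page.

from sleap.io.dataset import Labels

labels = Labels.load_file("predictions.slp")  # placeholder path

# With return_confidence=True the returned array also carries point
# scores along the last axis (assumed layout, see lead-in).
tracks = labels.numpy(video=labels.videos[0], return_confidence=True)
print(tracks.shape)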
Remove given labeled frame.
Remove all tracks from labels, updating (but not removing) instances.
Remove frames with no instances.
Remove instances with no visible points.
Remove a given labeled frame.
Remove a list of frames from the labels.
Remove instance from frame, updating track occupancy.
Clear predicted instances from the labels.
Useful prior to merging operations to prevent overlapping instances from new predictions.
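As a usage sketch of the cleanup step described above; the file names are placeholders.

from sleap.io.dataset import Labels

labels = Labels.load_file("project.slp")  # placeholder path

# Drop predicted instances while keeping user labels, e.g. before
# merging in a fresh round of predictions.
labels.remove_predictions()
labels.save("project.user_only.slp")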
Remove a suggestion from the list by video and frame index.
Remove a track from the labels, updating (but not removing) instances.
Remove instances that do not have a track assignment.
Remove tracks that are not used by any instances.
Clear user instances from the labels.
Useful prior to merging operations to prevent overlapping instances from new labels.
Remove a video from the labels and all associated labeled frames.
Save the labels to a file.
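A hedged sketch of saving with embedded images; the output name (and the .pkg.slp convention) is illustrative rather than prescribed by this page.

from sleap.io.dataset import Labels

labels = Labels.load_file("project.slp")  # placeholder path

# Embed frame images for labeled and suggested frames so the saved file
# can be opened without access to the original videos.
labels.save("project.pkg.slp", with_images=True, embed_suggested=True)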
Save file, detecting format from filename.
Write images for labeled frames from all videos to hdf5 file.
Note that this will make an HDF5 video, not an HDF5 labels dataset.
Write images for labeled frames from all videos to imgstore datasets.
This only writes frames that have been labeled. Videos without any labeled frames will be included as empty imgstores.
Set the suggested frames.
Split labels randomly.
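To illustrate, a minimal train/validation split; treating n as the size of the first returned Labels is an assumption based on the Union[float, int] signature.

from sleap.io.dataset import Labels

labels = Labels.load_file("project.slp")  # placeholder path

# n may be a float fraction or an int count; copy=True is the default.
labels_train, labels_val = labels.split(n=0.8)
print(len(labels_train), len(labels_val))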
Serialize all labels to dicts.
Serializes the labels in the underlying list of LabeledFrames to a dict structure. This function returns a nested dict structure composed entirely of @@ -1506,7 +1491,7 @@
Serialize all labels in the underlying list of LabeledFrame(s) to JSON.
Create a pipeline for reading the dataset.
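A small sketch of building a reading pipeline, assuming only what the signature above states; the batch size is a placeholder.

from sleap.io.dataset import Labels

labels = Labels.load_file("project.slp")  # placeholder path

# Batched, prefetched pipeline over user-labeled frames only.
pipeline = labels.to_pipeline(batch_size=4, user_labeled_only=True)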
Set track on given instance, updating occupancy.
Swap track assignment for instances in two tracks.
If you need to change the track to or from None, you’ll need to use track_set_instance() for each specific @@ -1600,7 +1585,7 @@
track_set_instance()
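For a single re-assignment, a hedged sketch using track_set_instance() as suggested above; the Track constructor arguments and the attribute access on the labeled frame are assumptions about sleap.instance rather than something documented on this page.

from sleap.io.dataset import Labels
from sleap.instance import Track

labels = Labels.load_file("project.slp")  # placeholder path

lf = labels[0]               # a labeled frame
inst = lf.instances[0]       # the instance to move
new_track = Track(spawned_on=lf.frame_idx, name="animal_1")  # assumed args

# Assign the instance to new_track and update track occupancy, which a
# plain attribute assignment would not do.
labels.track_set_instance(lf, inst, new_track)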
Return a new Labels containing only user labels.
This is useful as a preprocessing step to train on only user-labeled data.
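As a quick usage sketch of the preprocessing step just described; the path is a placeholder.

from sleap.io.dataset import Labels

labels = Labels.load_file("project.slp")  # placeholder path

# Keep only user-labeled data, e.g. before exporting a training set.
user_labels = labels.with_user_labels_only()
print(len(user_labels), "user-labeled frames")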
Class for maintaining cache of data in labels dataset.
Add an instance to the labels.
Add a track to the labels.
Return a list of frame idxs, with optional start position/order.
Return list of LabeledFrames matching video/frame_idx, or None.
Return list of (video_idx, frame_idx) tuples matching video/filter.
Return (possibly cached) count of frames matching video/filter.
Access track occupancy cache that adds video/track as needed.
Return track occupancy information for specified video.
Remove frame and update cache as needed.
Remove an instance and update the cache as needed.
Remove video and update cache as needed.
Swap tracks and update cache as needed.
Build (or rebuild) various caches.
Update the cached count. Should be called after the frame is modified.
Find a path to a missing file given a set of paths to search in.
Load a SLEAP labels file.
SLEAP labels files (slp) contain all the metadata for a labeling project or the predicted labels from a video. This includes the skeleton, videos, labeled frames, diff --git a/develop/api/sleap.io.format.adaptor.html b/develop/api/sleap.io.format.adaptor.html index aeb787e8e..8f58998c1 100644 --- a/develop/api/sleap.io.format.adaptor.html +++ b/develop/api/sleap.io.format.adaptor.html @@ -9,7 +9,7 @@ -
An adaptor handles reading and/or writing a specific file format. To add support for a new file format, you’ll create a new class which inherits from @@ -335,13 +335,13 @@
Returns whether this adaptor can read this file.
Returns whether this adaptor can write format of this filename.
Returns whether this adaptor supports reading.
Returns whether this adaptor supports writing.
Reads the file and returns the appropriate deserialized object.
Writes the object to a file.
Types of files that an adaptor could read/write.
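To make the adaptor interface above more concrete, a rough sketch of what a read-only subclass might look like; the property names (handles, default_ext, all_exts, name), the SleapObjectType.labels member, and the FileHandle.filename attribute used here are assumptions inferred from this page and may not match the actual base class exactly.

from sleap.io.dataset import Labels
from sleap.io.format.adaptor import Adaptor, SleapObjectType
from sleap.io.format.filehandle import FileHandle


class MyFormatAdaptor(Adaptor):
    """Hypothetical read-only adaptor for a '.myfmt' annotations file."""

    @property
    def handles(self):
        return SleapObjectType.labels  # assumed enum member

    @property
    def default_ext(self):
        return "myfmt"

    @property
    def all_exts(self):
        return ["myfmt"]

    @property
    def name(self):
        return "My custom format"

    def can_read_file(self, file: FileHandle) -> bool:
        # Cheap check: rely on the extension match provided by the base class.
        return self.does_match_ext(file.filename)

    def can_write_filename(self, filename: str) -> bool:
        return False

    def does_read(self) -> bool:
        return True

    def does_write(self) -> bool:
        return False

    def read(self, file: FileHandle, *args, **kwargs) -> Labels:
        # Parse the file and construct LabeledFrames here; returning an
        # empty Labels keeps this sketch runnable.
        return Labels()

    def write(self, filename: str, source_object: object):
        raise NotImplementedError("This sketch is read-only.")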
Reads AlphaTracker JSON file with annotations for both single and multiple animals.
Checks the format of the file at three different levels: first, the upper-level format of file.json must be a list of dictionaries. @@ -364,7 +364,7 @@
Returns a deep copy of the dictionary used for frames.