With let_it_be_3D we want to extend the functions of aniposelib and bring them into a pipeline structure. Our goals are to require as few manual steps as possible and to standardize quality assurance and the collection of metadata.

We provide additional methods for:
- video synchronisation
- adjustment for different framerates
- validation of the Anipose calibration
- adjustment of intrinsic calibrations to video croppings
- manual marker detection
- checking and correcting filenames
- normalisation of the 3D triangulated dataframe
See the pipeline flowchart!
```mermaid
flowchart TD;
    video_dir_R(Recording directory) ~~~ video_dir_C(Calibration directory);
    id1(Recording object) ~~~ id2(Calibration object) ~~~ id3(Calibration validation objects);
    subgraph Processing recording videos
        video_dir_R --> |Get video metadata \nfrom filename and recording config| id1;
        id1 --> |Temporal synchronisation| id4>DeepLabCut analysis and downsampling];
    end
    subgraph Processing calibration videos
        video_dir_C --> |Get video metadata \nfrom filename and recording config| id2 & id3;
        id2 --> |Temporal synchronisation| id5>Video downsampling];
        id3 --> id6>Marker detection];
    end
    id5 --> id7{Anipose calibration};
    subgraph Calibration validation
        id7 --> id8[/Good calibration reached?/];
        id6 --> id8;
    end
    subgraph Triangulation
        id8 --> |No| id7;
        id8 --> |Yes| id9>Triangulation];
        id4 --> id9 --> id10>Normalization];
        id10 --> id11[(Database)];
    end
```
Pipeline explained Step-by-Step!
- read video metadata from filename and recording config file
- intrinsic calibrations
  - use anipose intrinsic calibration
  - run or load intrinsic calibration based on uncropped checkerboard videos
  - adjust intrinsic calibration for video cropping
- synchronize videos temporally based on a blinking signal (see the sketch below)
- run marker detection on videos manually or using DeepLabCut networks
- write videos and marker detection files to the same framerate
- run extrinsic Anipose camera calibration
- validate calibration based on known distances and angles (ground truth) between calibration validation markers
- triangulate recordings
- rotate dataframe, translate to origin, normalize to centimeter
- add metadata to database
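The temporal synchronization is handled internally by `run_synchronization`. As a rough illustration of the underlying idea, here is a minimal, hypothetical sketch that locates the onset of a blinking LED from the mean frame brightness; the function name and the threshold heuristic are ours, not part of the package:

```python
import cv2
import numpy as np

def find_blink_onset(video_path: str, threshold_factor: float = 0.5) -> int:
    """Return the index of the first frame whose mean brightness rises
    above a cutoff between baseline and peak, e.g. when a sync LED turns on."""
    capture = cv2.VideoCapture(video_path)
    brightness = []
    while True:
        frame_returned, frame = capture.read()
        if not frame_returned:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        brightness.append(gray.mean())
    capture.release()
    brightness = np.asarray(brightness)
    # cutoff halfway (by default) between darkest and brightest frame
    cutoff = brightness.min() + threshold_factor * (brightness.max() - brightness.min())
    return int(np.argmax(brightness > cutoff))

# Two cameras can then be aligned by trimming the frames recorded before
# the common blink, e.g.:
# offset = find_blink_onset("cam_Side.mp4") - find_blink_onset("cam_Top.mp4")
```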
```bash
# Clone this repository
$ git clone https://github.com/retune-commons/let_it_be_3D.git

# Go to the folder in which you cloned the repository
$ cd let_it_be_3D

# Install dependencies
# First, install DeepLabCut into a new environment as described here:
# https://deeplabcut.github.io/DeepLabCut/docs/installation.html
$ conda env update --file env.yml

# Open Walkthrough.ipynb in jupyter lab
$ jupyter lab

# Update project_config.yaml to your needs and you're good to go!
```
### Calibration
```python
from pathlib import Path

from core.triangulation_calibration_module import Calibration

rec_config = Path("test_data/Server_structure/Calibrations/220922/recording_config_220922.yaml")
calibration_object = Calibration(
    calibration_directory=rec_config.parent,
    recording_config_filepath=rec_config,
    project_config_filepath="test_data/project_config.yaml",
    output_directory=rec_config.parent,
)
calibration_object.run_synchronization()
calibration_object.run_calibration(verbose=2)
```
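Judging by the test data, the calibration result is stored as an Anipose-style .toml file (here: `220922_0_Bottom_Ground1_Ground2_Side1_Side2_Side3.toml`), which is then passed to `run_triangulation` as `calibration_toml_filepath` in the examples below.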
### TriangulationRecordings
```python
from core.triangulation_calibration_module import TriangulationRecordings

rec_config = "test_data/Server_structure/Calibrations/220922/recording_config_220922.yaml"
directory = "test_data/Server_structure/VGlut2-flp/September2022/206_F2-63/220922_OTE/"
triangulation_object = TriangulationRecordings(
    directory=directory,
    recording_config_filepath=rec_config,
    project_config_filepath="test_data/project_config.yaml",
    recreate_undistorted_plots=True,
    output_directory=directory,
)
triangulation_object.run_synchronization()
triangulation_object.exclude_markers(
    all_markers_to_exclude_config_path="test_data/markers_to_exclude_config.yaml",
    verbose=False,
)
triangulation_object.run_triangulation(
    calibration_toml_filepath="test_data/Server_structure/Calibrations/220922/220922_0_Bottom_Ground1_Ground2_Side1_Side2_Side3.toml"
)
normalised_path, normalisation_error = triangulation_object.normalize(
    normalization_config_path="test_data/normalization_config.yaml"
)
```
### CalibrationValidation
```python
from pathlib import Path

from core.triangulation_calibration_module import CalibrationValidation

rec_config = Path("test_data/Server_structure/Calibrations/220922/recording_config_220922.yaml")
calibration_validation_object = CalibrationValidation(
    project_config_filepath="test_data/project_config.yaml",
    directory=rec_config.parent,
    recording_config_filepath=rec_config,
    recreate_undistorted_plots=True,
    output_directory=rec_config.parent,
)
calibration_validation_object.add_ground_truth_config("test_data/ground_truth_config.yaml")
calibration_validation_object.get_marker_predictions()
calibration_validation_object.run_triangulation(
    calibration_toml_filepath="test_data/Server_structure/Calibrations/220922/220922_0_Bottom_Ground1_Ground2_Side1_Side2_Side3.toml",
    triangulate_full_recording=True,
)
(
    mean_dist_err_percentage,
    mean_angle_err,
    reprojerr_nonan_mean,
) = calibration_validation_object.evaluate_triangulation_of_calibration_validation_markers()
```
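Going by the variable names, the three returned values summarize calibration quality: the mean error of the triangulated distances relative to the ground truth (in percent), the mean angle error, and the mean reprojection error with NaN values excluded.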
### Video filename

- calibration:
  - has to be a video ([".AVI", ".avi", ".mov", ".mp4"]), including recording_date (YYMMDD), calibration_tag (as defined in project_config) and cam_id (an element of valid_cam_ids in project_config)
  - recording_date and calibration_tag have to be separated by an underscore ("_")
  - f"{recording_date}_{calibration_tag}_{cam_id}" = Example: "220922_charuco_Front.mp4"
- calibration_validation:
  - has to be a video or image ([".bmp", ".tiff", ".png", ".jpg", ".AVI", ".avi", ".mp4"]), including recording_date (YYMMDD), calibration_validation_tag (as defined in project_config) and cam_id (an element of valid_cam_ids in project_config)
  - recording_date and calibration_validation_tag have to be separated by an underscore ("_")
  - calibration_validation_tag must not be "calvin"
  - f"{recording_date}_{calibration_validation_tag}_{cam_id}" = Example: "220922_position_Top.jpg"
- recording:
  - has to be a video ([".AVI", ".avi", ".mov", ".mp4"]), including recording_date (YYMMDD), cam_id (an element of valid_cam_ids in project_config), mouse_line (an element of animal_lines in project_config), animal_id (beginning with "F", split by "-" and followed by a number) and paradigm (an element of paradigms in project_config)
  - recording_date, cam_id, mouse_line, animal_id and paradigm have to be separated by underscores ("_")
  - f"{recording_date}_{cam_id}_{mouse_line}_{animal_id}_{paradigm}.mp4" = Example: "220922_Side_206_F2-12_OTT.mp4"
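A quick way to check your filenames against these conventions before starting the pipeline is a small regex script. This is a hypothetical sketch; the tags, camera ids, mouse line and paradigms below are placeholders for the values defined in your project_config.yaml:

```python
import re

# Placeholder values -- use the calibration_tag, valid_cam_ids,
# animal_lines and paradigms from your project_config.yaml.
valid_cam_ids = ["Bottom", "Front", "Ground1", "Ground2", "Side", "Top"]
cam = "|".join(valid_cam_ids)

calibration_pattern = re.compile(
    rf"^\d{{6}}_charuco_({cam})\.(AVI|avi|mov|mp4)$"
)
calibration_validation_pattern = re.compile(
    rf"^\d{{6}}_position_({cam})\.(bmp|tiff|png|jpg|AVI|avi|mp4)$"
)
recording_pattern = re.compile(
    rf"^\d{{6}}_({cam})_206_F\d+-\d+_(OTT|OTE|OF)\.(AVI|avi|mov|mp4)$"
)

assert calibration_pattern.match("220922_charuco_Front.mp4")
assert calibration_validation_pattern.match("220922_position_Top.jpg")
assert recording_pattern.match("220922_Side_206_F2-12_OTT.mp4")
```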
### Folder structure

- A folder in which a recording is stored should match the following structure to be detected automatically:
  - has to start with the recording_date (YYMMDD)
  - has to end with one of the paradigms (as defined in project_config)
  - recording_date and paradigm have to be separated by an underscore ("_")
  - f"{recording_date}_{paradigm}" = Example: "230427_OF"
Please see our API documentation here!
GNU General Public License v3.0
This is a Defense Circuits Lab project. The pipeline was designed by Konstantin Kobel, Dennis Segebarth and Michael Schellenberger. At the Sfb-Retune Hackathon 2022, Elisa Garulli, Robert Peach and Veronika Selzam joined the taskforce to push the project towards completion.
If you want to help with writing this pipeline, please get in touch.