[Feature] Support hardware-accelerated video encoding and storage #17
Labels: enhancement
For real-time imaging devices, it is often key to record video data at high quality while keeping the file size manageable. In the open-source community, video encoding is typically done through FFmpeg, which provides many options to tailor the encoding to the task at hand and also supports hardware-accelerated encoding through NVENC, Intel Quick Sync, and others. Yet making proper use of it is not straightforward. Several bindings exist for FFmpeg (e.g. https://github.com/kkroening/ffmpeg-python), but as far as I am aware none of them support encoding from GPU buffers directly (i.e. without a copy back to the CPU).
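For context, here is a minimal sketch of what is achievable today with ffmpeg-python: the frames are compressed by NVENC, but they still have to be copied to the CPU and piped into the ffmpeg process as raw bytes. The encoder name `h264_nvenc` and all parameters below are illustrative and assume an FFmpeg build with NVENC support.

```python
import ffmpeg  # ffmpeg-python bindings
import numpy as np

width, height, fps = 1920, 1080, 30

# Launch an ffmpeg subprocess that reads raw RGB frames from stdin and
# encodes them with the NVENC H.264 encoder (requires an FFmpeg build
# with NVENC enabled and an NVIDIA GPU).
process = (
    ffmpeg
    .input("pipe:", format="rawvideo", pix_fmt="rgb24",
           s=f"{width}x{height}", framerate=fps)
    .output("out.mp4", vcodec="h264_nvenc", pix_fmt="yuv420p")
    .overwrite_output()
    .run_async(pipe_stdin=True)
)

for _ in range(300):
    # In a real pipeline this frame would come from the imaging device;
    # note the GPU-to-CPU copy implied by handing a NumPy array to stdin.
    frame = np.zeros((height, width, 3), dtype=np.uint8)
    process.stdin.write(frame.tobytes())

process.stdin.close()
process.wait()
```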
Some interesting notes on encoding GPU buffers through FFmpeg + NVENC without going back to the CPU can be found here: https://stackoverflow.com/questions/49862610/opengl-to-ffmpeg-encode
It would be great if such a use case could be covered in MONAI Stream.
Keeping FFmpeg in the loop would make it possible to implement a graceful fallback to CPU-based encoding, but if tapping directly into NVENC is easier, that alone would already be a strong addition to the library. A sketch of such a fallback follows below.
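The fallback could be as simple as probing the available encoders before building the pipeline. A rough sketch, where the helper name and the choice of `libx264` as the CPU fallback are assumptions rather than an existing MONAI Stream API:

```python
import subprocess

def pick_h264_encoder() -> str:
    """Return 'h264_nvenc' if the local FFmpeg build exposes it, else 'libx264'."""
    result = subprocess.run(
        ["ffmpeg", "-hide_banner", "-encoders"],
        capture_output=True, text=True, check=True,
    )
    return "h264_nvenc" if "h264_nvenc" in result.stdout else "libx264"
```

The selected encoder name could then be passed straight to the `vcodec` argument in the sketch above.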
There is an interesting code sample on using NVENC directly with OpenCV's GpuMat here:
https://github.com/zteffi/OpenCV-GpuMat-NVIDIA-VIDEO-SDK-Encoder-Sample
Additionally, in line with #74, it would be great to expose such a hardware-accelerated video writer through a simple functional API, similar to how it is done with OpenCV's VideoWriter:
https://docs.opencv.org/5.x/dd/d43/tutorial_py_video_display.html
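To make the desired API shape concrete, here is a minimal sketch of how OpenCV's contrib `cudacodec` module already exposes an NVENC-backed writer that consumes GpuMat frames directly. This assumes an OpenCV build with the `cudacodec` module enabled against the NVIDIA Video Codec SDK; the factory signature has changed between OpenCV releases, so treat the argument list as indicative rather than exact.

```python
import cv2

width, height, fps = 1920, 1080, 30.0

# cv2.cudacodec.createVideoWriter wraps NVENC and accepts GpuMat frames,
# so no device-to-host copy is needed before encoding.
writer = cv2.cudacodec.createVideoWriter(
    "out.mp4", (width, height), cv2.cudacodec.H264, fps
)

for _ in range(300):
    # In a real pipeline this GpuMat would already live on the GPU,
    # e.g. the output of a CUDA-based preprocessing or inference step.
    gpu_frame = cv2.cuda_GpuMat(height, width, cv2.CV_8UC3)
    gpu_frame.setTo((0, 0, 0))
    writer.write(gpu_frame)

writer.release()
```

A MONAI Stream writer with the same create-then-write shape, accepting frames that already live on the GPU, would cover the use case described above.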