
DeepGrow

Andres Diaz-Pinto edited this page Jul 1, 2021 · 24 revisions

DeepGrow is an interactive segmentation model in which the user guides the segmentation with positive and negative clicks. Positive clicks steer the segmentation towards the region of interest, while negative clicks mark background regions to be excluded (cf. [1]).

The training process of a DeepGrow model differs from traditional deep learning segmentation because positive and negative guidance (clicks) are simulated during training. The positive and negative guidance maps are derived from the false negatives and false positives of the model's current predictions. Both DeepGrow 2D and 3D annotate a single label at a time; DeepGrow 2D annotates the image one slice at a time, whereas DeepGrow 3D can annotate whole volumes.
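The click-simulation idea above can be sketched in a few lines. This is a minimal, hypothetical NumPy illustration, not the MONAI implementation: it samples one positive click from the false-negative region and one negative click from the false-positive region of a binary prediction.

```python
import numpy as np

def simulate_clicks(pred, label, rng=None):
    """Sample one positive and one negative click from discrepancy maps.

    pred, label: binary arrays of the same shape (model prediction and
    ground-truth mask). Hypothetical helper for illustration only.
    Returns (positive_click, negative_click); either may be None if the
    corresponding discrepancy region is empty.
    """
    rng = rng or np.random.default_rng()
    # Missed foreground -> candidates for positive clicks.
    false_neg = (label == 1) & (pred == 0)
    # Spurious foreground -> candidates for negative clicks.
    false_pos = (label == 0) & (pred == 1)

    def sample(mask):
        coords = np.argwhere(mask)
        if len(coords) == 0:
            return None
        return tuple(coords[rng.integers(len(coords))])

    return sample(false_neg), sample(false_pos)
```

In the real training pipeline, sampled clicks are further converted into guidance maps (extra input channels for the network); the sketch stops at the click locations.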


DeepGrow as a model can generalize to multiple imaging modalities, such as Magnetic Resonance Imaging (MRI) and Computed Tomography (CT).

It can also be used as an application in the MONAI Label framework, where the user can train DeepGrow in the background while simultaneously using it to annotate more samples and add them to the training data pool.

Please note: DeepGrow training, for both 2D and 3D, is performed on pairs of image and binary label mask data.
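Because DeepGrow segments one label at a time, a multi-class ground-truth mask has to be reduced to a binary mask for each training pair. A minimal sketch of that preprocessing step (an assumption about the data preparation, not the exact MONAI transform):

```python
import numpy as np

def to_binary_mask(label, target_label):
    """Reduce a multi-class label mask to the binary mask of one structure.

    Illustrative only: DeepGrow training pairs consist of an image and a
    binary mask, so each annotated structure yields its own pair.
    """
    return (np.asarray(label) == target_label).astype(np.uint8)
```

For a volume with labels {0: background, 1: spleen, 2: liver}, calling `to_binary_mask(mask, 2)` produces the liver-only binary mask used for one DeepGrow training pair.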

References:

[1] Sakinis, Tomas, et al. "Interactive segmentation of medical images through fully convolutional neural networks." arXiv preprint arXiv:1903.08205 (2019).
