Add an example demonstrating how to intake a MONAI model (#2)
jackpaparian authored Mar 29, 2023
1 parent 49b8acf commit 621c020
Showing 8 changed files with 348 additions and 11 deletions.
100 changes: 90 additions & 10 deletions README.md
@@ -8,9 +8,10 @@ their models on the [Bunkerhill Health](https://www.bunkerhillhealth.com/) infer…
1. [Overview](#overview)
2. [Deploying your model](#deploying-your-model)
3. [ModelRunner](#modelrunner)
4. [Example model: hippocampus segmentation with nnU-Net V1](#example-model-hippocampus-segmentation-using-nnunet)
5. [Example model: MONAI's FlexibleUNet](#example-model-monaiflexibleunet)
6. [Inputs](#inputs)
7. [Outputs](#outputs)

## Overview

@@ -27,7 +28,8 @@ You'll need the following components to run model inference on Bunkerhill:
file to download your model's [PyPI](https://pypi.org/) dependencies
- A Dockerfile to hermetically build your model with its dependencies in a Docker image. The
[Dockerfile for the hippocampus model](bunkerhill/examples/hippocampus/Dockerfile) provides an example
that includes CUDA drivers on an Ubuntu 22.04 image. Depending on where your pretrained weights are
stored, the Dockerfile will either need to copy them or download them into the Docker image.
- Test cases to assess model correctness. We also ask that you transfer test data so Bunkerhill
can continue to measure correctness throughout deployment. An example of model tests is provided
for the hippocampus example model at
@@ -150,11 +152,19 @@ docker build \
-f bunkerhill/examples/hippocampus/Dockerfile
```

### Define a local directory for inference inputs and outputs

The model requires a directory where the inference inputs and outputs can be read and written. For
ease of use, consider defining this directory within your home directory.
```shell
export DATA_DIRNAME=/path/to/home_directory/model_dir
mkdir -m=775 -p $DATA_DIRNAME
```

### Run unit tests

To run the hippocampus model unit tests, run:
```shell
docker run -it \
--mount type=bind,source=${DATA_DIRNAME},target=/data \
hippocampus \
python bunkerhill/examples/hippocampus/test_model.py
```
@@ -167,14 +177,13 @@ docker run -it \

To run the hippocampus model as a server awaiting `InferenceRequest` messages, run:
```shell
docker run -it \
--mount type=bind,source=${DATA_DIRNAME},target=/data \
hippocampus \
python bunkerhill/examples/hippocampus/model.py
```

#### Generate hippocampus inference input

To generate input for this model, use the
[`nifti_to_modelrunner_input.py`](bunkerhill/utils/nifti_to_modelrunner_input.py) utility to
@@ -189,10 +198,9 @@
Once `Task04_Hippocampus.tar` has been unpacked, run the following command to convert the NIfTI
files into a `ModelRelease` input file:
```shell
export NIFTI_FILENAME=/path/to/Task004_Hippocampus/imagesTs/hippocampus_002_0000.nii.gz
export STUDY_UUID=77d1b303-f8b2-4aca-a84c-6c102d3625e1
export SERIES_UUID=e57ac58e-c0e8-44ab-be7e-4d17b32f6a8f
python bunkerhill/utils/nifti_to_modelrunner_input.py \
--nifti_filename=${NIFTI_FILENAME} \
--data_dirname=${DATA_DIRNAME} \
--study_uuid=${STUDY_UUID} \
--series_uuid=${SERIES_UUID}
```
@@ -211,7 +219,79 @@ pickled dictionary of inputs named `${STUDY_UUID}_input.pkl`. After running inf…
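To sanity-check the generated input file before starting the server, a quick look at the pickle
can help. This snippet is illustrative, not an SDK utility, and makes no assumptions about the
pickle's schema beyond it being loadable:
```python
import os
import pickle

# Schema-agnostic peek at the generated input file.
data_dirname = os.environ['DATA_DIRNAME']
study_uuid = os.environ['STUDY_UUID']
with open(os.path.join(data_dirname, f'{study_uuid}_input.pkl'), 'rb') as f:
    inputs = pickle.load(f)

print(type(inputs))
if isinstance(inputs, dict):
    for key, value in inputs.items():
        print(key, type(value))
```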
Once the server has started, `client_cli.py` can send `InferenceRequest` messages to the `ModelRunner`:
```shell
export STUDY_UUID=77d1b303-f8b2-4aca-a84c-6c102d3625e1
python bunkerhill/utils/client_cli.py \
--socket_dirname=${DATA_DIRNAME} \
--mounted_data_dirname=/data \
--study_uuid=${STUDY_UUID}
```

Once inference has finished, the `ModelRunner` will write `${STUDY_UUID}_output.pkl` to the mounted
filesystem path and send an `InferenceResponse` back to the client.
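A minimal sketch for reading that result from Python follows. It assumes the output pickle maps
output attribute names to per-series ndarrays, mirroring what the example models return from
`inference()` (the MONAI example below, for instance, writes a single `unet_seg_pred` entry):
```python
import os
import pickle

# Assumed schema: {output_attribute_name: {series_instance_uid: ndarray}},
# inferred from the example models' return values.
data_dirname = os.environ['DATA_DIRNAME']
study_uuid = os.environ['STUDY_UUID']
with open(os.path.join(data_dirname, f'{study_uuid}_output.pkl'), 'rb') as f:
    outputs = pickle.load(f)

for attribute_name, series_map in outputs.items():
    for series_uid, array in series_map.items():
        print(attribute_name, series_uid, array.shape, array.dtype)
```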

## Example model: MonaiFlexibleUNet

The [MonaiFlexibleUNet](bunkerhill/examples/monai/model.py) model demonstrates how to
make a [MONAI](https://monai.io/core.html) model compatible with the Bunkerhill SDK. It
wraps a
[pretrained PyTorch model](https://github.com/Project-MONAI/MONAI/blob/be3d13869d9e0060d17a794d97d528d4e4dcc1fc/monai/networks/nets/flexible_unet.py#L217)
defined in the MONAI Core library.
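
To see the wrapped network in isolation, the following standalone sketch (not part of the SDK)
instantiates the same `FlexibleUNet` configuration the example uses and runs a forward pass on a
dummy image; the 224x224 input shape is illustrative:
```python
import torch
from monai.networks.nets import FlexibleUNet

# Same configuration as the example model: a 2D U-Net decoder over a
# pretrained efficientnet-b0 encoder (weights are downloaded on first use).
model = FlexibleUNet(
    in_channels=1, out_channels=1, backbone='efficientnet-b0', pretrained=True
)
model.eval()

# Dummy single-channel image: (batch, channel, height, width). The spatial
# dims are an assumption; they must suit the encoder's downsampling stages.
x = torch.zeros(1, 1, 224, 224)
with torch.no_grad():
    y = model(x)
print(y.shape)  # torch.Size([1, 1, 224, 224])
```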

### Build Docker image

The MonaiFlexibleUNet model is run as a Docker container. To build the example model, run:
```shell
docker build \
--build-arg USER_ID=$(id -u) \
-t monai:latest \
. \
-f bunkerhill/examples/monai/Dockerfile
```

### Define a local directory for inference inputs and outputs

The model requires a directory where the inference inputs and outputs can be read and written. For
ease of use, consider defining this directory within your home directory.
```shell
export DATA_DIRNAME=/path/to/home_directory/model_dir
mkdir -m=775 -p $DATA_DIRNAME
```

### Run unit tests

To run the `MonaiFlexibleUNet` unit tests, run:
```shell
docker run -it \
--mount type=bind,source=${DATA_DIRNAME},target=/data \
monai \
python bunkerhill/examples/monai/test_model.py
```

### Interact with `ModelRunner` server

#### Start server

To run the `MonaiFlexibleUNet` model as a server awaiting `InferenceRequest` messages, run:
```shell
docker run -it \
--mount type=bind,source=${DATA_DIRNAME},target=/data \
monai \
python bunkerhill/examples/monai/model.py
```

#### Generate `MonaiFlexibleUNet` inference input

To generate input for this model, use the
[`nifti_to_modelrunner_input.py`](bunkerhill/utils/nifti_to_modelrunner_input.py) utility to
convert an example NIfTI image into the expected input format. Follow the above
[Generate hippocampus inference input](#generate-hippocampus-inference-input)
guide to generate example inputs from NIfTI files.
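
Alternatively, to smoke-test the server without converting a NIfTI file, you can write a
synthetic input pickle directly. The schema below is an assumption inferred from the example
model's `inference()` signature (a dict mapping the `pixel_array` input attribute to per-series
ndarrays), not documented SDK behavior:
```python
import os
import pickle

import numpy as np

# Assumed schema: {input_attribute_name: {series_instance_uid: ndarray}},
# inferred from MonaiFlexibleUNet.inference(); verify against your SDK version.
data_dirname = os.environ['DATA_DIRNAME']
study_uuid = '77d1b303-f8b2-4aca-a84c-6c102d3625e1'
series_uuid = 'e57ac58e-c0e8-44ab-be7e-4d17b32f6a8f'

inputs = {'pixel_array': {series_uuid: np.zeros((1, 224, 224), dtype=np.int16)}}
with open(os.path.join(data_dirname, f'{study_uuid}_input.pkl'), 'wb') as f:
    pickle.dump(inputs, f)
```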

#### Send `InferenceRequest` messages

Once the server has started, `client_cli.py` can send `InferenceRequest` messages to the `ModelRunner`:
```shell
export STUDY_UUID=77d1b303-f8b2-4aca-a84c-6c102d3625e1
python bunkerhill/utils/client_cli.py \
--socket_dirname=${DATA_DIRNAME} \
--mounted_data_dirname=/data \
--study_uuid=${STUDY_UUID}
```
5 changes: 5 additions & 0 deletions bunkerhill/examples/hippocampus/Dockerfile
@@ -95,5 +95,10 @@ ENV PYTHONUNBUFFERED=1

# Add a new user "host_user"
RUN useradd -u ${USER_ID} host_user

# Make host_user the owner of the /app folder and update folder permissions
RUN chown -R host_user:host_user /app
RUN chmod 755 /app

# Change to non-root privilege
USER host_user
103 changes: 103 additions & 0 deletions bunkerhill/examples/monai/Dockerfile
@@ -0,0 +1,103 @@
################################################
# REQUIRED SECTION:
# The section below must be kept for all models.
################################################

FROM ubuntu:jammy-20221101

# Sets arguments
ARG USER_ID

# Sets working directory
WORKDIR app

# Installs utilities
RUN apt update -y \
&& apt install -y curl vim wget

# Installs CUDA libraries for GPU support
# see for template: https://gitlab.com/nvidia/container-images/cuda/-/blob/master/dist/11.7.0/ubuntu2204/base/Dockerfile
ENV PATH /usr/local/nvidia/bin:/usr/local/cuda/bin:${PATH}
ENV LD_LIBRARY_PATH /usr/local/nvidia/lib:/usr/local/nvidia/lib64

ENV NVARCH x86_64

ENV CUDA_VERSION 11.7.0
ENV NV_CUDA_CUDART_VERSION 11.7.60-1
ENV NV_CUDA_COMPAT_PACKAGE cuda-compat-11-7
ENV NV_CUDA_LIB_VERSION 11.7.0-1

# see for environment variables: https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/user-guide.html#environment-variables-oci-spec
ENV NVIDIA_VISIBLE_DEVICES all
ENV NVIDIA_DRIVER_CAPABILITIES compute,utility
ENV NVIDIA_REQUIRE_CUDA "cuda>=11.7 brand=tesla,driver>=450,driver<451 brand=tesla,driver>=470,driver<471 brand=unknown,driver>=470,driver<471 brand=nvidia,driver>=470,driver<471 brand=nvidiartx,driver>=470,driver<471 brand=quadrortx,driver>=470,driver<471 brand=unknown,driver>=510,driver<511 brand=nvidia,driver>=510,driver<511 brand=nvidiartx,driver>=510,driver<511 brand=quadrortx,driver>=510,driver<511"

RUN apt install -y --no-install-recommends \
gnupg2 \
ca-certificates
# see: https://github.com/NVIDIA/nvidia-docker/issues/1632#issuecomment-1125739652
RUN curl -fsSL https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2204/${NVARCH}/3bf863cc.pub | apt-key add -
RUN echo "deb https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2204/${NVARCH} /" > /etc/apt/sources.list.d/cuda.list
RUN rm -rf /var/lib/apt/lists/*
# see for package versions: https://ubuntu.pkgs.org/22.04/cuda-amd64/
RUN apt-get update && apt-get install -y --no-install-recommends \
cuda-cudart-11-7=${NV_CUDA_CUDART_VERSION} \
cuda-libraries-11-7=${NV_CUDA_LIB_VERSION} \
${NV_CUDA_COMPAT_PACKAGE} \
&& ln -s cuda-11.7 /usr/local/cuda \
&& rm -rf /var/lib/apt/lists/*
RUN echo "/usr/local/nvidia/lib" >> /etc/ld.so.conf.d/nvidia.conf \
&& echo "/usr/local/nvidia/lib64" >> /etc/ld.so.conf.d/nvidia.conf

# Installs Python 3.9
RUN apt update -y \
&& apt install -y software-properties-common
RUN DEBIAN_FRONTEND="noninteractive" add-apt-repository ppa:deadsnakes/ppa
RUN DEBIAN_FRONTEND="noninteractive" apt install -y python3.9-dev

# Installing python3.9 does not replace python3.10, which on ubuntu:jammy is
# the default python3. To override this, manually delete and recreate the
# python3 symbolic link.
RUN rm /usr/bin/python3
RUN ln -s python3.9 /usr/bin/python3

RUN DEBIAN_FRONTEND="noninteractive" apt install -y \
python-is-python3 \
python3-gdcm \
python3-opencv \
python3-pip \
python3.9-distutils

###############################################################
# CUSTOM SECTION:
# The section below is specific to the MonaiFlexibleUNet model.
# Please customize it with the commands needed for your model.
###############################################################

# If needed, copy custom pretrained weights for your model into the Docker image.
# The example MonaiFlexibleUNet model downloads its pretrained weights at start-up time for
# simplicity, but your model's weights will likely need to be copied into the Docker image as
# shown below:
# COPY path/to/local/model_weights.pth model_weights.pth

# Install PyPI requirements for this example
COPY bunkerhill/examples/monai/requirements.txt requirements.txt
RUN pip install -r requirements.txt

# Prepares /app/model_release
COPY bunkerhill bunkerhill
COPY README.md README.md
COPY setup.py setup.py
RUN pip install --editable .

ENV PYTHONUNBUFFERED=1

# Add a new user "host_user"
RUN useradd -u ${USER_ID} host_user

# Make host_user the owner of the /app folder and update folder permissions
RUN chown -R host_user:host_user /app
RUN chmod 755 /app

# Change to non-root privilege
USER host_user
77 changes: 77 additions & 0 deletions bunkerhill/examples/monai/model.py
@@ -0,0 +1,77 @@
"""The class definition and model server entrypoint for the MonaiFlexibleUNet model."""

from typing import Dict

import numpy as np
import torch

from monai.networks.nets import FlexibleUNet

from bunkerhill.base_model import BaseModel
from bunkerhill.bunkerhill_types import Outputs, SeriesInstanceUID
from bunkerhill.model_runner import ModelRunner


class MonaiFlexibleUNet(BaseModel):
"""A wrapper around the trained nnUNet model for the MSD Hippocampus model.
Attributes:
_model: The pretrained PyTorch model to call at inference time.
"""
_SEGMENTATION_OUTPUT_ATTRIBUTE_NAME: str = 'unet_seg_pred'

def __init__(self):
# Set model directory where pretrained model weights will be downloaded and unpacked.
torch.hub.set_dir('/app')

# Try loading a standard Torch model from a pth.tar file downloaded from HuggingFace Hub
# Try compiling that model with torch.compile() for speed
self.device = 'cuda' if torch.cuda.is_available() else 'cpu'
self.model = FlexibleUNet(
in_channels=1, out_channels=1, backbone='efficientnet-b0', pretrained=True
)

# Move model to GPU if available
self.model.to(self.device)

# Set model to eval mode
self.model.eval()

def inference(self, pixel_array: Dict[SeriesInstanceUID, np.ndarray]) -> Outputs:
"""Runs inference on the pixel array for a DICOM series.
Args:
pixel_array: A dict mapping the DICOM series UID to its pixel array.
Returns:
A dictionary containing the output segmentation and softmax ndarrays.
"""
# Ensure pixel_array dict only contains a single series.
if len(pixel_array) > 1:
raise ValueError(f'Model only accepts a single series. {len(pixel_array)} were passed.')

# Convert series pixel array from np.ndarray array to torch.Tensor.
series_instance_uid = list(pixel_array.keys())[0]
series_pixel_array = torch.from_numpy(pixel_array[series_instance_uid])

# Move series_pixel_array to GPU if available and cast dtype to float32
series_pixel_array = series_pixel_array.to(self.device, dtype=torch.float32)

# Add batch dimension to series_pixel_array
series_pixel_array = series_pixel_array.unsqueeze(0)

# Run inference
with torch.no_grad():
segmentation = self.model(series_pixel_array)

# Resize segmentation, move it to CPU, convert dtype to int16, and convert to ndarray.
segmentation = segmentation.squeeze().to('cpu', dtype=torch.int16).numpy()

# Convert nnUNet segmentation and softmax tensors into output attributes.
return {self._SEGMENTATION_OUTPUT_ATTRIBUTE_NAME: {series_instance_uid: segmentation}}


if __name__ == '__main__':
model = MonaiFlexibleUNet()
model_runner = ModelRunner(model)
model_runner.start_run_loop()
2 changes: 2 additions & 0 deletions bunkerhill/examples/monai/requirements.txt
@@ -0,0 +1,2 @@
monai==1.1.0
torch==2.0.0