Experiments with the reference methods used to benchmark ViM-UNet, as described in our preprint (accepted to MIDL 2024 as a short paper):
Here are the detailed instructions on how to install nnU-Net.
TLDR:
- Install PyTorch
- Install nnU-Net from source:
$ git clone https://github.com/MIC-DKFZ/nnUNet.git
$ cd nnUNet
$ pip install -e .
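To verify the installation, you can check that PyTorch sees a GPU and that nnU-Net's command-line entry points are registered (assuming nnU-Net v2, which installs the nnUNetv2_* commands):
$ python -c "import torch; print(torch.__version__, torch.cuda.is_available())"
$ nnUNetv2_plan_and_preprocess -h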
Here are the detailed instructions on how to install U-Mamba.
Below are my installation steps (shared here as some parts needed extra attention):
- Create a new mamba environment:
$ mamba create -n umamba python=3.10 -y
$ mamba activate umamba
- Install PyTorch:
$ mamba install pytorch torchvision torchaudio pytorch-cuda=11.8 -c pytorch -c nvidia
- Install packaging:
$ pip install packaging
- Set CUDA_HOME: it needs to match the installed CUDA version and must be visible on the path. For HLRN users, here's an example:
$ export CUDA_HOME=/usr/local/cuda-11.8/
- Install causal-conv1d:
$ pip install causal-conv1d==1.1.1
- Install Mamba (a sanity check for the full stack follows after these steps):
$ pip install mamba-ssm
- Clone the repository and install U-Mamba from source (we store the data at U-Mamba/data for performing the experiments; see the layout sketch after these steps):
$ git clone https://github.com/bowang-lab/U-Mamba.git
$ cd U-Mamba/umamba
$ pip install -e .
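Once all of the above is installed, a minimal sanity check along the lines below (adapted from the usage example in the mamba-ssm README; the tensor sizes are arbitrary) confirms that the GPU build of PyTorch is active and that the Mamba CUDA kernels load:

import torch
from mamba_ssm import Mamba

# The selective-scan CUDA kernels in mamba-ssm require a GPU build of PyTorch.
assert torch.cuda.is_available(), "CUDA not available - check the pytorch-cuda install"

# Tiny forward pass: batch 2, sequence length 64, model dimension 16 (arbitrary sizes).
block = Mamba(d_model=16, d_state=16, d_conv=4, expand=2).to("cuda")
x = torch.randn(2, 64, 16, device="cuda")
y = block(x)
print(y.shape)  # expected: torch.Size([2, 64, 16])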
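As for the data layout, here is a sketch of the folder structure we assume under U-Mamba/data, following the standard nnU-Net v2 convention (nnUNet_raw / nnUNet_preprocessed / nnUNet_results); that U-Mamba expects exactly these names is an assumption based on it building on nnU-Net:
$ mkdir -p U-Mamba/data/nnUNet_raw U-Mamba/data/nnUNet_preprocessed U-Mamba/data/nnUNet_results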
To cite our paper:
@inproceedings{archit2024vimunet,
  title={ViM-{UN}et: Vision Mamba for Biomedical Segmentation},
  author={Anwai Archit and Constantin Pape},
  booktitle={Medical Imaging with Deep Learning},
  year={2024},
  url={https://openreview.net/forum?id=PYNwysgFeP}
}