All code was developed and tested with Python 3.8.2 (Anaconda) and PyTorch 1.7.1.
$ conda create -n segmentation python=3.8.2
$ conda activate segmentation
$ pip install -r segmentation_requirements.txt
Download the pretrained models from https://github.com/HRNet/HRNet-Image-Classification and update the config file at '/segmentation_model/HRNetV2_W64_OCR/configs/hrnet_w64_seg_ocr.yaml' to point to them.
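In HRNet-style configs the pretrained weights are typically referenced through a MODEL.PRETRAINED entry. As an illustration only (field names and the file path below are assumptions based on the upstream HRNet repositories; check the actual yaml in this repo):

```yaml
MODEL:
  NAME: hrnet_w64_ocr
  # Assumed path to the downloaded HRNetV2-W64 ImageNet weights --
  # adjust to wherever you saved the checkpoint.
  PRETRAINED: 'pretrained_models/hrnetv2_w64_imagenet_pretrained.pth'
```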
$ python segmentation_model/HRNetV2_W64_OCR/HRNetV2_W64_OCR/train.py
The training script has a number of command-line flags that you can use to configure the model architecture, hyperparameters, and input/output settings:
seed
: random seed. Default is 16
epochs
: number of epochs to train. Default is 25
batch_size
: input batch size for training. Default is 12
lr
: learning rate. Default is 1e-5
name
: name of the model in Wandb
log_every
: logging interval. Default is 25
vis_every
: image logging interval. Default is 10