Original Paper: arXiv
Code Source | Link | License |
---|---|---|
Original Caffe Source | GitHub | BSD (LICENSE.weiliu.ssd) |
MLPerf Reference Source (Inference) | GitHub | Apache V2.0 (LICENSE.mlperf.inference) |
MLPerf Reference Source (Training) | GitHub | Apache V2.0 (LICENSE.mlperf.training) |
Unofficial Impl (amdegroot) | GitHub | MIT (LICENSE.amdegroot.ssd) |
Unofficial Impl (kuangliu) | GitHub | MIT (LICENSE.kuangliu.ssd) |
Download the COCO 2017 dataset, mirroring what MLPerf's download script fetches:

```shell
$ cd /path/to/coco
$ curl -O http://images.cocodataset.org/zips/train2017.zip && unzip train2017.zip
$ curl -O http://images.cocodataset.org/zips/val2017.zip && unzip val2017.zip
$ curl -O http://images.cocodataset.org/annotations/annotations_trainval2017.zip && unzip annotations_trainval2017.zip
```
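After extraction, the annotation files can be sanity-checked with a small stdlib-only script. This is a sketch, not part of the repository; the `annotations/instances_val2017.json` path is an assumption based on the default layout of `annotations_trainval2017.zip`:

```python
import json
from collections import Counter

def summarize_coco(annotation_dict):
    """Summarize a COCO-format annotation dict: image count,
    category count, and annotation count per category name."""
    cats = {c["id"]: c["name"] for c in annotation_dict["categories"]}
    per_cat = Counter(a["category_id"] for a in annotation_dict["annotations"])
    return {
        "num_images": len(annotation_dict["images"]),
        "num_categories": len(cats),
        "annotations_per_category": {cats[k]: v for k, v in per_cat.items()},
    }

if __name__ == "__main__":
    # Assumed path: adjust to wherever the zip was extracted.
    with open("annotations/instances_val2017.json") as f:
        print(summarize_coco(json.load(f)))
```

COCO 2017 val should report 5000 images and 80 categories if the download completed cleanly.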
Example | Description |
---|---|
train_ssdrn34_coco.py | Train the official MLPerf config for SSD-Resnet34 on COCO 2017. |
pred_ssdrn34_coco.py | Predicts a few example images from the COCO 2017 validation set and dumps the results to TensorBoard. |
list_coco_cats.py | List the categories in COCO 2017. |
stream_ssd.py | PyQt app that reads webcam input and runs SSD-Resnet34 on the feed. |
How to run:

```shell
# Train SSDRN34
$ python -m ssdrn34.examples.train_ssdrn34_coco
```
Level 2: In addition to being based on reference code (Level 1), the model in this repository has been checked to produce similar loss values (by eyeball comparison) when trained on the provided reference training data.
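The eyeball comparison above can be made slightly more mechanical with an element-wise tolerance check between this repository's loss curve and the reference run's. This helper and the 10% relative tolerance are assumptions for illustration, not part of the MLPerf conformance definition:

```python
def losses_roughly_match(ref_losses, test_losses, rel_tol=0.1):
    """Return True if each test loss is within rel_tol relative
    error of the corresponding reference loss."""
    if len(ref_losses) != len(test_losses):
        return False
    return all(
        abs(t - r) <= rel_tol * abs(r)
        for r, t in zip(ref_losses, test_losses)
    )
```

Feeding it per-epoch losses from the reference run and a local run gives a quick pass/fail signal instead of a manual plot comparison.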