diff --git a/MLPerf_Compatibility_Table.adoc b/MLPerf_Compatibility_Table.adoc
index d2e35e4..67a40ee 100644
--- a/MLPerf_Compatibility_Table.adoc
+++ b/MLPerf_Compatibility_Table.adoc
@@ -8,23 +8,24 @@ X: Change in benchmark. Submission results can be compared across rounds when th
 == Training
 
 |===
-|Model |0.5 |0.6 |0.7 |1.0 |1.1 |2.0 |2.1 |3.0 | 3.1 | 4.0
-|ResNet-50 v1.5 |X 9+|X
-|SSD-ResNet34 |X 4+|X 5+|N/A
-|RetinaNet-ResNeXt50 5+|N/A 5+|X
-|MaskRCNN |X 8+|X |N/A
-|NCF |X 9+|N/A
-|NMT |X 2+|X 7+|N/A
-|Transformer |X 2+|X 7+|N/A
-|MiniGo |X |X 5+|X 3+|N/A
-|DLRM 2+|N/A 5+|X 3+|N/A
-|DLRM-dcnv2 7+|N/A 3+|X
-|BERT 3+|N/A 7+|X
-|RNN-T 3+|N/A |X 5+|X |N/A
-|3D U-Net 3+|N/A 7+|X
-|GPT3 7+|N/A 3+|X
-|LLama70B-LoRA 9+|N/A |X
-|RGAT 9+|N/A |X
+|Model |0.5 |0.6 |0.7 |1.0 |1.1 |2.0 |2.1 |3.0 | 3.1 | 4.0 | 4.1
+|ResNet-50 v1.5 |X 9+|X |N/A
+|SSD-ResNet34 |X 4+|X 6+|N/A
+|RetinaNet-ResNeXt50 5+|N/A 6+|X
+|MaskRCNN |X 8+|X 2+|N/A
+|NCF |X 10+|N/A
+|NMT |X 2+|X 8+|N/A
+|Transformer |X 2+|X 8+|N/A
+|MiniGo |X |X 5+|X 4+|N/A
+|DLRM 2+|N/A 5+|X 4+|N/A
+|DLRM-dcnv2 7+|N/A 4+|X
+|BERT 3+|N/A 8+|X
+|RNN-T 3+|N/A |X 5+|X 2+|N/A
+|3D U-Net 3+|N/A 7+|X |N/A
+|GPT3 7+|N/A 4+|X
+|Stable Diffusionv2 8+|N/A 3+|X
+|LLama70B-LoRA 9+|N/A 2+|X
+|RGAT 9+|N/A 2+|X
 |===
 
 Metric: Time-to-train (measured in minutes)
diff --git a/submission_rules.adoc b/submission_rules.adoc
index bba8fb6..cdabc79 100644
--- a/submission_rules.adoc
+++ b/submission_rules.adoc
@@ -430,10 +430,9 @@
 Here is the list of mandatory files for all submissions in any division/category
 
 * model-info.json
 * .json
 
-For some models mlperf_log_accuracy.json can get very large. Because of this we truncate mlperf_log_accuracy.log in submissions
+For some models mlperf_log_accuracy.json can get very large. Because of this we truncate mlperf_log_accuracy.json in submissions
 using a tool.
-A submiter will run the tool before submitting to mlperf and ***keep*** the original mlperf_log_accuracy.log files inside their organization.
-The original files might be requested by mlperf during submission review so you need to store them.
+A submitter will run the tool before submitting to MLPerf, submit the truncated mlperf_log_accuracy.json files, and keep the original files inside their organization.
 Run the tool as follows, assuming is your local submission tree and the location of the github submission repo:
 ```