Note that this README is automatically generated - don't edit! See more info.
- Category: Modular MLPerf benchmarks.
- CM GitHub repository: mlcommons@ck
- GitHub directory for this script: GitHub
- CM meta description for this script: _cm.yaml
- CM "database" tags to find this script: app,mlcommons,mlperf,inference,nvidia-harness,nvidia
- Output cached?: False
Pull the CM repository containing this script:

cm pull repo mlcommons@ck

Print help:

cm run script --help
Run this script from the command line in one of three equivalent ways (by tags, by human-readable name, or by unique ID):

- cm run script --tags=app,mlcommons,mlperf,inference,nvidia-harness,nvidia[,variations] [--input_flags]
- cm run script "app mlcommons mlperf inference nvidia-harness nvidia[,variations]" [--input_flags]
- cm run script 689e865b0059479b [--input_flags]
Run this script from Python:
import cmind

r = cmind.access({'action':'run',
                  'automation':'script',
                  'tags':'app,mlcommons,mlperf,inference,nvidia-harness,nvidia',
                  'out':'con',
                  ...
                  (other input keys for this script)
                  ...
                 })

if r['return']>0:
    print (r['error'])
Compose the input for this script via a GUI:

cm run script --tags=gui --script="app,mlcommons,mlperf,inference,nvidia-harness,nvidia"

Alternatively, use the online GUI to generate the CM command.
TBD
No group (any variation can be selected)
- _batch_size.#
  - Environment variables:
    - CM_MODEL_BATCH_SIZE: None
  - Workflow:
- _cuda
  - Environment variables:
    - CM_MLPERF_DEVICE: gpu
    - CM_MLPERF_DEVICE_LIB_NAMESPEC: cudart
  - Workflow:
Group "device"
- _cpu (default)
  - Environment variables:
    - CM_MLPERF_DEVICE: cpu
  - Workflow:
Group "framework"
- _pytorch
  - Environment variables:
    - CM_MLPERF_BACKEND: pytorch
  - Workflow:
Group "model"
- _resnet50 (default)
  - Environment variables:
    - CM_MODEL: resnet50
  - Workflow:
- _retinanet
  - Environment variables:
    - CM_MODEL: retinanet
  - Workflow:
Default variations: _cpu,_resnet50
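Variations are selected by appending them, with their leading underscore, to the script tags. Here is a minimal sketch via the Python API that swaps both defaults (it assumes a CUDA-capable host; the combination is illustrative):

```python
import cmind

# Illustrative: pick the _cuda device variation and the _retinanet model
# variation instead of the defaults (_cpu,_resnet50).
r = cmind.access({'action': 'run',
                  'automation': 'script',
                  'tags': 'app,mlcommons,mlperf,inference,nvidia-harness,nvidia,_cuda,_retinanet',
                  'out': 'con'})
if r['return'] > 0:
    print(r['error'])
```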
Script flags mapped to environment variables:
- --count=value → CM_MLPERF_LOADGEN_QUERY_COUNT=value
- --max_batchsize=value → CM_MLPERF_LOADGEN_MAX_BATCHSIZE=value
- --mlperf_conf=value → CM_MLPERF_CONF=value
- --mode=value → CM_MLPERF_LOADGEN_MODE=value
- --output_dir=value → CM_MLPERF_OUTPUT_DIR=value
- --performance_sample_count=value → CM_MLPERF_LOADGEN_PERFORMANCE_SAMPLE_COUNT=value
- --scenario=value → CM_MLPERF_LOADGEN_SCENARIO=value
- --user_conf=value → CM_MLPERF_USER_CONF=value
The above CLI flags can be used in the Python CM API as follows:

r=cm.access({... , "count":...})
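For instance, a sketch passing several of these flags through the Python API (the values are placeholders, not recommended settings):

```python
import cmind as cm

# Each key mirrors a CLI flag from the list above; CM maps it to the
# corresponding CM_MLPERF_* environment variable.
r = cm.access({'action': 'run',
               'automation': 'script',
               'tags': 'app,mlcommons,mlperf,inference,nvidia-harness,nvidia',
               'count': '10',          # -> CM_MLPERF_LOADGEN_QUERY_COUNT
               'mode': 'performance',  # -> CM_MLPERF_LOADGEN_MODE
               'scenario': 'Offline',  # -> CM_MLPERF_LOADGEN_SCENARIO
               'out': 'con'})
if r['return'] > 0:
    print(r['error'])
```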
Default environment keys:

These keys can be updated via --env.KEY=VALUE, via the env dictionary in @input.json, or using script flags.
- CM_BATCH_COUNT: 1
- CM_BATCH_SIZE: 1
- CM_FAST_COMPILATION: yes
- CM_MLPERF_LOADGEN_SCENARIO: Offline
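As a sketch, the same kind of override from the Python API uses the env dictionary (equivalent to --env.CM_FAST_COMPILATION=no on the command line):

```python
import cmind

# Override one of the default environment keys for this run only.
r = cmind.access({'action': 'run',
                  'automation': 'script',
                  'tags': 'app,mlcommons,mlperf,inference,nvidia-harness,nvidia',
                  'env': {'CM_FAST_COMPILATION': 'no'},
                  'out': 'con'})
if r['return'] > 0:
    print(r['error'])
```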
Dependencies on other CM scripts:
- Read "deps" on other CM scripts from meta
- detect,os
- CM script: detect-os
- detect,cpu
- CM script: detect-cpu
- get,sys-utils-cm
- CM script: get-sys-utils-cm
- get,cuda,_cudnn
- CM script: get-cuda
- get,tensorrt
- CM script: get-tensorrt
- get,generic,sys-util,_glog-dev
- CM script: get-generic-sys-util
- get,generic,sys-util,_gflags-dev
- CM script: get-generic-sys-util
- get,loadgen
- CM names:
--adr.['loadgen']...
- CM script: get-mlperf-inference-loadgen
- CM names:
- get,mlcommons,inference,src
- CM names:
--adr.['inference-src']...
- CM script: get-mlperf-inference-src
- CM names:
- get,nvidia,mlperf,inference,common-code
- CM names:
--adr.['nvidia-inference-common-code']...
- CM script: get-mlperf-inference-nvidia-common-code
- CM names:
- get,dataset,preprocessed,imagenet,_NCHW
if (CM_MODEL == resnet50)
- CM names:
--adr.['imagenet-preprocessed']...
- CM script: get-preprocessed-dataset-imagenet
- get,ml-model,resnet50,_onnx
if (CM_MODEL == resnet50)
- CM names:
--adr.['ml-model', 'resnet50-model']...
- CM script: get-ml-model-resnet50
- CM script: get-ml-model-resnet50-tvm
- get,dataset,preprocessed,openimages,_validation,_NCHW
if (CM_MODEL == retinanet)
- CM names:
--adr.['openimages-preprocessed']...
- CM script: get-preprocessed-dataset-openimages
- get,ml-model,retinanet,_onnx,_fp32
if (CM_MODEL == retinanet)
- CM names:
--adr.['ml-model', 'retinanet-model']...
- CM script: get-ml-model-retinanet
- generate,nvidia,engine
if (CM_MLPERF_DEVICE != cpu)
- CM names:
--adr.tensorrt-engine-generator...
- CM script: generate-nvidia-engine
- generate,user-conf,mlperf,inference
- CM names:
--adr.['user-conf-generator']...
- CM script: generate-mlperf-inference-user-conf
- CM names:
- detect,os
- Run "preprocess" function from customize.py
- Read "prehook_deps" on other CM scripts from meta
- Run native script if exists
- Read "posthook_deps" on other CM scripts from meta
- Run "postrocess" function from customize.py
- Read "post_deps" on other CM scripts from meta
- compile,cpp-program
- CM names:
--adr.['compile-program']...
- CM script: compile-program
- CM names:
- benchmark,program
- CM names:
--adr.['runner']...
- CM script: benchmark-program
- CM names:
- compile,cpp-program
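The --adr.['name']... markers above show the CM names under which each dependency can be customized. A hedged sketch, assuming the Python-API counterpart of --adr.<name>.<key>=<value> is a nested adr dictionary (the version value below is hypothetical):

```python
import cmind

# Hypothetical: pass an extra input to the 'loadgen' dependency by its CM name,
# mirroring --adr.loadgen.version=... on the command line.
r = cmind.access({'action': 'run',
                  'automation': 'script',
                  'tags': 'app,mlcommons,mlperf,inference,nvidia-harness,nvidia',
                  'adr': {'loadgen': {'version': 'master'}},
                  'out': 'con'})
if r['return'] > 0:
    print(r['error'])
```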
New environment keys exported by this script:

- CM_DATASET_*
- CM_HW_NAME
- CM_MLPERF_*
- CM_MLPERF_CONF
- CM_MLPERF_USER_CONF
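When the script runs via the Python API, the exported keys should be visible in the result. A minimal sketch, assuming the script automation returns them under new_env (standard CM behavior, but worth verifying for your CM version):

```python
import cmind

r = cmind.access({'action': 'run',
                  'automation': 'script',
                  'tags': 'app,mlcommons,mlperf,inference,nvidia-harness,nvidia',
                  'out': 'con'})
if r['return'] > 0:
    raise SystemExit(r['error'])

# Print the environment keys exported by this script, per the filter above.
for key, value in sorted(r.get('new_env', {}).items()):
    if key.startswith(('CM_DATASET', 'CM_MLPERF')) or key == 'CM_HW_NAME':
        print(f'{key}={value}')
```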