From 984fd20f278c968836f5645a04ff9f063249f232 Mon Sep 17 00:00:00 2001
From: Grigori Fursin
Date: Sat, 15 Apr 2023 14:14:45 +0100
Subject: [PATCH] fixed broken links

---
 ck/README.md | 16 ++--
 ck/ck/kernel.py | 8 +-
 ck/ck/repo/module/ck-platform/README.md | 2 +-
 .../ck-platform/ck_032630d041b4fd8a/config.py | 4 +-
 ck/ck/repo/module/package/module.py | 4 +-
 ck/docs/mlperf-automation/README.md | 8 +-
 .../mlperf-automation/components/README.md | 2 +-
 ck/docs/mlperf-automation/datasets/README.md | 2 +-
 ck/docs/mlperf-automation/dse/README.md | 6 +-
 .../mlperf-automation/inference/workflow.md | 2 +-
 ck/docs/mlperf-automation/reproduce/README.md | 6 +-
 .../reproduce/ck-1b165548d8adbe4d.md | 2 +-
 .../reproduce/ck-3c77b273b4c7d878.md | 2 +-
 .../reproduce/ck-3e0ad4b09998375d.md | 2 +-
 .../reproduce/ck-4f1a470a8a034bc3.md | 2 +-
 .../reproduce/ck-6582273dd3646e28.md | 4 +-
 .../reproduce/ck-94cc7bdd1f23cce3.md | 2 +-
 .../reproduce/ck-9fb65e57d8c61db4.md | 2 +-
 .../reproduce/ck-a399f837b48b0d1b.md | 2 +-
 .../reproduce/ck-ae88dc4516a7084e.md | 4 +-
 .../reproduce/ck-b14c70816eca59c6.md | 4 +-
 .../reproduce/ck-c3d81b4b869e8e07.md | 2 +-
 ...image-classification-jetson-nano-tflite.md | 2 +-
 .../ck-image-classification-rpi4-tflite.md | 2 +-
 ...age-classification-x86-64-openvino-2019.md | 2 +-
 .../ck-image-classification-x86-64-tflite.md | 2 +-
 .../ck-image-classification-x86-64-tflite2.md | 2 +-
 .../ck-object-detection-rpi4-coral-tflite.md | 2 +-
 .../ck-object-detection-rpi4-tflite.md | 2 +-
 .../reproduce/ck-object-detection-x86-64.md | 2 +-
 .../demo-webcam-object-detection-x86-64.md | 4 +-
 .../mlperf-automation/results/ck-dashboard.md | 2 +-
 ck/docs/mlperf-automation/setup/common.md | 2 +-
 ck/docs/mlperf-automation/submit/README.md | 4 +-
 ck/docs/mlperf-automation/tasks/README.md | 6 +-
 .../tasks/task-image-classification.md | 6 +-
 .../tasks/task-object-detection.md | 6 +-
 .../tasks/task-recommendation.md | 4 +-
 ck/docs/mlperf-automation/tbd/automation.md | 2 +-
 ck/docs/mlperf-automation/tools/ck.md | 2 +-
 .../tools/continuous-integration.md | 2 +-
 .../mlperf-inference-v1.1-submission-demo.md | 2 +-
 ...-automating-mlperf-with-tvm-and-ck-demo.md | 2 +-
 ...-2021-automating-mlperf-with-tvm-and-ck.md | 6 +-
 ck/docs/src/commands.md | 16 ++--
 ck/docs/src/first-steps.md | 6 +-
 ck/docs/src/how-to-contribute.md | 2 +-
 ck/docs/src/installation.md | 2 +-
 ck/docs/src/introduction.md | 74 +++++++++----------
 ck/docs/src/misc.md | 2 +-
 ck/docs/src/portable-workflows.md | 40 +++++-----
 ck/docs/src/typical-usage.md | 28 +++----
 ck/incubator/cbench/README.md | 24 +++---
 ck/incubator/cbench/cbench/config.py | 4 +-
 ck/incubator/cbench/setup.py | 2 +-
 cm-mlops/CONTRIBUTING.md | 2 +-
 cm-mlops/README.md | 2 +-
 cm-mlops/automation/cache/_cm.json | 2 +-
 cm-mlops/automation/docker/_cm.json | 2 +-
 cm-mlops/automation/experiment/_cm.json | 2 +-
 cm-mlops/automation/project/_cm.json | 2 +-
 cm-mlops/automation/script/_cm.json | 2 +-
 cm-mlops/automation/utils/_cm.json | 2 +-
 .../info.html | 20 ++--
 .../participate-request-asplos2018/info.html | 20 ++--
 cm-mlops/challenge/repro-asplos2020/info.html | 2 +-
 cm-mlops/challenge/repro-mlsys2020/info.html | 6 +-
 .../repro-request-asplos2018/info.html | 14 ++--
 cm-mlops/challenge/repro-sml2020/info.html | 6 +-
 cm-mlops/script/activate-python-venv/_cm.json | 2 +-
 .../README-extra.md | 2 +-
 .../app-mlperf-inference-cpp/README-extra.md | 2 +-
 .../script/app-mlperf-inference-cpp/_cm.yaml | 2 +-
 .../app-mlperf-inference-reference/_cm.yaml | 2 +-
 .../app-mlperf-inference/README-extra.md | 2 +-
 cm-mlops/script/app-mlperf-inference/_cm.yaml | 2 +-
 .../_cm.yaml | 2 +-
 cm-mlops/script/gui/_cm.yaml | 2 +-
 .../_cm.yaml | 2 +-
 .../run-mlperf-inference-app/README-extra.md | 2 +-
 cm/cmind/repo/automation/automation/_cm.json | 2 +-
 cm/cmind/repo/automation/ck/_cm.json | 2 +-
 cm/cmind/repo/automation/core/_cm.json | 2 +-
 cm/cmind/repo/automation/repo/_cm.json | 2 +-
 docs/archive/taskforce-2022.md | 10 +-
 docs/artifact-evaluation/checklist.md | 4 +-
 docs/artifact-evaluation/faq.md | 2 +-
 docs/artifact-evaluation/reviewing.md | 4 +-
 docs/artifact-evaluation/submission.md | 12 +-
 docs/history.md | 2 +-
 docs/tutorials/mlperf-inference-submission.md | 2 +-
 docs/tutorials/sc22-scc-mlperf-part2.md | 2 +-
 docs/tutorials/sc22-scc-mlperf-part3.md | 2 +-
 docs/tutorials/sc22-scc-mlperf.md | 2 +-
 docs/tutorials/scripts.md | 2 +-
 95 files changed, 256 insertions(+), 256 deletions(-)

diff --git a/ck/README.md b/ck/README.md
index 21272fc6d0..d0201a7903 100644
--- a/ck/README.md
+++ b/ck/README.md
@@ -93,7 +93,7 @@ and implement the following reusable automation recipes in the CK format:
   [Datacenter results](https://mlcommons.org/en/inference-datacenter-11),
   [Edge results](https://mlcommons.org/en/inference-edge-11)
 - [Reproducibility studies for MLPerf inference benchmark v1.1 automated by CK](https://github.com/mlcommons/ck/tree/master/docs/mlperf-automation/reproduce#reproducibility-reports-mlperf-inference-benchmark-v11)
- - [Design space exploration of ML/SW/HW stacks and customizable visualization](https://cknowledge.io/result/crowd-benchmarking-mlperf-inference-classification-mobilenets-all)
+ - [Design space exploration of ML/SW/HW stacks and customizable visualization](https://cknow.io/result/crowd-benchmarking-mlperf-inference-classification-mobilenets-all)
 
 Please contact [Grigori Fursin](https://www.linkedin.com/in/grigorifursin)
 if you are interested to join this community effort!
@@ -123,7 +123,7 @@ The latest version of the CK automation suite supported by MLCommons™:
 ## Current projects
 
 * [Automating MLPerf(tm) inference benchmark and packing ML models, data sets and frameworks as CK components with a unified API and meta description](https://github.com/mlcommons/ck/blob/master/docs/mlperf-automation/README.md)
-* Developing customizable dashboards for MLPerf™ to help end-users select ML/SW/HW stacks on a Pareto frontier: [aggregated MLPerf™ results]( https://cknowledge.io/?q="mlperf-inference-all" )
+* Developing customizable dashboards for MLPerf™ to help end-users select ML/SW/HW stacks on a Pareto frontier: [aggregated MLPerf™ results]( https://cknow.io/?q="mlperf-inference-all" )
 * Providing a common format to share artifacts at ML, systems and other conferences: [video](https://youtu.be/DIkZxraTmGM), [Artifact Evaluation](https://cTuning.org/ae)
 * Redesigning CK together with the community based on user feedback: [incubator](https://github.com/mlcommons/ck/tree/master/incubator)
 * [Other real-world use cases](https://cKnowledge.org/partners.html) from MLPerf™, Qualcomm, Arm, General Motors, IBM, the Raspberry Pi foundation, ACM and other great partners;
@@ -255,13 +255,13 @@ to make it easier to integrate them with web services and CI platforms as descri
 
 ## CK portal
 
-We have developed the [cKnowledge.io portal](https://cKnowledge.io) to help the community
+We have developed the [cKnowledge.io portal](https://cknow.io) to help the community
 organize and find all the CK workflows and components similar to PyPI:
-* [Search CK components](https://cKnowledge.io)
-* [Browse CK components](https://cKnowledge.io/browse)
-* [Find reproduced results from papers]( https://cKnowledge.io/reproduced-results )
-* [Test CK workflows to benchmark and optimize ML Systems]( https://cKnowledge.io/demo )
+* [Search CK components](https://cknow.io)
+* [Browse CK components](https://cknow.io/browse)
+* [Find reproduced results from papers]( https://cknow.io/reproduced-results )
+* [Test CK workflows to benchmark and optimize ML Systems]( https://cknow.io/demo )
@@ -277,7 +277,7 @@ The community provides Docker containers to test CK and components using differe
 ## Contributions
 
 Users can extend the CK functionality via [CK modules](https://github.com/mlcommons/ck/tree/master/ck/repo/module)
-or external [GitHub reposities](https://cKnowledge.io/repos) in the CK format
+or external [GitHub reposities](https://cknow.io/repos) in the CK format
 as described [here](https://ck.readthedocs.io/en/latest/src/typical-usage.html).
 
 Please check [this documentation](https://ck.readthedocs.io/en/latest/src/how-to-contribute.html)
diff --git a/ck/ck/kernel.py b/ck/ck/kernel.py
index 8c2f958af0..d74a79681e 100755
--- a/ck/ck/kernel.py
+++ b/ck/ck/kernel.py
@@ -54,10 +54,10 @@
     "cmd": "ck $#module_uoa#$ (cid1/uid1) (cid2/uid2) (cid3/uid3) key_i=value_i ... @file.json",
 
     # Collective Knowledge Base (ckb)
-    "wiki_data_web": "https://cKnowledge.io/c/",
+    "wiki_data_web": "https://cknow.io/c/",
     # Collective Knowledge Base (ckb)
     "private_wiki_data_web": "https://github.com/mlcommons/ck/wiki/ckb_",
-    "api_web": "https://cKnowledge.io/c/module/",
+    "api_web": "https://cknow.io/c/module/",
     "status_url": "https://raw.githubusercontent.com/mlcommons/ck/master/setup.py",
 
     "help_examples": " Example of obtaining, compiling and running a shared benchmark on Linux with GCC:\n $ ck pull repo:ctuning-programs\n $ ck compile program:cbench-automotive-susan --speed\n $ ck run program:cbench-automotive-susan\n\n Example of an interactive CK-powered article:\n http://cknowledge.org/repo\n",
@@ -148,7 +148,7 @@
     "index_port": "9200",
     "index_use_curl": "no",
 
-    "cknowledge_api": "https://cKnowledge.io/api/v1/?",
+    "cknowledge_api": "https://cknow.io/api/v1/?",
 
     # "download_missing_components":"yes",
     "check_missing_modules": "yes",
@@ -6747,7 +6747,7 @@ def short_help(i):
     h += 'CK Google group: https://bit.ly/ck-google-group\n'
     h += 'CK Slack channel: https://cKnowledge.org/join-slack\n'
-    h += 'Stable CK components: https://cKnowledge.io'
+    h += 'Stable CK components: https://cknow.io'
 
     if o == 'con':
         out(h)
diff --git a/ck/ck/repo/module/ck-platform/README.md b/ck/ck/repo/module/ck-platform/README.md
index 0f033fea7f..8e05f0b403 100644
--- a/ck/ck/repo/module/ck-platform/README.md
+++ b/ck/ck/repo/module/ck-platform/README.md
@@ -2,7 +2,7 @@
 
 Check `test/init-graph.bat`
 
-https://cknowledge.io/c/result/fgg-test/?v=1.0.0#gfursin_1
+https://cknow.io/c/result/fgg-test/?v=1.0.0#gfursin_1
 
 ## Push results to a graph
 
diff --git a/ck/ck/repo/module/ck-platform/ck_032630d041b4fd8a/config.py b/ck/ck/repo/module/ck-platform/ck_032630d041b4fd8a/config.py
index 23437715b7..03d9417982 100644
--- a/ck/ck/repo/module/ck-platform/ck_032630d041b4fd8a/config.py
+++ b/ck/ck/repo/module/ck-platform/ck_032630d041b4fd8a/config.py
@@ -12,7 +12,7 @@
 CK_CFG_MODULE_REPO_UOA="befd7892b0d469e9" # CK module UOA for REPO
 
-CR_DEFAULT_SERVER="https://cKnowledge.io"
+CR_DEFAULT_SERVER="https://cknow.io"
 CR_DEFAULT_SERVER_URL=CR_DEFAULT_SERVER+"/api/v1/?"
 CR_DEFAULT_SERVER_USER="crowd-user"
 CR_DEFAULT_SERVER_API_KEY="43fa84787ff65c2c00bf740e3853c90da8081680fe1025e8314e260888265033"
@@ -142,7 +142,7 @@ def update(i):
     # Check release notes
     server_url=cfg.get('server_url','')
-    if server_url=='': server_url='https://cKnowledge.io/api/v1/?'
+    if server_url=='': server_url='https://cknow.io/api/v1/?'
 
     from . import comm_min
     r=comm_min.send({'url':server_url,
diff --git a/ck/ck/repo/module/package/module.py b/ck/ck/repo/module/package/module.py
index be5709b97b..e53051ccff 100755
--- a/ck/ck/repo/module/package/module.py
+++ b/ck/ck/repo/module/package/module.py
@@ -1930,7 +1930,7 @@ def show(i):
       h+='\n'
       h+='See ck install package --help for more installation options.\n'
-      h+='See related CK soft detection plugins,\n'
+      h+='See related CK soft detection plugins,\n'
       h+=' CK documentation,\n'
       h+=' "how to contribute" guide,\n'
       h+=' ACM ReQuEST-ASPLOS\'18 report\n'
@@ -2699,7 +2699,7 @@ def add(i):
     # Check related soft first
     suoa=i.get('soft','')
     if suoa=='':
-       return {'return':1, 'error':'related software detection plugin is not specified. Specify the existing one using --soft={name from https://cknowledge.io/c/soft} or create a new one using "ck add soft {repo}:soft:{name}"'}
+       return {'return':1, 'error':'related software detection plugin is not specified. Specify the existing one using --soft={name from https://cknow.io/c/soft} or create a new one using "ck add soft {repo}:soft:{name}"'}
 
     # Load soft to get UID and tags
     r=ck.access({'action':'load',
diff --git a/ck/docs/mlperf-automation/README.md b/ck/docs/mlperf-automation/README.md
index d37f97a12f..26a65966d8 100644
--- a/ck/docs/mlperf-automation/README.md
+++ b/ck/docs/mlperf-automation/README.md
@@ -80,7 +80,7 @@ Speech | [CK ±](tasks/task-speech-pytorch.md) | | | | |
 
 See other CK packages with open source datasets shared by the community
 (to be standardized and connected with the new submission system):
-[View](https://cknowledge.io/?q=%22package%3Adataset-*%22)
+[View](https://cknow.io/?q=%22package%3Adataset-*%22)
 
 ## CK packages with ML models used for MLPerf submissions
 
@@ -96,7 +96,7 @@ See other CK packages with open source datasets shared by the community
 
 ## Customizable dashboards
 
-* [All aggregated MLPerf inference benchmark results](https://cknowledge.io/?q=%22mlperf-inference-all%22)
+* [All aggregated MLPerf inference benchmark results](https://cknow.io/?q=%22mlperf-inference-all%22)
 
 # Table of contents
 
@@ -122,8 +122,8 @@ See other CK packages with open source datasets shared by the community
 * [MLPerf™ object detection workflow](https://github.com/mlcommons/ck/blob/master/docs/mlperf-automation/tasks/task-object-detection.md)
 * [Docker image for MLPerf™ with OpenVINO]( https://github.com/mlcommons/ck-mlops/tree/main/docker/mlperf-inference-v0.7.openvino)
 * [Jupyter notebook for ML DSE](https://nbviewer.jupyter.org/urls/dl.dropbox.com/s/f28u9epifr0nn09/ck-dse-demo-object-detection.ipynb)
-* [Webcam test of the MLPerf object detection model with TFLite](https://cknowledge.io/solution/demo-obj-detection-coco-tf-cpu-webcam-linux-azure#test)
-* [Public scoreboard with MLPerf DSE](https://cknowledge.io/result/crowd-benchmarking-mlperf-inference-classification-mobilenets-all)
+* [Webcam test of the MLPerf object detection model with TFLite](https://cknow.io/solution/demo-obj-detection-coco-tf-cpu-webcam-linux-azure#test)
+* [Public scoreboard with MLPerf DSE](https://cknow.io/result/crowd-benchmarking-mlperf-inference-classification-mobilenets-all)
 
 # Further improvements
 
diff --git a/ck/docs/mlperf-automation/components/README.md b/ck/docs/mlperf-automation/components/README.md
index 11c7040947..ccd93eda22 100644
--- a/ck/docs/mlperf-automation/components/README.md
+++ b/ck/docs/mlperf-automation/components/README.md
@@ -52,4 +52,4 @@ Feel free to help and/or provide your feedback [here](https://github.com/mlcommo
 
 # Coordinator
 
-* [Grigori Fursin](https://cKnowledge.io/@gfursin)
+* [Grigori Fursin](https://cKnowledge.org/gfursin)
diff --git a/ck/docs/mlperf-automation/datasets/README.md b/ck/docs/mlperf-automation/datasets/README.md
index 678da6b4e0..4683cc51ab 100644
--- a/ck/docs/mlperf-automation/datasets/README.md
+++ b/ck/docs/mlperf-automation/datasets/README.md
@@ -3,4 +3,4 @@
 
 * [COCO 2017](coco2017.md)
 * [ImageNet 2012](imagenet2012.md)
-* [CK datasets from the community - must be cleaned and unified](https://cknowledge.io/?q=module_uoa%3A%22package%22+AND+dataset)
\ No newline at end of file
+* [CK datasets from the community - must be cleaned and unified](https://cknow.io/?q=module_uoa%3A%22package%22+AND+dataset)
\ No newline at end of file
diff --git a/ck/docs/mlperf-automation/dse/README.md b/ck/docs/mlperf-automation/dse/README.md
index 2b04e0967d..ec99dccf5c 100644
--- a/ck/docs/mlperf-automation/dse/README.md
+++ b/ck/docs/mlperf-automation/dse/README.md
@@ -7,12 +7,12 @@
 # CK based object detection DSE
 
 * [Jupyter notebook](https://nbviewer.jupyter.org/urls/dl.dropbox.com/s/f28u9epifr0nn09/ck-dse-demo-object-detection.ipynb)
-* [CK dashboard](https://cknowledge.io/result/crowd-benchmarking-mlperf-inference-classification-mobilenets-all)
+* [CK dashboard](https://cknow.io/result/crowd-benchmarking-mlperf-inference-classification-mobilenets-all)
 
 # ACM ASPLOS REQUEST DSE tournament
 
-* [event and report](https://cknowledge.io/c/event/repro-request-asplos2018)
-* [CK dashboard](https://cknowledge.io/c/result/pareto-efficient-ai-co-design-tournament-request-acm-asplos-2018)
+* [event and report](https://cknow.io/c/event/repro-request-asplos2018)
+* [CK dashboard](https://cknow.io/c/result/pareto-efficient-ai-co-design-tournament-request-acm-asplos-2018)
 
 To run locally:
 ```
diff --git a/ck/docs/mlperf-automation/inference/workflow.md b/ck/docs/mlperf-automation/inference/workflow.md
index c264c1451d..078c705e1f 100644
--- a/ck/docs/mlperf-automation/inference/workflow.md
+++ b/ck/docs/mlperf-automation/inference/workflow.md
@@ -79,4 +79,4 @@ ck export mlperf.result
 
 # Coordinator
 
-* [Grigori Fursin](https://cKnowledge.io/@gfursin)
+* [Grigori Fursin](https://cKnowledge.org/gfursin)
diff --git a/ck/docs/mlperf-automation/reproduce/README.md b/ck/docs/mlperf-automation/reproduce/README.md
index 7f714ed966..22a9f94006 100644
--- a/ck/docs/mlperf-automation/reproduce/README.md
+++ b/ck/docs/mlperf-automation/reproduce/README.md
@@ -43,10 +43,10 @@ if you are interested to join this community effort!
 
 ## Using CK adaptive containers (to be tested!)
-* [MLPerf™ workflows](https://cknowledge.io/?q=module_uoa%3A%22docker%22+AND+%22mlperf%22)
+* [MLPerf™ workflows](https://cknow.io/?q=module_uoa%3A%22docker%22+AND+%22mlperf%22)
 
-* [CK image classification](https://cknowledge.io/?q=module_uoa%3A%22docker%22+AND+%22image-classification%22)
-* [CK object detection](https://cknowledge.io/?q=module_uoa%3A%22docker%22+AND+%22object-detection%22)
+* [CK image classification](https://cknow.io/?q=module_uoa%3A%22docker%22+AND+%22image-classification%22)
+* [CK object detection](https://cknow.io/?q=module_uoa%3A%22docker%22+AND+%22object-detection%22)
 
 # Other reproducibility studies
 
diff --git a/ck/docs/mlperf-automation/reproduce/ck-1b165548d8adbe4d.md b/ck/docs/mlperf-automation/reproduce/ck-1b165548d8adbe4d.md
index c514c734b3..b18772624f 100644
--- a/ck/docs/mlperf-automation/reproduce/ck-1b165548d8adbe4d.md
+++ b/ck/docs/mlperf-automation/reproduce/ck-1b165548d8adbe4d.md
@@ -364,7 +364,7 @@ You can remove "ck-mlperf-inference-1.1-dse:" from above commands to process res
 
 ## Display other reproduced results at cKnowledge.io
 
-* [List dashboards](https://cknowledge.io/reproduced-results)
+* [List dashboards](https://cknow.io/reproduced-results)
 
diff --git a/ck/docs/mlperf-automation/reproduce/ck-3c77b273b4c7d878.md b/ck/docs/mlperf-automation/reproduce/ck-3c77b273b4c7d878.md
index 315c4131aa..2ddb555edb 100644
--- a/ck/docs/mlperf-automation/reproduce/ck-3c77b273b4c7d878.md
+++ b/ck/docs/mlperf-automation/reproduce/ck-3c77b273b4c7d878.md
@@ -389,7 +389,7 @@ You can remove "ck-mlperf-inference-1.1-dse:" from above commands to process res
 
 ## Display other reproduced results at cKnowledge.io
 
-* [List dashboards](https://cknowledge.io/reproduced-results)
+* [List dashboards](https://cknow.io/reproduced-results)
 
diff --git a/ck/docs/mlperf-automation/reproduce/ck-3e0ad4b09998375d.md b/ck/docs/mlperf-automation/reproduce/ck-3e0ad4b09998375d.md
index b6ad83b85a..aa5fda26fd 100644
--- a/ck/docs/mlperf-automation/reproduce/ck-3e0ad4b09998375d.md
+++ b/ck/docs/mlperf-automation/reproduce/ck-3e0ad4b09998375d.md
@@ -311,7 +311,7 @@ You can remove "ck-mlperf-inference-1.1-dse:" from above commands to process res
 
 ## Display other reproduced results at cKnowledge.io
 
-* [List dashboards](https://cknowledge.io/reproduced-results)
+* [List dashboards](https://cknow.io/reproduced-results)
 
diff --git a/ck/docs/mlperf-automation/reproduce/ck-4f1a470a8a034bc3.md b/ck/docs/mlperf-automation/reproduce/ck-4f1a470a8a034bc3.md
index 2e59fd9474..9fd82618ae 100644
--- a/ck/docs/mlperf-automation/reproduce/ck-4f1a470a8a034bc3.md
+++ b/ck/docs/mlperf-automation/reproduce/ck-4f1a470a8a034bc3.md
@@ -309,7 +309,7 @@ You can remove "ck-mlperf-inference-1.1-dse:" from above commands to process res
 
 ## Display other reproduced results at cKnowledge.io
 
-* [List dashboards](https://cknowledge.io/reproduced-results)
+* [List dashboards](https://cknow.io/reproduced-results)
 
diff --git a/ck/docs/mlperf-automation/reproduce/ck-6582273dd3646e28.md b/ck/docs/mlperf-automation/reproduce/ck-6582273dd3646e28.md
index 2fc525deac..56ca7640f4 100644
--- a/ck/docs/mlperf-automation/reproduce/ck-6582273dd3646e28.md
+++ b/ck/docs/mlperf-automation/reproduce/ck-6582273dd3646e28.md
@@ -145,7 +145,7 @@ ck install package --tags=model,image-classification,tflite,efficientnet,lite0,n
 
 More information about this model:
 [ [CK meta.json](https://github.com/mlcommons/ck-mlops/blob/main/package/model-tflite-mlperf-efficientnet-lite/.cm/meta.json) ]
-[ [summary](https://cknowledge.io/c/package/model-tflite-mlperf-efficientnet-lite) ]
+[ [summary](https://cknow.io/c/package/model-tflite-mlperf-efficientnet-lite) ]
 
 ## SingleStream scenario
 
@@ -345,7 +345,7 @@ You can remove "ck-mlperf-inference-1.1-dse:" from above commands to process res
 
 ## Display other reproduced results at cKnowledge.io
 
-* [List dashboards](https://cknowledge.io/reproduced-results)
+* [List dashboards](https://cknow.io/reproduced-results)
 
diff --git a/ck/docs/mlperf-automation/reproduce/ck-94cc7bdd1f23cce3.md b/ck/docs/mlperf-automation/reproduce/ck-94cc7bdd1f23cce3.md
index c8bcf3d583..011ac3606e 100644
--- a/ck/docs/mlperf-automation/reproduce/ck-94cc7bdd1f23cce3.md
+++ b/ck/docs/mlperf-automation/reproduce/ck-94cc7bdd1f23cce3.md
@@ -323,7 +323,7 @@ You can remove "ck-mlperf-inference-1.1-dse:" from above commands to process res
 
 ## Display other reproduced results at cKnowledge.io
 
-* [List dashboards](https://cknowledge.io/reproduced-results)
+* [List dashboards](https://cknow.io/reproduced-results)
 
diff --git a/ck/docs/mlperf-automation/reproduce/ck-9fb65e57d8c61db4.md b/ck/docs/mlperf-automation/reproduce/ck-9fb65e57d8c61db4.md
index fa95049f5c..a73853e6ab 100644
--- a/ck/docs/mlperf-automation/reproduce/ck-9fb65e57d8c61db4.md
+++ b/ck/docs/mlperf-automation/reproduce/ck-9fb65e57d8c61db4.md
@@ -308,7 +308,7 @@ You can remove "ck-mlperf-inference-1.1-dse:" from above commands to process res
 
 ## Display other reproduced results at cKnowledge.io
 
-* [List dashboards](https://cknowledge.io/reproduced-results)
+* [List dashboards](https://cknow.io/reproduced-results)
 
diff --git a/ck/docs/mlperf-automation/reproduce/ck-a399f837b48b0d1b.md b/ck/docs/mlperf-automation/reproduce/ck-a399f837b48b0d1b.md
index e83b8bc28c..f700479512 100644
--- a/ck/docs/mlperf-automation/reproduce/ck-a399f837b48b0d1b.md
+++ b/ck/docs/mlperf-automation/reproduce/ck-a399f837b48b0d1b.md
@@ -307,7 +307,7 @@ You can remove "ck-mlperf-inference-1.1-dse:" from above commands to process res
 
 ## Display other reproduced results at cKnowledge.io
 
-* [List dashboards](https://cknowledge.io/reproduced-results)
+* [List dashboards](https://cknow.io/reproduced-results)
 
diff --git a/ck/docs/mlperf-automation/reproduce/ck-ae88dc4516a7084e.md b/ck/docs/mlperf-automation/reproduce/ck-ae88dc4516a7084e.md
index 0ebd2890d8..70bf3b8611 100644
--- a/ck/docs/mlperf-automation/reproduce/ck-ae88dc4516a7084e.md
+++ b/ck/docs/mlperf-automation/reproduce/ck-ae88dc4516a7084e.md
@@ -146,7 +146,7 @@ ck install package --tags=model,image-classification,tflite,mobilenet-v1,v1-0.25
 
 More information about this model:
 [ [CK meta.json](https://github.com/mlcommons/ck-mlops/blob/main/package/model-tf-and-tflite-mlperf-mobilenet-v1-20180802/.cm/meta.json) ]
-[ [summary](https://cknowledge.io/c/package/model-tf-and-tflite-mlperf-mobilenet-v1-20180802) ]
+[ [summary](https://cknow.io/c/package/model-tf-and-tflite-mlperf-mobilenet-v1-20180802) ]
 
 ## SingleStream scenario
 
@@ -348,7 +348,7 @@ You can remove "ck-mlperf-inference-1.1-dse:" from above commands to process res
 
 ## Display other reproduced results at cKnowledge.io
 
-* [List dashboards](https://cknowledge.io/reproduced-results)
+* [List dashboards](https://cknow.io/reproduced-results)
 
diff --git a/ck/docs/mlperf-automation/reproduce/ck-b14c70816eca59c6.md b/ck/docs/mlperf-automation/reproduce/ck-b14c70816eca59c6.md
index e934c72de6..e74fec96d3 100644
--- a/ck/docs/mlperf-automation/reproduce/ck-b14c70816eca59c6.md
+++ b/ck/docs/mlperf-automation/reproduce/ck-b14c70816eca59c6.md
@@ -146,7 +146,7 @@ ck install package --tags=model,image-classification,tflite,mobilenet-v3,v3-larg
 ```
 
 More information about this model:
 [ [CK meta.json](https://github.com/mlcommons/ck-mlops/blob/main/package/model-tf-and-tflite-mlperf-mobilenet-v3/.cm/meta.json) ]
-[ [summary](https://cknowledge.io/c/package/model-tf-and-tflite-mlperf-mobilenet-v3) ]
+[ [summary](https://cknow.io/c/package/model-tf-and-tflite-mlperf-mobilenet-v3) ]
 
 ## SingleStream scenario
 
@@ -348,7 +348,7 @@ You can remove "ck-mlperf-inference-1.1-dse:" from above commands to process res
 
 ## Display other reproduced results at cKnowledge.io
 
-* [List dashboards](https://cknowledge.io/reproduced-results)
+* [List dashboards](https://cknow.io/reproduced-results)
 
diff --git a/ck/docs/mlperf-automation/reproduce/ck-c3d81b4b869e8e07.md b/ck/docs/mlperf-automation/reproduce/ck-c3d81b4b869e8e07.md
index 48d5ba49fa..2bafe5b055 100644
--- a/ck/docs/mlperf-automation/reproduce/ck-c3d81b4b869e8e07.md
+++ b/ck/docs/mlperf-automation/reproduce/ck-c3d81b4b869e8e07.md
@@ -325,7 +325,7 @@ You can remove "ck-mlperf-inference-1.1-dse:" from above commands to process res
 
 ## Display other reproduced results at cKnowledge.io
 
-* [List dashboards](https://cknowledge.io/reproduced-results)
+* [List dashboards](https://cknow.io/reproduced-results)
 
diff --git a/ck/docs/mlperf-automation/reproduce/ck-image-classification-jetson-nano-tflite.md b/ck/docs/mlperf-automation/reproduce/ck-image-classification-jetson-nano-tflite.md
index 864a8cbd7a..a498e992c7 100644
--- a/ck/docs/mlperf-automation/reproduce/ck-image-classification-jetson-nano-tflite.md
+++ b/ck/docs/mlperf-automation/reproduce/ck-image-classification-jetson-nano-tflite.md
@@ -1,6 +1,6 @@
 **[ [TOC](../README.md) ]**
 
-***Reproduced by [Grigori Fursin](https://cKnowledge.io/@gfursin) on 20210505***
+***Reproduced by [Grigori Fursin](https://cKnowledge.org/gfursin) on 20210505***
 
 * Platform: Nvidia Jetson Nano
 * OS: Ubuntu 18.04.05 LTS 64-bit
diff --git a/ck/docs/mlperf-automation/reproduce/ck-image-classification-rpi4-tflite.md b/ck/docs/mlperf-automation/reproduce/ck-image-classification-rpi4-tflite.md
index 91250e9420..18cc0cc22b 100644
--- a/ck/docs/mlperf-automation/reproduce/ck-image-classification-rpi4-tflite.md
+++ b/ck/docs/mlperf-automation/reproduce/ck-image-classification-rpi4-tflite.md
@@ -1,6 +1,6 @@
 **[ [TOC](../README.md) ]**
 
-***Reproduced by [Grigori Fursin](https://cKnowledge.io/@gfursin) on 20210505***
+***Reproduced by [Grigori Fursin](https://cKnowledge.org/gfursin) on 20210505***
 
 * Platform: Raspberry Pi 4
 * OS: Ubuntu 20.04 64-bit
diff --git a/ck/docs/mlperf-automation/reproduce/ck-image-classification-x86-64-openvino-2019.md b/ck/docs/mlperf-automation/reproduce/ck-image-classification-x86-64-openvino-2019.md
index f262563a80..57f3fe58f1 100644
--- a/ck/docs/mlperf-automation/reproduce/ck-image-classification-x86-64-openvino-2019.md
+++ b/ck/docs/mlperf-automation/reproduce/ck-image-classification-x86-64-openvino-2019.md
@@ -1,6 +1,6 @@
 **[ [TOC](../README.md) ]**
 
-***Reproduced by [Grigori Fursin](https://cKnowledge.io/@gfursin) on 20210808***
+***Reproduced by [Grigori Fursin](https://cKnowledge.org/gfursin) on 20210808***
 
 * Platform: x8664
 * OS: Ubuntu 18.04 64-bit
diff --git a/ck/docs/mlperf-automation/reproduce/ck-image-classification-x86-64-tflite.md b/ck/docs/mlperf-automation/reproduce/ck-image-classification-x86-64-tflite.md
index 0fe8e3efec..35736ff3c5 100644
--- a/ck/docs/mlperf-automation/reproduce/ck-image-classification-x86-64-tflite.md
+++ b/ck/docs/mlperf-automation/reproduce/ck-image-classification-x86-64-tflite.md
@@ -1,6 +1,6 @@
 **[ [TOC](../README.md) ]**
 
-***Reproduced by [Grigori Fursin](https://cKnowledge.io/@gfursin) on 20210506***
+***Reproduced by [Grigori Fursin](https://cKnowledge.org/gfursin) on 20210506***
 
 * Platform: x86 64
 * OS: Ubuntu 18.04 64-bit
diff --git a/ck/docs/mlperf-automation/reproduce/ck-image-classification-x86-64-tflite2.md b/ck/docs/mlperf-automation/reproduce/ck-image-classification-x86-64-tflite2.md
index d4dac7ee4d..e6f03d44a9 100644
--- a/ck/docs/mlperf-automation/reproduce/ck-image-classification-x86-64-tflite2.md
+++ b/ck/docs/mlperf-automation/reproduce/ck-image-classification-x86-64-tflite2.md
@@ -1,6 +1,6 @@
 **[ [TOC](../README.md) ]**
 
-***Reproduced by [Grigori Fursin](https://cKnowledge.io/@gfursin) on 20210808***
+***Reproduced by [Grigori Fursin](https://cKnowledge.org/gfursin) on 20210808***
 
 * Platform: x8664
 * OS: Ubuntu 18.04 64-bit
diff --git a/ck/docs/mlperf-automation/reproduce/ck-object-detection-rpi4-coral-tflite.md b/ck/docs/mlperf-automation/reproduce/ck-object-detection-rpi4-coral-tflite.md
index ccc663567a..5893ad66eb 100644
--- a/ck/docs/mlperf-automation/reproduce/ck-object-detection-rpi4-coral-tflite.md
+++ b/ck/docs/mlperf-automation/reproduce/ck-object-detection-rpi4-coral-tflite.md
@@ -1,6 +1,6 @@
 **[ [TOC](../README.md) ]**
 
-***Reproduced by [Grigori Fursin](https://cKnowledge.io/@gfursin) on 20210501***
+***Reproduced by [Grigori Fursin](https://cKnowledge.org/gfursin) on 20210501***
 
 # MLPerf™ Inference v1.0 - Object Detection - TFLite (with Coral EdgeTPU support)
 
diff --git a/ck/docs/mlperf-automation/reproduce/ck-object-detection-rpi4-tflite.md b/ck/docs/mlperf-automation/reproduce/ck-object-detection-rpi4-tflite.md
index c36ad44c6a..ae3eca57a8 100644
--- a/ck/docs/mlperf-automation/reproduce/ck-object-detection-rpi4-tflite.md
+++ b/ck/docs/mlperf-automation/reproduce/ck-object-detection-rpi4-tflite.md
@@ -1,6 +1,6 @@
 **[ [TOC](../README.md) ]**
 
-***Reproduced by [Grigori Fursin](https://cKnowledge.io/@gfursin) on 20210428***
+***Reproduced by [Grigori Fursin](https://cKnowledge.org/gfursin) on 20210428***
 
 # MLPerf™ Inference v1.0 - Object Detection - TFLite
 
diff --git a/ck/docs/mlperf-automation/reproduce/ck-object-detection-x86-64.md b/ck/docs/mlperf-automation/reproduce/ck-object-detection-x86-64.md
index d079de983e..532f4306e6 100644
--- a/ck/docs/mlperf-automation/reproduce/ck-object-detection-x86-64.md
+++ b/ck/docs/mlperf-automation/reproduce/ck-object-detection-x86-64.md
@@ -1,6 +1,6 @@
 **[ [TOC](../README.md) ]**
 
-***Reproduced by [Grigori Fursin](https://cKnowledge.io/@gfursin) on 20210428***
+***Reproduced by [Grigori Fursin](https://cKnowledge.org/gfursin) on 20210428***
 
 # MLPerf™ Inference v1.0 - Object Detection - TFLite 2.4.1 with RUY
 
diff --git a/ck/docs/mlperf-automation/reproduce/demo-webcam-object-detection-x86-64.md b/ck/docs/mlperf-automation/reproduce/demo-webcam-object-detection-x86-64.md
index 99d7e18109..77d30dffff 100644
--- a/ck/docs/mlperf-automation/reproduce/demo-webcam-object-detection-x86-64.md
+++ b/ck/docs/mlperf-automation/reproduce/demo-webcam-object-detection-x86-64.md
@@ -27,7 +27,7 @@ python -m pip install cbench
 
 ## Initialize demo solution
 
-This is a prototype of [CK-based ML solutions](https://cknowledge.io/docs/intro/introduction.html#portable-ck-solution)
+This is a prototype of [CK-based ML solutions](https://cknow.io/docs/intro/introduction.html#portable-ck-solution)
 to simplify ML deployment.
 
@@ -56,7 +56,7 @@ cb start
 
 ## Go the cKnowledge.io webpage with camera
 
-https://cknowledge.io/c/solution/demo-webcam-mlperf-obj-detection-coco-tf-cpu-linux
+https://cknow.io/c/solution/demo-webcam-mlperf-obj-detection-coco-tf-cpu-linux
 
 Click on "Start webcam"
 
diff --git a/ck/docs/mlperf-automation/results/ck-dashboard.md b/ck/docs/mlperf-automation/results/ck-dashboard.md
index 6f7f04eaf7..53cdd46209 100644
--- a/ck/docs/mlperf-automation/results/ck-dashboard.md
+++ b/ck/docs/mlperf-automation/results/ck-dashboard.md
@@ -3,7 +3,7 @@
 # Example of CK dashboards for ML Systems DSE
 
 You can record experiments in the CK "experiment" entries and visualize them using CK dashboards
-either locally or using the [cKnowledge.io platform](https://cknowledge.io/?q="mlperf-inference-all")
+either locally or using the [cKnowledge.io platform](https://cknow.io/?q="mlperf-inference-all")
 
diff --git a/ck/docs/mlperf-automation/setup/common.md b/ck/docs/mlperf-automation/setup/common.md
index 03c952c840..58016add6b 100644
--- a/ck/docs/mlperf-automation/setup/common.md
+++ b/ck/docs/mlperf-automation/setup/common.md
@@ -38,7 +38,7 @@ Path to CK repositories: /mnt/CK
 Documentation: https://github.com/mlcommons/ck/wiki
 CK Google group: https://bit.ly/ck-google-group
 CK Slack channel: https://cKnowledge.org/join-slack
-Stable CK components: https://cKnowledge.io
+Stable CK components: https://cknow.io
 ```
 
 Note that you may need to restart your shell after this installation
diff --git a/ck/docs/mlperf-automation/submit/README.md b/ck/docs/mlperf-automation/submit/README.md
index 706249b321..6db2e4a5b0 100644
--- a/ck/docs/mlperf-automation/submit/README.md
+++ b/ck/docs/mlperf-automation/submit/README.md
@@ -18,7 +18,7 @@ Please follow [this guide](https://github.com/mlcommons/ck-mlops/tree/main/modul
   * [Notes about power](power.md)
   * [Misc inference notes](../inference/notes.md)
-  * [Generate target latency via CK repos](https://cknowledge.io/c/program/generate-target-latency)
-  * [Dump CK repo to submission](https://cknowledge.io/c/program/dump-repo-to-submission)
+  * [Generate target latency via CK repos](https://cknow.io/c/program/generate-target-latency)
+  * [Dump CK repo to submission](https://cknow.io/c/program/dump-repo-to-submission)
 * [Submission example from Dell using Nvidia-based machine](https://infohub.delltechnologies.com/p/running-the-mlperf-inference-v0-7-benchmark-on-dell-emc-systems)
 
diff --git a/ck/docs/mlperf-automation/tasks/README.md b/ck/docs/mlperf-automation/tasks/README.md
index 1cbba48586..07bf56cd64 100644
--- a/ck/docs/mlperf-automation/tasks/README.md
+++ b/ck/docs/mlperf-automation/tasks/README.md
@@ -34,9 +34,9 @@ Speech | [CK ±](tasks/task-speech-pytorch.md) | | | | |
 
 * [Notes about datasets](../datasets/README.md)
 * [Notes about models (issues, quantization, etc)](../models/notes.md)
-* DLRM: [notes](dlrm.md), [CK packages](https://cknowledge.io/?q=module_uoa%3A%22program%22+AND+dlrm), [CK workflows](https://cknowledge.io/?q=module_uoa%3A%22program%22+AND+dlrm)
-* [Search for CK program workflows with "mlperf"](https://cknowledge.io/?q=module_uoa%3A%22program%22+AND+mlperf)
-* [Search for CK program workflows with "loadgen"](https://cknowledge.io/?q=module_uoa%3A%22program%22+AND+loadgen)
+* DLRM: [notes](dlrm.md), [CK packages](https://cknow.io/?q=module_uoa%3A%22program%22+AND+dlrm), [CK workflows](https://cknow.io/?q=module_uoa%3A%22program%22+AND+dlrm)
+* [Search for CK program workflows with "mlperf"](https://cknow.io/?q=module_uoa%3A%22program%22+AND+mlperf)
+* [Search for CK program workflows with 
"loadgen"](https://cknow.io/?q=module_uoa%3A%22program%22+AND+loadgen) # Feedback * Contact: grigori@octoml.ai diff --git a/ck/docs/mlperf-automation/tasks/task-image-classification.md b/ck/docs/mlperf-automation/tasks/task-image-classification.md index b101e950e4..e689ca6bf2 100644 --- a/ck/docs/mlperf-automation/tasks/task-image-classification.md +++ b/ck/docs/mlperf-automation/tasks/task-image-classification.md @@ -69,7 +69,7 @@ ck show env ``` You can explore available packages in the [CK GitHub repo](https://github.com/mlcommons/ck-mlops/tree/main/package) -or using the [cKnowledge.io platform](https://cKnowledge.io/c/package). +or using the [cKnowledge.io platform](https://cknow.io/c/package). ## Pull CK repo with the latest MLPerf™ automations from OctoML: ``` @@ -116,7 +116,7 @@ and [CK Python customization script](https://github.com/mlcommons/ck-mlops/blob/ to detect this data set on your machine. You can see other software detection plugins in the [CK repository](https://github.com/mlcommons/ck-mlops/tree/main/soft) -or using the [cKnowledge.io platform](https://cKnowledge.io/c/soft). +or using the [cKnowledge.io platform](https://cknow.io/c/soft). @@ -284,7 +284,7 @@ ls -l Note that you can process, analyze and visualize such CK results from multiple experiments using Python scripts, CK modules and Jupyter notebooks as shown in this [Jupyter notebook example](https://nbviewer.jupyter.org/urls/dl.dropbox.com/s/f28u9epifr0nn09/ck-dse-demo-object-detection.ipynb) -and [CK dashboard](https://cknowledge.io/result/crowd-benchmarking-mlperf-inference-classification-mobilenets-all/). +and [CK dashboard](https://cknow.io/result/crowd-benchmarking-mlperf-inference-classification-mobilenets-all/). 
diff --git a/ck/docs/mlperf-automation/tasks/task-object-detection.md b/ck/docs/mlperf-automation/tasks/task-object-detection.md index f6b62371e8..c97f7be764 100644 --- a/ck/docs/mlperf-automation/tasks/task-object-detection.md +++ b/ck/docs/mlperf-automation/tasks/task-object-detection.md @@ -71,7 +71,7 @@ ck show env ``` You can explore available packages in the [CK GitHub repo](https://github.com/mlcommons/ck-mlops/tree/main/package) -or using the [cKnowledge.io platform](https://cKnowledge.io/c/package). +or using the [cKnowledge.io platform](https://cknow.io/c/package). ## Pull CK repo with the latest MLPerf™ automations from OctoML: ``` @@ -106,7 +106,7 @@ and [CK Python customization script](https://github.com/mlcommons/ck-mlops/tree/ to detect this data set on your machine. You can see other software detection plugins in the [CK repository](https://github.com/mlcommons/ck-mlops/tree/main/soft) -or using the [cKnowledge.io platform](https://cKnowledge.io/c/soft). +or using the [cKnowledge.io platform](https://cknow.io/c/soft). @@ -235,7 +235,7 @@ ls -l Note that you can process, analyze and visualize such CK results from multiple experiments using Python scripts, CK modules and Jupyter notebooks as shown in this [Jupyter notebook example](https://nbviewer.jupyter.org/urls/dl.dropbox.com/s/f28u9epifr0nn09/ck-dse-demo-object-detection.ipynb) -and [CK dashboard](https://cknowledge.io/result/crowd-benchmarking-mlperf-inference-classification-mobilenets-all/). +and [CK dashboard](https://cknow.io/result/crowd-benchmarking-mlperf-inference-classification-mobilenets-all/). 
diff --git a/ck/docs/mlperf-automation/tasks/task-recommendation.md b/ck/docs/mlperf-automation/tasks/task-recommendation.md index 34429ec99f..cd520e97c0 100644 --- a/ck/docs/mlperf-automation/tasks/task-recommendation.md +++ b/ck/docs/mlperf-automation/tasks/task-recommendation.md @@ -30,5 +30,5 @@ ck activate venv:mlperf-inference * https://github.com/mlcommons/inference/issues/604 * CK components: - * [CK packages](https://cknowledge.io/?q=module_uoa%3A%22program%22+AND+dlrm) - * [CK workflows](https://cknowledge.io/?q=module_uoa%3A%22program%22+AND+dlrm) + * [CK packages](https://cknow.io/?q=module_uoa%3A%22program%22+AND+dlrm) + * [CK workflows](https://cknow.io/?q=module_uoa%3A%22program%22+AND+dlrm) diff --git a/ck/docs/mlperf-automation/tbd/automation.md b/ck/docs/mlperf-automation/tbd/automation.md index 2f3fa31460..03186ede3b 100644 --- a/ck/docs/mlperf-automation/tbd/automation.md +++ b/ck/docs/mlperf-automation/tbd/automation.md @@ -3,5 +3,5 @@ # Ideas to improve automation * Provide tests to automate all CK components and workflows across all available platforms -* Test ML models with real workflows (image classificaiton, object detection, NLP) similar to [CK demo](https://cknowledge.io/solution/demo-obj-detection-coco-tf-cpu-webcam-linux-azure/#test) +* Test ML models with real workflows (image classification, object detection, NLP) similar to [CK demo](https://cknow.io/solution/demo-obj-detection-coco-tf-cpu-webcam-linux-azure/#test) * Add more details about how set up RPi4, Nvidia, Coral Edge TPU and other boards from scratch diff --git a/ck/docs/mlperf-automation/tools/ck.md b/ck/docs/mlperf-automation/tools/ck.md index 3df71c2bd4..ea2edc2e21 100644 --- a/ck/docs/mlperf-automation/tools/ck.md +++ b/ck/docs/mlperf-automation/tools/ck.md @@ -48,7 +48,7 @@ Path to CK repositories: /home/fursin/CK/local/venv/ck-octoml-amd/CK Documentation: https://github.com/mlcommons/ck/wiki CK Google group: https://bit.ly/ck-google-group CK Slack channel: 
https://cKnowledge.org/join-slack -Stable CK components: https://cKnowledge.io +Stable CK components: https://cknow.io ``` Sometimes you may need to add "~.local/bin/ck" to your PATH or restart your shell. diff --git a/ck/docs/mlperf-automation/tools/continuous-integration.md b/ck/docs/mlperf-automation/tools/continuous-integration.md index cbb3f36f24..ab65c48c48 100644 --- a/ck/docs/mlperf-automation/tools/continuous-integration.md +++ b/ck/docs/mlperf-automation/tools/continuous-integration.md @@ -37,7 +37,7 @@ This allows one to use CK automation actions and workflows as web services or co # Examples Python-based CK integration with web platforms: -* [cKnowledge.io platform](https://cKnowledge.io) uses CK framework as a database of CK objects and micro-services. +* [cKnowledge.io platform](https://cknow.io) uses CK framework as a database of CK objects and micro-services. CMD-based CK integration with CLI platforms: diff --git a/ck/docs/mlperf-automation/tutorials/mlperf-inference-v1.1-submission-demo.md b/ck/docs/mlperf-automation/tutorials/mlperf-inference-v1.1-submission-demo.md index 2f7858cb17..b0c47a985a 100644 --- a/ck/docs/mlperf-automation/tutorials/mlperf-inference-v1.1-submission-demo.md +++ b/ck/docs/mlperf-automation/tutorials/mlperf-inference-v1.1-submission-demo.md @@ -338,7 +338,7 @@ You can remove "ck-mlperf-inference-1.1-dse:" from above commands to process res ## Display other reproduced results at cKnowledge.io -* [List dashboards](https://cknowledge.io/reproduced-results) +* [List dashboards](https://cknow.io/reproduced-results) diff --git a/ck/docs/mlperf-automation/tutorials/tvmcon-2021-automating-mlperf-with-tvm-and-ck-demo.md b/ck/docs/mlperf-automation/tutorials/tvmcon-2021-automating-mlperf-with-tvm-and-ck-demo.md index a2692de7eb..e10a0ee637 100644 --- a/ck/docs/mlperf-automation/tutorials/tvmcon-2021-automating-mlperf-with-tvm-and-ck-demo.md +++ 
b/ck/docs/mlperf-automation/tutorials/tvmcon-2021-automating-mlperf-with-tvm-and-ck-demo.md @@ -349,7 +349,7 @@ You can remove "ck-mlperf-inference-1.1-dse:" from above commands to process res ## Display other reproduced results at cKnowledge.io -* [List dashboards](https://cknowledge.io/reproduced-results) +* [List dashboards](https://cknow.io/reproduced-results) diff --git a/ck/docs/mlperf-automation/tutorials/tvmcon-2021-automating-mlperf-with-tvm-and-ck.md b/ck/docs/mlperf-automation/tutorials/tvmcon-2021-automating-mlperf-with-tvm-and-ck.md index e3e23cf3aa..7864a30895 100644 --- a/ck/docs/mlperf-automation/tutorials/tvmcon-2021-automating-mlperf-with-tvm-and-ck.md +++ b/ck/docs/mlperf-automation/tutorials/tvmcon-2021-automating-mlperf-with-tvm-and-ck.md @@ -41,8 +41,8 @@ hardware. [ArXiv](https://arxiv.org/abs/2011.01149), [automation actions](https://github.com/mlcommons/ck/tree/master/ck/repo/module), [MLOps components](https://github.com/mlcommons/ck-mlops) - * [ACM REQUEST-ASPLOS'18: the 1st Reproducible Tournament on Pareto-efficient Image Classification](https://cknowledge.io/c/event/repro-request-asplos2018) - * [Live scoreboard](https://cknowledge.io/c/result/pareto-efficient-ai-co-design-tournament-request-acm-asplos-2018) + * [ACM REQUEST-ASPLOS'18: the 1st Reproducible Tournament on Pareto-efficient Image Classification](https://cknow.io/c/event/repro-request-asplos2018) + * [Live scoreboard](https://cknow.io/c/result/pareto-efficient-ai-co-design-tournament-request-acm-asplos-2018) * [CK-based MLPerf automation](https://github.com/mlcommons/ck/tree/master/docs/mlperf-automation) ![](https://raw.githubusercontent.com/ctuning/ck-guide-images/master/mlperf-ck-automation.png) @@ -50,7 +50,7 @@ hardware. 
![](https://raw.githubusercontent.com/ctuning/ck-guide-images/master/mlperf-ck-dse.png) ![](https://raw.githubusercontent.com/ctuning/ck-guide-images/master/mlperf-ck-dse-pareto.png) - * [cKnowledge.io dashboard with reproducible results](https://cknowledge.io/result/crowd-benchmarking-mlperf-inference-classification-mobilenets-all) + * [cKnowledge.io dashboard with reproducible results](https://cknow.io/result/crowd-benchmarking-mlperf-inference-classification-mobilenets-all) * OctoML's MLPerf inference submission v1.1: diff --git a/ck/docs/src/commands.md b/ck/docs/src/commands.md index daa148dea9..ba728c4640 100644 --- a/ck/docs/src/commands.md +++ b/ck/docs/src/commands.md @@ -1,8 +1,8 @@ # CK CLI and API -Most of the CK functionality is implemented using [CK modules](https://cKnowledge.io/modules) -with [automation actions]( https://cKnowledge.io/actions ) and associated -[CK entries (components)]( https://cKnowledge.io/browse ). +Most of the CK functionality is implemented using [CK modules](https://cknow.io/modules) +with [automation actions]( https://cknow.io/actions ) and associated +[CK entries (components)]( https://cknow.io/browse ). Here we describe the main CK functionality to manage repositories, modules, and actions. Remember that you can see all flags for a given automation action from the command line as follows: @@ -28,8 +28,8 @@ ck {action} ... @input.yaml ## CLI to manage CK repositories -* Automation actions are implemented using the internal CK module [*repo*]( https://cknowledge.io/c/module/repo ). -* See the list of all automation actions and their API at [cKnowledge.io platform]( https://cknowledge.io/c/module/repo/#api ). +* Automation actions are implemented using the internal CK module [*repo*]( https://cknow.io/c/module/repo ). +* See the list of all automation actions and their API at [cKnowledge.io platform]( https://cknow.io/c/module/repo/#api ). 
### Init new CK repository in the current path ```bash @@ -275,7 +275,7 @@ ck search {CK module} --search_string={string with wildcards} ``` Note that CK supports transparent indexing of all CK JSON meta descriptions by [ElasticSearch](https://www.elastic.co) -to enable fast search and powerful queries. This mode is used in our [cKnowledge.io platform](https://cKnowledge.io). +to enable fast search and powerful queries. This mode is used in our [cKnowledge.io platform](https://cknow.io). Please check these pages to know how to configure your CK installation with ES: * https://github.com/mlcommons/ck/wiki/Customization * https://github.com/mlcommons/ck/wiki/Indexing-entries @@ -524,7 +524,7 @@ For non-internal actions, you can check their API as follows: ck {action name} {module name} --help ``` -You can also check them at the [cKnowledge.io platform](https://cKnowledge.io/modules). +You can also check them at the [cKnowledge.io platform](https://cknow.io/modules). When executing the following command @@ -562,7 +562,7 @@ using the *module_deps* key. See an example in the CK module *program*: * [how it is used in the CK module program](https://github.com/ctuning/ck-autotuning/blob/master/module/program/module.py#L479) Such approach also allows us to visualize the growing knowledge graph: -[interactive graph]( https://cKnowledge.io/kg1 ), +[interactive graph]( https://cknow.io/kg1 ), [video](https://youtu.be/nabXHyot5is). Finally, a given CK module has an access to the 3 dictionaries: diff --git a/ck/docs/src/first-steps.md b/ck/docs/src/first-steps.md index ba0f01273e..41780be481 100644 --- a/ck/docs/src/first-steps.md +++ b/ck/docs/src/first-steps.md @@ -15,7 +15,7 @@ target platform properties and software dependencies and then compile and run a with any compatible dataset and model in a unified way. 
Note that such approach also supports our [reproducibility initiatives at ML&systems conferences](https://cTuning.org/ae) -to share portable workflows along with [published papers](https://cKnowledge.io/reproduced-papers). +to share portable workflows along with [published papers](https://cknow.io/reproduced-papers). Our goal is to make it easier for the community to reproduce research techniques, compare them, build upon them, and adopt them in production. @@ -92,7 +92,7 @@ You can update any above key from the command line by adding "--" to it. If you When compiling program, CK will first attempt to automatically detect the properties of the platform and all required software dependencies such as compilers and libraries that are already installed on this platform. -CK uses [multiple plugins](https://cKnowledge.io/soft) describing how to detect different software, models, and datasets. +CK uses [multiple plugins](https://cknow.io/soft) describing how to detect different software, models, and datasets. Users can add their own plugins either in their own CK repositories or in already existing ones. @@ -151,7 +151,7 @@ instead of rewriting complex infrastructure from scratch in each research projec Note, that if a given software dependency is not resolved, CK will attempt to automatically install it using CK meta packages -(see the list of shared CK packages at [cKnowledge.io](https://cKnowledge.io/packages)). +(see the list of shared CK packages at [cKnowledge.io](https://cknow.io/packages)). 
Such meta packages contain JSON meta information and scripts to install and potentially rebuild a given package for a given target platform while reusing existing diff --git a/ck/docs/src/how-to-contribute.md b/ck/docs/src/how-to-contribute.md index bd1a6e92da..70599fe670 100644 --- a/ck/docs/src/how-to-contribute.md +++ b/ck/docs/src/how-to-contribute.md @@ -1,6 +1,6 @@ # Notes -Users extend the CK functionality via external [GitHub reposities](https://cKnowledge.io/repos) in the CK format. +Users extend the CK functionality via external [GitHub repositories](https://cknow.io/repos) in the CK format. See [docs](https://ck.readthedocs.io/en/latest/src/typical-usage.html) for more details. If you want to extend the CK core, please note that we plan to completely rewrite it based on the OO principles diff --git a/ck/docs/src/installation.md b/ck/docs/src/installation.md index ae1e469dd6..e2f6944fc7 100644 --- a/ck/docs/src/installation.md +++ b/ck/docs/src/installation.md @@ -85,7 +85,7 @@ Path to CK repositories: /mnt/CK Documentation: https://github.com/mlcommons/ck/wiki CK Google group: https://bit.ly/ck-google-group CK Slack channel: https://cKnowledge.org/join-slack -Stable CK components: https://cKnowledge.io +Stable CK components: https://cknow.io ``` diff --git a/ck/docs/src/introduction.md b/ck/docs/src/introduction.md index 6cb6a12bb7..d708d5d66b 100644 --- a/ck/docs/src/introduction.md +++ b/ck/docs/src/introduction.md @@ -61,9 +61,9 @@ CK repositories are human-readable databases of reusable CK components that can in any local directory and inside containers, pulled from GitHub and similar services, and shared as standard archive files. CK components simply wrap user artifacts and provide an extensible JSON meta description -with [***common automation actions***](https://cKnowledge.io/actions) for related artifacts. +with [***common automation actions***](https://cknow.io/actions) for related artifacts. 
-***Automation actions*** are implemented using [***CK modules***]( https://cKnowledge.io/modules ) - Python modules +***Automation actions*** are implemented using [***CK modules***]( https://cknow.io/modules ) - Python modules with functions exposed in a unified way via CK API and CLI and using extensible dictionaries for input/output (I/O). The use of dictionaries makes it easier to support continuous integration tools @@ -151,7 +151,7 @@ ck replay experiment:my-test ``` -The [CK program module](https://cKnowledge.io/c/module/program) describes dependencies on software detection plugins +The [CK program module](https://cknow.io/c/module/program) describes dependencies on software detection plugins and meta packages using simple tags with version ranges that the community has agreed on: ```json @@ -196,10 +196,10 @@ print (r) Based on the feedback from our users, we have recently developed an open ***CK platform*** to help the community share CK components, create live scoreboards, -and participate in collaborative experiments: [https://cKnowledge.io](cKnowledge.io). +and participate in collaborative experiments: [cKnowledge.io](https://cknow.io). * We suggest you to read this [nice blog post](https://michel.steuwer.info/About-CK/) from Michel Steuwer about CK basics! -* You can find a partial list of CK-compatible repositories at [cKnowledge.io/repos](https://cKnowledge.io/repos). +* You can find a partial list of CK-compatible repositories at [cKnowledge.io/repos](https://cknow.io/repos). 
@@ -213,43 +213,43 @@ due to continuously changing software, hardware, models, data sets, and research The first reason why we have developed CK was to connect our colleagues, students, researchers, and engineers from different workgroups to collaboratively solve these problems and decompose complex systems and research projects -into [reusable, portable, customizable, and non-virtualized CK components](https://cKnowledge.io/browse) -with unified [automation actions, Python APIs, CLI, and JSON meta description](https://cKnowledge.io/actions). +into [reusable, portable, customizable, and non-virtualized CK components](https://cknow.io/browse) +with unified [automation actions, Python APIs, CLI, and JSON meta description](https://cknow.io/actions). We used CK as a common playground to prototype and test different abstractions and automations of many ML&systems tasks -in collaboration with our great [academic and industrial partners](https://cKnowledge.io/partners) +in collaboration with our great [academic and industrial partners](https://cknow.io/partners) while agreeing on APIs and meta descriptions of all components. Over years the project grew from several core CK modules and abstractions -to [150+ CK modules](https://cKnowledge.io/modules) with [600+ actions](https://cknowledge.io/actions) +to [150+ CK modules](https://cknow.io/modules) with [600+ actions](https://cknow.io/actions) automating typical, repetitive, and tedious tasks from ML&systems R&D. See this [fun video](https://youtu.be/nabXHyot5is) -and the [knowledge graph](https://cKnowledge.io/kg1) +and the [knowledge graph](https://cknow.io/kg1) showing the evolution of CK over time. 
![CK evolution](../static/evolution2.png) For example, CK now features actions for -[software detection](https://cKnowledge.io/soft), -[package installation](https://cKnowledge.io/packages) -and [platform/OS detection](https://cKnowledge.io/c/os) -to automate the detection and installation of [all the dependencies](https://cknowledge.io/c/solution/mlperf-inference-v0.5-detection-openvino-ssd-mobilenet-coco-500-linux/#dependencies) +[software detection](https://cknow.io/soft), +[package installation](https://cknow.io/packages) +and [platform/OS detection](https://cknow.io/c/os) +to automate the detection and installation of [all the dependencies](https://cknow.io/c/solution/mlperf-inference-v0.5-detection-openvino-ssd-mobilenet-coco-500-linux/#dependencies) including data sets and models required by different research projects. Thanks to unified automation actions, APIs, and JSON meta descriptions of such components, we could apply the DevOps methodology to connect them into platform-agnostic, portable, customizable, and reproducible -[program pipelines (workflows)](https://cKnowledge.io/programs). +[program pipelines (workflows)](https://cknow.io/programs). 
Such workflows can automatically adapt to evolving environments, models, data sets, and non-virtualized platforms by automatically detecting the properties of a target platform, -finding all required components on a user platform using [CK software detection plugins](https://cKnowledge.io/soft) -based on the list of [all dependencies](https://cknowledge.io/solution/demo-obj-detection-coco-tf-cpu-benchmark-linux-portable-workflows/#dependencies), -installing missing components using [portable CK meta packages](https://cKnowledge.io/packages), +finding all required components on a user platform using [CK software detection plugins](https://cknow.io/soft) +based on the list of [all dependencies](https://cknow.io/solution/demo-obj-detection-coco-tf-cpu-benchmark-linux-portable-workflows/#dependencies), +installing missing components using [portable CK meta packages](https://cknow.io/packages), building and running code, and unifying and testing outputs. Eventually, CK helped to connect researchers and practitioners to collaboratively co-design, benchmark, optimize, and validate -novel AI, ML, and quantum techniques using the [open repository of knowledge](https://cKnowledge.io) -with [live SOTA scoreboards](https://cKnowledge.io/sota) -and [reproducible papers](https://cKnowledge.io/reproduced-papers). +novel AI, ML, and quantum techniques using the [open repository of knowledge](https://cknow.io) +with [live SOTA scoreboards](https://cknow.io/sota) +and [reproducible papers](https://cknow.io/reproduced-papers). Such scoreboards can be used to find and rebuild the most efficient AI/ML/SW/HW stacks on a [Pareto frontier](https://cKnowledge.org/request) across diverse platforms from supercomputers to edge devices @@ -261,10 +261,10 @@ to simplify the integration and adoption of innovative technology in production. 
Our goal is to use the CK technology to bring DevOps principles to ML&systems R&D, make it more collaborative, reproducible, and reusable, -enable portable MLOps, and make it possible to understand [what happens]( https://cknowledge.io/solution/demo-obj-detection-coco-tf-cpu-benchmark-linux-portable-workflows/#dependencies ) +enable portable MLOps, and make it possible to understand [what happens]( https://cknow.io/solution/demo-obj-detection-coco-tf-cpu-benchmark-linux-portable-workflows/#dependencies ) inside complex and "black box" computational systems. -Our dream is to see portable workflows shared along with new systems, algorithms, and [published research techniques](https://cKnowledge.io/events) +Our dream is to see portable workflows shared along with new systems, algorithms, and [published research techniques](https://cknow.io/events) to be able to quickly test, reuse and compare them across different data sets, models, software, and hardware! That is why we support related reproducibility and benchmarking initiatives including [artifact evaluation](https://cTuning.org/ae), @@ -281,12 +281,12 @@ and [ACM artifact review and badging](https://www.acm.org/publications/policies/ ## CK platform -* [cKnowledge.io](https://cKnowledge.io): the open portal with stable CK components, workflows, reproduced papers, and SOTA scoreboards for complex computational systems (AI,ML,quantum,IoT): - * [**Browse all CK ML&systems components**](https://cknowledge.io/?q=mlsystems) - * [Browse CK compatible repositories]( https://cknowledge.io/repos ) - * [Browse SOTA scoreboards powered by CK workflows](https://cKnowledge.io/reproduced-results) - * [Browse all shared CK components](https://cKnowledge.io/browse) -* [Check documentation](https://cKnowledge.io/docs) +* [cKnowledge.io](https://cknow.io): the open portal with stable CK components, workflows, reproduced papers, and SOTA scoreboards for complex computational systems (AI,ML,quantum,IoT): + * [**Browse all CK ML&systems 
components**](https://cknow.io/?q=mlsystems) + * [Browse CK compatible repositories]( https://cknow.io/repos ) + * [Browse SOTA scoreboards powered by CK workflows](https://cknow.io/reproduced-results) + * [Browse all shared CK components](https://cknow.io/browse) +* [Check documentation](https://cknow.io/docs) * [Our reproducibility initiatives for systems and ML conferences](https://cTuning.org/ae) @@ -299,28 +299,28 @@ and [ACM artifact review and badging](https://www.acm.org/publications/policies/ ### CK-powered workflows, automation actions, and reusable artifacts for ML&systems R&D * [Real-world use-cases](https://cKnowledge.org/partners) -* Reproducibility initiatives: [[methodology](https://cTuning.org/ae)], [[events](https://cKnowledge.io/events)] +* Reproducibility initiatives: [[methodology](https://cTuning.org/ae)], [[events](https://cknow.io/events)] * Showroom (public projects powered by CK): * [MLPerf™ automation suite](https://github.com/mlcommons/ck-mlops) * Student Cluster Competition automation: [SCC18](https://github.com/ctuning/ck-scc18), [digital artifacts](https://github.com/ctuning/ck-scc) - * ML-based autotuning project: [reproducible paper demo](https://cKnowledge.io/report/rpi3-crowd-tuning-2017-interactive), [MILEPOST]( https://github.com/ctuning/reproduce-milepost-project ) + * ML-based autotuning project: [reproducible paper demo](https://cknow.io/report/rpi3-crowd-tuning-2017-interactive), [MILEPOST]( https://github.com/ctuning/reproduce-milepost-project ) * [Quantum hackathons](https://cKnowledge.org/quantum) * [ACM SW/HW co-design tournaments for Pareto-efficient deep learning](https://cKnowledge.org/request) * Portable CK workflows and components for ML Systems: https://github.com/mlcommons/ck-mlops - * [GUI to automate ML/SW/HW benchmarking with MLPerf example (under development)](https://cKnowledge.io/test) - * [Reproduced papers]( https://cKnowledge.io/reproduced-papers ) - * [Live scoreboards for reproduced papers]( 
https://cKnowledge.io/reproduced-results ) + * [GUI to automate ML/SW/HW benchmarking with MLPerf example (under development)](https://cknow.io/test) + * [Reproduced papers]( https://cknow.io/reproduced-papers ) + * [Live scoreboards for reproduced papers]( https://cknow.io/reproduced-results ) * Examples of CK components (automations, API, meta descriptions): - * *program : image-classification-tflite-loadgen* [[cKnowledge.io]( https://cKnowledge.io/c/program/image-classification-tflite-loadgen )] [[GitHub]( https://github.com/ctuning/ck-mlops/tree/master/program/image-classification-tflite-loadgen )] + * *program : image-classification-tflite-loadgen* [[cKnowledge.io]( https://cknow.io/c/program/image-classification-tflite-loadgen )] [[GitHub]( https://github.com/ctuning/ck-mlops/tree/master/program/image-classification-tflite-loadgen )] * *program : image-classification-tflite* [[GitHub]( https://github.com/mlcommons/ck-mlops/tree/master/program/image-classification-tflite )] * *soft : lib.mlperf.loadgen.static* [[GitHub]( https://github.com/mlcommons/ck-mlops/tree/master/soft/lib.mlperf.loadgen.static )] * *package : lib-mlperf-loadgen-static* [[GitHub]( https://github.com/mlcommons/ck-mlops/tree/master/package/lib-mlperf-loadgen-static )] * *package : model-onnx-mlperf-mobilenet* [[GitHub]( https://github.com/mlcommons/ck-mlops/tree/master/package/model-onnx-mlperf-mobilenet/.cm )] - * *package : lib-tflite* [[cKnowledge.io]( https://cKnowledge.io/c/package/lib-tflite )] [[GitHub]( https://github.com/mlcommons/ck-mlops/tree/master/package/lib-tflite )] + * *package : lib-tflite* [[cKnowledge.io]( https://cknow.io/c/package/lib-tflite )] [[GitHub]( https://github.com/mlcommons/ck-mlops/tree/master/package/lib-tflite )] * *docker : ** [[GitHub]( https://github.com/mlcommons/ck-mlops/tree/master/docker )] * *docker : speech-recognition.rnnt* [[GitHub]( https://github.com/mlcommons/ck-mlops/tree/main/docker/mlperf-inference-speech-recognition-rnnt )] * *package : 
model-tf-** [[GitHub]( https://github.com/mlcommons/ck-mlops/tree/master/package )] - * *script : mlperf-inference-v0.7.image-classification* [[cKnowledge.io]( https://cknowledge.io/c/script/mlperf-inference-v0.7.image-classification )] + * *script : mlperf-inference-v0.7.image-classification* [[cKnowledge.io]( https://cknow.io/c/script/mlperf-inference-v0.7.image-classification )] * *jnotebook : object-detection* [[GitHub](https://nbviewer.jupyter.org/urls/dl.dropbox.com/s/5yqb6fy1nbywi7x/medium-object-detection.20190923.ipynb)] [](https://www.youtube.com/watch?v=DIkZxraTmGM) diff --git a/ck/docs/src/misc.md b/ck/docs/src/misc.md index 1bcab262c2..5b40a397e5 100644 --- a/ck/docs/src/misc.md +++ b/ck/docs/src/misc.md @@ -1,6 +1,6 @@ # Miscellaneous * [CK Wiki]( https://github.com/mlcommons/ck/wiki ) -* [cKnowledge.io docs]( https://cKnowledge.io/docs ) +* [cKnowledge.io docs]( https://cknow.io/docs ) diff --git a/ck/docs/src/portable-workflows.md b/ck/docs/src/portable-workflows.md index e11f6e5d78..8fb99e628c 100644 --- a/ck/docs/src/portable-workflows.md +++ b/ck/docs/src/portable-workflows.md @@ -10,13 +10,13 @@ We started adding the following CK modules and actions with a unified API and I/ These CK modules automate and unify the detection of different properties of user platforms and environments. 
-* *module:os* [[API](https://cknowledge.io/c/module/platform/#api)] [[components](https://cKnowledge.io/c/os)] -* *module:platform* [[API](https://cknowledge.io/c/module/platform/#api)] -* *module:platform.os* [[API](https://cknowledge.io/c/module/platform.os/#api)] -* *module:platform.cpu* [[API](https://cknowledge.io/c/module/platform.cpu/#api)] -* *module:platform.gpu* [[API](https://cknowledge.io/c/module/platform.gpu/#api)] -* *module:platform.gpgpu* [[API](https://cknowledge.io/c/module/platform.gpgpu/#api)] -* *module:platform.nn* [[API](https://cknowledge.io/c/module/platform.nn/#api)] +* *module:os* [[API](https://cknow.io/c/module/platform/#api)] [[components](https://cknow.io/c/os)] +* *module:platform* [[API](https://cknow.io/c/module/platform/#api)] +* *module:platform.os* [[API](https://cknow.io/c/module/platform.os/#api)] +* *module:platform.cpu* [[API](https://cknow.io/c/module/platform.cpu/#api)] +* *module:platform.gpu* [[API](https://cknow.io/c/module/platform.gpu/#api)] +* *module:platform.gpgpu* [[API](https://cknow.io/c/module/platform.gpgpu/#api)] +* *module:platform.nn* [[API](https://cknow.io/c/module/platform.nn/#api)] Examples: ```bash @@ -31,7 +31,7 @@ ck detect platform.gpgpu --cuda This CK module automates the detection of a given software or files (datasets, models, libraries, compilers, frameworks, tools, scripts) on a given platform using CK names, UIDs, and tags: -* *module:soft* [[API](https://cknowledge.io/c/module/soft/#api)] [[components](https://cKnowledge.io/c/soft)] +* *module:soft* [[API](https://cknow.io/c/module/soft/#api)] [[components](https://cknow.io/c/soft)] It helps to understand a user platform and environment to prepare portable workflows. 
@@ -46,7 +46,7 @@ ck detect soft:compiler.llvm --target_os=android23-arm64 ## Virtual environment -* *module:env* [[API](https://cknowledge.io/c/module/env/#api)] +* *module:env* [[API](https://cknow.io/c/module/env/#api)] Whenever a given software or files are found using software detection plugins, CK creates a new "env" component in the local CK repository @@ -83,14 +83,14 @@ When a given software is not detected on our system, we usually want to install That's why we have developed the following CK module that can automate installation of missing packages (models, datasets, tools, frameworks, compilers, etc): -* *module:package* [[API](https://cknowledge.io/c/module/package/#api)] [[components](https://cKnowledge.io/c/package)] +* *module:package* [[API](https://cknow.io/c/module/package/#api)] [[components](https://cknow.io/c/package)] This is a meta package manager that provides a unified API to automatically download, build, and install packages for a given target (including mobile and edge devices) using existing building tools and package managers. All above modules can now support portable workflows that can automatically adapt to a given environment -based on [soft dependencies](https://cknowledge.io/solution/demo-obj-detection-coco-tf-cpu-benchmark-linux-portable-workflows/#dependencies). +based on [soft dependencies](https://cknow.io/solution/demo-obj-detection-coco-tf-cpu-benchmark-linux-portable-workflows/#dependencies). 
Examples: @@ -107,7 +107,7 @@ See an example of variations to customize a given package: [lib-tflite](https:// We also provided an abstraction for ad-hoc scripts: -* *module:script* [[API](https://cknowledge.io/c/module/script/#api)] [[components](https://cKnowledge.io/c/script)] +* *module:script* [[API](https://cknow.io/c/module/script/#api)] [[components](https://cknow.io/c/script)] See an example of the CK component with a script used for MLPerf™ benchmark submissions: [GitHub](https://github.com/ctuning/ck-mlperf/tree/master/script/mlperf-inference-v0.7.image-classification) @@ -117,7 +117,7 @@ See an example of the CK component with a script used for MLPerf™ benchmar Next we have implemented a CK module to provide a common API to compile, run, and validate programs while automatically adapting to any platform and environment: -* *module:program* [[API](https://cknowledge.io/c/module/program/#api)] [[components](https://cKnowledge.io/c/program)] +* *module:program* [[API](https://cknow.io/c/module/program/#api)] [[components](https://cknow.io/c/program)] A user describes dependencies on CK packages in the CK program meta as well as commands to build, pre-process, run, post-process, and validate a given program. 
@@ -134,7 +134,7 @@ ck run program:image-corner-detection --repeat=1 --env.OMP_NUM_THREADS=4 We have developed an abstraction to record and reply experiments using the following CK module: -* *module:experiment* [[API](https://cknowledge.io/c/module/experiment/#api)] [[components](https://cKnowledge.io/c/experiment)] +* *module:experiment* [[API](https://cknow.io/c/module/experiment/#api)] [[components](https://cknow.io/c/experiment)] This module records all resolved dependencies, inputs and outputs when running above CK programs thus allowing to preserve experiments with all the provenance and replay them later on the same or different machine: @@ -154,10 +154,10 @@ ck zip experiment:my_experiment Since we can record all experiments in a unified way, we can also visualize them in a unified way. That's why we have developed a simple web server that can help to create customizable dashboards: -* *module:web* [[API](https://cknowledge.io/c/module/web/#api)] +* *module:web* [[API](https://cknow.io/c/module/web/#api)] See examples of such dashboards: -* [view online at cKnowledge.io platform](https://cKnowledge.io/reproduced-results) +* [view online at cknow.io platform](https://cknow.io/reproduced-results) * [view locally (with or without Docker)](https://github.com/ctuning/ck-mlperf/tree/master/docker/image-classification-tflite.dashboard.ubuntu-18.04) @@ -178,7 +178,7 @@ We plan to develop a GUI to make the process of generating such papers more user It is possible to use CK from Jupyter and Colab notebooks. 
We provided an abstraction to share Jupyter notebooks in CK repositories: -* *module:jnotebook* [[API](https://cknowledge.io/c/module/jnotebook/#api)] [[components](https://cKnowledge.io/c/jnotebook)] +* *module:jnotebook* [[API](https://cknow.io/c/module/jnotebook/#api)] [[components](https://cknow.io/c/jnotebook)] You can see an example of a Jupyter notebook with CK commands to process MLPerf™ benchmark results [here](https://nbviewer.jupyter.org/urls/dl.dropbox.com/s/5yqb6fy1nbywi7x/medium-object-detection.20190923.ipynb). @@ -189,7 +189,7 @@ You can see an example of a Jupyter notebook with CK commands to process MLPerf& We provided an abstraction to build, pull, and run Docker images: -* *module:docker* [[API](https://cknowledge.io/c/module/docker/#api)] [[components](https://cKnowledge.io/c/docker)] +* *module:docker* [[API](https://cknow.io/c/module/docker/#api)] [[components](https://cknow.io/c/docker)] You can see examples of Docker images with unified CK commands to automate the MLPerf™ benchmark [here](https://github.com/ctuning/ck-mlperf/tree/master/docker). @@ -201,13 +201,13 @@ You can see examples of Docker images with unified CK commands to automate the M During the past few years we converted all the workflows and components from our past ML&systems R&D including the [MILEPOST and cTuning.org project](https://github.com/ctuning/reproduce-milepost-project) to the CK format. 
-There are now [150+ CK modules](https://cKnowledge.io/modules) with actions automating and abstracting +There are now [150+ CK modules](https://cknow.io/modules) with actions automating and abstracting many tedious and repetitive tasks in ML&systems R&D including model training and prediction, universal autotuning, ML/SW/HW co-design, model testing and deployment, paper generation and so on: * [A high level overview of portable CK workflows](https://cknowledge.org/high-level-overview.pdf) * [A Collective Knowledge workflow for collaborative research into multi-objective autotuning and machine learning techniques (collaboration with the Raspberry Pi foundation)]( https://cKnowledge.org/report/rpi3-crowd-tuning-2017-interactive ) * [A summary of main CK-based projects with academic and industrial partners]( https://cKnowledge.org/partners.html ) -* [cKnowledge.io platform documentation]( https://cKnowledge.io/docs ) +* [cKnowledge.io platform documentation]( https://cknow.io/docs ) Don't hesitate to [contact us](https://cKnowledge.org/contacts.html) if you have a feedback or want to know more about our plans! diff --git a/ck/docs/src/typical-usage.md b/ck/docs/src/typical-usage.md index f3c226af39..94575b24b7 100644 --- a/ck/docs/src/typical-usage.md +++ b/ck/docs/src/typical-usage.md @@ -21,7 +21,7 @@ that are based on our portable and customizable CK workflow: ## Initialize a new CK repository in the current directory (can be existing Git repo) -If you plan to contribute to already [existing CK repositories]( http://cKnowledge.io/repos ) +If you plan to contribute to already [existing CK repositories]( http://cknow.io/repos ) you can skip this subsection. Otherwise, you need to manually create a new CK repository. 
You need to choose some user friendly name such as "my-new-repo" @@ -124,7 +124,7 @@ Whenever someones pull your repository, CK will automatically pull all other req You are now ready to add a new CK workflow to compile and run some algorithm or a benchmark in a unified way. Since CK concept is about reusing and extending existing components with a common API similar to Wikipedia, -we suggest you to look at [this index]( https://cKnowledge.io/programs ) of shared CK programs +we suggest you to look at [this index]( https://cknow.io/programs ) of shared CK programs in case someone have already shared a CK workflows for the same or similar program! If you found a similar program, for example "image-corner-detection" @@ -359,7 +359,7 @@ from the Student Cluster Competition'18. ## Update software dependencies If you new program rely on extra software dependencies (compilers, libraries, models, datasets) -you must first find the ones you need in this [online index](https://cKnowledge.io/soft) +you must first find the ones you need in this [online index](https://cknow.io/soft) of software detection plugins. You can then specify the tags and versions either using *compile_deps* or *run_deps* keys in the *meta.json* of your new program as follows: @@ -431,7 +431,7 @@ to a user environment. We have developed a simple mechanism in the CK workflow to reuse basic (small) datasets such a individual images. -You can find already shared datasets using this [online index]( https://cknowledge.io/c/dataset ). +You can find already shared datasets using this [online index]( https://cknow.io/c/dataset ). 
If you want to reuse them in your program workflow, you can find the related one, check its tags (see the [meta.json](https://github.com/ctuning/ck-autotuning/blob/master/dataset/image-jpeg-fgg/.cm/meta.json) @@ -485,10 +485,10 @@ For example one may need a different procedure when using TensorFlow or PyTorch ## Add new CK software detection plugins If CK software plugin doesn't exist for a given code, data, or models, -you can add a new one either in your own repository or in [already existing ones](https://cKnowledge.io/repos). +you can add a new one either in your own repository or in [already existing ones](https://cknow.io/repos). We suggest you to find the most close software detection plugin using -[this online index](http://cKnowledge.io/soft), +[this online index](http://cknow.io/soft), pull this repository, and make a copy in your repository as follows: ```bash @@ -643,7 +643,7 @@ Whenever a required software is not found, CK will automatically search for existing packages with the same tags for a given target in all installed CK repositories. -[CK package module]( https://cKnowledge.io/c/module/package ) provides a unified JSON API +[CK package module]( https://cknow.io/c/module/package ) provides a unified JSON API to automatically download, install, and potentially rebuild a given package (software, datasets, models, etc) in a portable way across Linux, Windows, MacOS, Android, and other supported platforms. It is also a unified front-end for other @@ -658,9 +658,9 @@ In such case, you may be interested to provide a new CK package to be reused eit or by the broad community to automate the installation. 
Similar to adding CK software detection plugins, you must first find the most close package -from this [online index](https://cKnowledge.io/packages), download it, +from this [online index](https://cknow.io/packages), download it, and make a new copy in your repository unless you want to share it immediately with the community -in already [existing CK repositories]( https://cKnowledge.io/repos ). +in already [existing CK repositories]( https://cknow.io/repos ). For example, let's copy a CK protobuf package that downloads a given protobuf version in a tgz archive and uses cmake to build it: @@ -779,7 +779,7 @@ as example: Note that we described only a small part of all available functions of the CK package manager that we have developed in collaboration with our [http://cKnowledge.org/partners.html partners and users]. We continue documenting them and started working on a user-friendly GUI -to add new software and packages via web. You can try it [here](https://cknowledge.io/add-artifact). +to add new software and packages via web. You can try it [here](https://cknow.io/add-artifact). @@ -849,7 +849,7 @@ One of the CK goals is to be a plug&play connector between non-portable workflow CK can work both in native environments and containers. While portable CK workflows can fail in the latest environment, they will work fine inside a container with a stable environment. -We have added the CK module [*docker*]( https://cKnowledge.io/c/module/docker ) +We have added the CK module [*docker*]( https://cknow.io/c/module/docker ) to make it easier to build, share, and run Docker descriptions. Please follow the Readme in the [ck-docker]( https://github.com/ctuning/ck-docker ) for more details. @@ -995,7 +995,7 @@ where results can be automatically updated by the community. The stable snapshot can still be published as a [traditional PDF paper](https://arxiv.org/abs/1801.08024). However, it is still a complex process. 
We have started documenting this functionality [here](https://github.com/ctuning/ck/wiki/Interactive-articles) -and plan to gradually improve it. When we have more resources, we plan to add a web-based GUI to the [cKnowledge.io platform](https://cKnowledge.io) +and plan to gradually improve it. When we have more resources, we plan to add a web-based GUI to the [cknow.io platform](https://cknow.io) to make it easier to create such live, reproducible, and interactive articles. @@ -1006,9 +1006,9 @@ to make it easier to create such live, reproducible, and interactive articles. ## Publish CK repositories, workflows, and components -We are developing an open [cKnowledge.io platform](https://cKnowledge.io) to let users +We are developing an open [cKnowledge.io platform](https://cknow.io) to let users share and reuse CK repositories, workflows, and components similar to PyPI. -Please follow [this guide]( https://cKnowledge.io/docs ) to know more. +Please follow [this guide]( https://cknow.io/docs ) to know more. diff --git a/ck/incubator/cbench/README.md b/ck/incubator/cbench/README.md index fc76e98931..8325221806 100644 --- a/ck/incubator/cbench/README.md +++ b/ck/incubator/cbench/README.md @@ -11,27 +11,27 @@ Windows: [![Windows Build status](https://ci.appveyor.com/api/projects/status/yj We have successfully completed the prototyping phase of the Collective Knowledge technology to make it easier to reproduce AI&ML and deploy it in production with the help of portable CK workflows, reusable artifacts and MLOps as described in this [white paper](https://arxiv.org/abs/2006.07161) -and the [CK presentation](https://cKnowledge.io/presentation/ck). +and the [CK presentation](https://cknow.io/presentation/ck). We are now preparing the second phase of this project to make CK simpler to use, more stable and more user friendly - -don't hesitate to get in touch with the [CK author](https://cKnowledge.io/@gfursin) to know more! 
+don't hesitate to get in touch with the [CK author](https://cKnowledge.org/gfursin) to know more! ## Introduction cBench is a small and cross-platform framework -connected with the [open Collective Knowledge portal](https://cKnowledge.io) +connected with the [open Collective Knowledge portal](https://cknow.io) to help researchers and practitioners -[reproduce ML&systems research](https://cKnowledge.io/reproduced-papers) +[reproduce ML&systems research](https://cknow.io/reproduced-papers) on their own bare-metal platforms, participate in collaborative benchmarking and optimization, -and share results on [live scoreobards](https://cKnowledge.io/reproduced-results). +and share results on [live scoreboards](https://cknow.io/reproduced-results). -You can try to reproduce MLPerf™ inference benchmark on your machine using [this solution](https://cKnowledge.io/test) -and see public results from the community on this [scoreboard](https://cknowledge.io/c/result/sota-mlperf-object-detection-v0.5-crowd-benchmarking). +You can try to reproduce MLPerf™ inference benchmark on your machine using [this solution](https://cknow.io/test) +and see public results from the community on this [scoreboard](https://cknow.io/c/result/sota-mlperf-object-detection-v0.5-crowd-benchmarking). cBench is a part of the [Collective Knowledge project (CK)](https://cKnowledge.org) -and uses [portable CK solutions](https://cknow.io/docs/intro/introduction.html#portable-ck-solution) +and uses [portable CK solutions](https://cknow.io/docs/intro/introduction.html#portable-ck-solution) to describe how to download, build, benchmark and optimize applications across different hardware, software, models and data sets.
@@ -67,7 +67,7 @@ Install cbench: python3 -m pip install cbench ``` -Initialize the [CK solution for MLPerf™](https://cknowledge.io/solution/demo-obj-detection-coco-tf-cpu-benchmark-linux-portable-workflows): +Initialize the [CK solution for MLPerf™](https://cknow.io/solution/demo-obj-detection-coco-tf-cpu-benchmark-linux-portable-workflows): ``` cb init demo-obj-detection-coco-tf-cpu-benchmark-linux-portable-workflows @@ -79,7 +79,7 @@ Participate in crowd-benchmarking: cb benchmark demo-obj-detection-coco-tf-cpu-benchmark-linux-portable-workflows ``` -See your results on a public [SOTA dashboard](https://cknowledge.io/c/result/sota-mlperf-object-detection-v0.5-crowd-benchmarking). +See your results on a public [SOTA dashboard](https://cknow.io/c/result/sota-mlperf-object-detection-v0.5-crowd-benchmarking). You can also use the stable Docker image to participate in crowd-benchmarking: @@ -87,13 +87,13 @@ You can also use the stable Docker image to participate in crowd-benchmarking: sudo docker run ctuning/cbench-obj-detection-coco-tf-cpu-benchmark-linux-portable-workflows /bin/bash -c "cb benchmark demo-obj-detection-coco-tf-cpu-benchmark-linux-portable-workflows" ``` -You can also check [all dependencies for this solution](https://cknowledge.io/solution/demo-obj-detection-coco-tf-cpu-benchmark-linux-portable-workflows/#dependencies). +You can also check [all dependencies for this solution](https://cknow.io/solution/demo-obj-detection-coco-tf-cpu-benchmark-linux-portable-workflows/#dependencies). 
## Documentation -* [Online docs for the Collective Knowledge technology](https://cKnowledge.io/docs) +* [Online docs for the Collective Knowledge technology](https://cknow.io/docs) ## Feedback diff --git a/ck/incubator/cbench/cbench/config.py b/ck/incubator/cbench/cbench/config.py index 23437715b7..03d9417982 100644 --- a/ck/incubator/cbench/cbench/config.py +++ b/ck/incubator/cbench/cbench/config.py @@ -12,7 +12,7 @@ CK_CFG_MODULE_REPO_UOA="befd7892b0d469e9" # CK module UOA for REPO -CR_DEFAULT_SERVER="https://cKnowledge.io" +CR_DEFAULT_SERVER="https://cknow.io" CR_DEFAULT_SERVER_URL=CR_DEFAULT_SERVER+"/api/v1/?" CR_DEFAULT_SERVER_USER="crowd-user" CR_DEFAULT_SERVER_API_KEY="43fa84787ff65c2c00bf740e3853c90da8081680fe1025e8314e260888265033" @@ -142,7 +142,7 @@ def update(i): # Check release notes server_url=cfg.get('server_url','') - if server_url=='': server_url='https://cKnowledge.io/api/v1/?' + if server_url=='': server_url='https://cknow.io/api/v1/?' from . import comm_min r=comm_min.send({'url':server_url, diff --git a/ck/incubator/cbench/setup.py b/ck/incubator/cbench/setup.py index 42dbabf396..5b989a33fe 100644 --- a/ck/incubator/cbench/setup.py +++ b/ck/incubator/cbench/setup.py @@ -21,7 +21,7 @@ 'cbench.__init__', os.path.join('cbench', '__init__.py')).__version__ # Default portal -portal_url='https://cKnowledge.io' +portal_url='https://cknow.io' ############################################################ diff --git a/cm-mlops/CONTRIBUTING.md b/cm-mlops/CONTRIBUTING.md index 0f0cb00973..0cd51364a4 100644 --- a/cm-mlops/CONTRIBUTING.md +++ b/cm-mlops/CONTRIBUTING.md @@ -36,4 +36,4 @@ See the full list [here](https://github.com/mlcommons/ck/blob/master/CONTRIBUTIN ## Maintainers * [Arjun Suresh](https://www.linkedin.com/in/arjunsuresh) (OctoML, MLCommons) -* [Grigori Fursin](https://cknowledge.io/@gfursin) (OctoML, MLCommons, cTuning foundation) +* [Grigori Fursin](https://cKnowledge.org/gfursin) (OctoML, MLCommons, cTuning foundation) diff --git 
a/cm-mlops/README.md b/cm-mlops/README.md index 47fc715823..495732ea14 100644 --- a/cm-mlops/README.md +++ b/cm-mlops/README.md @@ -26,7 +26,7 @@ design space exploration and deployment. # Maintainers -* [Grigori Fursin](https://cKnowledge.io/@gfursin) +* [Grigori Fursin](https://cKnowledge.org/gfursin) * [Arjun Suresh](https://www.linkedin.com/in/arjunsuresh) # Acknowledgments diff --git a/cm-mlops/automation/cache/_cm.json b/cm-mlops/automation/cache/_cm.json index 7a47479964..ac383f937c 100644 --- a/cm-mlops/automation/cache/_cm.json +++ b/cm-mlops/automation/cache/_cm.json @@ -3,7 +3,7 @@ "automation_alias": "automation", "automation_uid": "bbeb15d8f0a944a4", "desc": "Caching cross-platform CM scripts", - "developers": "[Arjun Suresh](https://www.linkedin.com/in/arjunsuresh), [Grigori Fursin](https://cKnowledge.io/@gfursin)", + "developers": "[Arjun Suresh](https://www.linkedin.com/in/arjunsuresh), [Grigori Fursin](https://cKnowledge.org/gfursin)", "sort": 900, "tags": [ "automation" diff --git a/cm-mlops/automation/docker/_cm.json b/cm-mlops/automation/docker/_cm.json index dc5ded6a01..11a5085d0e 100644 --- a/cm-mlops/automation/docker/_cm.json +++ b/cm-mlops/automation/docker/_cm.json @@ -3,7 +3,7 @@ "automation_alias": "automation", "automation_uid": "bbeb15d8f0a944a4", "desc": "Managing modular docker containers (under development)", - "developers": "[Arjun Suresh](https://www.linkedin.com/in/arjunsuresh), [Grigori Fursin](https://cKnowledge.io/@gfursin)", + "developers": "[Arjun Suresh](https://www.linkedin.com/in/arjunsuresh), [Grigori Fursin](https://cKnowledge.org/gfursin)", "tags": [ "automation" ], diff --git a/cm-mlops/automation/experiment/_cm.json b/cm-mlops/automation/experiment/_cm.json index f3e01fdd3b..49bb0e6166 100644 --- a/cm-mlops/automation/experiment/_cm.json +++ b/cm-mlops/automation/experiment/_cm.json @@ -3,7 +3,7 @@ "automation_alias": "automation", "automation_uid": "bbeb15d8f0a944a4", "desc": "Managing and reproducing experiments 
(under development)", - "developers": "[Grigori Fursin](https://cKnowledge.io/@gfursin)", + "developers": "[Grigori Fursin](https://cKnowledge.org/gfursin)", "tags": [ "automation" ], diff --git a/cm-mlops/automation/project/_cm.json b/cm-mlops/automation/project/_cm.json index e744d4386a..68042c4319 100644 --- a/cm-mlops/automation/project/_cm.json +++ b/cm-mlops/automation/project/_cm.json @@ -2,7 +2,7 @@ "alias": "project", "automation_alias": "automation", "automation_uid": "bbeb15d8f0a944a4", - "developers": "[Grigori Fursin](https://cKnowledge.io/@gfursin)", + "developers": "[Grigori Fursin](https://cKnowledge.org/gfursin)", "tags": [ "automation" ], diff --git a/cm-mlops/automation/script/_cm.json b/cm-mlops/automation/script/_cm.json index 0a4355d304..03e9c1cbc4 100644 --- a/cm-mlops/automation/script/_cm.json +++ b/cm-mlops/automation/script/_cm.json @@ -6,7 +6,7 @@ "cache": "cache,541d6f712a6b464e" }, "desc": "Making native scripts more portable, interoperable and deterministic", - "developers": "[Arjun Suresh](https://www.linkedin.com/in/arjunsuresh), [Grigori Fursin](https://cKnowledge.io/@gfursin)", + "developers": "[Arjun Suresh](https://www.linkedin.com/in/arjunsuresh), [Grigori Fursin](https://cKnowledge.org/gfursin)", "sort": 1000, "tags": [ "automation" diff --git a/cm-mlops/automation/utils/_cm.json b/cm-mlops/automation/utils/_cm.json index d847115ddf..f2dc9c5b66 100644 --- a/cm-mlops/automation/utils/_cm.json +++ b/cm-mlops/automation/utils/_cm.json @@ -3,7 +3,7 @@ "automation_alias": "automation", "automation_uid": "bbeb15d8f0a944a4", "desc": "Accessing various CM utils", - "developers": "[Grigori Fursin](https://cKnowledge.io/@gfursin)", + "developers": "[Grigori Fursin](https://cKnowledge.org/gfursin)", "sort": 800, "tags": [ "automation" diff --git a/cm-mlops/challenge/participate-ck-quantum-hackathons/info.html b/cm-mlops/challenge/participate-ck-quantum-hackathons/info.html index 0b8a27fceb..57bc32451d 100644 --- 
a/cm-mlops/challenge/participate-ck-quantum-hackathons/info.html +++ b/cm-mlops/challenge/participate-ck-quantum-hackathons/info.html @@ -22,7 +22,7 @@ In contrast with other initiatives, QCK uses unified workflows with reusable components which can run on classical and quantum platforms, can be extended by the community, - and are connected to a public dashboard + and are connected to a public dashboard to simplify reproducibility and comparison of different algorithms across different platforms! We hope that QCK will be instrumental to unlock the power of quantum computing for everyone! @@ -70,8 +70,8 @@

1st Open QCK Challenge

@@ -120,7 +120,7 @@

4th QCK Hackathon, Oxford, 15 March 2019

@@ -160,7 +160,7 @@

3rd QCK Hackathon, Paris, 27 January 2019

@@ -199,7 +199,7 @@

2nd QCK Hackathon, London, 6 October 2018

computer. Thanks to QCK, the participants had the opportunity to build upon the knowledge gained from the first hackathon and view experimental results on a live - dashboard. + dashboard.

Some participants commented after the event: @@ -214,8 +214,8 @@

2nd QCK Hackathon, London, 6 October 2018

@@ -256,8 +256,8 @@

1st QCK Hackathon, Cambridge, 15 June 2018

diff --git a/cm-mlops/challenge/participate-request-asplos2018/info.html b/cm-mlops/challenge/participate-request-asplos2018/info.html index 6e21b10b14..79689d9783 100644 --- a/cm-mlops/challenge/participate-request-asplos2018/info.html +++ b/cm-mlops/challenge/participate-request-asplos2018/info.html @@ -1,13 +1,13 @@

Results of the 1st reproducible ACM ReQuEST-ASPLOS'18 tournament:

@@ -24,7 +24,7 @@

Goals

to share complete algorithm implementations (code and data) as portable, customizable and reusable -Collective Knowledge +Collective Knowledge workflows. This helps other researchers and end-users to quickly validate such @@ -46,7 +46,7 @@

Goals

Workshop organizers

@@ -122,7 +122,7 @@

Completed tournaments and workshops

  • 2018: 1st + href="https://cknow.io/c/event/repro-request-asplos2018/">1st REQUEST tournament at ASPLOS'18 for co-designing Pareto-efficient image classification (see ACM proceedings and Unified submission goal

    Non-profit cTuning foundation will help authors convert their artifacts and experimental scripts to the CK format during evaluation while reusing AI artifacts already shared by the community in the CK format (see CK AI repositories, - CK modules (wrappers), - CK software detection plugins, - portable CK packages). + CK modules (wrappers), + CK software detection plugins, + portable CK packages). Authors can also try to convert their workflows to the CK format themselves using the distinguished artifact from ACM CGO'17 as an example (see Artifact repository at GitHub, Artifact Appendix, CK notes, @@ -377,7 +377,7 @@

    Open evaluation and live leader board goals

    REQUEST is backed by the ACM Task Force on Data, Software, and Reproducibility in Publication and will use the standard ACM artifact evaluation methodology. Artifact evaluation will be single blind (see PPoPP, CGO, PACT, RTSS and SuperComputing), and the reviews can be made public (see ADAPT) upon the authors' request. - Quality and efficiency metrics will be collected for each submission, and compiled on the REQUEST live scoreboard. + Quality and efficiency metrics will be collected for each submission, and compiled on the REQUEST live scoreboard.

    REQUEST will not determine a single winner, as collapsing all of the metrics into one single metric across all platforms will result in over-engineered solutions. diff --git a/cm-mlops/challenge/repro-asplos2020/info.html b/cm-mlops/challenge/repro-asplos2020/info.html index 229a4a8073..49f7285e58 100644 --- a/cm-mlops/challenge/repro-asplos2020/info.html +++ b/cm-mlops/challenge/repro-asplos2020/info.html @@ -10,7 +10,7 @@

    Results

    diff --git a/cm-mlops/challenge/repro-mlsys2020/info.html b/cm-mlops/challenge/repro-mlsys2020/info.html index c492b87f1a..2b8fee64c6 100644 --- a/cm-mlops/challenge/repro-mlsys2020/info.html +++ b/cm-mlops/challenge/repro-mlsys2020/info.html @@ -9,7 +9,7 @@

    Artifacts

    - The list of artifacts is now available here! + The list of artifacts is now available here!
    @@ -61,7 +61,7 @@

    Reproducibility chairs

    Artifacts

    - The list of artifacts is now available here! + The list of artifacts is now available here!
    @@ -108,7 +108,7 @@

    Artifact submission

    following guidelines. Please, do not forget to describe the required hardware, software, data sets and models in your artifact abstract - this is essential to find appropriate evaluators! -You can find the examples of Artifact Appendices in these MLSys'19 papers. +You can find the examples of Artifact Appendices in these MLSys'19 papers.

    diff --git a/cm-mlops/challenge/repro-request-asplos2018/info.html b/cm-mlops/challenge/repro-request-asplos2018/info.html index 8c10354117..3fd5e97933 100644 --- a/cm-mlops/challenge/repro-request-asplos2018/info.html +++ b/cm-mlops/challenge/repro-request-asplos2018/info.html @@ -1,13 +1,13 @@

    Results of the 1st reproducible ACM ReQuEST-ASPLOS'18 tournament:

    @@ -24,7 +24,7 @@

    Goals

    to share complete algorithm implementations (code and data) as portable, customizable and reusable -Collective Knowledge +Collective Knowledge workflows. This helps other researchers and end-users to quickly validate such @@ -41,7 +41,7 @@

    Goals

    The associated ACM ReQuEST workshop is co-located with ASPLOS 2018 March 24th, 2018 (afternoon), Williamsburg, VA, USA.

    -A ReQuEST introduction and long-term goals: cKnowledge.org/request website +A ReQuEST introduction and long-term goals: cKnowledge.org/request website and ArXiv paper. @@ -397,7 +397,7 @@

    Call for submissions

    The 1st ReQuEST tournament is co-located with ACM ASPLOS'18 and will focus on optimizing the whole model/software/hardware stack for image classification based on the ImageNet Large Scale Visual Recognition Challenge (ILSVRC). Unlike the classical ILSVRC where submissions are ranked according to their classification accuracy, however, ReQuEST submissions will be evaluated according to multiple metrics and trade-offs selected by the authors (e.g. accuracy, speed, throughput, energy consumption, hardware cost, usage cost, etc.) in a unified, reproducible and objective way using the Collective Knowledge framework (CK). - Restricting the competition to a single application domain will allow us to test our open-source ReQuEST tournament infrastructure, validate it across multiple platforms and environments, and prepare a dedicated live scoreboard with results similar to this open SOTA scoreboard. + Restricting the competition to a single application domain will allow us to test our open-source ReQuEST tournament infrastructure, validate it across multiple platforms and environments, and prepare a dedicated live scoreboard with results similar to this open SOTA scoreboard.

    @@ -442,7 +442,7 @@

    Submission


    If you are already familiar with the open-source Collective Knowledge framework (CK), you are encouraged to convert your experimental workflows to to portable CK workflows. Such workflows can automatically set up the environment, detect required software dependencies, install missing packages and run experiments, thus automating artifact evaluation. - (See some examples here.) + (See some examples here.)
    If you are not familiar with CK, worry not! We will gladly help you convert your submission to CK during the evalution diff --git a/cm-mlops/challenge/repro-sml2020/info.html b/cm-mlops/challenge/repro-sml2020/info.html index 1979157074..5189d0ebca 100644 --- a/cm-mlops/challenge/repro-sml2020/info.html +++ b/cm-mlops/challenge/repro-sml2020/info.html @@ -9,7 +9,7 @@

    Artifacts

    - The list of artifacts is now available here! + The list of artifacts is now available here!
    @@ -60,7 +60,7 @@

    Reproducibility chairs

    Artifacts

    - The list of artifacts is now available here! + The list of artifacts is now available here!
    @@ -107,7 +107,7 @@

    Artifact submission

    following guidelines. Please, do not forget to describe the required hardware, software, data sets and models in your artifact abstract - this is essential to find appropriate evaluators! -You can find the examples of Artifact Appendices in these MLSys'19 papers. +You can find the examples of Artifact Appendices in these MLSys'19 papers.

diff --git a/cm-mlops/script/activate-python-venv/_cm.json b/cm-mlops/script/activate-python-venv/_cm.json
index 64711dab47..29d02747f6 100644
--- a/cm-mlops/script/activate-python-venv/_cm.json
+++ b/cm-mlops/script/activate-python-venv/_cm.json
@@ -4,7 +4,7 @@
     "automation_uid": "5b4e0237da074764",
     "category": "Python automation",
     "category_sort": 1000,
-    "developers": "[Grigori Fursin](https://cKnowledge.io/@gfursin)",
+    "developers": "[Grigori Fursin](https://cKnowledge.org/gfursin)",
     "name": "Activate virtual Python environment",
     "prehook_deps": [
       {
diff --git a/cm-mlops/script/app-loadgen-generic-python/README-extra.md b/cm-mlops/script/app-loadgen-generic-python/README-extra.md
index b581392a96..c366cc1658 100644
--- a/cm-mlops/script/app-loadgen-generic-python/README-extra.md
+++ b/cm-mlops/script/app-loadgen-generic-python/README-extra.md
@@ -214,4 +214,4 @@ docker run -v /tmp:/tmp -it modularcm/loadgen-generic-python-cpu:ubuntu-22.04 -c
 * [Gaz Iqbal](https://www.linkedin.com/in/gaziqbal) (OctoML)
 * [Arjun Suresh](https://www.linkedin.com/in/arjunsuresh) (OctoML)
-* [Grigori Fursin](https://cKnowledge.io/@gfursin) (OctoML)
+* [Grigori Fursin](https://cKnowledge.org/gfursin) (OctoML)
diff --git a/cm-mlops/script/app-mlperf-inference-cpp/README-extra.md b/cm-mlops/script/app-mlperf-inference-cpp/README-extra.md
index 496f41c4b5..98c35851fa 100644
--- a/cm-mlops/script/app-mlperf-inference-cpp/README-extra.md
+++ b/cm-mlops/script/app-mlperf-inference-cpp/README-extra.md
@@ -17,7 +17,7 @@ across diverse platforms with continuously changing software and hardware.
 [Thomas Zhu](https://www.linkedin.com/in/hanwen-zhu-483614189),
 [Arjun Suresh](https://www.linkedin.com/in/arjunsuresh)
-and [Grigori Fursin]( https://cKnowledge.io/@gfursin ).
+and [Grigori Fursin]( https://cKnowledge.org/gfursin ).
diff --git a/cm-mlops/script/app-mlperf-inference-cpp/_cm.yaml b/cm-mlops/script/app-mlperf-inference-cpp/_cm.yaml
index c2c3e5882a..945905b214 100644
--- a/cm-mlops/script/app-mlperf-inference-cpp/_cm.yaml
+++ b/cm-mlops/script/app-mlperf-inference-cpp/_cm.yaml
@@ -7,7 +7,7 @@ automation_uid: 5b4e0237da074764
 category: "Modular MLPerf benchmarks"
-developers: "[Thomas Zhu](https://www.linkedin.com/in/hanwen-zhu-483614189), [Arjun Suresh](https://www.linkedin.com/in/arjunsuresh), [Grigori Fursin](https://cKnowledge.io/@gfursin)"
+developers: "[Thomas Zhu](https://www.linkedin.com/in/hanwen-zhu-483614189), [Arjun Suresh](https://www.linkedin.com/in/arjunsuresh), [Grigori Fursin](https://cKnowledge.org/gfursin)"
 # User-friendly tags to find this CM script
 tags:
diff --git a/cm-mlops/script/app-mlperf-inference-reference/_cm.yaml b/cm-mlops/script/app-mlperf-inference-reference/_cm.yaml
index a49254067c..6402f7677f 100644
--- a/cm-mlops/script/app-mlperf-inference-reference/_cm.yaml
+++ b/cm-mlops/script/app-mlperf-inference-reference/_cm.yaml
@@ -8,7 +8,7 @@ automation_uid: 5b4e0237da074764
 category: "Modular MLPerf benchmarks"
 category_sort: 20000
-developers: "[Arjun Suresh](https://www.linkedin.com/in/arjunsuresh), [Thomas Zhu](https://www.linkedin.com/in/hanwen-zhu-483614189), [Grigori Fursin](https://cKnowledge.io/@gfursin)"
+developers: "[Arjun Suresh](https://www.linkedin.com/in/arjunsuresh), [Thomas Zhu](https://www.linkedin.com/in/hanwen-zhu-483614189), [Grigori Fursin](https://cKnowledge.org/gfursin)"
 # User-friendly tags to find this CM script
 tags:
diff --git a/cm-mlops/script/app-mlperf-inference/README-extra.md b/cm-mlops/script/app-mlperf-inference/README-extra.md
index 19769f618e..bd1acdbecb 100644
--- a/cm-mlops/script/app-mlperf-inference/README-extra.md
+++ b/cm-mlops/script/app-mlperf-inference/README-extra.md
@@ -127,5 +127,5 @@ docker run -it --rm resnet50_onnxruntime:ubuntu20.04 -c "cm run script --tags=ap
 # Developers
 [Arjun Suresh](https://www.linkedin.com/in/arjunsuresh),
-[Grigori Fursin]( https://cKnowledge.io/@gfursin )
+[Grigori Fursin]( https://cKnowledge.org/gfursin )
 and [individual contributors](https://github.com/mlcommons/ck/blob/master/CONTRIBUTING.md).
diff --git a/cm-mlops/script/app-mlperf-inference/_cm.yaml b/cm-mlops/script/app-mlperf-inference/_cm.yaml
index 4ef4f8c578..665c76ac1e 100644
--- a/cm-mlops/script/app-mlperf-inference/_cm.yaml
+++ b/cm-mlops/script/app-mlperf-inference/_cm.yaml
@@ -8,7 +8,7 @@ automation_uid: 5b4e0237da074764
 category: "Modular MLPerf benchmarks"
 category_sort: 20000
-developers: "[Arjun Suresh](https://www.linkedin.com/in/arjunsuresh), [Thomas Zhu](https://www.linkedin.com/in/hanwen-zhu-483614189), [Grigori Fursin](https://cKnowledge.io/@gfursin)"
+developers: "[Arjun Suresh](https://www.linkedin.com/in/arjunsuresh), [Thomas Zhu](https://www.linkedin.com/in/hanwen-zhu-483614189), [Grigori Fursin](https://cKnowledge.org/gfursin)"
 # User-friendly tags to find this CM script
 tags:
diff --git a/cm-mlops/script/generate-mlperf-inference-user-conf/_cm.yaml b/cm-mlops/script/generate-mlperf-inference-user-conf/_cm.yaml
index 8fb153d1e3..cefec57d3e 100644
--- a/cm-mlops/script/generate-mlperf-inference-user-conf/_cm.yaml
+++ b/cm-mlops/script/generate-mlperf-inference-user-conf/_cm.yaml
@@ -8,7 +8,7 @@ automation_uid: 5b4e0237da074764
 category: "Modular MLPerf benchmarks"
 category_sort: 20000
-developers: "[Arjun Suresh](https://www.linkedin.com/in/arjunsuresh), [Thomas Zhu](https://www.linkedin.com/in/hanwen-zhu-483614189), [Grigori Fursin](https://cKnowledge.io/@gfursin)"
+developers: "[Arjun Suresh](https://www.linkedin.com/in/arjunsuresh), [Thomas Zhu](https://www.linkedin.com/in/hanwen-zhu-483614189), [Grigori Fursin](https://cKnowledge.org/gfursin)"
 # User-friendly tags to find this CM script
 tags:
diff --git a/cm-mlops/script/gui/_cm.yaml b/cm-mlops/script/gui/_cm.yaml
index fa3a329de1..eb66af9538 100644
--- a/cm-mlops/script/gui/_cm.yaml
+++ b/cm-mlops/script/gui/_cm.yaml
@@ -8,7 +8,7 @@ automation_uid: 5b4e0237da074764
 category: "GUI"
 category_sort: 21000
-developers: "[Grigori Fursin](https://cKnowledge.io/@gfursin)"
+developers: "[Grigori Fursin](https://cKnowledge.org/gfursin)"
 # User-friendly tags to find this CM script
 tags:
diff --git a/cm-mlops/script/import-mlperf-inference-to-experiment/_cm.yaml b/cm-mlops/script/import-mlperf-inference-to-experiment/_cm.yaml
index 44fb100899..305c0e8f17 100644
--- a/cm-mlops/script/import-mlperf-inference-to-experiment/_cm.yaml
+++ b/cm-mlops/script/import-mlperf-inference-to-experiment/_cm.yaml
@@ -8,7 +8,7 @@ automation_uid: 5b4e0237da074764
 category: "Modular MLPerf benchmarks"
 category_sort: 20000
-developers: "[Grigori Fursin](https://cKnowledge.io/@gfursin)"
+developers: "[Grigori Fursin](https://cKnowledge.org/gfursin)"
 # User-friendly tags to find this CM script
 tags:
diff --git a/cm-mlops/script/run-mlperf-inference-app/README-extra.md b/cm-mlops/script/run-mlperf-inference-app/README-extra.md
index 1b6889d38c..f5cc63f009 100644
--- a/cm-mlops/script/run-mlperf-inference-app/README-extra.md
+++ b/cm-mlops/script/run-mlperf-inference-app/README-extra.md
@@ -873,7 +873,7 @@ See extension projects to enable collaborative benchmarking, design space explor
 # Authors

-* [Grigori Fursin](https://cKnowledge.io/@gfursin) (OctoML, MLCommons, cTuning foundation)
+* [Grigori Fursin](https://cKnowledge.org/gfursin) (OctoML, MLCommons, cTuning foundation)
 * [Arjun Suresh](https://www.linkedin.com/in/arjunsuresh) (OctoML, MLCommons)
diff --git a/cm/cmind/repo/automation/automation/_cm.json b/cm/cmind/repo/automation/automation/_cm.json
index 67106ab58a..54ffc55d73 100644
--- a/cm/cmind/repo/automation/automation/_cm.json
+++ b/cm/cmind/repo/automation/automation/_cm.json
@@ -3,7 +3,7 @@
     "automation_alias": "automation",
     "automation_uid": "bbeb15d8f0a944a4",
     "desc": "Managing CM automations",
-    "developers": "[Grigori Fursin](https://cKnowledge.io/@gfursin)",
+    "developers": "[Grigori Fursin](https://cKnowledge.org/gfursin)",
     "sort": -2000,
     "tags": [
       "automation"
diff --git a/cm/cmind/repo/automation/ck/_cm.json b/cm/cmind/repo/automation/ck/_cm.json
index da08f7182f..59549e28d0 100644
--- a/cm/cmind/repo/automation/ck/_cm.json
+++ b/cm/cmind/repo/automation/ck/_cm.json
@@ -3,7 +3,7 @@
     "automation_alias": "automation",
     "automation_uid": "bbeb15d8f0a944a4",
     "desc": "Accessing legacy CK automations",
-    "developers": "[Grigori Fursin](https://cKnowledge.io/@gfursin)",
+    "developers": "[Grigori Fursin](https://cKnowledge.org/gfursin)",
     "sort": -1000,
     "tags": [
       "automation",
diff --git a/cm/cmind/repo/automation/core/_cm.json b/cm/cmind/repo/automation/core/_cm.json
index 4987f2a58c..8e6c0f29b7 100644
--- a/cm/cmind/repo/automation/core/_cm.json
+++ b/cm/cmind/repo/automation/core/_cm.json
@@ -3,7 +3,7 @@
     "automation_alias": "automation",
     "automation_uid": "bbeb15d8f0a944a4",
     "desc": "Accessing some core CM functions",
-    "developers": "[Grigori Fursin](https://cKnowledge.io/@gfursin)",
+    "developers": "[Grigori Fursin](https://cKnowledge.org/gfursin)",
     "sort": 500,
     "tags": [
       "automation",
diff --git a/cm/cmind/repo/automation/repo/_cm.json b/cm/cmind/repo/automation/repo/_cm.json
index 83026774c4..938e4d3894 100644
--- a/cm/cmind/repo/automation/repo/_cm.json
+++ b/cm/cmind/repo/automation/repo/_cm.json
@@ -3,7 +3,7 @@
     "automation_alias": "automation",
     "automation_uid": "bbeb15d8f0a944a4",
     "desc": "Managing CM repositories",
-    "developers": "[Grigori Fursin](https://cKnowledge.io/@gfursin)",
+    "developers": "[Grigori Fursin](https://cKnowledge.org/gfursin)",
     "sort": 2000,
     "tags": [
       "automation",
diff --git a/docs/archive/taskforce-2022.md b/docs/archive/taskforce-2022.md
index 836e895a94..71c4ef00d4 100644
--- a/docs/archive/taskforce-2022.md
+++ b/docs/archive/taskforce-2022.md
@@ -18,7 +18,7 @@
 ## Moderators

-* [Grigori Fursin](https://cKnowledge.io/@gfursin)
+* [Grigori Fursin](https://cKnowledge.org/gfursin)
 * [Arjun Suresh](https://www.linkedin.com/in/arjunsuresh)

 ## Discord server
@@ -178,15 +178,15 @@ See our [R&D roadmap for Q4 2022 and Q1 2023](https://github.com/mlcommons/ck/is
 * Upload all stable CM components for MLPerf to Zenodo or any other permanent archive to ensure the stability of all CM workflows for MLPerf and modular ML Systems.
 * Develop CM automation for community crowd-benchmarking of the MLPerf benchmarks across different models, data sets, frameworks, compilers, run-times and platforms.
 * Develop a customizable dashboard to visualize and analyze all MLPerf crowd-benchmarking results based on these examples from the legacy CK prototype:
-  [1](https://cknowledge.io/c/result/mlperf-inference-all-image-classification-edge-singlestream),
-  [2](https://cknowledge.io/result/crowd-benchmarking-mlperf-inference-classification-mobilenets-all).
+  [1](https://cknow.io/c/result/mlperf-inference-all-image-classification-edge-singlestream),
+  [2](https://cknow.io/result/crowd-benchmarking-mlperf-inference-classification-mobilenets-all).
 * Share MLPerf benchmarking results in a database compatible with FAIR principles (mandated by the funding agencies in the USA and Europe) -- ideally, eventually, the MLCommons general datastore.
 * Connect CM-based MLPerf inference submission system with our [reproducibility initiatives at ML and Systems conferences](https://cTuning.org/ae). Organize open ML/SW/HW optimization and co-design tournaments using CM and the MLPerf methodology
-  based on our [ACM ASPLOS-REQUEST'18 proof-of-concept](https://cknowledge.io/c/event/repro-request-asplos2018/).
+  based on our [ACM ASPLOS-REQUEST'18 proof-of-concept](https://cknow.io/c/event/repro-request-asplos2018/).
 * Enable automatic submission of the Pareto-efficient crowd-benchmarking results (performance/accuracy/energy/size trade-off -
-  see [this example from the legacy CK prototype](https://cknowledge.io/c/result/mlperf-inference-all-image-classification-edge-singlestream-pareto))
+  see [this example from the legacy CK prototype](https://cknow.io/c/result/mlperf-inference-all-image-classification-edge-singlestream-pareto))
   to MLPerf on behalf of MLCommons.
 * Share deployable MLPerf inference containers with Pareto-efficient ML/SW/HW stacks.
diff --git a/docs/artifact-evaluation/checklist.md b/docs/artifact-evaluation/checklist.md
index 0421efe961..fd4fc012e2 100644
--- a/docs/artifact-evaluation/checklist.md
+++ b/docs/artifact-evaluation/checklist.md
@@ -194,7 +194,7 @@ and benchmarks - you will be informed in case of positive outcome.*
   [IPython/Jupyter notebook](https://jupyter.org "https://jupyter.org"),
   [portable workflow](https://github.com/mlcommons/ck/tree/master/docs), etc.
-  See [past reproduced papers](https://cKnowledge.io/reproduced-papers "https://cKnowledge.io/reproduced-papers")
+  See [past reproduced papers](https://cknow.io/reproduced-papers "https://cknow.io/reproduced-papers")
   and an example of the experimental workflow for multi-objective
   and machine-learning based autotuning:
@@ -245,7 +245,7 @@ and benchmarks - you will be informed in case of positive outcome.*
 ----

-*This document was prepared by [Grigori Fursin](https://cKnowledge.io/@gfursin "https://cKnowledge.io/@gfursin")
+*This document was prepared by [Grigori Fursin](https://cKnowledge.org/gfursin "https://cKnowledge.org/gfursin")
 with contributions from [Bruce Childers](https://people.cs.pitt.edu/~childers "https://people.cs.pitt.edu/~childers"),
 [Michael Heroux](https://www.sandia.gov/~maherou "https://www.sandia.gov/~maherou"),
 [Michela Taufer](https://gcl.cis.udel.edu/personal/taufer/ "https://gcl.cis.udel.edu/personal/taufer/") and others.
diff --git a/docs/artifact-evaluation/faq.md b/docs/artifact-evaluation/faq.md
index d8edbb0389..7327a59640 100644
--- a/docs/artifact-evaluation/faq.md
+++ b/docs/artifact-evaluation/faq.md
@@ -138,7 +138,7 @@ and you can not use average and reliably compare empirical results.
 However, if there is only one expected value for a given experiment (a),
 then you can use it to compare multiple experiments. This is particularly
 useful when running experiments across different platforms from different
-users as described in this [article](https://cknowledge.io/c/report/rpi3-crowd-tuning-2017-interactive).
+users as described in this [article](https://cknow.io/c/report/rpi3-crowd-tuning-2017-interactive).
 You should also report the variation of empirical results
 together with all expected values.
diff --git a/docs/artifact-evaluation/reviewing.md b/docs/artifact-evaluation/reviewing.md
index 9d0c6e90a8..54c60d339d 100644
--- a/docs/artifact-evaluation/reviewing.md
+++ b/docs/artifact-evaluation/reviewing.md
@@ -117,7 +117,7 @@ and the [NeurIPS reproducibility checklist](https://www.cs.mcgill.ca/~jpineau/Re
 artifacts are functional before the camera ready paper deadline,
 and then use a separate AE with the full validation of all experimental
 results with open reviewing and without strict deadlines.
 We successfully validated
-a similar approach at the [ACM ASPLOS-ReQuEST'18 tournament (SW/HW co-design of Pareto-efficient deep learning)](https://cknowledge.io/c/event/request-reproducible-benchmarking-tournament)
+a similar approach at the [ACM ASPLOS-ReQuEST'18 tournament (SW/HW co-design of Pareto-efficient deep learning)](https://cknow.io/c/event/request-reproducible-benchmarking-tournament)
 and we saw similar initiatives at the [NeurIPS conference](https://openreview.net/group?id=NeurIPS.cc/2019/Reproducibility_Challenge).*
@@ -127,7 +127,7 @@ When arranged by the event, an artifact can receive a distinguished artifact awa
 ----

-*This document was prepared by [Grigori Fursin](https://cKnowledge.io/@gfursin "https://cKnowledge.io/@gfursin")
+*This document was prepared by [Grigori Fursin](https://cKnowledge.org/gfursin "https://cKnowledge.org/gfursin")
 with contributions from [Bruce Childers](https://people.cs.pitt.edu/~childers "https://people.cs.pitt.edu/~childers"),
 [Michael Heroux](https://www.sandia.gov/~maherou "https://www.sandia.gov/~maherou"),
 [Michela Taufer](https://gcl.cis.udel.edu/personal/taufer/ "https://gcl.cis.udel.edu/personal/taufer/") and others.
diff --git a/docs/artifact-evaluation/submission.md b/docs/artifact-evaluation/submission.md
index e74c168eed..2e076272bd 100644
--- a/docs/artifact-evaluation/submission.md
+++ b/docs/artifact-evaluation/submission.md
@@ -45,7 +45,7 @@ the [NeurIPS reproducibility checklist](https://www.cs.mcgill.ca/~jpineau/Reprod
 and [AE FAQs](faq.md) before submitting artifacts for evaluation!
 You can find the examples of Artifact Appendices
-in the following [reproduced papers](https://cKnowledge.io/reproduced-papers).
+in the following [reproduced papers](https://cknow.io/reproduced-papers).

 *Since the AE methodology is slightly different at different conferences,
 we introduced the unified Artifact Appendix
@@ -78,7 +78,7 @@ across continously changing software, hardware and data.
 Most of the time, the authors make their artifacts available
 to the evaluators via GitHub, GitLab, BitBucket or private repositories.
 Public artifact sharing allows optional "open evaluation"
 which we have successfully validated at [ADAPT'16]( https://adapt-workshop.org)
-and [ASPLOS-REQUEST'18](https://cknowledge.io/c/event/request-reproducible-benchmarking-tournament).
+and [ASPLOS-REQUEST'18](https://cknow.io/c/event/request-reproducible-benchmarking-tournament).
 It allows the authors to quickly fix encountered issues
 during evaluation before submitting the final version to archival repositories.
@@ -153,15 +153,15 @@ In other cases, AE chairs will tell you how to add stamps to the first page of y
-* [Some papers from the past AE](https://cKnowledge.io/?q=%22reproduced-papers%22) (ASPLOS, MICRO, MLSys, Supercomputing, CGO, PPoPP, PACT, IA3, ReQuEST)
-* [Dashboards with reproduced results](https://cKnowledge.io/?q=%22reproduced-results%22)
+* [Some papers from the past AE](https://cknow.io/?q=%22reproduced-papers%22) (ASPLOS, MICRO, MLSys, Supercomputing, CGO, PPoPP, PACT, IA3, ReQuEST)
+* [Dashboards with reproduced results](https://cknow.io/?q=%22reproduced-results%22)
 * Paper "Highly Efficient 8-bit Low Precision Inference of Convolutional Neural Networks with IntelCaffe" from ACM ASPLOS-ReQuEST'18
   * [Paper DOI](https://doi.org/10.1145/3229762.3229763)
   * [Artifact DOI](https://doi.org/10.1145/3229769)
   * [Original artifact](https://github.com/intel/caffe/wiki/ReQuEST-Artifact-Installation-Guide)
   * [Portable automation](https://github.com/ctuning/ck-request-asplos18-caffe-intel)
   * [Expected results](https://github.com/ctuning/ck-request-asplos18-results-caffe-intel)
-  * [Public scoreboard](https://cKnowledge.io/result/pareto-efficient-ai-co-design-tournament-request-acm-asplos-2018)
+  * [Public scoreboard](https://cknow.io/result/pareto-efficient-ai-co-design-tournament-request-acm-asplos-2018)
 * Paper "Software Prefetching for Indirect Memory Accesses" from CGO'17
   * [Portable automation at GitHub](https://github.com/SamAinsworth/reproduce-cgo2017-paper)
   * [CK dashboard snapshot](https://github.com/SamAinsworth/reproduce-cgo2017-paper/files/618737/ck-aarch64-dashboard.pdf)
@@ -171,7 +171,7 @@ In other cases, AE chairs will tell you how to add stamps to the first page of y
 ----

-*This document was prepared by [Grigori Fursin](https://cKnowledge.io/@gfursin "https://cKnowledge.io/@gfursin")
+*This document was prepared by [Grigori Fursin](https://cKnowledge.org/gfursin "https://cKnowledge.org/gfursin")
 with contributions from [Bruce Childers](https://people.cs.pitt.edu/~childers "https://people.cs.pitt.edu/~childers"),
 [Michael Heroux](https://www.sandia.gov/~maherou "https://www.sandia.gov/~maherou"),
 [Michela Taufer](https://gcl.cis.udel.edu/personal/taufer/ "https://gcl.cis.udel.edu/personal/taufer/") and others.
diff --git a/docs/history.md b/docs/history.md
index e758eff482..c8f55fc29d 100644
--- a/docs/history.md
+++ b/docs/history.md
@@ -9,7 +9,7 @@ and automating [MLPerf benchmarks](https://mlcommons.org).
 We have spent many months communicating with researchers and developers
 to understand their technical reports, README files, ad-hoc scripts,
 tools, command lines, APIs, specifications, dependencies, data formats, models and data
-to be able to [reproduce their experiments](https://cknowledge.io/?q=%22reproduced-papers%22)
+to be able to [reproduce their experiments](https://cknow.io/?q=%22reproduced-papers%22)
 and reuse their artifacts across continuously changing software, hardware and data.

 ![](https://cKnowledge.org/images/cm-gap-beween-mlsys-research-and-production.png?id=1)
diff --git a/docs/tutorials/mlperf-inference-submission.md b/docs/tutorials/mlperf-inference-submission.md
index b7906c7dfb..7ddb41d1ac 100644
--- a/docs/tutorials/mlperf-inference-submission.md
+++ b/docs/tutorials/mlperf-inference-submission.md
@@ -351,7 +351,7 @@ See the development roadmap [here](https://github.com/mlcommons/ck/issues/536).
 # Authors

-* [Grigori Fursin](https://cKnowledge.io/@gfursin) (OctoML, MLCommons, cTuning foundation)
+* [Grigori Fursin](https://cKnowledge.org/gfursin) (OctoML, MLCommons, cTuning foundation)
 * [Arjun Suresh](https://www.linkedin.com/in/arjunsuresh) (OctoML, MLCommons)

 # Acknowledgments
diff --git a/docs/tutorials/sc22-scc-mlperf-part2.md b/docs/tutorials/sc22-scc-mlperf-part2.md
index 2dd0a2b6ca..92d143ee84 100644
--- a/docs/tutorials/sc22-scc-mlperf-part2.md
+++ b/docs/tutorials/sc22-scc-mlperf-part2.md
@@ -525,7 +525,7 @@ See the development roadmap [here](https://github.com/mlcommons/ck/issues/536).
 # Authors

-* [Grigori Fursin](https://cKnowledge.io/@gfursin) (OctoML, MLCommons, cTuning foundation)
+* [Grigori Fursin](https://cKnowledge.org/gfursin) (OctoML, MLCommons, cTuning foundation)
 * [Arjun Suresh](https://www.linkedin.com/in/arjunsuresh) (OctoML, MLCommons)
diff --git a/docs/tutorials/sc22-scc-mlperf-part3.md b/docs/tutorials/sc22-scc-mlperf-part3.md
index 2207285d6b..1e8a751027 100644
--- a/docs/tutorials/sc22-scc-mlperf-part3.md
+++ b/docs/tutorials/sc22-scc-mlperf-part3.md
@@ -426,7 +426,7 @@ See the development roadmap [here](https://github.com/mlcommons/ck/issues/536).
 # Authors

-* [Grigori Fursin](https://cKnowledge.io/@gfursin) (OctoML, MLCommons, cTuning foundation)
+* [Grigori Fursin](https://cKnowledge.org/gfursin) (OctoML, MLCommons, cTuning foundation)
 * [Arjun Suresh](https://www.linkedin.com/in/arjunsuresh) (OctoML, MLCommons)
diff --git a/docs/tutorials/sc22-scc-mlperf.md b/docs/tutorials/sc22-scc-mlperf.md
index 42f6a83683..529a984c77 100644
--- a/docs/tutorials/sc22-scc-mlperf.md
+++ b/docs/tutorials/sc22-scc-mlperf.md
@@ -779,7 +779,7 @@ See the development roadmap [here](https://github.com/mlcommons/ck/issues/536).
 # Authors

-* [Grigori Fursin](https://cKnowledge.io/@gfursin) (OctoML, MLCommons, cTuning foundation)
+* [Grigori Fursin](https://cKnowledge.org/gfursin) (OctoML, MLCommons, cTuning foundation)
 * [Arjun Suresh](https://www.linkedin.com/in/arjunsuresh) (OctoML, MLCommons)
diff --git a/docs/tutorials/scripts.md b/docs/tutorials/scripts.md
index 6b6cd8b336..2d8939765c 100644
--- a/docs/tutorials/scripts.md
+++ b/docs/tutorials/scripts.md
@@ -29,7 +29,7 @@ When organizing [artifact evaluation at ML and Systems conferences](https://cTun
 we have noticed that researchers and engineers spent most of the time
 trying to understand numerous technical reports, README files, specifications,
 dependencies, ad-hoc scripts, tools, APIs, models and data sets of all shared projects
-to be able to [validate experimental, benchmarking and optimization results](https://cknowledge.io/?q=%22reproduced-papers%22)
+to be able to [validate experimental, benchmarking and optimization results](https://cknow.io/?q=%22reproduced-papers%22)
 and adapt ad-hoc projects to the real world with very diverse software, hardware, user environments, settings and data.
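The hunks above apply two mechanical rewrite rules: personal profile links move from `https://cKnowledge.io/@gfursin` to `https://cKnowledge.org/gfursin`, and all other `cknowledge.io` links move to `cknow.io` with their paths preserved. A bulk replacement of this kind could be sketched as follows; the `fix_links` helper is illustrative and not part of the commit, and the two rules are simply inferred from the hunks in this patch:

```python
import re

def fix_links(text: str) -> str:
    """Apply the two link-rewrite rules observed in this patch."""
    # Rule 1: profile links move to cKnowledge.org and drop the "@".
    text = text.replace("https://cKnowledge.io/@gfursin",
                        "https://cKnowledge.org/gfursin")
    # Rule 2: remaining cknowledge.io links (any capitalization)
    # move to cknow.io; the path and query string are preserved.
    text = re.sub(r"(?i)https://cknowledge\.io/", "https://cknow.io/", text)
    return text

print(fix_links("[Grigori Fursin](https://cKnowledge.io/@gfursin)"))
# -> [Grigori Fursin](https://cKnowledge.org/gfursin)
print(fix_links("[papers](https://cKnowledge.io/reproduced-papers)"))
# -> [papers](https://cknow.io/reproduced-papers)
```

Rule 1 must run before rule 2, since the case-insensitive domain swap would otherwise turn the profile link into a `cknow.io` URL instead of the `cKnowledge.org` one the patch uses.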