This repository is powered by the Collective Mind (CM) workflow automation framework, and its latest sources are maintained here.
Two key automations developed using CM, Script and Cache, streamline machine learning (ML) workflows, including the management of Docker runs; both are part of the cm4mlops repository.
The CM scripts, also housed in this repository, consist of hundreds of modular Python-wrapped scripts accompanied by YAML metadata, enabling the creation of robust and flexible ML workflows.
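Because the scripts are Python-wrapped, the Script automation can also be driven programmatically through the cmind Python API. The snippet below is a minimal sketch that mirrors the image-classification command shown later in this README; the input keys beyond action, automation, and tags (such as 'out' and 'quiet') are illustrative assumptions rather than a definitive reference:

```python
# Minimal sketch: running a CM script from Python via the cmind API.
# Assumes `pip install cmind` and that the repository with the CM scripts has been
# pulled (see the commands below). Tags mirror the CLI example
# "python app image-classification onnx".
import cmind

result = cmind.access({
    'action': 'run',                                  # same action as `cmx run script ...`
    'automation': 'script',                           # the Script automation described above
    'tags': 'python,app,image-classification,onnx',   # resolved via the scripts' YAML metadata
    'out': 'con',                                     # stream output to the console (assumed key)
    'quiet': True                                     # analogous to the --quiet CLI flag (assumed key)
})

# CM calls return a dictionary: 'return' == 0 on success, otherwise 'error' holds the message.
if result['return'] > 0:
    print('CM error:', result.get('error'))
```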
- CM Scripts Documentation: Browse
- CM CLI Documentation: https://docs.mlcommons.org/ck/specs/cm-cli/
© 2022-2025 MLCommons. All Rights Reserved.
Grigori Fursin, the cTuning foundation and OctoML donated the CK and CM projects to MLCommons to benefit everyone and encourage collaborative development.
We thank all contributors for their invaluable feedback and support!
Check our ACM REP'23 keynote and the white paper.
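To get started, install the CM automation framework from PyPI: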
pip install cmind
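Next, pull the repository with the CM automations and scripts: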
cmx pull repo mlcommons@ck --dir=cm4mlops/cm4mlops
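Run a modular image-classification example with the ONNX backend: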
cmx run script "python app image-classification onnx" --quiet
cmx run script "run-mlperf inference _performance-only _short" --model=resnet50 --precision=float32 --backend=onnxruntime --scenario=Offline --device=cpu --env.CM_SUDO_USER=no --quiet
cmx run script --tags=run,mlperf,inference,generate-run-cmds,_submission,_short --submitter="MLCommons" --adr.inference-src.tags=_branch.dev --pull_changes=yes --pull_inference_changes=yes --hw_name=ubuntu-latest_x86 --model=rgat --implementation=python --backend=pytorch --device=cpu --scenario=Offline --test_query_count=500 --adr.compiler.tags=gcc --category=datacenter --quiet --v --target_qps=1
Visit the parent Collective Knowledge project for further details.
If you found the CM automations helpful, please cite this article: [ ArXiv ]