Welcome to the MLPerf Automations and Scripts repository! It provides tools, automations, and scripts for running MLPerf benchmarks, with a primary focus on MLPerf Inference.
The automations build upon and extend the powerful Collective Mind (CM) script automations to streamline benchmarking and related workflows.
- Automated Benchmarking – Simplifies running MLPerf Inference benchmarks with minimal manual intervention (see the example after this list).
- Modular and Extensible – Easily extend the scripts to support additional benchmarks and configurations.
- Seamless Integration – Compatible with Docker, cloud environments, and local machines.
- Collective Mind (CM) Integration – Utilizes the CM framework to enhance reproducibility and automation.
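As a quick illustration, a CM-driven run might look like the sketch below. The repository name, script tags, and flags follow common MLPerf Inference examples but are illustrative; consult the MLPerf Inference documentation linked further down for the exact options your benchmark needs.

```bash
# Install the CM framework and pull this repository's automations
pip install cmind
cm pull repo mlcommons@mlperf-automations --branch=dev

# Sketch of a short performance run of the ResNet-50 reference
# implementation on CPU (tags and flags are illustrative)
cm run script --tags=run-mlperf,inference,_find-performance \
    --model=resnet50 --implementation=reference \
    --device=cpu --scenario=Offline --quiet
```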
The Collective Mind (CM) framework is a Python-based package offering both CLI and API support for creating and managing automations. CM automations enhance ML workflows by simplifying complex tasks such as Docker container management and caching.
- Script Automation – Automates script execution across different environments.
- Cache Management – Manages reusable cached results to accelerate workflows (see the sketch after this list).
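For example, here is a minimal sketch of the caching workflow from the CLI. The `get,python3` tags are illustrative; any cached script behaves the same way.

```bash
# The first run detects or builds the artifact and caches the result
cm run script --tags=get,python3

# Later runs reuse the cached entry; inspect or remove it as needed
cm show cache --tags=get,python3
cm rm cache --tags=get,python3 -f
```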
Learn more about CM in the CM4MLOps documentation.
We welcome contributions from the community! To contribute:
- Submit pull requests (PRs) to the `dev` branch.
- Review our CONTRIBUTORS.md for guidelines and best practices.
- Explore more about MLPerf Inference automation in the official MLPerf Inference Documentation.
Your contributions help drive the project forward!
Stay tuned for upcoming updates and announcements.
This project is licensed under the Apache 2.0 License.
This project is made possible through the generous support of:
We appreciate their contributions and sponsorship!
Thank you for your interest and support in MLPerf Automations and Scripts!