Released by @arjunsuresh on 22 Nov 17:35 · 302 commits to main since this release
## What's Changed
- fixed several URLs (all tests passed) by @gfursin in #342
- Changes for supporting model and dataset download to host - Mixtral by @anandhu-eng in #346
- Support custom git clone branch in docker by @anandhu-eng in #343
- Merge from go, fixes #337 by @arjunsuresh in #348
- Includes cuda version to run suffix by @anandhu-eng in #354
- Fixes for const in script by @arjunsuresh in #355
- Fix docker image naming SCC24, extended CM script tests by @arjunsuresh in #356
- Fixes for MLPerf Inference Github Actions by @arjunsuresh in #362
- Fix typo in gh action by @arjunsuresh in #363
- Fix CUDA num_devices by @arjunsuresh in #365
- Support cleaning of Nvidia SDXL model by @arjunsuresh in #366
- Improvements to Nvidia MLPerf interface by @arjunsuresh in #367
- Fixes to pull changes for Nvidia implementation by @arjunsuresh in #369
- Improvements to MLPerf inference final report generation by @arjunsuresh in #371
- Support get-platform-details for mlperf-inference by @arjunsuresh in #373
- Support system_info.txt in MLPerf inference submission generation by @arjunsuresh in #374
- Cleanups for mlperf inference get-platform-details by @arjunsuresh in #375
- Improvements to get-platform-details by @arjunsuresh in #376
- Improvements for reproducing AMD implementation by @anandhu-eng in #379
- Improvements to amd LLAMA2 70B command generation - Added server scenario by @anandhu-eng in #383
- Build wheels and release them into PYPI by @anandhu-eng in #385
- Do not pass mlperf_conf for inference-src >= 4.1.1 by @arjunsuresh in #404
- Fix version check for mlperf-inference-src by @arjunsuresh in #405
- Added no-compilation-warning variation for loadgen by @arjunsuresh in #406
- Support 8G Nvidia GPUs for MLPerf Inference by @arjunsuresh in #411
- Fix bug on benchmark-program exit check by @arjunsuresh in #412
- Improve the benchmark-program-mlperf run command by @arjunsuresh in #413
- CM4MLOps snapshot with MLPerf inference: 20241024 by @gfursin in #415
- fix batch size duplication issue by @anandhu-eng in #416
- Update cm repo branch - docker by @anandhu-eng in #422
- Fixes for Nvidia MLPerf inference SS and MS by @arjunsuresh in #423
- Improvements for Nvidia MLPerf inference by @arjunsuresh in #428
- added compressed_tools module by @anandhu-eng in #430
- Updated logic for mounting non cache folder by @anandhu-eng in #427
- Fixes for latest MLPerf inference submission checker changes by @arjunsuresh in #431
- Fixes for Latest MLPerf inference changes by @arjunsuresh in #432
- Fixes for latest MLPerf inference changes by @arjunsuresh in #433
- Submission generation fixes by @anandhu-eng in #424
- Support custom path for saving platform details by @anandhu-eng in #418
- Add getting started to cm4mlops docs by @anandhu-eng in #435
- Merge from Mlperf inference by @anandhu-eng in #436
- Fixes for docker detached mode by @arjunsuresh in #438
- Fixes for get-platform-details by @arjunsuresh in #441
- Testing CM Test automation by @arjunsuresh in #442
- Added github action for individual CM tests by @arjunsuresh in #443
- Fix Individual CM script test by @arjunsuresh in #445
- Fix individual CM script test by @arjunsuresh in #446
- Fix gh action for individual CM script tests by @arjunsuresh in #447
- Initial PR - gh actions for submission generation for non CM based benchmarks by @anandhu-eng in #440
- Fix GH action for individual CM script testing by @arjunsuresh in #449
- Enable docker run for individual CM script tests by @arjunsuresh in #450
- Fixes huggingface downloader by @arjunsuresh in #452
- capture framework version from cm_sut_info.json by @anandhu-eng in #451
- Sync: Mlperf inference by @arjunsuresh in #444
- Support docker_base_image and docker_cm_repo for CM tests by @arjunsuresh in #453
- Improved docker meta for cm test script by @arjunsuresh in #454
- Support test_input_index and test_input_id to customise CM test script by @arjunsuresh in #458
- Improvements to version-detect in get-generic-sys-util by @arjunsuresh in #460
- Use pkg-config deps for get-generic-sys-util by @arjunsuresh in #464
- Fix pstree version detect on macos by @arjunsuresh in #466
- Fix Nvidia mlperf inference retinanet | onnx version by @arjunsuresh in #468
- Support detached mode for nvidia-mlperf-inference-gptj by @arjunsuresh in #469
- Cleanups for get-generic-sys-util by @arjunsuresh in #470
- Fix tmp-run-env.out name by @arjunsuresh in #471
- Add Version RE for g++-11 by @arjunsuresh in #472
- Use berkeley link for imagenet-aux by default by @arjunsuresh in #473
- Code cleanup by @anandhu-eng in #475
- Fixes for Sdxl MLPerf inference by @arjunsuresh in #481
- Added google dns to mlperf-inference docker by @arjunsuresh in #484
- enable submission generation gh action test globally by @anandhu-eng in #487
- Enables docker run in inference submission generation by @anandhu-eng in #486
- Added docker detached option by @anandhu-eng in #477
- Fixes #261 partially by @Oseltamivir in #426
- Enabled docker run - gh action submission generation by @anandhu-eng in #489
- Fixes for MLPerf inference, intel conda URL by @arjunsuresh in #491
- Don't use '-dt' for Nvidia ml-model-gptj by @arjunsuresh in #492
- Update starting weights filename for SDXL MLPerf inference by @arjunsuresh in #494
- Implements #455: Copy local repo to docker instead of git clone by @Oseltamivir in #467
- Fixes for Nvidia MLPerf inference gptj,sdxl by @arjunsuresh in #495
- Cleanups to MLPerf inference preprocess script by @arjunsuresh in #496
- Support sample-ids for coco2014 accuracy script by @arjunsuresh in #497
- Support download to host - amd llama2 by @anandhu-eng in #480
- Added a retry for git clone failure by @arjunsuresh in #499
- Use custom version for dev branch of inference-src by @arjunsuresh in #500
- Fix path error shown to user by @anandhu-eng in #502
- Fixes for the MLPerf inference nightly test failures by @arjunsuresh in #506
- pip install cm4mlops - handle systems where sudo is absent by @anandhu-eng in #504
- logic update for detect sudo by @anandhu-eng in #508
- Fix Nvidia MLPerf inference gptj model name suffix by @arjunsuresh in #509
- Skip sys-utils-install when no sudo is available by @arjunsuresh in #512
- Fixes against issue: #510 by @anandhu-eng in #514
- Support for privileged mode in docker by @anandhu-eng in #511
- Allow default_version update inside variations by @arjunsuresh in #515
- MLPerf inference changes by @arjunsuresh in #518
- Removed dev branch for SDXL by @arjunsuresh in #519
- update_env_if_env -> update_meta_if_env by @arjunsuresh in #520
- Automatically generate system_meta.json if not found by @anandhu-eng in #523
- Add cuda information to system meta by @anandhu-eng in #526
- Improvements to github actions by @arjunsuresh in #527
- Fix README generation in MLPerf inference submission by @arjunsuresh in #529
- Avoid creation of empty model_mapping.json by @arjunsuresh in #530
- Enhancements to Docker for Multi-User Setups by @arjunsuresh in #539
- Enable update mlperf inference measurements.json file from existing file by @arjunsuresh in #540
- Github action updates by @arjunsuresh in #548
- Enable creating test dataset from main mixtral dataset by @anandhu-eng in #550
- Added RTX4090x1 configs by @arjunsuresh in #551
- change in variation name by @anandhu-eng in #552
- Fix skip model for MLPerf inference llama2 and mixtral by @arjunsuresh in #553
- Support min_query_count and max_query_count for mlperf inference by @arjunsuresh in #554
- Remove TEST05 for MLPerf inference by @arjunsuresh in #555
- Fix CM_ML_MODEL_PATH export for mixtral by @arjunsuresh in #556
- Fix the saving of console logs in benchmark-program - fixes #533 by @arjunsuresh in #558
- Fix 3dunet SS latency in configs by @arjunsuresh in #559
- Added sympy dependency for nvidia mlperf inference gptj by @arjunsuresh in #560
- Added new docker image name for Nvidia MLPerf inference LLM models by @arjunsuresh in #562
- Build TRTLLM for Nvidia MLPerf LLM models by @arjunsuresh in #563
- Update default-config.yaml by @arjunsuresh in #564
- fix get_cudnn and get_tensorrt not detecting multi-digit version numbers correctly by @Submandarine in #565
- add gnn dataset download script by @anandhu-eng in #570
- rename mixtral dataset download script by @anandhu-eng in #569
- enable user to submit a result for both closed and open by @anandhu-eng in #535
- use the default sut folder name supplied if cm-sut-json is not there by @anandhu-eng in #537
- Added dependency graph for mlperf-inference runs by @arjunsuresh in #573
- Add get-rocm-devices for AMD GPUs by @Henryfzh in #544
- Add aarch64-Compatible Base Image for MLPerf Inference & Fix TensorRT Version Matching by @Leonard226 in #478
- additional tag for dataset sampling - mixtral by @anandhu-eng in #576
- Update nvidia mlperf inference sdxl configs by @arjunsuresh in #578
- Improve docker detached mode error capture by @arjunsuresh in #579
- Use absolute paths for docker mounts by @arjunsuresh in #580
- Issue #542 - folder structure rearrange - submission generation by @anandhu-eng in #574
- Fixes for SDXL accuracy run by @arjunsuresh in #582
- Fix a typo in benchmark-program by @arjunsuresh in #583
- _cm.json -> _cm.yaml by @arjunsuresh in #584
- Sync <- Mlperf inference for November release by @arjunsuresh in #585
- Support for platform details in submission generation by @anandhu-eng in #581
- Add remaining gh actions for cm submission generation by @anandhu-eng in #587
## New Contributors
- @Oseltamivir made their first contribution in #426
- @Submandarine made their first contribution in #565
- @Henryfzh made their first contribution in #544
- @Leonard226 made their first contribution in #478
**Full Changelog**: r20241005a...cm4mlperf-v2.3.5