Reserving CPU resource in CPU inference #11455

Triggered via pull request November 13, 2024 08:21
Status: Success
Total duration: 58m 37s
Artifacts: 12

ubuntu_24.yml

on: pull_request
OpenVINO tokenizers extension / OpenVINO tokenizers extension    4m 34s
Debian Packages / Debian Packages    2m 37s
Samples / Samples    6m 2s
Python unit tests / Python unit tests    20m 2s
PyTorch Layer Tests / PyTorch Layer Tests    26m 31s
C++ unit tests / C++ unit tests    19m 35s
TensorFlow Layer Tests / TensorFlow Layer Tests    10m 45s
ci/gha_overall_status_ubuntu_24    0s
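
The jobs above come from the ubuntu_24.yml workflow, triggered on pull_request. A minimal sketch of what that trigger and job layout might look like is shown below; the job names are taken from this run, while the reusable-workflow paths and the final status step are assumptions, not the actual workflow file:

    # Hypothetical sketch only: paths and step contents are assumed,
    # job names are copied from the run above.
    name: Linux (Ubuntu 24.04)
    on: pull_request
    jobs:
      openvino_tokenizers:
        name: OpenVINO tokenizers extension
        uses: ./.github/workflows/job_tokenizers.yml          # assumed path
      python_unit_tests:
        name: Python unit tests
        uses: ./.github/workflows/job_python_unit_tests.yml   # assumed path
      overall_status:
        name: ci/gha_overall_status_ubuntu_24
        needs: [openvino_tokenizers, python_unit_tests]
        runs-on: ubuntu-latest
        steps:
          - run: echo "All required jobs finished"            # placeholder status check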

Artifacts

Produced during runtime
Name                                  Size
build_logs                            44.7 KB
openvino_debian_packages              53.2 MB
openvino_developer_package            28.5 MB
openvino_js_package                   76.9 MB
openvino_package                      53.7 MB
openvino_tests                        178 MB
openvino_tokenizers_wheel             14 MB
openvino_wheels                       54.7 MB
test-results-cpp                      1.26 MB
test-results-python                   131 KB
test-results-python-pytorch-layers    230 KB
test-results-python-tf-layers         79.4 KB