Support using different streams in one infer request with latency mode #12397

Workflow: ubuntu_24.yml (on: pull_request)
Triggered via pull request, November 22, 2024 02:46
Status: Failure
Total duration: 1h 25m 24s
Artifacts: 11
| Job | Duration |
|---|---|
| Smart_CI | 27s |
| OpenVINO tokenizers extension | 4m 3s |
| Samples | 7m 6s |
| Python unit tests | 5m 6s |
| PyTorch Layer Tests | 31m 35s |
| C++ unit tests | 4m 10s |
| TensorFlow Layer Tests | 22m 21s |
| ci/gha_overall_status_ubuntu_24 | 0s |
Annotations: 4 errors and 1 warning

Errors:
- C++ unit tests: Process completed with exit code 1.
- Python unit tests: Process completed with exit code 139.
- Samples: Process completed with exit code 1.
- ci/gha_overall_status_ubuntu_24: Process completed with exit code 1.

Warning:
- Python unit tests: No files were found with the provided path: /__w/openvino/openvino/install/tests/TEST*.html /__w/openvino/openvino/install/tests/TEST*.xml. No artifacts will be uploaded.
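An aside on the Python unit tests error above: exit code 139 follows the common POSIX shell convention of 128 + signal number, i.e. signal 11, SIGSEGV — the process most likely segfaulted rather than failing an ordinary test assertion. A minimal sketch of decoding such codes (the helper name is illustrative, not part of this CI run):

```python
import signal

# Shell convention: a process killed by signal N exits with status 128 + N.
# Exit code 139 therefore maps to signal 11, SIGSEGV (segmentation fault).
def signal_from_exit_code(code: int) -> str:
    """Map a 128+N shell exit code back to its signal name, if any."""
    if code > 128:
        return signal.Signals(code - 128).name
    return "not a signal exit"

print(signal_from_exit_code(139))  # SIGSEGV
print(signal_from_exit_code(1))    # not a signal exit
```

By contrast, the plain exit code 1 reported by the other jobs is the conventional "tests failed" status, not a crash.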
Artifacts
Produced during runtime

| Name | Size |
|---|---|
| build_logs | 132 KB |
| openvino_debian_packages | 53.3 MB |
| openvino_developer_package | 28.5 MB |
| openvino_js_package | 77 MB |
| openvino_package | 53.7 MB |
| openvino_tests | 178 MB |
| openvino_tokenizers_wheel | 14 MB |
| openvino_wheels | 54.8 MB |
| test-results-cpp | 453 KB |
| test-results-python-pytorch-layers | 234 KB |
| test-results-python-tf-layers | 94 KB |