
Different streams for different infer request in latency mode #10952

Triggered via pull request on November 22, 2024, 02:46
Status: Failure
Total duration: 2h 8m 45s
Artifacts: 10

Workflow: windows_vs2019_release.yml (on: pull_request)
Jobs

OpenVINO tokenizers extension / OpenVINO tokenizers extension: 7m 20s
Samples: 7m 10s
Python unit tests: 9m 0s
Pytorch Layer Tests / PyTorch Layer Tests: 22m 7s
C++ unit tests / C++ unit tests: 4m 48s
CPU functional tests: 38m 20s
TensorFlow Layer Tests / TensorFlow Layer Tests: 32m 35s
ci/gha_overall_status_windows: 0s

Annotations

6 errors
Build / Build: Process completed with exit code 1.
C++ unit tests / C++ unit tests: Process completed with exit code 1.
Samples: Process completed with exit code 1.
Python unit tests: Process completed with exit code 1.
CPU functional tests: Process completed with exit code 1.
ci/gha_overall_status_windows: Process completed with exit code 1.

Artifacts

Produced during runtime
Name                                  Size
openvino_js_package                   33 MB
openvino_package                      39.3 MB
openvino_tests                        106 MB
openvino_tokenizers_wheel             13.8 MB
openvino_wheels                       39.4 MB
test-results-cpp                      461 KB
test-results-functional-cpu           10.5 MB
test-results-python                   54.1 KB
test-results-python-pytorch-layers    114 KB
test-results-python-tf-layers         99.5 KB