Performance Regression or Improvement: Pytorch image classification on 50k images of size 224 x 224 with resnet 152 with Tesla T4 GPU:mean_load_model_latency_milli_secs #27077
Comments
The actually detected anomaly is at June 8th, 2023, according to the metadata published. There is a bug in the UI which doesn't point to the right anomaly.
I see variability in Batch Size and Batch Latency in the GPU flavor of the benchmark, see: http://metrics.beam.apache.org/d/ZpS8Uf44z/python-ml-runinference-benchmarks?from=now-90d&to=now&orgId=1. Would increasing batch sizes increase the latency per batch? If so, we may need to compute latency per element or fix the batch size.
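The normalization suggested above can be sketched as follows. This is a minimal illustration with hypothetical numbers, not the benchmark's actual code: dividing each recorded batch latency by its batch size yields a per-element latency that is comparable across runs even when the dynamic batch size varies.

```python
# Hypothetical sample data: per-batch latencies (ms) and the batch sizes that
# produced them. When batch size varies between runs, raw batch latency is
# misleading; latency per element factors the batch size out.
batch_latencies_ms = [120.0, 250.0, 65.0]  # hypothetical measurements
batch_sizes = [16, 32, 8]                  # hypothetical batch sizes

per_element_ms = [lat / size for lat, size in zip(batch_latencies_ms, batch_sizes)]
mean_per_element_ms = sum(per_element_ms) / len(per_element_ms)

print(per_element_ms)        # [7.5, 7.8125, 8.125]
print(mean_per_element_ms)   # 7.8125
```

With this normalization the three runs differ by well under 10% per element, even though their raw batch latencies differ by almost 4x.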
Performance change found in the test: Pytorch image classification on 50k images of size 224 x 224 with resnet 152 with Tesla T4 GPU: apache_beam.testing.benchmarks.inference.pytorch_image_classification_benchmarks, for the metric: mean_load_model_latency_milli_secs. For more information on how to triage the alerts, please look at the Triage performance alert issues section of the README.
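The flagged metric, mean_load_model_latency_milli_secs, is a mean of wall-clock model-load times. A minimal sketch of how such a metric can be computed is below; the `load_model` stub is a placeholder for the benchmark's actual resnet 152 load, and the repetition count is arbitrary.

```python
import time

def load_model():
    # Placeholder for the real model load (e.g. constructing resnet 152 and
    # loading its weights in the actual benchmark). Sleeps 10 ms to simulate work.
    time.sleep(0.01)

# Time several loads with a monotonic clock and average them in milliseconds.
load_latencies_ms = []
for _ in range(3):
    start = time.monotonic()
    load_model()
    load_latencies_ms.append((time.monotonic() - start) * 1000.0)

mean_load_model_latency_milli_secs = sum(load_latencies_ms) / len(load_latencies_ms)
print(f"mean load latency: {mean_load_model_latency_milli_secs:.2f} ms")
```

Using a monotonic clock matters here: wall-clock (`time.time`) can jump backwards under NTP adjustments, which would corrupt latency measurements.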