Cherrypicking #31837 (#31904)
* Increase retry backoff for Storage API batch

* longer waits for quota error only

* cleanup

* add to CHANGES.md

* no need for quota backoff. just increase allowed retries

* cleanup
ahmedabu98 authored Jul 16, 2024
1 parent fb96d6f commit dea2623
Showing 2 changed files with 3 additions and 2 deletions.
CHANGES.md — 2 additions, 1 deletion

```diff
@@ -68,6 +68,7 @@

 * Multiple RunInference instances can now share the same model instance by setting the model_identifier parameter (Python) ([#31665](https://github.com/apache/beam/issues/31665)).
 * Added options to control the number of Storage API multiplexing connections ([#31721](https://github.com/apache/beam/pull/31721))
+* [BigQueryIO] Better handling for batch Storage Write API when it hits AppendRows throughput quota ([#31837](https://github.com/apache/beam/pull/31837))
 * [IcebergIO] All specified catalog properties are passed through to the connector ([#31726](https://github.com/apache/beam/pull/31726))
 * Removed a 3rd party LGPL dependency from the Go SDK ([#31765](https://github.com/apache/beam/issues/31765)).
 * Support for MapState and SetState when using Dataflow Runner v1 with Streaming Engine (Java) ([[#18200](https://github.com/apache/beam/issues/18200)])
@@ -83,7 +84,7 @@

 ## Bugfixes

-* Fixed a bug in BigQueryIO batch Storage Write API that frequently exhausted concurrent connections quota ([#31710](https://github.com/apache/beam/pull/31710))
+* [BigQueryIO] Fixed a bug in batch Storage Write API that frequently exhausted concurrent connections quota ([#31710](https://github.com/apache/beam/pull/31710))
 * Fixed X (Java/Python) ([#X](https://github.com/apache/beam/issues/X)).

 ## Security Fixes
```
Second changed file — 1 addition, 1 deletion

```diff
@@ -771,7 +771,7 @@ long flush(
       invalidateWriteStream();
       allowedRetry = 5;
     } else {
-      allowedRetry = 10;
+      allowedRetry = 35;
     }

     // Maximum number of times we retry before we fail the work item.
```
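The one-line change above raises the retry budget for failures that do not invalidate the write stream from 10 to 35, which per the commit messages is how quota errors (e.g. AppendRows throughput) are absorbed without any dedicated quota backoff. A minimal, hypothetical sketch of that branch — `RetryBudgetSketch` and `allowedRetries` are illustrative names, not Beam's actual API:

```java
// Hypothetical sketch of the retry-budget branch changed in this commit.
// Class and method names are illustrative; they are not Beam's actual API.
public class RetryBudgetSketch {

  /** Picks the retry budget for a failed append. */
  static int allowedRetries(boolean streamInvalidated) {
    if (streamInvalidated) {
      // The write stream was torn down and must be recreated:
      // only a few retries are allowed (5, unchanged by this commit).
      return 5;
    }
    // Transient failures such as throughput-quota errors keep the
    // stream; this commit raises their retry cap from 10 to 35 so
    // quota errors can ride out retries instead of failing the work item.
    return 35;
  }

  public static void main(String[] args) {
    System.out.println(allowedRetries(true));   // prints 5
    System.out.println(allowedRetries(false));  // prints 35
  }
}
```

The design choice recorded in the commit log ("no need for quota backoff. just increase allowed retries") keeps the existing retry loop untouched and only widens its budget for the non-invalidating case.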