In the cloud world, statically defined bulkheads are not a great fit for limiting concurrency. We should add dynamic (adaptive) concurrency limiters, modeled after Netflix's https://github.com/Netflix/concurrency-limits.
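To make "adaptive concurrency limiter" concrete, here is a minimal sketch of the core idea (not the Netflix implementation, just an illustration): grow the limit additively while calls complete quickly, and shrink it multiplicatively when calls are dropped or slow. The starting limit, upper bound, and latency threshold below are assumptions for the sketch.

```java
// Minimal AIMD-style adaptive concurrency limit (illustrative only).
public class AimdLimit {
    private volatile int limit = 10;                  // current concurrency limit (assumed start)
    private static final int MAX_LIMIT = 200;         // upper bound, assumed for the sketch
    private static final long SLOW_CALL_NANOS = 200_000_000L; // 200 ms threshold, assumed

    // Called after each completed invocation with its duration and outcome.
    public synchronized void onSample(long durationNanos, boolean dropped) {
        if (dropped || durationNanos > SLOW_CALL_NANOS) {
            // Multiplicative decrease: back off quickly under pressure.
            limit = Math.max(1, (int) (limit * 0.9));
        } else {
            // Additive increase: slowly probe for more capacity.
            limit = Math.min(MAX_LIMIT, limit + 1);
        }
    }

    public int currentLimit() {
        return limit;
    }
}
```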
I'm thinking "client-side" concurrency limiter would be a simple CDI interceptor, just like @Bulkhead (we could probably call it @AdaptiveBulkhead?). And we could also add a "server-side" concurrency limiter, which would probably be a few config properties and apply to the entire application. I didn't have much time to look into this yet, so I'm filing this issue just to gather some feedback.
We discussed this on today's call with @Ladicek, @Azquelt and @Joseph-Cass. Actually, we could keep the existing annotation but have the runtime adjust its configuration dynamically. For instance, if a method takes a long time to finish, the bulkhead size should be shrunk. A similar approach could apply to other annotations such as @Retry: if a previous execution was configured to retry 10 times and none of those attempts succeeded, the effective maxRetries should be reduced for the next execution.
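A sketch of what such a runtime-side feedback loop might look like, assuming the runtime tracks per-method statistics and can override the effective annotation values (the class and method names below are illustrative, not an existing API):

```java
// Illustrative feedback loop for dynamically adjusted @Bulkhead / @Retry values.
public class AdaptiveConfig {
    private volatile int effectiveBulkhead;
    private volatile int effectiveMaxRetries;

    public AdaptiveConfig(int bulkheadFromAnnotation, int maxRetriesFromAnnotation) {
        this.effectiveBulkhead = bulkheadFromAnnotation;
        this.effectiveMaxRetries = maxRetriesFromAnnotation;
    }

    // If the method is getting noticeably slower, shrink the bulkhead so less work piles up.
    public void onLatencySample(long observedMillis, long expectedMillis) {
        if (observedMillis > 2 * expectedMillis && effectiveBulkhead > 1) {
            effectiveBulkhead--;
        }
    }

    // If all retries of the previous invocation failed, retrying 10 times again
    // is unlikely to help, so lower the retry budget for the next execution.
    public void onRetriesExhausted() {
        effectiveMaxRetries = Math.max(1, effectiveMaxRetries / 2);
    }

    public int bulkhead()   { return effectiveBulkhead; }
    public int maxRetries() { return effectiveMaxRetries; }
}
```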