Separate retry settings for different exception types on the same method #436
Comments
I'd be reluctant to do this (just like I'm reluctant to support multiple […]). Also, as you acknowledge, there's a problem selecting the correct handler. Java's [try / catch picks the first matching catch clause, in declaration order].

On the other hand, this is like the 2nd or 3rd time someone asks for [something like] this. In retrospect, I guess it must have happened after we added the ability to skip/ignore certain fault tolerance strategies for certain exceptions.

So not really sure what I think about it...
Try / catch comparison + clarification: covering more than one case where a retry is needed (service not available + service returned an error code) with different retry intervals is no less relevant than just the one case that is covered with the annotation. The workaround leaves some retry use cases of the same call nice and clean, with the safety of an identical context and state between retries, while the rest of the use cases become more error-prone, harder to read and, most importantly, inconsistent.

Exception matching: [as with catch clauses, the first @Retry whose retryOn matches the thrown exception should be used].
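For reference, a small self-contained Java sketch of the try / catch behaviour referenced above: catch clauses are checked in declaration order and the first matching one wins, which is the same first-match rule being discussed for multiple @Retry annotations. The exception types are hypothetical.

```java
// Hypothetical exception types used only for illustration.
class RateLimitException extends RuntimeException {}
class TransientException extends RuntimeException {}

class CatchOrderExample {
    // Java checks catch clauses in declaration order; the first matching one wins.
    static String handle(RuntimeException failure) {
        try {
            throw failure;
        } catch (RateLimitException e) {
            return "back off for minutes";
        } catch (TransientException e) {
            return "retry after seconds";
        } catch (RuntimeException e) {
            return "do not retry";       // catch-all must come last
        }
    }

    public static void main(String[] args) {
        System.out.println(handle(new RateLimitException()));   // back off for minutes
        System.out.println(handle(new TransientException()));   // retry after seconds
    }
}
```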
I thought about this a little and I think repeatable [@Retry has a problem].

Imagine a service guarded with [@Retry]. Say I declare it like this:

@Retry(delay = 1, delayUnit = ChronoUnit.MINUTES, retryOn = RateLimitException.class, abortOn = BusinessException.class)
@Retry(delay = 5, delayUnit = ChronoUnit.SECONDS, retryOn = TransientException.class, abortOn = BusinessException.class)
Result myService(Parameters params) {
...
}

Sounds like something you're after. However, [it is not obvious what the combined semantics would be: which delay applies when an exception matches both retryOn sets, and how many attempts happen in total?]

You might say that this problem already exists if I declare a [@Retry method that calls another @Retry method]:

@Retry(delay = 1, delayUnit = ChronoUnit.MINUTES, retryOn = RateLimitException.class, abortOn = BusinessException.class)
Result myService(Parameters params) {
// note that for this to work, self-interception must be supported (it is in Quarkus,
// but only for non-private methods, so `doMyService` must not be `private`)
// otherwise, I'd have to use self-injection and call this method on the injected instance of this class
return doMyService(params);
}
@Retry(delay = 5, delayUnit = ChronoUnit.SECONDS, retryOn = TransientException.class, abortOn = BusinessException.class)
Result doMyService(Parameters params) {
...
}

And indeed it is true, the same problem exists. But here, it's much easier to understand what actually happens, because these are 2 different retries, one nested inside the other. In fact, in this setup, having different [delays, retry counts and abortOn sets on the two retries is fairly easy to reason about]. With repeatable [@Retry on a single method, it is much harder to tell what the resulting behaviour would be].
Annotating nested methods:
Multiple Retry annotations would probably not have worked: looking at the current implementation, we would have to store all the retry alternatives in the annotation collection, and since we are not yet aware of the exception, we cannot populate all the attributes like with a single Retry. So we would have to do that at runtime, and we would also have to add the selection of the correct [Retry based on the thrown exception].

So it feels like with the current implementation, which was clearly created with a single Retry annotation in mind, it might be too much effort to add this feature.
That is correct, but it isn't set in stone. What I'm more interested in is... semantics. Imagine you could add multiple [@Retry annotations to one method: the combined behaviour would have to be defined very carefully].

At this point, I think what you actually want isn't repeatable [@Retry at all, but a smarter backoff strategy: exponential or Fibonacci backoff, or a custom one that can take the causing exception into account].
Interesting proposition, and yes, in our case an exponential delay should cover both of our cases.

I'm generally in favour of explicitly declaring different behaviours for different scenarios, like having some form of two separate retry configurations for the above cases, but I admit that yours would be an elegant solution to my problem, while also adding a feature that is useful in other ways as well.
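As an illustration of that, a declaration along these lines is roughly what an exponential backoff could look like. The @ExponentialBackoff annotation and its factor attribute are assumptions here (SmallRye Fault Tolerance has backoff annotations of this kind in its own API, but the exact names and attributes in this sketch should not be relied on), and the exception and result types are the hypothetical ones from the earlier snippets.

```java
// Sketch only: @ExponentialBackoff and its attributes are assumptions,
// not part of the MicroProfile Fault Tolerance spec.
@Retry(maxRetries = 8,
       delay = 5, delayUnit = ChronoUnit.SECONDS,
       retryOn = {RateLimitException.class, TransientException.class},
       abortOn = BusinessException.class)
@ExponentialBackoff(factor = 4)
Result myService(Parameters params) {
    // delays grow roughly 5 s, 20 s, 80 s, ... so early attempts cover the
    // transient case and later attempts cover the rate-limited case
    ...
}
```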
Thanks! I'd like to point out that while exponential/Fibonacci backoff would generally be an OK solution for your issue, my proposal (giving a custom backoff strategy access to the causing exception) would also let you implement the behavior you described originally. Now, just need to find the time to implement it :-)
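A hypothetical sketch of what such an exception-aware strategy might look like. The CustomBackoffStrategy interface and its init / nextDelayInMillis methods are assumptions about the proposal, not a published API, and RateLimitException is again a made-up exception type.

```java
// Hypothetical SPI for the proposed feature; the interface name and method
// signatures are assumptions, not a published API.
interface CustomBackoffStrategy {
    void init(long initialDelayInMillis);
    long nextDelayInMillis(Throwable lastFailure);
}

// Hypothetical exception type reused from the discussion above.
class RateLimitException extends RuntimeException {}

// Picks the delay based on the exception that triggered the retry:
// long pauses when the remote service throttles us, short ones for transient failures.
class ExceptionAwareBackoff implements CustomBackoffStrategy {
    private long initialDelayInMillis;

    @Override
    public void init(long initialDelayInMillis) {
        this.initialDelayInMillis = initialDelayInMillis;
    }

    @Override
    public long nextDelayInMillis(Throwable lastFailure) {
        if (lastFailure instanceof RateLimitException) {
            return 60_000L;               // rate limited: back off for a minute
        }
        return initialDelayInMillis;      // transient: keep the short delay from @Retry
    }
}
```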
Hmm, indeed, it could be more flexible than I first imagined :)
Problem summary
Currently only one @Retry annotation can be placed on each method.
Although an annotation can filter for multiple exceptions, they can only be handled with one shared set of parameters.
E.g. it would be useful to handle an exception that signals that a remote service is not available with a longer retry delay,
and another exception that is thrown for a race condition with a much shorter one.
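To make the limitation concrete, here is what a single @Retry allows today: both exceptions can be listed in retryOn, but they necessarily share one delay. The exception, parameter and result types are hypothetical.

```java
// What a single @Retry allows today: both exceptions can be listed in retryOn,
// but they are forced to share one delay. ServiceUnavailableException,
// RaceConditionException, Result and Parameters are hypothetical names.
@Retry(maxRetries = 5,
       delay = 30, delayUnit = ChronoUnit.SECONDS,   // has to fit *both* cases
       retryOn = {ServiceUnavailableException.class, RaceConditionException.class})
Result callRemote(Parameters params) {
    // a race condition could safely be retried after milliseconds,
    // but it has to wait the same 30 seconds as the "service not available" case
    ...
}
```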
Workaround
In our case it was possible to extract the relevant section to a separate method, receiving its own @Retry annotation,
but it would improve readability to have both retries next to each other on the original method, instead of hiding one of them away.
Also, there might be cases where two or more exceptions are thrown from the same (potentially third-party) library and require different handling; there this workaround would fall short.
Possible solutions
Allow multiple (repeatable) @Retry annotations on the same method, each matching a different set of exceptions with its own parameters.
Possible caveats
An exception could match multiple Retry annotations; in this case only the first matching Retry should be used.
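A rough sketch of that first-match selection, assuming the retry alternatives are stored and evaluated at runtime as described in the comments above. The RetryConfig and RetrySelector names and their shape are purely illustrative, not the actual implementation.

```java
import java.time.temporal.ChronoUnit;
import java.util.List;
import java.util.Optional;

// Purely illustrative model of one "retry alternative"; the real implementation
// stores annotation metadata differently, and these names are assumptions.
record RetryConfig(long delay, ChronoUnit delayUnit,
                   List<Class<? extends Throwable>> retryOn,
                   List<Class<? extends Throwable>> abortOn) {

    boolean matches(Throwable failure) {
        // abortOn wins over retryOn, as in the single-@Retry semantics
        if (abortOn.stream().anyMatch(type -> type.isInstance(failure))) {
            return false;
        }
        return retryOn.stream().anyMatch(type -> type.isInstance(failure));
    }
}

class RetrySelector {
    // Pick the first declared alternative that matches the thrown exception,
    // mirroring the "only the first matching Retry should be used" caveat.
    static Optional<RetryConfig> select(List<RetryConfig> alternatives, Throwable failure) {
        return alternatives.stream()
                .filter(config -> config.matches(failure))
                .findFirst();
    }
}
```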