Access the low level Kafka consumer / producer from the Kafka connector #955
Comments
this is badly needed for any kind of error handling on connectivity issues :( |
How would it help you to handle connectivity issues? |
(just to be sure I design the API correctly) |
hi, please imagine the following example: Kafka -> Quarkus App with SmallRye -> Database. So in this example, the application reads from a Kafka topic, transforms the data, and writes the result into a database.
If it were possible, for example, to inject a KafkaConsumer into a … For our use case specifically, it would actually be perfect if there were a higher-level form of error handling which just means "try again later" (after x ms, for example, or maybe even on fulfilling a given predicate which would be checked periodically) - however, that wouldn't be as flexible for general usage. |
In the 3.x version, the consumption will be automatically paused if there are no requests. Also, you can use Fault Tolerance to add a retry (I'm not sure about the backoff - @Ladicek is this supported?).

```java
Multi<X> processAndRetryOnFailure(Multi<X> multi) {
    // For each item, write and retry on failure (use concatenate to preserve ordering if needed)
    return multi
            .onItem().transformToUniAndConcatenate(x ->
                    // Write to DB and retry with a backoff
                    Uni.createFrom().item(x)
                            .onItem().transformToUni(item -> writeInDatabase(item))
                            .onFailure().retry().withBackOff(Duration.ofSeconds(10)).atMost(100));
}
```

So, with the next version, when a failure happens, it will stop requesting, and so pause the consumption. |
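For illustration, such a method can be plugged into a channel as a stream-level processor. A minimal sketch, assuming hypothetical channel names ("prices-in", "prices-out"), a placeholder Price payload type, and a writeInDatabase method that returns a Uni:

```java
import java.time.Duration;

import javax.enterprise.context.ApplicationScoped;

import org.eclipse.microprofile.reactive.messaging.Incoming;
import org.eclipse.microprofile.reactive.messaging.Outgoing;

import io.smallrye.mutiny.Multi;
import io.smallrye.mutiny.Uni;

@ApplicationScoped
public class PriceProcessor {

    // Stream-level processor: the whole channel is handed over as a Multi.
    // While the retry back-off is in progress, nothing is requested upstream,
    // which is what lets the connector pause the Kafka consumption.
    @Incoming("prices-in")
    @Outgoing("prices-out")
    public Multi<Price> process(Multi<Price> prices) {
        return prices
                .onItem().transformToUniAndConcatenate(price ->
                        Uni.createFrom().item(price)
                                .onItem().transformToUni(this::writeInDatabase)
                                .onFailure().retry().withBackOff(Duration.ofSeconds(10)).atMost(100));
    }

    // Placeholder for the real database write; it must return a Uni for the retry chain above.
    private Uni<Price> writeInDatabase(Price price) {
        return Uni.createFrom().item(price);
    }

    // Placeholder payload type.
    public static class Price {
    }
}
```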
Hi, thank you for your answer. This does sound very promising; however, I still have some questions concerning your code example: |
1. Yes; however, as the offset is not committed, if it crashes, it will re-process it.
2. Only the current batch (the number of records contained in a batch is configurable). Once the number of stored messages is greater than the number of requests, it pauses the consumption.
3. Yes, if post-acknowledgement is used. |
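On that last point, the acknowledgement strategy can be made explicit on the consuming method; a minimal sketch, with the channel name "prices-in" assumed:

```java
import javax.enterprise.context.ApplicationScoped;

import org.eclipse.microprofile.reactive.messaging.Acknowledgment;
import org.eclipse.microprofile.reactive.messaging.Incoming;

@ApplicationScoped
public class PriceConsumer {

    // POST_PROCESSING: the incoming message is only acknowledged (and the offset
    // eventually committed, depending on the connector's commit strategy) once
    // this method has returned successfully.
    @Incoming("prices-in")
    @Acknowledgment(Acknowledgment.Strategy.POST_PROCESSING)
    public void consume(String payload) {
        // process the payload
    }
}
```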
Unrelated, but since @cescoffier asked -- no, MicroProfile Fault Tolerance's @Retry does not support a backoff between attempts. |
@Ladicek isn't this something we could add as a smallrye only thing? |
I did think about that, yes (mostly because I wanted to show what the API should look like from my perspective, but also because it's useful :-) ). The thing with … |
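For reference, SmallRye Fault Tolerance later added a non-spec companion annotation for exactly this. A sketch of how retry with backoff could look with it (the annotation and its attributes are SmallRye-specific and version-dependent, so treat the exact names as assumptions):

```java
import javax.enterprise.context.ApplicationScoped;

import org.eclipse.microprofile.faulttolerance.Retry;

import io.smallrye.faulttolerance.api.ExponentialBackoff;

@ApplicationScoped
public class DatabaseWriter {

    // @Retry alone retries with a fixed delay; the SmallRye-specific @ExponentialBackoff
    // makes the delay grow between attempts (this is not part of the MicroProfile spec).
    @Retry(maxRetries = 10, delay = 200)
    @ExponentialBackoff(factor = 2, maxDelay = 10_000)
    public void writeInDatabase(String payload) {
        // throwing here triggers another attempt, up to maxRetries
    }
}
```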
Hello again, thank you so much for your ongoing help with all of our questions. We have tried out the mentioned approach today and can report that the proposed solution does work for us. We still have 3 questions though:
Looking forward to reading your response! |
There is an issue about that. Unfortunately, it's not done yet, and this new signature is not supported by the current specification.
You can use |
Alright, thank you again so much for your support :) I think this is a good way to start including error handling, and we are eagerly awaiting version 3.x! |
Hi, I have another case which requires access to the underlying Kafka consumer. I have a "price" topic and several "price-retry-*" topics, and I want to limit the polling interval of the underlying Kafka consumer when consuming from those topics. At the moment, the only solution I see is to use the SmallRye abstractions to consume from the "price" topic and plain consumers for the "price-retry-*" topics. What do you think about making consumer polling configurable? |
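To illustrate the "plain consumers" workaround mentioned here, one common way to hold back records on a retry topic is to pause the assigned partitions until the retry delay has elapsed, using the plain Apache Kafka client. A rough sketch, where the topic name, delay, and process method are assumptions:

```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class RetryTopicPoller {

    // Hypothetical delay before records from the retry topic become eligible again.
    private static final Duration RETRY_DELAY = Duration.ofSeconds(10);

    public void run(Properties config) {
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(config)) {
            consumer.subscribe(List.of("price-retry-10s"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    long readyAt = record.timestamp() + RETRY_DELAY.toMillis();
                    long wait = readyAt - System.currentTimeMillis();
                    while (wait > 0) {
                        // Pause all assigned partitions and keep calling poll() so the
                        // consumer is not considered stuck (max.poll.interval.ms) while
                        // we wait out the remaining delay; paused partitions return no data.
                        consumer.pause(consumer.assignment());
                        consumer.poll(Duration.ofMillis(Math.min(wait, 500)));
                        wait = readyAt - System.currentTimeMillis();
                    }
                    consumer.resume(consumer.assignment());
                    process(record);
                }
                consumer.commitSync();
            }
        }
    }

    private void process(ConsumerRecord<String, String> record) {
        // re-attempt the original operation, e.g. the database write
    }
}
```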
For some low-level use cases, we need access, from the channel, to the corresponding Kafka Consumer / Producer.
Ideally, those need to be retrievable from CDI (directly or via the connector).
This allows, for example, manually pausing a consumer.
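To make the request concrete, below is a sketch of the kind of CDI-exposed service this could look like. The names here (KafkaClientService, getConsumer, getProducer) are illustrative assumptions, not an existing API, and in practice any call into the low-level consumer would have to be marshalled onto its polling thread, since KafkaConsumer is not thread-safe.

```java
import java.util.Collection;

import org.apache.kafka.clients.consumer.Consumer;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.common.TopicPartition;

// Hypothetical connector-provided, CDI-injectable service giving access to the
// low-level clients backing each channel. All names are illustrative.
public interface KafkaClientService {

    // The consumer backing the given incoming channel.
    <K, V> Consumer<K, V> getConsumer(String channel);

    // The producer backing the given outgoing channel.
    <K, V> Producer<K, V> getProducer(String channel);
}

// Example use: manually pausing a channel's consumer, e.g. while a downstream
// resource (database, remote service) is unavailable.
class ChannelController {

    private final KafkaClientService kafka; // would be @Inject-ed in a CDI bean

    ChannelController(KafkaClientService kafka) {
        this.kafka = kafka;
    }

    void pause(String channel) {
        // NOTE: KafkaConsumer is not thread-safe; a real implementation would have to
        // run this on the thread that performs the polling for the channel.
        Consumer<?, ?> consumer = kafka.getConsumer(channel);
        Collection<TopicPartition> assigned = consumer.assignment();
        consumer.pause(assigned);
    }

    void resume(String channel) {
        Consumer<?, ?> consumer = kafka.getConsumer(channel);
        consumer.resume(consumer.assignment());
    }
}
```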