When using the default AsyncOffsetTracker (flush.synchronously=false), if the topic contains a lot of null values, the consumer gets paused indefinitely.
Logs:

```
[2023-10-10 13:20:52,259] TRACE Ignoring record from topic=real-time-analytics.package-tree.state partition=5 offset=2866040 with null value. (io.confluent.connect.elasticsearch.ElasticsearchSinkTask)
[2023-10-10 13:20:52,259] DEBUG Pausing all partitions (io.confluent.connect.elasticsearch.ElasticsearchSinkTask)
[2023-10-10 13:20:52,361] DEBUG Putting 0 records to Elasticsearch. (io.confluent.connect.elasticsearch.ElasticsearchSinkTask)
[2023-10-10 13:20:52,467] DEBUG Putting 0 records to Elasticsearch. (io.confluent.connect.elasticsearch.ElasticsearchSinkTask)
[2023-10-10 13:20:52,571] DEBUG Putting 0 records to Elasticsearch. (io.confluent.connect.elasticsearch.ElasticsearchSinkTask)
```
Looking at the code, I think this happens because AsyncOffsetTracker.updateOffsets() is never called. It is supposed to be called at the end of a bulk request, but since no data is sent to Elasticsearch, no bulk ever completes, and so consumption is never resumed. This behavior is hard to track down: from Kafka's point of view the consumer is still there, and no error appears in the kafka-connect logs unless the log level is set to TRACE.
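A self-contained toy model of the control flow I suspect is at fault, matching the logs above. All class, method, and constant names here are simplified stand-ins for illustration, not the connector's actual internals; only the shape of the deadlock matters:

```java
import java.util.ArrayDeque;
import java.util.Arrays;
import java.util.List;
import java.util.Queue;

// Toy model of the suspected pause/never-resume flow (names hypothetical).
public class PauseDeadlockSketch {
    static final int MAX_BUFFERED_RECORDS = 3; // stand-in for max.buffered.records

    final Queue<Long> pendingOffsets = new ArrayDeque<>(); // AsyncOffsetTracker stand-in
    final Queue<String> bulkBuffer = new ArrayDeque<>();
    boolean paused = false;

    // Models put(): every record gets a pending-offset entry so its offset can
    // be committed later, but null values are ignored and never reach the bulk buffer.
    void put(List<String> values, long baseOffset) {
        long offset = baseOffset;
        for (String value : values) {
            pendingOffsets.add(offset++);
            if (value == null) continue; // tombstone: logged at TRACE and dropped
            bulkBuffer.add(value);
        }
        if (pendingOffsets.size() > MAX_BUFFERED_RECORDS && !paused) {
            paused = true;
            System.out.println("Pausing all partitions"); // backpressure kicks in
        }
    }

    // Models the bulk-completion path: the only place pending offsets are pruned
    // (updateOffsets()) and the partitions resumed. With only tombstones, the bulk
    // buffer stays empty, no bulk is ever sent, and this never fires.
    void maybeFlushBulk() {
        if (bulkBuffer.isEmpty()) return; // nothing to send -> no completion callback
        bulkBuffer.clear();               // "bulk succeeded"
        pendingOffsets.clear();           // updateOffsets() stand-in
        paused = false;
        System.out.println("Resuming all partitions");
    }

    public static void main(String[] args) {
        PauseDeadlockSketch task = new PauseDeadlockSketch();
        task.put(Arrays.asList(null, null, null, null), 0); // all tombstones
        task.maybeFlushBulk(); // never resumes: pendingOffsets stays full
        System.out.println("paused=" + task.paused + ", pending=" + task.pendingOffsets.size());
        // prints: Pausing all partitions / paused=true, pending=4
    }
}
```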
As a workaround, switching to SyncOffsetTracker (flush.synchronously=true) solves the issue.
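For reference, a minimal sketch of the workaround in the connector config (the name, topic, and connection settings are placeholders; only the last line is the relevant change):

```properties
# Placeholder sink config; flush.synchronously=true is the workaround.
name=my-elasticsearch-sink
connector.class=io.confluent.connect.elasticsearch.ElasticsearchSinkConnector
topics=real-time-analytics.package-tree.state
connection.url=http://elasticsearch:9200
behavior.on.null.values=ignore
# Use SyncOffsetTracker instead of the default AsyncOffsetTracker
flush.synchronously=true
```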