[Bug]: PubsubIO on Flink Runner not acknowledging old messages #32461
Comments
@je-ik Is this something you could provide some help with, or any guideline for fixing this issue?
I have a suspicion that the job needs permission to access Pub/Sub metrics (oldest unacked message age) to work properly; I'm verifying that.
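For anyone wanting to run the same check, below is a minimal sketch of reading the `subscription/oldest_unacked_message_age` metric with the Cloud Monitoring Java client. The project and subscription IDs are placeholders, and whether PubsubIO on the Flink runner actually consults this metric is exactly the open question in this thread; the sketch only shows how to confirm the job's credentials can read it.

```java
import com.google.cloud.monitoring.v3.MetricServiceClient;
import com.google.monitoring.v3.ListTimeSeriesRequest;
import com.google.monitoring.v3.ProjectName;
import com.google.monitoring.v3.TimeInterval;
import com.google.protobuf.util.Timestamps;

public class OldestUnackedAgeCheck {
  public static void main(String[] args) throws Exception {
    // Placeholders: substitute your own project and subscription.
    String projectId = "my-project";
    String subscriptionId = "my-subscription";

    long nowMillis = System.currentTimeMillis();
    try (MetricServiceClient client = MetricServiceClient.create()) {
      ListTimeSeriesRequest request =
          ListTimeSeriesRequest.newBuilder()
              .setName(ProjectName.of(projectId).toString())
              .setFilter(
                  "metric.type=\"pubsub.googleapis.com/subscription/oldest_unacked_message_age\""
                      + " AND resource.labels.subscription_id=\"" + subscriptionId + "\"")
              .setInterval(
                  TimeInterval.newBuilder()
                      // Look at the last 10 minutes of data points.
                      .setStartTime(Timestamps.fromMillis(nowMillis - 10 * 60 * 1000L))
                      .setEndTime(Timestamps.fromMillis(nowMillis))
                      .build())
              .setView(ListTimeSeriesRequest.TimeSeriesView.FULL)
              .build();
      // If the caller lacks permission to list time series, this call fails with a
      // permission-denied error, which is the signal being checked for here.
      client.listTimeSeries(request).iterateAll()
          .forEach(ts -> System.out.println(ts.getPointsList()));
    }
  }
}
```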
I found this assumption quite problematic, and the consequences of a wrong watermark are actually dramatic. If Pub/Sub didn't deliver an old message during the past minute, the estimated watermark will be wrong. If the watermark has already progressed, old messages don't get acked properly and are delivered repeatedly. In summary, I think there are two problems: the watermark estimate can advance incorrectly, and old messages are then redelivered instead of being acknowledged.
What is your ack deadline in Pub/Sub? FlinkRunner can ack messages only after a checkpoint; the default ack deadline is 10 seconds, and your checkpoint interval is aligned with that.
My ACK deadline is 600s, so that shouldn't be the issue.
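For reference on that exchange, the Flink runner's checkpoint interval is configured through `FlinkPipelineOptions` (or the `--checkpointingInterval` flag). A rough sketch follows; the 60-second interval is an illustrative value, not one taken from this thread. The point is that acks happen only after a successful checkpoint, so the interval should sit well below the subscription's ack deadline.

```java
import org.apache.beam.runners.flink.FlinkPipelineOptions;
import org.apache.beam.runners.flink.FlinkRunner;
import org.apache.beam.sdk.options.PipelineOptionsFactory;

public class CheckpointAlignment {
  public static void main(String[] args) {
    FlinkPipelineOptions options =
        PipelineOptionsFactory.fromArgs(args).withValidation().as(FlinkPipelineOptions.class);
    options.setRunner(FlinkRunner.class);
    options.setStreaming(true);
    // Checkpoint every 60s (illustrative value): messages are acked only after a
    // successful checkpoint, so this must stay comfortably below the subscription's
    // ack deadline (600s in this thread) for the acks to arrive in time.
    options.setCheckpointingInterval(60_000L);
    // ... build and run the pipeline with these options ...
  }
}
```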
Adding @Abacn @kennknowles, who might have more context.
What happened?
I'm using "org.apache.beam:beam-runners-flink-1.18:2.57.0".
When I read from Pub/Sub, I found it's not able to acknowledge messages that were generated before the job started. As a result, those messages are sent to Flink repeatedly and the number of unacked messages stays flat.
I also observed an issue similar to #31510: the ack message count can be higher than the message produce rate.
It can be reproduced with the following code, which simply reads from Pub/Sub and prints out a string.
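Since the reproduction snippet itself is not included above, here is a minimal sketch of a pipeline of that shape, assuming `PubsubIO.readStrings()` on the Flink runner. The project and subscription names are placeholders, not the reporter's actual values.

```java
import org.apache.beam.runners.flink.FlinkPipelineOptions;
import org.apache.beam.runners.flink.FlinkRunner;
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.io.gcp.pubsub.PubsubIO;
import org.apache.beam.sdk.options.PipelineOptionsFactory;
import org.apache.beam.sdk.transforms.MapElements;
import org.apache.beam.sdk.values.TypeDescriptors;

public class PubsubPrintPipeline {
  public static void main(String[] args) {
    FlinkPipelineOptions options =
        PipelineOptionsFactory.fromArgs(args).withValidation().as(FlinkPipelineOptions.class);
    options.setRunner(FlinkRunner.class);
    options.setStreaming(true);

    Pipeline pipeline = Pipeline.create(options);
    pipeline
        // Placeholder subscription path; replace with a real subscription.
        .apply("ReadFromPubsub",
            PubsubIO.readStrings()
                .fromSubscription("projects/my-project/subscriptions/my-subscription"))
        // Print each message and pass it through unchanged.
        .apply("PrintMessage",
            MapElements.into(TypeDescriptors.strings())
                .via((String message) -> {
                  System.out.println(message);
                  return message;
                }));
    pipeline.run();
  }
}
```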
Issue Priority
Priority: 2 (default / most bugs should be filed as P2)
Issue Components