One of our customers observes that one of the actors of the Kinesis sink is stuck while the other actors of the same sink are working fine. Triggering a manual recovery resolves the issue.
There are no relevant ERROR/WARN logs from the Kinesis sink, so I suspect the sink writer is stuck in the `put_records` call from the Kinesis client. After digging deeper, I found that we are using the default timeout configs from the AWS SDK: by default only the connect timeout is set, while the read/operation/operation_attempt timeouts are not.
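For reference, a minimal sketch of how these timeouts could be set explicitly when building the client, assuming a recent `aws-config`/`aws-sdk-kinesis`; the durations below are placeholders, not recommendations:

```rust
use std::time::Duration;

use aws_config::BehaviorVersion;
use aws_smithy_types::timeout::TimeoutConfig;

// Sketch only, not the actual connector code: explicitly configure the
// SDK-side timeouts that are left unset by default.
async fn build_kinesis_client() -> aws_sdk_kinesis::Client {
    let timeout_config = TimeoutConfig::builder()
        .connect_timeout(Duration::from_secs(5))
        .read_timeout(Duration::from_secs(10))
        // Bounds each individual request attempt.
        .operation_attempt_timeout(Duration::from_secs(15))
        // Bounds the whole operation across all retry attempts.
        .operation_timeout(Duration::from_secs(60))
        .build();

    let sdk_config = aws_config::defaults(BehaviorVersion::latest())
        .timeout_config(timeout_config)
        .load()
        .await;

    aws_sdk_kinesis::Client::new(&sdk_config)
}
```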
I think we can either set a timeout on the Kinesis client or wrap the `put_records` call with a tokio timeout to prevent the Kinesis writer from getting stuck indefinitely.
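A rough sketch of the second option, bounding the call on our side with `tokio::time::timeout`; the stream name, the 30s budget, and the error handling here are illustrative assumptions, not the actual connector code:

```rust
use std::time::Duration;

use aws_sdk_kinesis::types::PutRecordsRequestEntry;

// Sketch: wrap the put_records future in a tokio timeout so the writer
// surfaces an error instead of hanging forever on a stuck request.
async fn put_records_with_deadline(
    client: &aws_sdk_kinesis::Client,
    entries: Vec<PutRecordsRequestEntry>,
) -> anyhow::Result<()> {
    let fut = client
        .put_records()
        .stream_name("my-stream") // placeholder stream name
        .set_records(Some(entries))
        .send();

    match tokio::time::timeout(Duration::from_secs(30), fut).await {
        Ok(resp) => {
            let resp = resp?;
            // Kinesis reports partial failures per record, not as an error.
            if resp.failed_record_count().unwrap_or(0) > 0 {
                anyhow::bail!("some records failed and should be retried");
            }
            Ok(())
        }
        // The future did not resolve within the budget: return an error so
        // the actor can retry or trigger recovery instead of staying stuck.
        Err(_elapsed) => anyhow::bail!("put_records timed out"),
    }
}
```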