Real-time Sync: Add persistency layer #555
Conversation
We ensure that the prost-generated files match those which have been committed
force-pushed from 4f83aa6 to 87bd2bf
force-pushed from 0c56f34 to 25e3d8e
force-pushed from c2d2f82 to 8fce375
force-pushed from eb0400b to be3091d
Sorry for the late review
ACK, needs cargo fmt
force-pushed from be3091d to b2b4b3a
force-pushed from b2b4b3a to fb8d13b
    :description,
    :state
)",
[params, &[(":state", &PaymentState::Created)]]
I see that when restoring the app (with data already pushed to the sync server), the SDK pulls in the swap data, and because these swaps are initially in the Created state they are then tracked in the swap update loop (see attached logs). Should we consider adding a state that marks the swap as needing recovery without adding it to the update loop?
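For reference, the binding in question looks roughly like the following when written as a standalone rusqlite insert. This is only a sketch: the table and column names (`receive_swaps`, `id`, `description`, `state`) are placeholders, and `PaymentState` is assumed to be persisted as an integer.

```rust
use rusqlite::{named_params, Connection, Result};

// Illustrative subset of the SDK's PaymentState; assumed to be stored as an integer.
enum PaymentState {
    Created = 0,
}

// Hypothetical helper: persist a swap pulled from the sync server.
// Because `:state` is bound to PaymentState::Created, these swaps are
// picked up by the update loop as soon as they are persisted.
fn persist_synced_swap(conn: &Connection, id: &str, description: &str) -> Result<usize> {
    conn.execute(
        "INSERT INTO receive_swaps (id, description, state)
         VALUES (:id, :description, :state)",
        named_params! {
            ":id": id,
            ":description": description,
            ":state": PaymentState::Created as i64,
        },
    )
}
```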
We have is_local on the swaps. The main question is whether we need to exclude them entirely from the update loop (and rely only on the onchain data) or add them to the update loop without initiating transactions (claims). What do you think?
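A minimal sketch of the second option (keep remote swaps in the update loop but never initiate a claim for them); `Swap`, `is_local`, and `broadcast_claim_tx` here are stand-ins rather than the SDK's actual types:

```rust
// Stand-in swap record; in the SDK the is_local flag comes from the sync_state table.
struct Swap {
    id: String,
    is_local: bool,
}

fn maybe_claim(swap: &Swap) {
    if !swap.is_local {
        // Remotely-synced swap: keep tracking its state updates,
        // but leave the claim broadcast to the originating instance.
        println!("skipping claim for non-local swap {}", swap.id);
        return;
    }
    broadcast_claim_tx(swap);
}

// Placeholder for the actual claim construction and broadcast.
fn broadcast_claim_tx(_swap: &Swap) {}
```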
Currently they are considered in the update loop (as Roei said, using is_local from the sync_state table), so state updates are pushed outwards (and merged if necessary); then, via either the Boltz status or restore, they're set to the right final state. It may be unnecessary to push outwards; we could let each instance settle on its own final state.
Regarding claiming, I think there was discussion about only letting the initiator broadcast. I think it's the only non-racy way to do it (since the claim checks I've added assume the chain service already has the other instance's tx indexed, which may or may not be true in some edge cases), so if this mitigation is not enough I agree we should just stick to the easy approach.
Thinking about the frequency at which this may happen: the scenario is when the sync-out and sync-in between clients happen very quickly, so that both receive the Boltz lockup status update at the same time and try claiming simultaneously. Not that uncommon if you ask me, so I'm inclined toward the easier approach too.
Another idea could be to track the claims for a few confirmations to see whether the claim info actually matches; that way, if two instances claim, we can still settle on which one got confirmed (via script history). I think that may just be unnecessary overhead though, let me know what you think.
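If the confirmation-tracking idea were pursued, it could look roughly like this; `ChainService`, `get_script_history`, and `TxHistoryEntry` are hypothetical names, not the SDK's actual chain backend API:

```rust
// Hypothetical entry in a script's transaction history (txid plus confirmation height).
struct TxHistoryEntry {
    txid: String,
    height: Option<u32>,
}

// Hypothetical chain backend; the real SDK talks to its own chain service.
trait ChainService {
    fn get_script_history(&self, script: &[u8]) -> Vec<TxHistoryEntry>;
}

/// After broadcasting our claim, inspect the claim script's history: if a
/// different txid is the one that confirmed, another instance won the race
/// and its claim should be adopted instead of ours.
fn competing_confirmed_claim(
    chain: &dyn ChainService,
    claim_script: &[u8],
    our_claim_txid: &str,
) -> Option<String> {
    chain
        .get_script_history(claim_script)
        .into_iter()
        .find(|entry| entry.height.is_some())
        .map(|entry| entry.txid)
        .filter(|txid| txid.as_str() != our_claim_txid)
}
```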
I'm wondering if we first set them to a Recover state, then after the recovery/chain sync, if the swap is ongoing, add it to the update loop for tracking.
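A sketch of that routing, assuming a hypothetical `Recover` variant alongside a simplified set of states:

```rust
// Simplified, illustrative state set; `Recover` is the proposed new variant,
// the rest stand in for the SDK's existing payment states.
#[allow(dead_code)]
enum SwapState {
    Created,
    Recover,
    Pending,
    Complete,
    Failed,
}

/// Decide whether a swap enters the update loop after the recovery/chain sync pass.
fn should_track(state: &SwapState, ongoing_after_recovery: bool) -> bool {
    match state {
        // Swaps pulled from the sync server sit in Recover and are only
        // tracked once recovery confirms they are still ongoing.
        SwapState::Recover => ongoing_after_recovery,
        // Terminal states never need tracking.
        SwapState::Complete | SwapState::Failed => false,
        // Locally-created, in-flight swaps are tracked as before.
        _ => true,
    }
}
```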
I think we should start simple, meaning:
- Not claiming if it is not a local swap
- We can still track all swaps in the swapper web socket loop
- We can let the recovery run in its own interval as it does currently. @dangeross's comment is valid IMO, and it would be nice to run the swaps retrieved from the sync through the recovery process before persisting them.
The best way I see of going about that is to persist them instantly anyway, but, as Ross mentioned, with a Recovered/new state that allows them to be tracked explicitly. It's also easier to debug what's happening that way.
From testing I think the Recover state is helpful. The SDK behaves differently depending on whether the rt-sync pull completes before we start to track swaps. The Recover state will also allow us to start tracking the swap if needed.
I also sometimes see an issue with the initial chain sync from a fresh state, when the rt-sync pull happens after the first chain sync: the SDK events are suppressed, and then in the next chain sync all the pulled rt-sync swap events are emitted.
Good points @hydra-yse. As Ross mentioned, let's add the recovery to the sync-in pipeline just before persisting the swaps. Results will be more predictable this way and we will avoid weird behaviors.
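As a rough picture of that ordering (the names `recover_onchain_state`, `persist_swaps`, and `handle_sync_pull` are illustrative, not the actual SDK functions):

```rust
// Stand-in for the pulled swap data.
#[allow(dead_code)]
#[derive(Debug, Clone)]
struct Swap {
    id: String,
    state: String,
}

// Hypothetical recovery pass: reconcile each pulled swap against chain data
// before anything is written to the local database.
async fn recover_onchain_state(swaps: Vec<Swap>) -> Result<Vec<Swap>, String> {
    // ...query the chain service per swap and adjust `state` accordingly...
    Ok(swaps)
}

// Hypothetical persistence step; in the SDK this is the rusqlite layer.
fn persist_swaps(_swaps: &[Swap]) -> Result<(), String> {
    Ok(())
}

// The ordering agreed on here: recover first, persist second, so the update
// loop only ever sees swaps in their reconciled state.
async fn handle_sync_pull(pulled: Vec<Swap>) -> Result<(), String> {
    let recovered = recover_onchain_state(pulled).await?;
    persist_swaps(&recovered)?;
    Ok(())
}
```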
Done in 79b3d8d
Closes #501.