Add BumpTransaction event handler #2089
Conversation
At a high level this LGTM, I feel like the two big matches over the type of utxo source could be DRY'd up a bit to leave the two methods just calling one big "build the tx" method with specific inputs/outputs and then signing+broadcasting.
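To make the suggested shape concrete, here is a rough sketch (all names below are hypothetical, not the PR's actual methods, and it assumes the 0.29-era rust-bitcoin types the PR targets): each event arm would only compute its required inputs and outputs, then defer to a single shared helper that coin-selects, assembles, signs and broadcasts.

    use bitcoin::{PackedLockTime, Transaction, TxIn, TxOut};

    // Hypothetical shared helper for both the anchor and HTLC arms; signing and broadcasting
    // are left as comments since they depend on the handler's signer and broadcaster.
    fn build_sign_and_broadcast(must_spend: Vec<TxIn>, outputs: Vec<TxOut>) -> Transaction {
        // Coin selection against the configured UTXO source would extend input/output here.
        let unsigned_tx = Transaction {
            version: 2,
            lock_time: PackedLockTime::ZERO,
            input: must_spend,
            output: outputs,
        };
        // ...sign each input, then hand the result to the broadcaster...
        unsigned_tx
    }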
I think this can be moved out of draft, or at least rebased to see if we can extract more commits into their own sub-PR.
"it may not produce relayable transactions due to not satisfying certain Replace-By-Fee (RBF) mempool policy requirements." This is the messy part; at the very least we can think about weighing in on the ongoing nVersion=3 rules if that lets us avoid the state-management complexity hit on our side.
/// A descriptor used to sign for a commitment transaction's HTLC output.
#[derive(Clone, Debug, PartialEq, Eq)]
pub struct HTLCDescriptor {
In light of bitcoin/bips#1389 we can think about how to abstract all our LDK descriptors into real descriptors and, as such, enable signing of our lightning transactions on hardware devices. More of a long-term thing, just something to keep in mind for downstream projects like VLS.
};

match &self.utxo_source {
    UtxoSource::CoinSelection(source) => {
If we provide an interface plugging directly into Bitcoin Core's fundrawtransaction
and other wallet RPC calls, I think we should check that the coin selection abstractions in Core roughly match ours; it would be nice if we don't have to re-implement a lot of the confidentiality hardening that has been done there and can rely on their assumptions.
I think we should check that the coin selection abstractions in Core roughly match ours; it would be nice if we don't have to re-implement a lot of the confidentiality hardening that has been done there and can rely on their assumptions.
Can you describe this in more detail? When users provide a CoinSelectionSource
implementation, we're essentially giving them a transaction "template", letting them complete/finalize it by leveraging their own coin selection algorithm.
So I think I have two sets of concerns:
- First, breaking some underlying coin selection algorithm's privacy hardening if we don't consume the whole set of inputs in the CPFP, e.g. see the CoinSelectionParams::m_avoid_partial_spends option in src/wallet/coinselection.h.
- Second, accepting "naive" UTXOs from the CoinSelectionSource because we don't have sufficient checks beyond must_spend, like requesting some depth (e.g. at least 3 blocks) to protect against a shallow reorg sweeping out our chain of transactions. On the other side, we can consider it the responsibility of the coin selection algorithm to give us UTXOs of some quality, e.g. fundrawtransaction has a min_conf param.
Force-pushed from 1aa4ae1 to d5e25e5.
Codecov Report
Patch coverage:
Additional details and impacted files:
@@            Coverage Diff             @@
##             main    #2089      +/-   ##
==========================================
- Coverage   90.81%   90.44%   -0.38%
==========================================
  Files         104      104
  Lines       53009    53946     +937
  Branches    53009    53946     +937
==========================================
+ Hits        48139    48790     +651
- Misses       4870     5156     +286
☔ View full report in Codecov by Sentry.
/// A unique identifier used to track bumps for a transaction. The same identifier is always used
/// for all bumps of the same transaction.
#[derive(Copy, Clone, Hash, PartialEq, Eq)]
pub struct BumpId([u8; 32]);
From my understanding of the current state of the PR, the only usage of BumpId is in CoinSelectionSource::select_confirmed_utxos, and there is no documentation of how an implementation of CoinSelectionSource should process this BumpId.
In case of multiple second-stage HTLC claims which have been aggregated in a single batch transaction, the BumpId can vary, as the list of OutPoint hashes concatenated in the Sha256Hash can be broken up by any HTLC output being claimed individually by a counterparty (e.g. a received HTLC output's timelock expiring and allowing a counterparty timeout transaction to confirm). Under that hypothesis, the selected UTXO(s) for the CPFP should be freed up and reallocated to another package. The broken-up package should be re-assembled under a new BumpId and a new broadcast should be carried out.
So I'm thinking that BumpTransactionEventHandler should consume a new event yielded by ChannelMonitor::get_and_clear_pending_monitor_events(), and this as soon as there is 1 block of confirmation, without waiting for a confirmation_threshold, as any block of delay might jeopardize the success of our claim transaction on the contested HTLC outputs.
At the reception of this HTLCClaimEvent, a new CoinSelectionSource::reorganize_bump_id can be called with the suggested semantics above. Note, in case of scarce UTXOs, I think a CoinSelectionSource implementation can choose to reallocate the UTXO to another pending BumpId with the highest "value" priority, as the remaining contested HTLC outputs might not be worth it.
A vector of simplification for "v0.1" anchor output support on our side can be to completely disable aggregation of HTLC outputs until the BumpTransactionEventHandler is mature enough to manage "bump" reorgs. Still, I think there is a need to free up the UTXOs as a function of confirmations, as we might have fewer UTXOs than bumping claims, and this could be exploited by an adversary.
IIUC, what do you think?
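To make the suggestion concrete, a hypothetical shape for this extension could look like the sketch below; neither HTLCClaimEvent nor reorganize_bump_id exist in the PR, both names come from this comment.

    // Hypothetical sketch only; not part of the PR's API.
    pub struct BumpId(pub [u8; 32]);

    pub trait CoinSelectionSourceExt {
        // Called when a previously registered claim (stale_id) has been partially invalidated,
        // e.g. a counterparty confirmed one of its HTLC outputs. UTXOs locked for stale_id may
        // be freed and re-allocated to replacement_id, or to whichever pending claim the
        // implementation judges most valuable.
        fn reorganize_bump_id(&self, stale_id: BumpId, replacement_id: BumpId);
    }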
From my understanding of the current state of the PR, the only usage of BumpId is in CoinSelectionSource::select_confirmed_utxos, and there is no documentation of how an implementation of CoinSelectionSource should process this BumpId.
It's still a draft ;) I'll improve the docs in this area on my next push.
So I'm thinking that BumpTransactionEventHandler should consume a new event yielded by ChannelMonitor::get_and_clear_pending_monitor_events(), and this as soon as there is 1 block of confirmation, without waiting for a confirmation_threshold, as any block of delay might jeopardize the success of our claim transaction on the contested HTLC outputs.
This is already the case with and without anchors. We immediately broadcast/yield a new claim transaction/event as soon as we detect a conflict. We don't have a test covering this for anchors though, so I'll be sure to add one as part of this PR.
At the reception of this HTLCClaimEvent, a new CoinSelectionSource::reorganize_bump_id can be called with the suggested semantics above.
I'm not sure what you mean by this, how is CoinSelectionSource::reorganize_bump_id
meant to work? The CoinSelectionSource
would receive a new BumpId
for the new claim without the conflicting input. If it can't allocate new UTXOs, then it should reuse those it has allocated previously. Ideally, we could give it the BumpId
of the previous claim that became conflicted, but that would require persisting some state, otherwise you would lose it after a restart.
A vector of simplification for "v0.1" anchor output support on our side can be to completely disable aggregation of HTLC outputs until the BumpTransactionEventHandler is mature enough to manage "bump" reorgs. Still, I think there is a need to free up the UTXOs as a function of confirmations, as we might have fewer UTXOs than bumping claims, and this could be exploited by an adversary.
Even without aggregation, like you said, we may need to reuse UTXOs if we don't have any left and we still have pending claims. This is something we'll really need to document well, otherwise users can lose funds. Perhaps that's enough of an argument to justify removing the CoinSelectionSource completely, and only relying on our own coin selection implementation leveraging the WalletSource implementation provided by the user.
We immediately broadcast/yield a new claim transaction/event as soon as we detect a conflict.
Okay, though I still have a doubt whether this new claim transaction properly informs the CoinSelectionSource about which UTXO has been spent, and more to the point which pending BumpId can be safely cancelled? Note I think we can just extend the currently yielded event if so.
If it can't allocate new UTXOs, then it should reuse those it has allocated previously. Ideally, we could give it the
BumpId of the previous claim that became conflicted, but that would require persisting some state, otherwise you
would lose it after a restart.
Reusing those allocated previously is unsafe, as you might give a transaction-relay adversary the opportunity to confirm a package with a lower value than another pending package with a higher value. And if there is a conflict, you might have pending broadcast packages whose UTXOs can be removed and re-allocated, therefore increasing your odds of success for all your channel transactions requiring time-sensitive confirmations. Giving the BumpId of the previous claim that became conflicted, with responsibility on the CoinSelectionSource implementation, sounds like a workable solution, though it means we would have to store the BumpId-to-UTXOs mapping on our side?
Perhaps that's enough of an argument to justify removing the CoinSelectionSource completely, and only relying on our own coin selection implementation leveraging the WalletSource implementation provided by the user.
Yeah, that's a good open question. No matter how well we document the CoinSelectionSource, this is still the type of interface where a wrong implementation with some logic issue could break channel safety (e.g. over-allocating UTXOs for a fee-bump). There is also the fact that the LDK API is hard enough to implement well, and we have the whole ldk-node to ease this.
For the UTXO management flow I'm suggesting, I think code can speak better, so if you want I can propose modifications on top of this branch once your next push is done?
Yeah, that's a good open question. No matter how well we document the CoinSelectionSource, this is still the type of interface where a wrong implementation with some logic issue could break channel safety (e.g. over-allocating UTXOs for a fee-bump). There is also the fact that the LDK API is hard enough to implement well, and we have the whole ldk-node to ease this.
Yeah, I agree. Curious to hear @TheBlueMatt's thoughts on this. I imagine most users will just opt for the simpler interface via WalletSource.
For the UTXO management flow I'm suggesting, I think code can speak better, so if you want I can propose modifications on top of this branch once your next push is done?
I think I have a better understanding of your concerns now. I'll try addressing it on my next push by tracking the previous claim on the OnchainTxHandler side and pushing it through the HTLCResolution event, so that the event handler can remain stateless. Though I think this will require the unification of the PackageId type in onchaintx.rs and the BumpId proposed here.
Though I think this will require the unification of the PackageId type in onchaintx.rs and BumpId proposed here.
Yeah, unification of the PackageId and BumpId sounds like what we would need. Note there is the old #989 issue where Matt and I poured thoughts into what anchor output fee-bumping reserves/concurrent claims could look like, years ago.
In general I kinda worry about "hey, that UTXO you reserved, you can unlock it now" kinds of APIs - it's easy to screw up and miss a case where we need to unlock, or have an extra unlock that causes incorrect behavior, right when we need things to be Absolutely Correct.
The worry about the BumpId makes sense, though. I wonder if we can't just drop it and use the set of outpoints being spent instead - for CoinSelectionSource there's not much we can do, Core/etc will give us what it gives; for our own selection logic, we could scan the set of previously-used UTXOs and disqualify ones that have been used with an overlapping set of inputs, and only reuse them if we can't get enough coin with them removed. We can trivially time out this map by removing entries that are, say, a day's worth of blocks old.
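A minimal sketch of the bookkeeping described in the previous paragraph, assuming we only track it for our own selection logic (the type names and the one-day constant are illustrative; the disqualify/reuse policy itself stays with the caller, per the discussion above):

    use std::collections::HashMap;
    use bitcoin::OutPoint;

    // Roughly a day's worth of blocks; purely illustrative.
    const STALE_AFTER_BLOCKS: u32 = 144;

    // Per-UTXO record of the claim inputs it was last handed out with and when, so selection
    // logic can decide whether reusing it could conflict with a still-pending claim.
    #[derive(Default)]
    struct PreviouslyUsedUtxos {
        used: HashMap<OutPoint, (Vec<OutPoint>, u32)>,
    }

    impl PreviouslyUsedUtxos {
        fn record(&mut self, utxo: OutPoint, claim_inputs: Vec<OutPoint>, height: u32) {
            self.used.insert(utxo, (claim_inputs, height));
        }
        // Whether `utxo` was previously handed out alongside inputs overlapping `claim_inputs`.
        // Returns None if we have no record of the UTXO being used at all.
        fn previous_use_overlaps(&self, utxo: &OutPoint, claim_inputs: &[OutPoint]) -> Option<bool> {
            self.used.get(utxo).map(|(prev, _)| prev.iter().any(|o| claim_inputs.contains(o)))
        }
        // Trivially time out entries older than ~a day of blocks.
        fn prune(&mut self, current_height: u32) {
            self.used.retain(|_, (_, height)| current_height.saturating_sub(*height) <= STALE_AFTER_BLOCKS);
        }
    }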
I imagine most users will just opt for the simpler interface via WalletSource.
I honestly have no idea - I'd imagine most wallets have some kind of coin selector already, and it's about as easy to wire that up as it is to just dump the entire set of UTXOs at us. Kinda depends on how we communicate it, probably.
Force-pushed from d5e25e5 to ee0ff68.
@ariard this should be addressed now, and I also added some documentation around how it should be used within.
Marking this ready for review now that there's documentation. There was a commit integrating the event handler into tests, but I want to re-work it a bit, so I've left it out for now.
Yes, will review again over the coming days.
Yes, ClaimId sounds like a good abstraction to link CoinSelectionSource and OnchainTxHandler.
/// The unique identifier for the claim of the HTLCs in the confirmed commitment
/// transaction.
///
/// Any future instances of `HTLCResolution` bump events that share the same `claim_id` must
So IIRC we don't implement splitting our packages based on timelocks, only at detection time when one is near expiration (CLTV_SHARED_CLAIM_BUFFER). We still aggregate preimage claims, so if an offered HTLC is individually claimed by a counterparty timeout transaction, there is no more risk of UTXO double-spending, and we can free the "UTXO lock" and re-assign the UTXOs to the new broadcast of the set of HTLCs to claim? The ClaimId would be based on the exact same set of HTLCs, no, or is that not an API guarantee?
I'm not sure I'm following your comment, so I'll re-iterate to make sure we're on the same page:
Assume we have two HTLCs with different expirations which we have the preimages for. We go to chain for whatever reason and we produce a single transaction claiming both HTLCs. The counterparty then claims one HTLC via the timeout path and it confirms, invalidating our claim.
At this point, we'll receive a new event for the remaining HTLC and its claim_id will be guaranteed to be the same as the previous event from which we produced the single claim for both HTLCs. While the claim_id is computed from the set of HTLCs at the time the request is registered (in this case, both HTLCs), the claim_id remains static even if outputs to claim are added or removed.
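A minimal sketch of how such a static identifier can be derived, assuming (illustratively; this is not necessarily LDK's exact serialization) that it commits to the HTLC outpoints present when the claim request is first registered and is then never recomputed:

    use bitcoin::hashes::{sha256, Hash, HashEngine};
    use bitcoin::OutPoint;

    // Illustrative only: commit to the outpoints of the HTLCs in the claim at registration
    // time. Later splits of the claim keep reusing the identifier computed here.
    fn claim_id_for_registered_outpoints(mut outpoints: Vec<OutPoint>) -> [u8; 32] {
        outpoints.sort(); // make the identifier independent of iteration order
        let mut engine = sha256::Hash::engine();
        for outpoint in outpoints {
            engine.input(&outpoint.txid[..]);
            engine.input(&outpoint.vout.to_be_bytes());
        }
        sha256::Hash::from_engine(engine).into_inner()
    }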
Thanks for the clarification.
This is where I wonder whether to have a claim_id that is dynamically computed (i.e. not from the set of HTLCs at the time the request is registered) but rather along the claiming lifetime of the HTLC outputs (confirmations by the counterparty or reorgs).
If we have low fee-bumping reserves, the remaining HTLC preimage on our side (in your example) can be less valuable to claim than another pending package X (claim_id_Y), without assigned UTXOs. The locked UTXO in favor of this “tampered” claim_id_X could be re-assigned to claim_id_Y.
The current CoinSelectionSource does not present such an unselect_confirmed_utxos function, where BumpTransactionEventHandler could free a set of UTXOs to re-allocate them to claim_id_Y. Of course, BumpTransactionEventHandler would need to be fed with future events (e.g. ClaimByCounterparty or OurClaimReorgedOut), which are not available as an Event for now.
In my mind, the open question for this current PR is whether we need to make ClaimId dynamic to avoid breaking some API guarantees toward implementations of CoinSelectionSource?
If we have low fee-bumping reserves, the remaining HTLC preimage on our side (in your example) can be less valuable to claim than another pending package X (claim_id_Y), without assigned UTXOs. The locked UTXO in favor of this “tampered” claim_id_X could be re-assigned to claim_id_Y.
Having them be dynamic (as originally done when the PR was opened) would undo the work that resulted from the discussion in #2089 (comment).
If we have low fee-bumping reserves, the remaining HTLC preimage on our side (in your example) can be less valuable to claim than another pending package X (claim_id_Y), without assigned UTXOs. The locked UTXO in favor of this “tampered” claim_id_X could be re-assigned to claim_id_Y.
As we discussed in other threads, we don't want to be too opinionated here so we defer to the user instead. The pending package X will have its own unique claim_id
and will be handled once its bump event is dispatched. If the CoinSelectionSource
implementation doesn't have any free UTXOs, they can double spend package(s) of their choosing.
Having them be dynamic (as originally done when the PR was opened) would undo the work that resulted from the discussion in #2089 (comment).
Yeah, there might be no need to dynamically compute ClaimId, as long as we can give other information to the CoinSelectionSource, such as the value associated with a set of HTLCs, and let it decide based on that whether to double-spend UTXOs.
As we discussed in other threads, we don't want to be too opinionated here so we defer to the user instead. The pending package X will have its own unique claim_id and will be handled once its bump event is dispatched. If the CoinSelectionSource implementation doesn't have any free UTXOs, they can double spend package(s) of their choosing.
Unlocking the UTXOs of a static ClaimId that has been partially double-spent or reorged out would still be great from a fee-bumping reserve management viewpoint. We can see how users do with the current API and improve things over time.
/// be considered the same claim, even if the set of HTLCs to claim has changed. Therefore,
/// the identifier serves as a "UTXO lock" for users, as they can assign the additional inputs
/// required for the claim to this identifier to ensure their claims don't double spend and
/// conflict with each other. However, note that in some cases, it may be required to double
In case of exhaustion of the fee-bumping reserve, there is another alternative: you can chain your in-flight claims, where the CPFP fans out N unconfirmed outputs for your N claims without assigned UTXOs. This is unsafe in case one component is confirmed by your counterparty, though if the HTLC timelock delays are in your favor (e.g. more than 10 blocks ahead) it is definitely a plausible strategy? I don't know where we can document this.
Force-pushed from ee0ff68 to ae67021.
Current version looks good to me. GitHub is showing me that there's still pending feedback to be addressed, so I'm not sure how much use an approval is right now, but here's one anyway.
I apologize, I thought I'd done a more thorough pass on this in the past, but a handful of things came up :/
Force-pushed from 67299c8 to 800d884.
Force-pushed from 800d884 to f794b6c.
anchor_tx.input[0].witness =
    chan_utils::build_anchor_input_witness(&signer.pubkeys().funding_pubkey, &anchor_sig);

self.broadcaster.broadcast_transactions(&[&commitment_tx, &anchor_tx]);
I think one of the design goals for this version of the BumpTransactionEventHandler is that it does not guarantee policy-valid transactions, in the sense that the issued transactions or packages might not be valid under Bitcoin Core 25.0 replace-by-fee requirements (the release number is given because this code area is being reworked as of today).
There is another mempool policy requirement against which the proposed transaction generation might be deficient, i.e. the dynamic mempool min fee (see CTxMemPool::GetMinFee). Our generated package fees might be under this fee requirement, and therefore be refused by the mempool. Indeed, the dynamic mempool min fee is updated at each transaction eviction (in LimitMempoolSize) and the fee estimation is updated at each block connection (in processBlock).
Therefore, in case of sudden mempool spikes, our package broadcast might fail immediately, without any propagation on the transaction-relay network. Instead, broadcast_transaction should return an informative failure carrying the current mempool min fee as seen by our transaction-relay interface, and a new updated package should be re-scheduled immediately, without losing a rebroadcast frequency interval (our *_FREQUENCY_BUMP_INTERVAL).
If my concern is correct, we should be aware of this limitation and update our BroadcasterInterface::broadcast_transaction in the future.
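If BroadcasterInterface were extended along these lines, one hypothetical shape (none of this exists in LDK; the names are made up for illustration) could be:

    use bitcoin::Transaction;

    // Hypothetical error type for an extended broadcaster; not part of LDK's API.
    pub enum BroadcastError {
        // The package fee was below the node's current dynamic mempool minimum feerate,
        // reported here in sat per 1000 weight units so the caller can immediately
        // rebuild and rebroadcast without waiting for the next bump interval.
        BelowMempoolMinFee { mempool_min_feerate_sat_per_1000_weight: u32 },
        // Any other rejection (RBF rules, standardness, connectivity, ...).
        Other(String),
    }

    // Hypothetical fallible variant of the broadcast call.
    pub trait FallibleBroadcaster {
        fn try_broadcast_transactions(&self, txs: &[&Transaction]) -> Result<(), BroadcastError>;
    }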
This seems unrelated to this PR.
This seems unrelated to this PR.
Somehow yes, this is unrelated to this PR, while still affecting new transactions broadcast by this PR. It can be addressed separately.
let signer = self.signer_provider.derive_channel_signer(
    anchor_descriptor.channel_value_satoshis, anchor_descriptor.channel_keys_id,
);
let anchor_sig = signer.sign_holder_anchor_input(&anchor_tx, 0, &self.secp)?;
The current Result of sign_holder_anchor_input does not return an Err(e) that we can react on. If the underlying implementation of EcdsaChannelSigner is composed of multiple independent components, e.g. the Validating Lightning Signer where there is an external UTXO oracle, then in case of failure of one of the components it might (and effectively should) stop producing signatures.
If this is the case, one of the logical reactions on our side would be to return the failure to our ChannelManager and stop accepting new channels, or new HTLC forwarding requests, until the signer has resolved its internal issue.
Such a failure mode should not be communicated to our counterparty, as it constitutes an oracle on the reacting capabilities of our Lightning node. IIRC, this last concern happened in production to LND nodes around October or November 2022, leaking to counterparties that they were not able to process blocks further.
I don't see how we can "return the failure to our ChannelManager" from the anchor event handler. We're in a totally different part of the code, we can't communicate with the manager. Can you open an issue for this instead, or comment on the future work in this area - there's currently no great support for signer errors, but some issues are open to enable that in the future.
I don't see how we can "return the failure to our ChannelManager" from the anchor event handler. We're in a totally different part of the code, we can't communicate with the manager.
Could we not call into the ChannelManager within the EventHandler after handling the failed BumpTransactionEvent?
Done with #2357. I know it's not part of the code that currently communicates with the manager, though I would love for us to be more reactive in not making your lightning node's situation worse when you have detected a failure from a component.
@@ -774,19 +776,24 @@ impl<ChannelSigner: WriteableEcdsaChannelSigner> OnchainTxHandler<ChannelSigner>
OnchainClaim::Event(claim_event) => {
    log_info!(logger, "Yielding onchain event to spend inputs {:?}", req.outpoints());
    let package_id = match claim_event {
        ClaimEvent::BumpCommitment { ref commitment_tx, .. } => commitment_tx.txid().into_inner(),
        ClaimEvent::BumpCommitment { ref commitment_tx, .. } =>
            // For commitment claims, we can just use their txid as it should
There is still a risk of collision if you have two ChannelMonitors deployed, let's say the main one and a watchtower, and they both rely on the same CoinSelectionSource instance. The second requester, which might be determined by the hosts' block processing, might lock a second set of fee-bumping UTXOs for the same transaction.
The CoinSelectionSource could be enriched with its own CoinAPIError { DuplicateClaimId } to indicate an error originating on the caller's side, which could then be log_error'd or log_warn'd.
If you have two monitors I don't see why you'd have the same coin selector state. I also don't see how you could reasonably expect the coin selector to somehow track if there's a duplicate claim when we sometimes do generate claims for different HTLCs with the same ID?
If you have two monitors I don't see why you'd have the same coin selector state.
Coins being scarce is a good reason to share the same coin selector state.
if there's a duplicate claim when we sometimes do generate claims for different HTLCs with the same ID?
I don't think that's something allowed by the current ClaimId construction based on spent outpoints, which are themselves unique per BIP 30. See the comment "For HTLC claims, commit to the entire set of HTLC outputs to claim, which will always be unique per request". Anyway, we can refine the CoinSelectionSource API in future work.
/// should always have a value above dust for its given `script_pubkey`. It should not be
/// spent until the transaction it belongs to confirms to ensure mempool descendant limits are
/// not met. This implies no other party should be able to spend it except us.
change_output: Option<TxOut>,
There is one more requirement here: the scriptpubkey should be standard per IsStandard (in src/policy/policy.cpp).
I certainly hope we don't have to specify this :)
There's probably more risk from writing out so many requirements and so much documentation that people stop reading them, than from someone forgetting to use standard scripts.
Somehow I agree, and it puts me back in the mood of wishing for a libstandardness in Bitcoin Core to sanitize all our transactions against policy rules in a black-box fashion.
// cheapest way to include a dummy output.
tx.output.push(TxOut {
    value: 0,
    script_pubkey: Script::new_op_return(&[]),
So we're exposed to our anchor CPFP not propagating if datacarrier is turned off by the node operator itself, or if that becomes a widely deployed policy on the transaction-relay network in the future on the grounds that it's only used to carry zero-value transactions?
What are other implementations doing in this case? If there is a common behavior, which might be rational for fee-saving purposes, at least it would be better for this to be documented.
Having at least one output is a consensus rule, not a standardness rule; see CheckTransaction().
Man, Bitcoin Core needs to rip out those options, it's absurd that it's configurable. But, yea, this is the most fee-maximal transaction construction - allowing us to get even more juice out of our UTXOs if we're using exact change.
So we're exposed to our anchor CPFP not propagating if datacarrier is turned off by the node operator itself, or if that becomes a widely deployed policy on the transaction-relay network in the future on the grounds that it's only used to carry zero-value transactions?
They didn't give us a change output back, so the transaction can't be relayed without any outputs anyway.
What are other implementations doing in this case? If there is a common behavior, which might be rational for fee-saving purposes, at least it would be better for this to be documented.
They don't have this problem since they're tightly integrated with their on-chain wallet. They just add a regular change output.
It would still be nice not to forget to say somewhere in the anchor documentation: "YOU SHOULD NOT TURN OFF THE datacarrier OPTION ON YOUR FULL NODE".
They don't have this problem since they're tightly integrated with their on-chain wallet. They just add a regular change output.
This approach is more fee-saving, as OP_RETURN outputs are provably unspendable and therefore don't have to satisfy the dust output checks, which are even more of a mess.
Note, as the CPFP should always have at least 2 inputs, I think we're always good with MIN_STANDARD_TX_NONWITNESS_SIZE.
Will do more review tomorrow.
// TODO: Use fee estimation utils when we upgrade to bitcoin v0.30.0.
let base_tx_weight = 4 /* version */ + 1 /* input count */ + 1 /* output count */ + 4 /* locktime */;
let total_input_weight = must_spend.len() *
    (32 /* txid */ + 4 /* vout */ + 4 /* sequence */ + 1 /* script sig */);
I'm really uncomfortable with all this weight estimation code without any careful testing of it, when doing weight estimation I really like to at least have a final debug_assert
at the end checking that the real weight isn't any higher than the estimated one, and ideally a lower-bound too that's like an extra WU or two per input.
Yeah I definitely plan to follow up with tests here. This is why I was hoping to bump to rust-bitcoin v0.30.0 in the same release, since they've added some weight prediction utils we could leverage here.
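The kind of check being asked for might look like the sketch below, assuming the pre-signing weight estimate is still available at that point; the two-weight-unit slack per input follows the reviewer's suggestion rather than a measured constant.

    use bitcoin::Transaction;

    // Debug-only sanity check: the fully signed transaction must not be heavier than the
    // weight we estimated for fee purposes, and should not be much lighter either.
    fn debug_check_estimated_weight(signed_tx: &Transaction, estimated_weight: u64) {
        let actual_weight = signed_tx.weight() as u64;
        debug_assert!(actual_weight <= estimated_weight,
            "underestimated weight: actual {} vs estimated {}", actual_weight, estimated_weight);
        // Allow a couple of weight units of slack per input for signature-length variance.
        let slack = 2 * signed_tx.input.len() as u64;
        debug_assert!(actual_weight + slack >= estimated_weight,
            "overestimated weight: actual {} vs estimated {}", actual_weight, estimated_weight);
    }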
Force-pushed from f794b6c to 528ae2b.
Need to check the constants but basically LGTM. Needs another pass from @arik-so.
Force-pushed from 528ae2b to d1ffad5.
Looks like the current copy doesn't compile.
Doing another pass right now.
Force-pushed from d1ffad5 to 30d6ca2.
See the comments on weight unit computation; otherwise 30d6ca2 sounds correct to me.
Edited: as a follow-up, I think the bump event should be rebroadcast with the same frequency as the other packages handled by rebroadcast_pending_claims, as they're under the scope of process_pending_events, which is only recommended to be invoked at every chain tip?
Force-pushed from a9c0b45 to ec13ed2.
LGTM! Please don't bother addressing any of these until a followup.
    (fee_sat * 1000 / weight).try_into().unwrap_or(u32::max_value())
}
const fn fee_for_weight(feerate_sat_per_1000_weight: u32, weight: u64) -> u64 {
    feerate_sat_per_1000_weight as u64 * weight / 1000
Don't think it matters either way but do we want to round up here?
same crash scenario were this to be put through a fuzzer
Don't think it matters either way but do we want to round up here?
Sure.
same crash scenario were this to be put through a fuzzer
Any value that would result in such a crash would be non-sensical onchain, no?
That's not the issue. The issue is making sure that such a crash would not imperil the safe advancement of the state machine.
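For reference, a sketch of how both helpers could saturate and round up instead of panicking; this is the suggestion under discussion, not the code the PR settled on, and it assumes `weight` is non-zero.

    use std::convert::TryFrom;

    // Sketch: feerate in sat per 1000 weight units, using ceiling division and saturating
    // arithmetic so absurd inputs clamp to the extremes instead of overflowing.
    fn compute_feerate_sat_per_1000_weight_ceil(fee_sat: u64, weight: u64) -> u32 {
        debug_assert!(weight > 0);
        let scaled = fee_sat.saturating_mul(1000);
        let rate = scaled / weight + if scaled % weight == 0 { 0 } else { 1 };
        u32::try_from(rate).unwrap_or(u32::MAX)
    }

    // Sketch: fee for a given weight, rounding up so we never pay just under the target.
    const fn fee_for_weight_ceil(feerate_sat_per_1000_weight: u32, weight: u64) -> u64 {
        let scaled = (feerate_sat_per_1000_weight as u64).saturating_mul(weight);
        scaled / 1000 + if scaled % 1000 == 0 { 0 } else { 1 }
    }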
let commitment_tx_sat_per_1000_weight: u32 = compute_feerate_sat_per_1000_weight(
    commitment_tx_fee_sat, commitment_tx.weight() as u64,
);
let feerate_diff = package_target_feerate_sat_per_1000_weight
Do you want to leave a TODO here to make the fee computation exact? AFAICT we always overpay by a small margin because we calculate the feerate diff without considering the difference in weights between the commitment and anchor txn. We never underpay because we calculate the feerate on the anchor tx including the commitment tx weight, but this means we overpay by the difference between commitment_tx_fee / commitment_tx_weight and commitment_tx_fee / (commitment_tx_weight + anchor_tx_weight) (because you can consider the total package feerate as the fee of both txn divided by the weight of both txn).
Oh yeah, I added this logic much later than the initial code but it doesn't make sense, so I just ended up reverting it.
let htlc_input = htlc_descriptor.unsigned_tx_input();
must_spend.push(Input {
    outpoint: htlc_input.previous_output.clone(),
    satisfaction_weight: EMPTY_SCRIPT_SIG_WEIGHT + if htlc_descriptor.preimage.is_some() {
Please add some debug assertions post-signing that the weight is as expected.
// TODO: Define typed abstraction over feerates to handle their conversions.
fn compute_feerate_sat_per_1000_weight(fee_sat: u64, weight: u64) -> u32 {
    (fee_sat * 1000 / weight).try_into().unwrap_or(u32::max_value())
this might crash if fee_sat is u64::max. Maybe use one of the saturating methods?
Such a fee is not possible in the protocol so no need to worry?
is there any way by which a user might be able to sneak such a fee in with a broken fee estimator? The real concern here is just making sure it doesn't unnecessarily panic, or if it does, that it does so prematurely with the excessive fee detection, which of course should get detected before it ever even gets here.
let htlc_sig = signer.sign_holder_htlc_transaction(
    &htlc_tx, idx, htlc_descriptor, &self.secp
)?;
let per_commitment_point = signer.get_per_commitment_point(
I think perhaps a comment might help explain why we still need to separately call the sign method for the HTLC and call its descriptor to obtain the witness.
why we still need to separately call the sign method for the HTLC and call its descriptor to obtain the witness
After calling sign_tx? Or why sign_holder_htlc_transaction and tx_input_witness are separate?
Note we'll need another approving review here beyond Matt's to land the PR. I can review again, though iirc I'm not in the GH set of qualified reviewers anymore. As for all the comments and feedback on the witness accounting, I think they can be addressed in follow-ups; the code is anchor-output only and not activated yet.
I'm ready to approve, just wanted a response regarding the saturating multiplication operation first. Everything else is for follow-ups.
While the previous way of computing the identifier was safe, it wouldn't have been in certain scenarios if we considered splitting aggregated packages. While this type of splitting has yet to be implemented, it may come in the near future. To ensure we're prepared to handle such splitting, we opt to instead commit to all of the HTLCs to claim in the request.
In a future commit, we plan to expand `BumpTransactionEvent` variants to include the unique identifier assigned to pending output claims by the `OnchainTxHandler` when a commitment is broadcast/confirmed. This requires making it public in our API. We also choose to rename it to `ClaimId` for the benefit of users, as the previous `PackageID` term could be interpreted to be the ID of a BIP-331 transaction package.
This allows users to bump their commitments and HTLC transactions without having to worry about all the little details to do so. Instead, we'll just require that they implement the `CoinSelectionSource` trait over their wallet/UTXO source, granting the event handler permission to spend confirmed UTXOs for the transactions it'll produce. While the event handler should in most cases produce valid transactions, assuming the provided confirmed UTXOs are valid, it may not produce relayable transactions due to not satisfying certain Replace-By-Fee (RBF) mempool policy requirements. Some of these require that the replacement transactions have a higher feerate and absolute fee than the conflicting transactions they aim to replace. To make sure we adhere to these requirements, we'd have to persist some state for all transactions the event handler has produced, greatly increasing its complexity. While we may consider implementing this in the future, we choose to go with a simple initial version that relies on the OnchainTxHandler's bumping frequency. For each new bumping attempt, the OnchainTxHandler proposes a 25% feerate increase to ensure transactions can propagate under constrained mempool circumstances.
Certain users may not care how their UTXOs are selected, or their wallet may not expose enough controls to fully implement the `CoinSelectionSource` trait. As an alternative, we introduce another trait `WalletSource` they could opt to implement instead, which is much simpler as it just returns the set of confirmed UTXOs that may be used. This trait implementation is then consumed into a wrapper `Wallet` which implements the `CoinSelectionSource` trait using a "smallest above-dust-after-spend first" coin selection algorithm.
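A rough sketch of what a "smallest above-dust-after-spend first" pass over the WalletSource UTXOs could look like; the Utxo fields and the dust/fee accounting here are simplified assumptions, not the handler's actual code.

    // Simplified UTXO view; the real type carries more data (e.g. satisfaction weight).
    struct Utxo { value_sat: u64, input_fee_sat: u64 }

    // Pick the smallest confirmed UTXOs that still cover the target, skipping coins whose
    // value minus their own spending fee is at or below dust, so we never add uneconomical
    // contributions. Returns None if the wallet cannot cover target_amount_sat.
    fn select_smallest_above_dust_first(
        mut utxos: Vec<Utxo>, target_amount_sat: u64, dust_limit_sat: u64,
    ) -> Option<Vec<Utxo>> {
        // Smallest first, so we consume low-value coins before breaking large ones.
        utxos.sort_by_key(|utxo| utxo.value_sat);
        let mut selected = Vec::new();
        let mut selected_value_sat = 0u64;
        for utxo in utxos {
            if selected_value_sat >= target_amount_sat { break; }
            // Skip coins that contribute no more than dust once their own input fee is paid.
            if utxo.value_sat.saturating_sub(utxo.input_fee_sat) <= dust_limit_sat { continue; }
            selected_value_sat += utxo.value_sat.saturating_sub(utxo.input_fee_sat);
            selected.push(utxo);
        }
        if selected_value_sat >= target_amount_sat { Some(selected) } else { None }
    }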
Force-pushed from ec13ed2 to bc39da6.
let commitment_tx_sat_per_1000_weight: u32 = compute_feerate_sat_per_1000_weight(
    commitment_tx_fee_sat, commitment_tx.weight() as u64,
);
if commitment_tx_sat_per_1000_weight >= package_target_feerate_sat_per_1000_weight {
Now you're just overpaying by commitment_tx_fee though? Which is more than it was? I mean it's fine either way, just interesting to note.
This allows users to bump their commitments and HTLC transactions without having to worry about all the little details to do so. Instead, we'll just require them to implement a small shim over their wallet/UTXO source, that grants the event handler permission to spend confirmed UTXOs for the transactions it'll produce.
While the event handler should in most cases produce valid transactions, assuming the provided confirmed UTXOs are valid, it may not produce relayable transactions due to not satisfying certain Replace-By-Fee (RBF) mempool policy requirements. Some of these require that the replacement transactions have a higher feerate and absolute fee than the conflicting transactions they aim to replace. To make sure we adhere to these requirements, we'd have to persist some state for all transactions the event handler has produced, greatly increasing its complexity. While we may consider implementing this in the future, we choose to go with a simple initial version that relies on the OnchainTxHandler's bumping frequency. For each new bumping attempt, the OnchainTxHandler proposes a 25% feerate increase to ensure transactions can propagate under constrained mempool circumstances.
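For reference, the 25% increase described above amounts to something as simple as the following sketch (the actual constant and rounding live in the OnchainTxHandler); for example, a previous feerate of 1000 sat per 1000 weight units would be bumped to at least 1250.

    // Sketch: propose the feerate for the next bump attempt as at least a 25% increase over
    // the previously used feerate, saturating rather than overflowing.
    fn next_bump_feerate_sat_per_1000_weight(previous: u32) -> u32 {
        previous.saturating_add(previous / 4)
    }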