
[bug]: Pending HTLC in force-closed channel although it has already been resolved by the network #8277

Closed
MButcho opened this issue Dec 14, 2023 · 16 comments
Labels
bug Unintended code behaviour

Comments

@MButcho

MButcho commented Dec 14, 2023

Background

This is a similar issue to #8269.

A channel on my node got force-closed with a bunch of HTLCs pending, so besides paying a huge amount of sats in fees (a huge issue on its own), after 5+ days the channel is still pending close, with a negative blocks_til_maturity on a pending HTLC that was in fact already resolved on-chain (10002 sats), see https://mempool.space/tx/43d5ab635d2281d876f07118a9a31454783f1a78a6bbaec8ddae69f6d120ad48#vout=2:

This is the output of lncli pendingchannels:

"pending_force_closing_channels": [
        {
            "channel": {
                "remote_node_pub": "02d96eadea3d780104449aca5c93461ce67c1564e2e1d73225fa67dd3b997a6018",
                "channel_point": "f3375cbf2092f48c6af91678e174d590e6c1b59bfa8e0c0bb654ac26f7c3c4f6:1",
                "capacity": "6000000",
                "local_balance": "122778",
                "remote_balance": "5687380",
                "local_chan_reserve_sat": "0",
                "remote_chan_reserve_sat": "0",
                "initiator": "INITIATOR_LOCAL",
                "commitment_type": "ANCHORS",
                "num_forwarding_packages": "0",
                "chan_status_flags": "",
                "private": false,
                "memo": ""
            },
            "closing_txid": "43d5ab635d2281d876f07118a9a31454783f1a78a6bbaec8ddae69f6d120ad48",
            "limbo_balance": "10332",
            "maturity_height": 0,
            "blocks_til_maturity": 0,
            "recovered_balance": "0",
            "pending_htlcs": [
                {
                    "incoming": false,
                    "amount": "10002",
                    "outpoint": "72d9c5219f6e2ae260a3f282ed431ab46a3978961f74f71495bd5f79da4a8dda:0",
                    "maturity_height": 821056,
                    "blocks_til_maturity": -118,
                    "stage": 2
                }
            ],
            "anchor": "LIMBO"
        }
    ],

When I restart the node, I see the following errors in the logs:

2023-12-14 15:36:52.448 [ERR] BTCN: Broadcast attempt failed: unable to replace transaction: -26: insufficient fee, rejecting replacement 8c09a0da9991989c333c2f72c20a9ace3d5d67a8c475e5e44a096c6f5abfe940; new feerate 0.00001011 BTC/kvB <= old feerate 0.00003343 BTC/kvB
2023-12-14 15:36:52.455 [ERR] SWPR: Publish sweep tx 8c09a0da9991989c333c2f72c20a9ace3d5d67a8c475e5e44a096c6f5abfe940 got error: transaction rejected: output already spent
2023-12-14 15:37:01.575 [ERR] HSWC: unable to find target channel for HTLC fail: channel ID = 810572:871:1, HTLC ID = 205935
2023-12-14 15:37:01.575 [ERR] HSWC: Unable to forward resolution msg: unable to find target channel for HTLC fail: channel ID = 810572:871:1, HTLC ID = 205935

Your environment

  • version of lnd: v0.17.2
  • which operating system (uname -a on *Nix): Linux 5.15.0-91-generic #101-Ubuntu SMP Tue Nov 14 13:30:08 UTC 2023 x86_64 x86_64 x86_64 GNU/Linux
  • version of btcd, bitcoind, or other backend: v25.0

Steps to reproduce

Can't reproduce; the force close came out of nowhere, and I didn't have any expiring HTLCs (I have this monitored).

Expected behaviour

No force close in the first place, and if one does happen, it should not stay in limbo after maturity.

Actual behaviour

As described above.

@MButcho MButcho added bug Unintended code behaviour needs triage labels Dec 14, 2023
@bitromortac
Collaborator

It seems like the spend notification of that output was missed, but then it tried to resolve it again upon restart (but the htlc was already canceled back). Did you have other force closes during that time?

@MButcho
Author

MButcho commented Dec 14, 2023

It seems like the spend notification of that output was missed, but then it tried to resolve it again upon restart (but the htlc was already canceled back). Did you have other force closes during that time?

Nope, only this one force close

@MButcho
Author

MButcho commented Dec 14, 2023

I already rescanned the wallet via --reset-wallet-transactions, no change. Also, my bitcoind mempool size is set to 2048, so that shouldn't be an issue either.
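For reference, a minimal sketch of that rescan (assuming lnd is run directly; with a service manager you would stop and start the service around it):

# Start lnd once with the rescan flag; it replays the wallet's
# on-chain history from the backend and then runs normally.
lnd --reset-wallet-transactions
# (The same option can presumably be set temporarily in lnd.conf
#  as reset-wallet-transactions=true before a restart.)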

@Roasbeef
Member

Looks like all the outputs from that transaction have been spent, along with that HTLC you reference above: https://mempool.space/tx/85363840a2ff19e3e7d806cd84d56e2434136b46284c703f4cf61f79f5bb25d4#vin=0

The broadcast fails as the output has already been spent. The HTLC expired which is why a force close was triggered, so there was indeed a force close (as evidenced by the node state in the first place).

Do you have start up logs that include the CNCT sub-system? We want to see if the notification came through or not.
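For reference, a minimal sketch of surfacing those CNCT messages (the exact levels are a judgment call; debuglevel accepts per-subsystem settings):

# Raise verbosity for the contract court and sweeper at runtime:
lncli debuglevel --level CNCT=debug,SWPR=debug
# Or persist it in lnd.conf so the next startup is captured:
#   debuglevel=CNCT=debug,SWPR=debug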

@MButcho
Author

MButcho commented Dec 15, 2023

@Roasbeef zip included

@MButcho
Author

MButcho commented Dec 15, 2023

Looks like all the outputs from that transaction have been spent, along with that HTLC you reference above: https://mempool.space/tx/85363840a2ff19e3e7d806cd84d56e2434136b46284c703f4cf61f79f5bb25d4#vin=0

The broadcast fails as the output has already been spent. The HTLC expired which is why a force close was triggered, so there was indeed a force close (as evidenced by the node state in the first place).

Do you have start up logs that include the CNCT sub-system? We want to see if the notification came through or not.

The problem with the FC is that I had no pending HTLCs with fewer than 14 blocks left; I have this monitored and it alerts me. So according to my node, no HTLC had expired before this FC happened.

@ziggie1984
Collaborator

The HTLC is still not resolved (the second-stage sweep is still waiting to be confirmed). The negative timelock is normal when sweeping the HTLC back into your wallet would not be positive yielding:

From your logs:

2023-12-15 10:17:05.048 [DBG] LNWL: Returning 81053 sat/kw for conf target of 6
2023-12-15 10:17:05.049 [DBG] SWPR: Rejected regular input=0.00010002 BTC due to negative yield=-0.00029389 BTC
2023-12-15 10:17:05.049 [DBG] SWPR: 1 negative yield inputs not added to input set: 85363840a2ff19e3e7d806cd84d56e2434136b46284c703f4cf61f79f5bb25d4:0 (HtlcOfferedTimeoutSecondLevel)

So you will need to wait until sweeping this HTLC is positive yielding, or you can bump the fee down with:

lncli wallet bumpfee --sat_per_vbyte 5 85363840a2ff19e3e7d806cd84d56e2434136b46284c703f4cf61f79f5bb25d4:0

But this behavior is expected.
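The numbers in those SWPR lines are consistent with a simple yield check: an input is skipped when its value minus the estimated cost of sweeping it is negative. A rough sketch of the arithmetic using the figures above (the ~486 weight units is back-solved from the reported yield, not read from the logs):

# input value 10002 sat, fee estimate 81053 sat/kw at conf target 6,
# assumed input weight ~486 wu => estimated sweep cost ~39391 sat
echo $(( 10002 - 81053 * 486 / 1000 ))   # prints -29389, matching the log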

@MButcho
Author

MButcho commented Dec 16, 2023

@ziggie1984 OK, I get it. On the other hand, I am glad it doesn't take sats from my wallet! A previous HTLC from this FC did, and took an additional 50k sats from my wallet :(. Very bad, because I wasn't able to stop it and couldn't do anything to prevent extra wallet funds from being used.

@ziggie1984
Collaborator

Yes, we are aware of this. In lnd 0.18 you will be able to limit the max fee rate for a sweep, and we are planning to make sweeps more configurable so that you have more control over handling those expensive sweeps. I think the issue is resolved; feel free to open a discussion if you still have questions around the sweeping.
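A hedged sketch of how that cap might be set once 0.18 is out (the option name sweeper.maxfeerate is an assumption based on this comment; check the 0.18 release notes for the final name and default):

# lnd.conf, applied on restart (assumed option name):
#   sweeper.maxfeerate=100   # cap sweep transactions at ~100 sat/vB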

@MButcho
Author

MButcho commented Dec 22, 2023

The HTLC is still not resolved (the second-stage sweep is still waiting to be confirmed). The negative timelock is normal when sweeping the HTLC back into your wallet would not be positive yielding:

From your logs:

2023-12-15 10:17:05.048 [DBG] LNWL: Returning 81053 sat/kw for conf target of 6
2023-12-15 10:17:05.049 [DBG] SWPR: Rejected regular input=0.00010002 BTC due to negative yield=-0.00029389 BTC
2023-12-15 10:17:05.049 [DBG] SWPR: 1 negative yield inputs not added to input set: 85363840a2ff19e3e7d806cd84d56e2434136b46284c703f4cf61f79f5bb25d4:0 (HtlcOfferedTimeoutSecondLevel)

So you will need to wait until sweeping this HTLC is positive yielding, or you can bump the fee down with:

lncli wallet bumpfee --sat_per_vbyte 5 85363840a2ff19e3e7d806cd84d56e2434136b46284c703f4cf61f79f5bb25d4:0

But this behavior is expected.

The channel is still not closed, but the weird thing is:

"closing_txid": "43d5ab635d2281d876f07118a9a31454783f1a78a6bbaec8ddae69f6d120ad48",
            "limbo_balance": "10002",
            "maturity_height": 0,
            "blocks_til_maturity": 0,
            "recovered_balance": "0",
            "pending_htlcs": [
                {
                    "incoming": false,
                    "amount": "10002",
                    "outpoint": "72d9c5219f6e2ae260a3f282ed431ab46a3978961f74f71495bd5f79da4a8dda:0",
                    "maturity_height": 821056,
                    "blocks_til_maturity": -1318,
                    "stage": 2
                }
            ],
            "anchor": "LOST"

Any idea about the "anchor": "LOST"?

@guggero
Collaborator

guggero commented Dec 22, 2023

Lost in this case means that someone else swept the anchor (330 sats), since after 16 confirmations it becomes an anyone-can-spend output.
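If you want to double-check what your own sweeper still thinks it has a claim on (the anchor and the second-level HTLC output), something like the following should show it:

# Inputs the sweeper is currently trying to sweep, including anchors:
lncli wallet pendingsweeps
# Cross-check against the channel's remaining limbo outputs:
lncli pendingchannels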

@MButcho
Author

MButcho commented Dec 22, 2023

Lost in this case means that someone else swept the anchor (330 sats), since after 16 confirmations it becomes an anyone-can-spend output.

Is there any way to close the channel itself? Let me try restarting ...

@MButcho
Author

MButcho commented Dec 22, 2023

Lost in this case means that someone else swept the anchor (330 sats), since after 16 confirmations it becomes an anyone-can-spend output.

Ok, restart didn't help, the anchor is back in limbo:

"pending_force_closing_channels": [
        {
            "channel": {
                "remote_node_pub": "02d96eadea3d780104449aca5c93461ce67c1564e2e1d73225fa67dd3b997a6018",
                "channel_point": "f3375cbf2092f48c6af91678e174d590e6c1b59bfa8e0c0bb654ac26f7c3c4f6:1",
                "capacity": "6000000",
                "local_balance": "122778",
                "remote_balance": "5687380",
                "local_chan_reserve_sat": "0",
                "remote_chan_reserve_sat": "0",
                "initiator": "INITIATOR_LOCAL",
                "commitment_type": "ANCHORS",
                "num_forwarding_packages": "0",
                "chan_status_flags": "",
                "private": false,
                "memo": ""
            },
            "closing_txid": "43d5ab635d2281d876f07118a9a31454783f1a78a6bbaec8ddae69f6d120ad48",
            "limbo_balance": "10332",
            "maturity_height": 0,
            "blocks_til_maturity": 0,
            "recovered_balance": "0",
            "pending_htlcs": [
                {
                    "incoming": false,
                    "amount": "10002",
                    "outpoint": "72d9c5219f6e2ae260a3f282ed431ab46a3978961f74f71495bd5f79da4a8dda:0",
                    "maturity_height": 821056,
                    "blocks_til_maturity": -1322,
                    "stage": 2
                }
            ],
            "anchor": "LIMBO"
        }
    ],
    "waiting_close_channels": []

@MButcho
Author

MButcho commented Dec 22, 2023

2023-12-22 11:14:50.561 [ERR] BTCN: Broadcast attempt failed: unable to replace transaction: -26: insufficient fee, rejecting replacement 3f57e584ae5aa33a7690f677af1c03cfeffdc5a6354b4be9137a9848692b9547; new feerate 0.00001011 BTC/kvB <= old feerate 0.00003343 BTC/kvB
2023-12-22 11:14:50.574 [ERR] SWPR: Publish sweep tx 3f57e584ae5aa33a7690f677af1c03cfeffdc5a6354b4be9137a9848692b9547 got error: transaction rejected: output already spent

@MButcho
Author

MButcho commented Dec 22, 2023

85363840a2ff19e3e7d806cd84d56e2434136b46284c703f4cf61f79f5bb25d4:0

OK, bumped the fee with lncli wallet bumpfee --sat_per_vbyte 50 85363840a2ff19e3e7d806cd84d56e2434136b46284c703f4cf61f79f5bb25d4:0 and the TX is in the mempool; let's see if it ever gets confirmed: https://mempool.space/tx/62f88143dcab7ccf8f346b4babcfa020faa5f709e1da0ac6b2a137a53a4602a3

@ziggie1984
Collaborator

Ok, restart didn't help, the anchor is back in limbo:

On a restart, lnd will try to sweep the anchor to its wallet; because the anchor spend is still not confirmed (even though a third party swept it and it is sitting in a mempool), your lnd node will keep trying to sweep it as well. This will go away as soon as all other outputs of the channel are resolved.

The channel will remain pending until all of its outputs (excluding the anchor) are confirmed, which is a good thing and how it should be.
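A small sketch for keeping an eye on that from the command line (watch and jq are assumed to be installed; the field name comes from the pendingchannels output above):

# Poll every 10 minutes until the force close drops out of the pending list:
watch -n 600 "lncli pendingchannels | jq '.pending_force_closing_channels'"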
