So we have seen and fixed several of the same panics in v1.14 recently. Two relevant log lines from what you provided:
[2022-06-01T20:24:18.612941132Z INFO solana_core::replay_stage] new fork:135986384 parent:135986379 root:135986379
[2022-06-01T20:24:18.613045169Z ERROR solana_runtime::accounts_db] set_hash: already exists; multiple forks with shared slot 135986384 as child (parent: 135986379)!?
The second log line indicates that a bank was created twice for the same slot; we know this will cause a panic.
The first log line shows that 135986384 was created with parent 135986379.
However, 384 was actually the child of 383, as shown here. So you seemingly had a bad version of this block.
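To illustrate the failure mode, here is a minimal sketch of the duplicate-slot condition. This is a hypothetical, simplified fork map (slot → parent), not the actual solana_runtime types; the real node panics where this sketch returns an error:

```rust
use std::collections::HashMap;

/// Hypothetical stand-in for bank/fork bookkeeping: maps slot -> parent slot.
struct ForkMap {
    slots: HashMap<u64, u64>,
}

impl ForkMap {
    fn new() -> Self {
        Self { slots: HashMap::new() }
    }

    /// Inserting a slot that already exists corresponds to the
    /// "set_hash: already exists" condition from the log above.
    fn insert(&mut self, slot: u64, parent: u64) -> Result<(), String> {
        if self.slots.contains_key(&slot) {
            return Err(format!(
                "slot {} already exists; multiple forks with shared slot as child (parent: {})",
                slot, parent
            ));
        }
        self.slots.insert(slot, parent);
        Ok(())
    }
}

fn main() {
    let mut forks = ForkMap::new();
    // First insert mirrors "new fork:135986384 parent:135986379".
    assert!(forks.insert(135_986_384, 135_986_379).is_ok());
    // Creating a second bank for the same slot is the error condition;
    // in the real runtime this is where the node panics.
    assert!(forks.insert(135_986_384, 135_986_383).is_err());
    println!("duplicate slot rejected");
}
```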
Moreover, 135986379 was the highest optimistically confirmed slot, and it was picked and used as the snapshot restart slot. The timestamp on that log line is 20:24 UTC; however, this status page indicates that block production didn't resume until 21:00 UTC. Thus, I don't think you should have been able to replay anything > 135986379 until block production resumed.
So, my hypothesis is that your node had a version of 384 that was marked dead, and then ran into the same behavior as outlined in #28343.
I doubt you still have the full logs 🤣 so I don't think we can be 100% certain, but in any case, I'm going to close this issue.
Problem
Trying to restart today and ran into: restart_crash.txt
Proposed Solution