feat: refactor #55
Conversation
Actionable comments posted: 3
🧹 Outside diff range and nitpick comments (4)
merkle/merkle.go (1)
60-74: Validation of initialization parameters
In the InitializeWorkingTree method, consider adding validation to ensure that the version parameter is greater than zero. This will prevent potential issues with uninitialized or default versions.
Apply the following diff to add the validation:
 func (m *Merkle) InitializeWorkingTree(version uint64, treeIndex uint64, startLeafIndex uint64) error {
+	if version == 0 {
+		return fmt.Errorf("version must be greater than zero")
+	}
 	if treeIndex < 1 || startLeafIndex < 1 {
 		return fmt.Errorf("failed to initialize working tree index: %d, leaf: %d; invalid index", treeIndex, startLeafIndex)
 	}
challenger/challenger.go (1)
281-283: Method receiver should be a pointer
The DB method has a receiver of type Challenger instead of *Challenger. This means it operates on a copy of the Challenger struct, which can lead to unintended behavior.
Change the receiver to a pointer to ensure it uses the same instance:
-func (c Challenger) DB() types.DB {
+func (c *Challenger) DB() types.DB {
 	return c.db
 }
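For illustration, a minimal standalone Go sketch of the difference between value and pointer receivers (the Counter type is hypothetical and not part of this codebase):
package main

import "fmt"

// Counter exists only to illustrate receiver semantics.
type Counter struct {
	n int
}

// IncByValue uses a value receiver: it increments a copy, so the caller's Counter is unchanged.
func (c Counter) IncByValue() { c.n++ }

// IncByPointer uses a pointer receiver: it increments the caller's Counter in place.
func (c *Counter) IncByPointer() { c.n++ }

func main() {
	c := Counter{}
	c.IncByValue()
	fmt.Println(c.n) // 0 — only the copy was incremented
	c.IncByPointer()
	fmt.Println(c.n) // 1 — the original was incremented
}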
node/broadcaster/broadcaster.go (2)
Line range hint 91-110: Enhance error context in initialization
While the error handling is generally good, consider adding more context to errors during account initialization.
 func (b *Broadcaster) Initialize(ctx types.Context, status *rpccoretypes.ResultStatus, keyringConfigs []btypes.KeyringConfig) error {
 	for _, keyringConfig := range keyringConfigs {
 		account, err := NewBroadcasterAccount(ctx, b.cfg, b.cdc, b.txConfig, b.rpcClient, keyringConfig)
 		if err != nil {
-			return err
+			return errors.Wrapf(err, "failed to create broadcaster account for config: %s", keyringConfig.Name)
 		}
 		err = account.Load(ctx)
 		if err != nil {
-			return err
+			return errors.Wrapf(err, "failed to load broadcaster account: %s", keyringConfig.Name)
 		}
 		b.accounts = append(b.accounts, account)
 		b.addressAccountMap[account.GetAddressString()] = len(b.accounts) - 1
 	}
268-289: Add input validation to MsgsToProcessedMsgs
The utility method should validate its input parameters to prevent potential issues.
 func MsgsToProcessedMsgs(queues map[string][]sdk.Msg) []btypes.ProcessedMsgs {
+	if queues == nil {
+		return nil
+	}
 	res := make([]btypes.ProcessedMsgs, 0)
 	for sender := range queues {
+		if sender == "" {
+			continue
+		}
 		msgs := queues[sender]
+		if len(msgs) == 0 {
+			continue
+		}
 		for i := 0; i < len(msgs); i += 5 {
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
📒 Files selected for processing (8)
- challenger/challenger.go (8 hunks)
- executor/executor.go (5 hunks)
- merkle/db.go (1 hunks)
- merkle/merkle.go (4 hunks)
- merkle/types/db.go (3 hunks)
- node/block_handler.go (1 hunks)
- node/broadcaster/broadcaster.go (7 hunks)
- node/node.go (8 hunks)
🚧 Files skipped from review as they are similar to previous changes (3)
- node/node.go
- executor/executor.go
- merkle/db.go
🔇 Additional comments (6)
node/block_handler.go (1)
28-54: Code looks good and is well-structured
The handleBlockTxs function properly checks for the mismatch in transaction counts and processes each transaction and its events accordingly. Error handling is appropriately managed with contextual wrapping.
merkle/types/db.go (2)
69-69: Ensure backward compatibility when changing JSON tags
Changing the JSON tag from "height_data" to "last_siblings" in TreeInfo may break backward compatibility with existing serialized data.
Consider implementing custom unmarshal logic to handle both old and new tags or provide a migration path for existing data.
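One possible shape for such compatibility logic is a custom UnmarshalJSON that accepts both keys; the sketch below uses a simplified stand-in for TreeInfo, so the field name and [][]byte type are assumptions rather than the actual struct definition:
package merkletypes

import "encoding/json"

// treeInfoCompat is a hypothetical, trimmed-down stand-in used only to show the pattern.
type treeInfoCompat struct {
	LastSiblings [][]byte `json:"last_siblings"`
}

// UnmarshalJSON reads the new "last_siblings" key and falls back to the legacy "height_data" key.
func (t *treeInfoCompat) UnmarshalJSON(data []byte) error {
	var aux struct {
		LastSiblings [][]byte `json:"last_siblings"`
		HeightData   [][]byte `json:"height_data"` // legacy tag
	}
	if err := json.Unmarshal(data, &aux); err != nil {
		return err
	}
	t.LastSiblings = aux.LastSiblings
	if t.LastSiblings == nil {
		t.LastSiblings = aux.HeightData
	}
	return nil
}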
123-132: 🛠️ Refactor suggestion
Add validation for critical fields in NewFinalizedTreeInfo
The constructor should validate critical fields like root, treeHeight, and leafCount to prevent invalid tree states.
Apply the following diff to add validation:
func NewFinalizedTreeInfo(treeIndex uint64, treeHeight uint8, root []byte, startLeafIndex uint64, leafCount uint64, extraData []byte) FinalizedTreeInfo {
+ if len(root) == 0 {
+ panic("root hash cannot be empty")
+ }
+ if treeHeight == 0 {
+ panic("tree height cannot be zero")
+ }
+ if leafCount == 0 {
+ panic("leaf count cannot be zero")
+ }
return FinalizedTreeInfo{
TreeIndex: treeIndex,
TreeHeight: treeHeight,
Root: root,
StartLeafIndex: startLeafIndex,
LeafCount: leafCount,
ExtraData: extraData,
}
}
node/broadcaster/broadcaster.go (3)
4-4: LGTM: Improved struct organization and dependencies
The structural changes enhance the code organization by:
- Adding proper hex encoding support for transaction hashes
- Introducing better account management with addressAccountMap
- Improving state tracking with syncedHeight
Also applies to: 32-48
153-169: Fix potential infinite loop in transaction waiting
The break statement inside the select case will only break from the select, not the for loop. Additionally, there's no explicit check for transaction inclusion status.
- case <-timer.C:
- break
+ case <-timer.C:
+ ctx.Logger().Info("timeout reached while waiting for pending transactions")
+ return nil
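If returning early is not desired, a labeled break is the idiomatic way to leave the enclosing for loop from inside a select; a self-contained sketch unrelated to the broadcaster's actual loop:
package main

import (
	"fmt"
	"time"
)

func main() {
	timer := time.NewTimer(50 * time.Millisecond)
	ticker := time.NewTicker(10 * time.Millisecond)
	defer ticker.Stop()

WAIT: // the label lets break exit the for loop, not just the select
	for {
		select {
		case <-ticker.C:
			fmt.Println("still waiting for pending txs")
		case <-timer.C:
			break WAIT // a bare break here would only leave the select
		}
	}
	fmt.Println("done waiting")
}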
38-44: Verify channel operations and goroutine safety
The interaction between txChannel and pendingTxs needs careful verification to prevent potential deadlocks or race conditions.
✅ Verification successful
Based on the results, I can now generate the final response as the code shows consistent and safe mutex usage patterns:
Mutex and channel operations appear to be properly implemented
The verification shows:
- All mutex locks are properly paired with deferred unlocks
- No nested locks that could cause deadlocks
- Channel operations are straightforward with no complex blocking patterns
- Consistent mutex protection for the pendingTxs slice
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Check for potential race conditions and channel-related issues
# Look for channel operations and mutex usage patterns
rg -A 3 "txChannel\s*<-|<-\s*txChannel|pendingTxMu"
# Check for potential deadlock scenarios with multiple locks
ast-grep --pattern 'Lock()
$$$
Lock()'
# Look for defer Unlock patterns
ast-grep --pattern 'Lock()
$$$
defer $_.Unlock()'
Length of output: 1998
Co-authored-by: beer-1 <[email protected]>
Actionable comments posted: 1
🧹 Outside diff range and nitpick comments (11)
challenger/child/withdraw.go (3)
43-45: Consider using fmt.Sprintf for the panic message.
While the invariant check is crucial, constructing the error message directly in fmt.Errorf could be inefficient. Consider pre-formatting the message:
- panic(fmt.Errorf("INVARIANT failed; handleInitiateWithdrawal expect to working tree at leaf `%d` (start `%d` + count `%d`) but we got leaf `%d`", workingTree.StartLeafIndex+workingTree.LeafCount, workingTree.StartLeafIndex, workingTree.LeafCount, l2Sequence))
+ msg := fmt.Sprintf("INVARIANT failed; handleInitiateWithdrawal expect to working tree at leaf `%d` (start `%d` + count `%d`) but we got leaf `%d`", workingTree.StartLeafIndex+workingTree.LeafCount, workingTree.StartLeafIndex, workingTree.LeafCount, l2Sequence)
+ panic(fmt.Errorf(msg))
76-76: Consider pre-formatting the panic message.
Similar to the previous suggestion, consider pre-formatting the panic message for better performance:
- panic(fmt.Errorf("working tree not found at height: %d, current: %d", blockHeight-1, blockHeight))
+ msg := fmt.Sprintf("working tree not found at height: %d, current: %d", blockHeight-1, blockHeight)
+ panic(fmt.Errorf(msg))
121-121: Consider pre-formatting the panic message.
For consistency with previous suggestions:
- panic(fmt.Errorf("INVARIANT failed; handleTree expect to finalize tree at block `%d` but we got block `%d`", ch.finalizingBlockHeight, blockHeight))
+ msg := fmt.Sprintf("INVARIANT failed; handleTree expect to finalize tree at block `%d` but we got block `%d`", ch.finalizingBlockHeight, blockHeight)
+ panic(fmt.Errorf(msg))
challenger/host/handler.go (2)
13-18: Consider adding documentation for the event queues.
While the implementation is clean, adding comments explaining the purpose of eventQueue and outputPendingEventQueue would improve code maintainability.
Line range hint 77-82: Add documentation explaining the txIndex condition.
Consider adding a comment explaining why only the first transaction (txIndex == 0) is processed by the oracle handler.
challenger/db/db.go (2)
141-141: Use logging instead of fmt.Printf for consistent logging
Using fmt.Printf directly for logging is not recommended. Consider using a logger for consistent and configurable logging throughout your application.
Apply this diff to switch to using a logger:
-fmt.Printf("reset height to 0 for node %s\n", string(nodeDB.GetPrefix()))
+// Assuming a logger is available
+log.Printf("reset height to 0 for node %s\n", string(nodeDB.GetPrefix()))
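Since the surrounding code already logs through ctx.Logger() with zap fields, a zap-based call may fit better than the standard log package; a minimal sketch (the logger construction below is illustrative wiring, not the project's own):
package main

import "go.uber.org/zap"

func main() {
	logger, err := zap.NewProduction()
	if err != nil {
		panic(err)
	}
	defer logger.Sync()

	prefix := []byte("challenger") // hypothetical node prefix for the example
	// A constant message plus structured fields keeps logs consistent and searchable.
	logger.Info("reset height to 0", zap.String("node", string(prefix)))
}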
145-181: Refactor duplicate code in deletion functions
The functions DeleteAllPendingEvents and DeleteAllPendingChallenges contain similar logic for deleting items from the database. Consider refactoring this code into a single helper function to reduce duplication and improve maintainability.
Apply this diff to refactor the deletion logic:
+func DeleteAllPendingItems(db types.DB, prefix []byte) error {
+	deletingKeys := make([][]byte, 0)
+	iterErr := db.Iterate(prefix, nil, func(key []byte, _ []byte) (stop bool, err error) {
+		deletingKeys = append(deletingKeys, key)
+		return false, nil
+	})
+	if iterErr != nil {
+		return iterErr
+	}
+
+	for _, key := range deletingKeys {
+		err := db.Delete(key)
+		if err != nil {
+			return err
+		}
+	}
+	return nil
+}
+
-func DeleteAllPendingEvents(db types.DB) error {
-	// Existing logic...
-}
+// Update the function to use the new helper
+func DeleteAllPendingEvents(db types.DB) error {
+	return DeleteAllPendingItems(db, challengertypes.PendingEventKey)
+}
-func DeleteAllPendingChallenges(db types.DB) error {
-	// Existing logic...
-}
+// Update the function to use the new helper
+func DeleteAllPendingChallenges(db types.DB) error {
+	return DeleteAllPendingItems(db, challengertypes.PendingChallengeKey)
+}
challenger/handler.go (2)
20-20: File is not formatted with goimports
The file is not properly formatted according to goimports. Please run goimports to format your code and ensure consistent import grouping and ordering.
🧰 Tools
🪛 golangci-lint (1.62.2)
20-20: File is not goimports-ed (goimports)
74-74: Adjust log level for challenge logging
Using ctx.Logger().Error() to log the challenge might not be appropriate unless it represents an actual error. Consider using ctx.Logger().Info() or ctx.Logger().Debug() to log informational messages.
Apply this diff to adjust the log level:
-ctx.Logger().Error("challenge", zap.Any("challenge", challenge))
+ctx.Logger().Info("challenge", zap.Any("challenge", challenge))
node/tx_checker.go (2)
11-20: Simplify function documentation to follow Go conventions
The function documentation is verbose and includes parameter and return descriptions that are self-explanatory. Go conventions recommend concise comments that start with the function name and provide a brief description. Consider simplifying the comment for better readability.
Suggested change:
-// txChecker continuously checks for pending transactions and handles events if the transaction is included in a block.
-// If the transaction hash is not indexed by the node, even if the transaction is processed, the event handler will not be called.
-// It is recommended to use the event handler only for logging or monitoring purposes.
-//
-// Parameters:
-//   - ctx: The context for managing the lifecycle of the txChecker.
-//   - enableEventHandler: A boolean flag to enable or disable event handling.
-//
-// Returns:
-//   - error: An error if the txChecker encounters an issue.
+// txChecker monitors pending transactions and handles events when a transaction is included in a block.
 func (n *Node) txChecker(ctx types.Context, enableEventHandler bool) error {
29-45: Consider renaming consecutiveErrors to reflect its purpose
The variable consecutiveErrors is incremented on each loop iteration when there are pending transactions, regardless of whether an error has occurred. This might be misleading, as it suggests it's counting consecutive errors. If its purpose is to control retries or iterations, consider renaming it to something like retryCount or iterationCount to better reflect its actual usage.
Suggested change:
- consecutiveErrors := 0
+ retryCount := 0
 ...
 if n.broadcaster.LenLocalPendingTx() == 0 {
 	continue
 }
 ctx.Logger().Debug("remaining pending txs", zap.Int("count", n.broadcaster.LenLocalPendingTx()))
- if types.SleepWithRetry(ctx, consecutiveErrors) {
+ if types.SleepWithRetry(ctx, retryCount) {
 	return nil
 }
- consecutiveErrors++
+ retryCount++
 ...
- consecutiveErrors = 0
+ retryCount = 0
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
📒 Files selected for processing (8)
- challenger/child/withdraw.go (4 hunks)
- challenger/db/db.go (1 hunks)
- challenger/handler.go (2 hunks)
- challenger/host/handler.go (2 hunks)
- challenger/types/keys.go (2 hunks)
- node/node.go (8 hunks)
- node/query.go (1 hunks)
- node/tx_checker.go (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (3)
- node/query.go
- challenger/types/keys.go
- node/node.go
🧰 Additional context used
🪛 golangci-lint (1.62.2)
challenger/handler.go
20-20: File is not goimports-ed (goimports)
🔇 Additional comments (5)
challenger/child/withdraw.go (2)
23-32: LGTM! Well-structured error handling.
The error handling improvements with descriptive messages and proper wrapping will help with debugging. The function signature update to use types.Context aligns with the broader refactoring goals.
125-166: LGTM! Well-structured error handling and logging.
The error handling improvements with proper wrapping and descriptive messages will help with debugging. The logging is also well-structured with appropriate context fields.
challenger/host/handler.go (2)
4-10: LGTM! Import statements are well-organized and support the refactoring goals.
The addition of the errors package and domain-specific imports aligns with the improved error handling strategy.
Line range hint 20-74: Consider breaking down this method into smaller, focused functions.
The method handles multiple responsibilities including:
- Setting synced height
- Saving pending events
- Processing timeouts
- Managing challenges
Consider extracting these into separate methods for better maintainability and testing.
Verify the panic condition for outputPendingEventQueue.
The panic condition suggests a critical invariant about the queue length and content.
Inconsistent error wrapping messages.
Some errors are wrapped with context while others aren't. For consistency, consider wrapping all errors:
- return err
+ return errors.Wrap(err, "failed to save pending events on host db")
✅ Verification successful
Let me gather more context about the outputPendingEventQueue usage.
Let me check one more thing to understand how the outputPendingEventQueue is populated.
The panic condition for outputPendingEventQueue is correctly enforcing a critical invariant
The code analysis reveals that the panic condition is justified and correctly implemented:
The queue is only modified in two places:
- Reset to empty in beginBlockHandler: h.outputPendingEventQueue[:0]
- Append single output in handleProposeOutput: append(h.outputPendingEventQueue, output)
The invariant ensures that:
- The queue never contains more than one event
- If an event exists, it must be of type Output
This is crucial for maintaining the system's state consistency as outputs need to be processed sequentially and atomically.
The panic serves as a vital assertion to catch any violation of this design constraint during development or testing.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Search for all modifications of outputPendingEventQueue to verify the invariant
ast-grep --pattern 'h.outputPendingEventQueue = $$$'
Length of output: 243
Script:
#!/bin/bash
# Search for the initialization and type definition of outputPendingEventQueue
ast-grep --pattern 'type Host struct {
$$$
outputPendingEventQueue $$$
$$$
}'
# Search for all references to outputPendingEventQueue
rg "outputPendingEventQueue" -A 2
# Search for the Output event type definition
ast-grep --pattern 'type Output struct {
$$$
}'
Length of output: 3192
Script:
#!/bin/bash
# Search for the implementation of the output handling
rg -A 10 "func.*Output" challenger/host/output.go
# Search for EventTypeOutput constant
rg "EventTypeOutput.*=.*"
Length of output: 1259
challenger/db/db.go (1)
5-5: Verify the import path for the "slices" package
Ensure that the "slices" package is correctly imported. If you're using Go 1.21 or later, importing "slices" from the standard library is acceptable. Otherwise, you may need to import it from "golang.org/x/exp/slices".
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
LGTM
Actionable comments posted: 5
🧹 Outside diff range and nitpick comments (11)
node/broadcaster/broadcaster.go (2)
150-195: Consider optimizing the polling mechanism
While the transaction verification logic is now more robust with proper status checking, consider implementing an exponential backoff strategy for the polling interval to reduce unnecessary RPC calls.
- pollingTimer := time.NewTicker(ctx.PollingInterval())
+ backoff := ctx.PollingInterval()
+ maxBackoff := 30 * time.Second
+ for {
+ 	select {
+ 	case <-ctx.Done():
+ 		return ctx.Err()
+ 	case <-time.After(backoff):
+ 		// Double the backoff time for next iteration, up to maxBackoff
+ 		if backoff < maxBackoff {
+ 			backoff = time.Duration(float64(backoff) * 1.5)
+ 			if backoff > maxBackoff {
+ 				backoff = maxBackoff
+ 			}
+ 		}
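The capped-growth idea in isolation, as a self-contained sketch (the 1.5x multiplier and caps are arbitrary choices, not values taken from the broadcaster):
package main

import (
	"context"
	"fmt"
	"time"
)

// pollWithBackoff calls check until it succeeds or ctx is done, growing the
// wait between attempts by 1.5x up to maxBackoff.
func pollWithBackoff(ctx context.Context, initial, maxBackoff time.Duration, check func() bool) error {
	backoff := initial
	for {
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-time.After(backoff):
			if check() {
				return nil
			}
			if backoff < maxBackoff {
				backoff = time.Duration(float64(backoff) * 1.5)
				if backoff > maxBackoff {
					backoff = maxBackoff
				}
			}
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Second)
	defer cancel()
	attempts := 0
	err := pollWithBackoff(ctx, 100*time.Millisecond, time.Second, func() bool {
		attempts++
		return attempts >= 3 // pretend the tx is found on the third poll
	})
	fmt.Println("attempts:", attempts, "err:", err)
}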
276-297: Consider making the message chunk size configurable
The message chunk size of 5 is currently hard-coded. Consider moving this to the configuration to allow for future adjustments based on network conditions or performance requirements.
+// DefaultMsgChunkSize is the default number of messages to include in a single transaction
+const DefaultMsgChunkSize = 5
+
+type BroadcasterConfig struct {
+	// existing fields...
+	MsgChunkSize int `json:"msg_chunk_size"`
+}
 func MsgsToProcessedMsgs(queues map[string][]sdk.Msg) []btypes.ProcessedMsgs {
 	res := make([]btypes.ProcessedMsgs, 0)
 	for sender := range queues {
 		msgs := queues[sender]
-		for i := 0; i < len(msgs); i += 5 {
-			end := i + 5
+		chunkSize := b.cfg.MsgChunkSize
+		if chunkSize <= 0 {
+			chunkSize = DefaultMsgChunkSize
+		}
+		for i := 0; i < len(msgs); i += chunkSize {
+			end := i + chunkSize
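The underlying operation is plain slice chunking with a configurable size; a generic sketch (names and the fallback constant are illustrative, assuming Go 1.18+ generics):
package main

import "fmt"

const defaultChunkSize = 5 // fallback when the configured size is not positive

// chunk splits items into consecutive slices of at most size elements.
func chunk[T any](items []T, size int) [][]T {
	if size <= 0 {
		size = defaultChunkSize
	}
	var out [][]T
	for i := 0; i < len(items); i += size {
		end := i + size
		if end > len(items) {
			end = len(items)
		}
		out = append(out, items[i:end])
	}
	return out
}

func main() {
	msgs := []string{"a", "b", "c", "d", "e", "f", "g"}
	fmt.Println(chunk(msgs, 3)) // [[a b c] [d e f] [g]]
}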
challenger/query.go (2)
31-37: Enhance error handling for base64 decoding.
The error message could be more descriptive to help with debugging pagination issues.
Consider this improvement:
 if next != "" {
 	startKey, err = base64.StdEncoding.DecodeString(from)
 	if err != nil {
-		return challengertypes.QueryChallengesResponse{}, err
+		return challengertypes.QueryChallengesResponse{}, fmt.Errorf("failed to decode pagination cursor: %w", err)
 	}
 }
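For reference, an opaque cursor of this kind is typically just the next start key, base64-encoded on the way out and decoded with a wrapped error on the way in; a standalone sketch rather than the challenger's actual query code:
package main

import (
	"encoding/base64"
	"fmt"
)

// encodeCursor turns the next start key into an opaque string for the client.
func encodeCursor(nextKey []byte) string {
	return base64.StdEncoding.EncodeToString(nextKey)
}

// decodeCursor converts a client-supplied cursor back into a start key,
// wrapping the error so a malformed cursor is distinguishable from other failures.
func decodeCursor(cursor string) ([]byte, error) {
	if cursor == "" {
		return nil, nil // empty cursor means "start from the beginning"
	}
	key, err := base64.StdEncoding.DecodeString(cursor)
	if err != nil {
		return nil, fmt.Errorf("failed to decode pagination cursor: %w", err)
	}
	return key, nil
}

func main() {
	c := encodeCursor([]byte("challenge/000042")) // hypothetical key
	key, err := decodeCursor(c)
	fmt.Println(string(key), err)
}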
39-49: Refactor duplicate error handling in iteration logic.
The error handling is duplicated between forward and reverse iteration blocks.
Consider extracting the iteration logic to a helper function:
+func (c *Challenger) iterate(startKey []byte, descOrder bool, fn func([]byte, []byte) (bool, error)) error {
+	if descOrder {
+		return c.db.ReverseIterate(dbtypes.AppendSplitter(challengertypes.ChallengeKey), startKey, fn)
+	}
+	return c.db.Iterate(dbtypes.AppendSplitter(challengertypes.ChallengeKey), startKey, fn)
+}
 if descOrder {
-	err = c.db.ReverseIterate(dbtypes.AppendSplitter(challengertypes.ChallengeKey), startKey, fetchFn)
-	if err != nil {
-		return challengertypes.QueryChallengesResponse{}, err
-	}
+	if err = c.iterate(startKey, true, fetchFn); err != nil {
+		return challengertypes.QueryChallengesResponse{}, fmt.Errorf("failed to iterate challenges: %w", err)
+	}
 } else {
-	err := c.db.Iterate(dbtypes.AppendSplitter(challengertypes.ChallengeKey), startKey, fetchFn)
-	if err != nil {
-		return challengertypes.QueryChallengesResponse{}, err
-	}
+	if err = c.iterate(startKey, false, fetchFn); err != nil {
+		return challengertypes.QueryChallengesResponse{}, fmt.Errorf("failed to iterate challenges: %w", err)
+	}
 }
executor/batchsubmitter/batch_submitter.go (5)
19-19: Use Standard Error Handling Package
Consider replacing the import of "github.com/pkg/errors" with the standard "errors" package. As of Go 1.13, the standard library provides most of the functionality needed for error wrapping and unwrapping.
Apply this diff to update the import:
- "github.com/pkg/errors"
+ "errors"
And update error wrapping in the code:
- return errors.Wrap(err, "failed to initialize node")
+ return fmt.Errorf("failed to initialize node: %w", err)
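The standard-library form keeps the error chain intact, so callers can still match the underlying cause with errors.Is or errors.As; a small self-contained sketch of that behavior:
package main

import (
	"errors"
	"fmt"
	"io/fs"
	"os"
)

func loadConfig(path string) error {
	if _, err := os.ReadFile(path); err != nil {
		// %w wraps err, preserving it for errors.Is / errors.As.
		return fmt.Errorf("failed to load config %q: %w", path, err)
	}
	return nil
}

func main() {
	err := loadConfig("/nonexistent/config.json") // hypothetical path
	fmt.Println(err)
	// The wrapped fs.ErrNotExist is still detectable through the chain.
	fmt.Println(errors.Is(err, fs.ErrNotExist)) // true
}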
Line range hint 91-138: Consistent Context Usage and Error Handling
In the Initialize method, there are multiple instances where errors are wrapped but not checked for context cancellation or deadlines. Ensure that context errors are handled appropriately to avoid resource leaks.
Consider checking for context cancellation:
 if ctx.Err() != nil {
 	return ctx.Err()
 }
Additionally, ensure consistent use of ctx.Logger() for logging throughout the method.
115-117: Use filepath.Join for File Paths
When constructing file paths, it's safer to use filepath.Join to ensure compatibility across different operating systems.
Apply this diff:
- bs.batchFile, err = os.OpenFile(ctx.HomePath()+"/batch", fileFlag, 0640)
+ batchFilePath := filepath.Join(ctx.HomePath(), "batch")
+ bs.batchFile, err = os.OpenFile(batchFilePath, fileFlag, 0640)
Don't forget to import "path/filepath".
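filepath.Join also inserts the OS-specific separator and cleans redundant path elements, which plain string concatenation does not; a quick illustration with a hypothetical home path:
package main

import (
	"fmt"
	"path/filepath"
)

func main() {
	home := "/home/user" // hypothetical home directory, for the example only
	fmt.Println(filepath.Join(home, "batch"))      // /home/user/batch
	fmt.Println(filepath.Join(home, ".", "batch")) // /home/user/batch — redundant "." removed
}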
123-123: Ensure Correct Initialization of BatchSize
The initialization of bs.localBatchInfo.BatchSize = 0 may be redundant if the default zero value is acceptable, but ensure that this assignment reflects the intended starting state.
If intentional, consider adding a comment to clarify:
 // Initialize BatchSize to zero at the start
 bs.localBatchInfo.BatchSize = 0
179-181: Add Method Documentation
Add comments to exported methods like DB() to enhance code readability and maintainability.
Consider adding:
// DB returns the database instance associated with the BatchSubmitter
challenger/challenger.go (2)
Line range hint 230-287: Simplify and Document getProcessedHeights Logic
The getProcessedHeights function is complex and contains nested conditionals. For maintainability, consider refactoring into smaller functions and adding comments to explain the logic.
289-291: Add Method Documentation
Add comments to the DB() method to enhance clarity.
Consider adding:
// DB returns the database instance associated with the Challenger
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
📒 Files selected for processing (7)
- challenger/challenger.go (9 hunks)
- challenger/handler.go (2 hunks)
- challenger/query.go (1 hunks)
- challenger/types/query.go (1 hunks)
- executor/batchsubmitter/batch_submitter.go (9 hunks)
- executor/celestia/node.go (5 hunks)
- node/broadcaster/broadcaster.go (7 hunks)
🚧 Files skipped from review as they are similar to previous changes (2)
- executor/celestia/node.go
- challenger/handler.go
🔇 Additional comments (12)
node/broadcaster/broadcaster.go (2)
Line range hint 4-48: LGTM! Improved struct organization and efficiency
The restructuring of the Broadcaster struct with the addition of addressAccountMap and renaming of fields improves code clarity and lookup efficiency.
Line range hint 91-136: Well-structured initialization with proper error handling
Good use of error wrapping pattern and staged initialization. The separation of concerns between initialization steps improves maintainability.
challenger/query.go (1)
11-14: LGTM! Good implementation of cursor-based pagination.
The method signature has been improved to use cursor-based pagination with a from parameter instead of page numbers, which is more efficient and reliable for large datasets.
executor/batchsubmitter/batch_submitter.go (4)
130-132: Reuse Existing Method Instead of Duplicating Logic
There was a previous comment suggesting reusing bs.emptyBatchFile. Ensure that any logic for emptying the batch file is centralized to avoid duplication.
149-150: Update Logging to Use Structured Context
In the Start method, confirm that all logging statements use ctx.Logger() and provide sufficient context for easier debugging.
57-58: 🛠️ Refactor suggestion
Update Constructor Parameters and Usage
The NewBatchSubmitterV1 function signature has changed with the addition of db types.DB and chainID string parameters. Ensure all calls to this constructor are updated accordingly in the codebase.
Run the following script to find all usages of NewBatchSubmitterV1:
✅ Verification successful
Let me check the actual implementation and usage to verify if the constructor calls are properly updated.
Constructor calls are properly updated
The verification shows that the only usage of NewBatchSubmitterV1 in executor/executor.go has been correctly updated to include all required parameters:
- db.WithPrefix([]byte(types.BatchName)) for the db parameter
- cfg.L2Node.ChainID for the chainID parameter
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Find all calls to `NewBatchSubmitterV1` and check parameter updates.
# Expect: All calls should pass `db` and `chainID` as arguments.
rg 'NewBatchSubmitterV1\('
Length of output: 172
Script:
#!/bin/bash
# Get the implementation and usage context
echo "=== BatchSubmitter Implementation ==="
rg -A 10 'func NewBatchSubmitterV1\(' executor/batchsubmitter/batch_submitter.go
echo -e "\n=== BatchSubmitter Usage ==="
rg -B 5 -A 5 'batchSubmitter:\s*batchsubmitter.NewBatchSubmitterV1\(' executor/executor.go
Length of output: 850
46-47: Ensure All Necessary Fields Are Initialized
The addition of the chainID string field is noted, but please verify that this field is properly initialized and used throughout the code to prevent potential nil or zero-value issues.
Run the following script to check for uninitialized usage of chainID:
✅ Verification successful
Let me check the initialization of the chainID field.
Let me try a different approach to find the constructor.
Let me check the complete constructor implementation to verify the initialization.
Field Properly Initialized and Used
The chainID field is properly initialized in the constructor NewBatchSubmitterV1 and is only used once in a getter method. The field is assigned the value passed to the constructor at line chainID: chainID, and there are no instances of uninitialized usage.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Search for uninitialized uses of `bs.chainID`
# Expect: All usages of `bs.chainID` should follow its initialization.
rg --no-heading --line-number '\bbs\.chainID\b' executor/
Length of output: 125
Script:
#!/bin/bash
# Search for constructor or initialization of BatchSubmitter
ast-grep --pattern 'func NewBatchSubmitter($_) {
$$$
}'
Length of output: 59
Script:
#!/bin/bash
# Search for BatchSubmitter constructor with rg
rg -A 10 "func NewBatchSubmitter" executor/
Length of output: 762
Script:
#!/bin/bash
# Get more context of the constructor
rg -A 30 "func NewBatchSubmitterV1" executor/
Length of output: 2082
challenger/types/query.go (1)
3-6: 🛠️ Refactor suggestion
Add JSON Tags for Consistency and Clarity
Ensure that all fields in the struct have appropriate JSON tags for serialization consistency.
Apply this diff to add missing JSON tags:
type QueryChallengesResponse struct {
Challenges []Challenge `json:"challenges"`
- Next *string `json:"next,omitempty"`
+ Next *string `json:"next,omitempty"`
}
If all tags are already present and correct, this comment can be skipped.
challenger/challenger.go (4)
Line range hint 80-139: Handle Context Errors in Initialize Method
In the Initialize
method, ensure that context cancellation and timeouts are appropriately handled to prevent partial initialization or resource leaks.
Consider adding context error checks after operations that may take time:
if ctx.Err() != nil {
return ctx.Err()
}
147-176: Potential Deadlock with Unbuffered Channel
Reiterating the past review comment, sending to an unbuffered channel challengeCh without a guaranteed receiver may lead to deadlocks.
Consider starting the challenge handler goroutine before sending challenges or using a buffered channel.
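Both mitigations are easy to see in isolation; a sketch with generic names (not the challenger's actual types) that starts the receiver goroutine before any send and adds a small buffer as extra headroom:
package main

import (
	"fmt"
	"sync"
)

func main() {
	// The buffer absorbs short bursts; the real safety comes from having the
	// receiver running before the first send.
	challengeCh := make(chan string, 8)

	var wg sync.WaitGroup
	wg.Add(1)
	go func() { // receiver is live before anything is sent
		defer wg.Done()
		for c := range challengeCh {
			fmt.Println("handled:", c)
		}
	}()

	for i := 0; i < 3; i++ {
		challengeCh <- fmt.Sprintf("challenge-%d", i) // cannot block forever: receiver is draining
	}
	close(challengeCh)
	wg.Wait()
}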
76-76: Initialize stage Field Correctly
Similar to previous comments, ensure that stage is properly initialized and handle any potential errors from db.NewStage().
Apply this diff:
- stage: db.NewStage(),
+ stage, err := db.NewStage()
+ if err != nil {
+ return nil, errors.Wrap(err, "failed to create new stage")
+ }
+ c.stage = stage
Likely invalid or redundant comment.
43-44: Remove Unused Imports or Variables
Lines 43-44 are empty; ensure there are no unintended side effects or residual code. If the removed fields logger and homePath are no longer used, confirm that all references have been cleaned up.
Run the following script to check for unused variables:
✅ Verification successful
Based on the search results, I can now generate the final response:
Empty Lines Are Clean - No Issues Found
The empty lines 43-44 in challenger/challenger.go are not problematic. The code review shows that logger and homePath are properly managed in the types/context.go package and are passed through the context parameter in methods, rather than being stored as struct fields in the Challenger type. This is a clean implementation that follows proper dependency injection patterns.
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Description: Find any unused variables or imports in challenger.go
golangci-lint run challenger/challenger.go
Length of output: 148
Script:
#!/bin/bash
# Let's check the content of the file to understand the context better
cat challenger/challenger.go
Length of output: 7368
Script:
#!/bin/bash
# Search for any references to logger and homePath in the challenger directory
rg -l "logger|homePath" challenger/
Length of output: 37
Script:
#!/bin/bash
# Check git history for the specific lines to understand what was removed
git show -L43,44:challenger/challenger.go
Length of output: 332
Script:
#!/bin/bash
# Let's check the git history for this file to see the previous version
git log -p challenger/challenger.go | grep -A 20 -B 20 "logger\|homePath"
Length of output: 0
Script:
#!/bin/bash
# Let's check if logger or homePath are used anywhere in the code structure
rg "logger|homePath" --type go
Length of output: 5318
Script:
#!/bin/bash
# Let's check the commit that might have removed these fields
git log --patch -S "logger" --since="1 month ago" challenger/challenger.go
Length of output: 0
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
Actionable comments posted: 5
🧹 Outside diff range and nitpick comments (5)
challenger/handler.go (1)
19-38: Enhance error handling with descriptive messages
While the error handling is functional, it could be more descriptive by wrapping errors with context.
Consider applying this pattern:
 c.stage.Reset()
 // Remove the pending challenge that was stored by the client or host
 err := challengerdb.DeletePendingChallenge(c.stage, challenge)
 if err != nil {
-	return err
+	return errors.Wrap(err, "failed to delete pending challenge")
 }
 err = challengerdb.SaveChallenge(c.stage, challenge)
 if err != nil {
-	return err
+	return errors.Wrap(err, "failed to save challenge")
 }
 err = c.handleChallenge(ctx, challenge)
 if err != nil {
-	return err
+	return errors.Wrap(err, "failed to handle challenge")
 }
 err = c.stage.Commit()
 if err != nil {
-	return err
+	return errors.Wrap(err, "failed to commit stage")
 }
executor/batchsubmitter/batch_submitter.go (1)
19-19: Consider using standard errors package
The import of github.com/pkg/errors replaces the standard errors package. Unless you require specific functionality provided by this package, it's preferable to use the standard library to reduce dependencies.
challenger/challenger.go (3)
Line range hint 47-77: Consider enhancing error handling in the constructor
While the changes look good overall, the current error handling pattern of panicking on validation failure isn't ideal for a constructor. Consider returning an error instead of panicking to allow better error handling by the caller.
-func NewChallenger(cfg *challengertypes.Config, db types.DB, sv *server.Server) *Challenger {
+func NewChallenger(cfg *challengertypes.Config, db types.DB, sv *server.Server) (*Challenger, error) {
 	err := cfg.Validate()
 	if err != nil {
-		panic(err)
+		return nil, errors.Wrap(err, "invalid challenger config")
 	}
 	challengeCh := make(chan challengertypes.Challenge)
-	return &Challenger{
+	return &Challenger{
 		host: host.NewHostV1(
 			cfg.L1NodeConfig(),
 			db.WithPrefix([]byte(types.HostName)),
 		),
 		// ... rest of the initialization
-	}
+	}, nil
 }
189-205: Enhance query parameter validation
While basic validation is implemented, consider adding these improvements:
- Validate that limit is positive
- Add proper error handling for invalid order values
- Consider using constants for default and max limit values
+const (
+	defaultLimit = 10
+	maxLimit     = 100
+)
+
 next := ctx.Query("next", "")
-limit := ctx.QueryInt("limit", 10)
-if limit > 100 {
-	limit = 100
+limit := ctx.QueryInt("limit", defaultLimit)
+if limit <= 0 {
+	return errors.New("limit must be positive")
+}
+if limit > maxLimit {
+	limit = maxLimit
 }
 ulimit, err := types.SafeInt64ToUint64(int64(limit))
 if err != nil {
 	return errors.Wrap(err, "failed to convert limit")
 }
-descOrder := true
-orderStr := ctx.Query("order", "desc")
-if orderStr == "asc" {
-	descOrder = false
+orderStr := strings.ToLower(ctx.Query("order", "desc"))
+descOrder := true
+switch orderStr {
+case "asc":
+	descOrder = false
+case "desc":
+	// default is already desc
+default:
+	return errors.Errorf("invalid order value: %s", orderStr)
 }
289-291: Add method documentation
Consider adding documentation to explain the purpose of this method and any relevant usage details.
+// DB returns the database instance used by the challenger.
+// This method provides access to the underlying database for external components.
 func (c Challenger) DB() types.DB {
 	return c.db
 }
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
📒 Files selected for processing (7)
- challenger/challenger.go (9 hunks)
- challenger/handler.go (2 hunks)
- challenger/query.go (1 hunks)
- challenger/types/query.go (1 hunks)
- executor/batchsubmitter/batch_submitter.go (9 hunks)
- executor/celestia/node.go (5 hunks)
- node/broadcaster/broadcaster.go (7 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- executor/celestia/node.go
🔇 Additional comments (15)
challenger/handler.go (3)
12-12: LGTM: Context type change aligns with project direction
The change from context.Context to types.Context improves consistency across the codebase.
21-21: Improve comment clarity
The comment could be more descriptive about the purpose of deleting the pending challenge.
12-40: Verify channel closure handling
The channel closure pattern looks correct, but let's verify there are no potential deadlocks or race conditions.
✅ Verification successful
Based on the results, I can now generate the final response as we have sufficient information about the channel usage patterns.
Channel handling implementation is safe and correct
The channel implementation follows Go's best practices:
- challengeCh is properly initialized as an unbuffered channel
- challengeChStopped is correctly used as a signal channel with struct{}
- Channel closure is handled via defer in the main handler
- The select statement properly handles context cancellation
- No potential deadlocks found in channel operations
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash
# Search for all channel operations on challengeCh and challengeChStopped
rg -A 3 "challengeCh(Stopped)?"
# Look for other goroutines that might interact with these channels
ast-grep --pattern 'go func() {
$$$
}'
Length of output: 1644
node/broadcaster/broadcaster.go (7)
4-4: Imports are appropriate and necessary
The added imports encoding/hex and sdk "github.com/cosmos/cosmos-sdk/types" are necessary for the added functionality and are correctly included.
Also applies to: 18-18
32-34: Struct enhancements and synchronization mechanisms are well-implemented
The additions to the Broadcaster struct, including fields like txConfig, accounts, addressAccountMap, and synchronization mechanisms like accountMu and pendingTxMu, improve the broadcaster's ability to handle transactions and manage accounts safely in concurrent environments.
Also applies to: 38-48
71-73: Proper initialization of synchronization primitives and data structures
The initialization of pendingTxMu, pendingTxs, and pendingProcessedMsgsBatch sets up the necessary mutexes and data slices for managing pending transactions and messages effectively.
91-93: Consistent context management in Initialize method
The Initialize method now uses types.Context, aligning with the updated context management strategy across the codebase. Passing the new context to NewBroadcasterAccount ensures consistency and proper handling of context-specific operations.
Also applies to: 95-95
111-111: Addition of SetSyncInfo method enhances synchronization control
The SetSyncInfo method correctly updates the syncedHeight field, allowing for precise control over synchronization state within the broadcaster.
Also applies to: 113-113
116-136: Improved broadcaster preparation logic
The prepareBroadcaster method effectively loads pending transactions and processed message batches from the database, with appropriate error handling and staging. This enhances the reliability of the broadcaster's startup sequence.
138-211: Enhanced transaction verification logic in loadPendingTxs
The loadPendingTxs method now properly checks the inclusion status and success of pending transactions during the wait time, addressing previous concerns about verifying transaction success before proceeding.
challenger/types/query.go (1)
1-6: Addition of QueryChallengesResponse type is appropriate
The new QueryChallengesResponse struct correctly models the response for querying challenges, including pagination support with the Next field. The use of omitempty ensures that Next is omitted from the JSON output when it is nil, which is a good practice.
executor/batchsubmitter/batch_submitter.go (2)
Line range hint 57-69: Avoid using panic for error handling
Using panic at line 69 can lead to abrupt termination of the application, which is undesirable in production. Return the error instead and handle it appropriately.
Apply this diff to handle the error:
- panic(errors.Wrap(err, "failed to create node"))
+ return nil, errors.Wrap(err, "failed to create node")
Adjust the function signature to return an error:
- ) *BatchSubmitter {
+ ) (*BatchSubmitter, error) {
86-86: Handle potential errors from db.NewStage()
Ensure that the initialization of the stage field handles any potential errors from db.NewStage(), as unhandled errors may lead to unexpected behavior.
Consider modifying the code:
- stage: db.NewStage(),
+ stage, err := db.NewStage()
+ if err != nil {
+ return nil, errors.Wrap(err, "failed to create new stage")
+ }
Remember to adjust the function signature and return statements accordingly.
challenger/challenger.go (2)
11-11: LGTM: Clean structural changes
The addition of the challengerdb import and the stage field of type types.CommitDB are well-organized and properly integrated into the codebase.
Also applies to: 43-44
160-166: Potential deadlock due to unbuffered channel usage
The unbuffered channel issue previously identified still exists in the code.
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
Summary by CodeRabbit
Release Notes
New Features
- NewTestBaseChild and NewTestBaseHost for testing purposes.
- Context struct for improved context management in various methods.
- QueryBlockTime for retrieving block time based on height.
- TxHash for computing transaction hashes.
Bug Fixes
- Broadcaster struct.
Refactor
- types.Context across various components.
- BaseChild and BaseHost initialization processes by removing unnecessary parameters.
Tests