fix(zetaclient): tolerate priorityFee > gasFee #2950
Conversation
📝 Walkthrough
The pull request introduces significant updates across multiple workflow files, a Makefile, and several Go source files. Key changes include the consolidation of input parameters for end-to-end testing, the addition of new migration test targets, and enhancements to the release publishing process. The encoding configuration has been updated to align with the latest module structures, while the changelog reflects the release of version 20.0.0, detailing new features and improvements.
Codecov Report
All modified and coverable lines are covered by tests ✅

Additional details and impacted files

```
@@            Coverage Diff             @@
##          release/v20    #2950   +/- ##
============================================
  Coverage       66.73%    66.74%
============================================
  Files             364       364
  Lines           20573     20577    +4
============================================
+ Hits            13730     13734    +4
  Misses           6211      6211
  Partials          632       632
```
lgtm
Caution
Inline review comments failed to post
🛑 Comments failed to post (86)
contrib/localnet/bitcoin-sidecar/js/src/util.ts (1)
1-1: 🛠️ Refactor suggestion
Enhance type safety, input validation, and documentation for the `toXOnly` function.

While the current implementation is concise, it can be improved for production-grade code. Consider the following enhancements:
- Add TypeScript type annotations for better type checking and documentation.
- Implement input validation to handle edge cases.
- Add JSDoc comments to explain the function's purpose and usage.
Here's a more robust implementation:
```ts
/**
 * Converts a public key to its 32-byte X-only representation.
 * @param pubKey - The public key as a Buffer or Uint8Array.
 * @returns The 32-byte X-only public key.
 * @throws {Error} If the input is invalid or has insufficient length.
 */
export const toXOnly = (pubKey: Buffer | Uint8Array): Buffer | Uint8Array => {
    if (!pubKey || pubKey.length < 32) {
        throw new Error('Invalid public key: must be at least 32 bytes');
    }
    return pubKey.length === 32 ? pubKey : pubKey.slice(1, 33);
};
```
This implementation adds type annotations, input validation, and documentation while maintaining the core logic of the original function.
contrib/localnet/bitcoin-sidecar/js/src/tsconfig.json (1)
10-10: 🛠️ Refactor suggestion
Consider expanding library specifications for enhanced type support.
While the inclusion of "es2015" in the library array is appropriate and aligns with the ES6 target, for a Node.js environment, it would be beneficial to include additional type definitions.
Expand the library array to include Node.js types:
{ "compilerOptions": { // ... (existing options) }, - "lib": ["es2015"] + "lib": ["es2015", "dom"], + "types": ["node"] }This change will provide better type support for Node.js APIs and improve development experience with more accurate IntelliSense in compatible IDEs.
Committable suggestion was skipped due to low confidence.
contrib/localnet/bitcoin-sidecar/Dockerfile (2)
1-7: 🛠️ Refactor suggestion
Optimize the builder stage for better efficiency and security.
While the current implementation is functional, there are several areas for improvement:
- File copying: Instead of copying all files from `bitcoin-sidecar/js/*`, consider copying only necessary files. This reduces the image size and potential security risks.
- Dependency management: It appears that `npm install` is run without a `package.json` file. Ensure that a `package.json` file is present and copied before running `npm install`.
- Build optimization: Consider using `npm ci` instead of `npm install` for more reproducible builds in a CI/CD environment.
- Security: Avoid installing global packages in production images. Consider using `npx` or a local installation of TypeScript.

Here's an improved version of this stage:
```dockerfile
FROM node:18.20.4 as builder
WORKDIR /home/zeta/node
COPY bitcoin-sidecar/js/package*.json ./
RUN npm ci
COPY bitcoin-sidecar/js/tsconfig.json ./
COPY bitcoin-sidecar/js/src ./src
RUN npx tsc
```
Additionally, consider adding a `.dockerignore` file to exclude unnecessary files from the build context.
9-14: 🛠️ Refactor suggestion
Enhance security and consistency in the final stage.
The current implementation of the final stage can be improved in several ways:
- Version consistency: Specify the Node.js version in the final stage to ensure consistency with the builder stage.
- Security: Run the application as a non-root user to enhance security.
- Working directory: Explicitly set the working directory in the final stage.
- Environment variables: Consider using environment variables for configuration, if applicable.

Here's an improved version of the final stage:
```dockerfile
FROM node:18.20.4-alpine
WORKDIR /home/zeta/node
COPY --from=builder /home/zeta/node/dist ./dist
COPY --from=builder /home/zeta/node/node_modules ./node_modules
RUN addgroup -S zeta && adduser -S zeta -G zeta
USER zeta
CMD ["node", "dist/index.js"]
```
This version ensures version consistency, sets up a non-root user, and maintains the working directory explicitly.
contrib/localnet/scripts/import-data.sh (3)
13-13: 🛠️ Refactor suggestion
Enhance error handling and input validation
While the URL change is correct, consider implementing the following improvements for robustness:
- Validate the $NETWORK variable to ensure it's a valid network name.
- Add error handling for the curl and jq commands to prevent silent failures.
- Implement timeout and error handling for the wget command.
Here's a more robust implementation:
```bash
#!/bin/bash
set -eo pipefail

VALID_NETWORKS=("athens" "babylon" "other_valid_networks") # Add all valid networks

if [ $# -lt 1 ]; then
  echo "Usage: import-data.sh [network]"
  echo "Valid networks: ${VALID_NETWORKS[*]}"
  exit 1
fi

NETWORK=$1
if [[ ! " ${VALID_NETWORKS[*]} " =~ " ${NETWORK} " ]]; then
  echo "Error: Invalid network '${NETWORK}'"
  echo "Valid networks: ${VALID_NETWORKS[*]}"
  exit 1
fi

echo "NETWORK: ${NETWORK}"

rm -rf ~/.zetacored/genesis_data
mkdir -p ~/.zetacored/genesis_data

echo "Download Latest State Export"
SNAPSHOT_URL="https://snapshots.rpc.zetachain.com/${NETWORK}/state/latest.json"
LATEST_EXPORT_URL=$(curl -s -f -m 10 "$SNAPSHOT_URL" | jq -e -r '.snapshots[0].link') || {
  echo "Error: Failed to fetch or parse snapshot data from $SNAPSHOT_URL"
  exit 1
}

echo "LATEST EXPORT URL: ${LATEST_EXPORT_URL}"
wget -q --timeout=60 --tries=3 "${LATEST_EXPORT_URL}" -O ~/.zetacored/genesis_data/exported-genesis.json || {
  echo "Error: Failed to download the latest export from ${LATEST_EXPORT_URL}"
  exit 1
}

echo "Successfully downloaded and saved the latest state export."
```
This implementation includes input validation, error handling, and timeouts for a more robust script.
13-13:
⚠️ Potential issue
Ensure proper URL sanitization
While the use of HTTPS is commendable, directly interpolating the $NETWORK variable into the URL without proper sanitization could potentially lead to security vulnerabilities.
To mitigate this risk, consider implementing strict input validation for the $NETWORK variable before using it in the URL. This can be achieved by using a whitelist of allowed network names or a regex pattern that matches only valid network identifiers.
Example implementation:
VALID_NETWORKS=("athens" "babylon" "other_valid_networks") # Add all valid networks if [[ ! " ${VALID_NETWORKS[*]} " =~ " ${NETWORK} " ]]; then echo "Error: Invalid network '${NETWORK}'" echo "Valid networks: ${VALID_NETWORKS[*]}" exit 1 fi # Now it's safe to use $NETWORK in the URL SNAPSHOT_URL="https://snapshots.rpc.zetachain.com/${NETWORK}/state/latest.json"This approach ensures that only predefined, valid network names can be used in the URL, significantly reducing the risk of potential command injection or other URL-based attacks.
13-15: 🛠️ Refactor suggestion
Consider performance enhancements
While the current implementation is functional, consider the following performance improvements:
- Implement a caching mechanism to avoid unnecessary downloads if the latest export hasn't changed.
- Add a progress indicator for the wget download to provide better user feedback.
- If applicable, consider implementing parallel processing for downloading multiple files.
Here's an example of how you might implement these improvements:
```bash
#!/bin/bash
set -eo pipefail

# ... (previous input validation code) ...

CACHE_FILE="$HOME/.zetacored/latest_export_cache.json"
SNAPSHOT_URL="https://snapshots.rpc.zetachain.com/${NETWORK}/state/latest.json"

# Fetch and compare with cached data
NEW_DATA=$(curl -s -f -m 10 "$SNAPSHOT_URL")
if [ -f "$CACHE_FILE" ] && diff -q <(echo "$NEW_DATA" | jq -S .) "$CACHE_FILE" >/dev/null; then
  echo "No new export available. Using cached data."
  LATEST_EXPORT_URL=$(jq -r '.snapshots[0].link' "$CACHE_FILE")
else
  echo "New export available. Updating cache."
  echo "$NEW_DATA" | jq -S . > "$CACHE_FILE"
  LATEST_EXPORT_URL=$(echo "$NEW_DATA" | jq -r '.snapshots[0].link')
fi

echo "LATEST EXPORT URL: ${LATEST_EXPORT_URL}"

# Download with progress bar
wget --progress=bar:force:noscroll "${LATEST_EXPORT_URL}" -O ~/.zetacored/genesis_data/exported-genesis.json

echo "Successfully downloaded and saved the latest state export."
```
This implementation adds caching to avoid unnecessary downloads and includes a progress bar for better user feedback. If you need to download multiple files, consider using GNU Parallel or xargs for parallel processing.
contrib/localnet/bitcoin-sidecar/js/package.json (2)
6-8:
⚠️ Potential issue
Implement proper test scripts and add useful npm commands.
The current test script is a placeholder that doesn't execute any tests. For a production-grade project, it's crucial to have proper test coverage. Consider implementing actual test scripts using a testing framework suitable for your project (e.g., Jest, Mocha).
Additionally, it's recommended to add more npm scripts for common development tasks. For example:
"scripts": { "start": "node index.js", "dev": "nodemon index.js", "test": "jest", "lint": "eslint .", "build": "tsc" }Ensure to install the necessary devDependencies for these scripts (e.g., nodemon, jest, eslint).
19-22:
⚠️ Potential issue
Align TypeScript configuration with project structure.
The inclusion of TypeScript and @types/node as devDependencies indicates an intention to use TypeScript in the project. However, the main entry point is currently set to "index.js", which is a JavaScript file. This inconsistency may lead to confusion and potential issues in the development process.
To resolve this, consider the following recommendations:
- Change the main entry point to a TypeScript file: `"main": "index.ts"`
- Add a build script to compile TypeScript to JavaScript: `"scripts": { "build": "tsc", "start": "node dist/index.js" }`
- Create a `tsconfig.json` file to configure TypeScript compilation options.
- Ensure all source files are written in TypeScript (.ts extension) for consistency.
- Add additional TypeScript-related dev dependencies if needed, such as `ts-node` for running TypeScript directly without compilation.

e2e/txserver/authority.go (1)
12-20: 🛠️ Refactor suggestion
Optimize account and address retrieval.
The current implementation correctly handles errors, but it can be more concise. Consider combining the account and address retrieval into a single operation.
Apply this change to streamline the code:
```diff
-	// retrieve account
-	accAdmin, err := zts.clientCtx.Keyring.Key(e2eutils.AdminPolicyName)
-	if err != nil {
-		return err
-	}
-	addrAdmin, err := accAdmin.GetAddress()
-	if err != nil {
-		return err
-	}
+	// retrieve admin address
+	addrAdmin, err := zts.clientCtx.Keyring.Key(e2eutils.AdminPolicyName)
+	if err != nil {
+		return fmt.Errorf("failed to retrieve admin address: %w", err)
+	}
```
This change reduces the number of operations and provides a more informative error message.
Committable suggestion was skipped due to low confidence.
contrib/localnet/bitcoin-sidecar/js/src/index.ts (2)
13-18:
⚠️ Potential issue
Enhance error handling and clarify `zetaClient.call()` usage in the `/commit` route.

While the route correctly processes the `memo` field, consider the following improvements:
memo
field is missing or invalid.- Clarify the usage of
zetaClient.call()
. The empty string as the first argument appears suspicious.Consider applying the following changes:
```ts
app.post('/commit', (req: Request, res: Response) => {
    const { memo } = req.body;
    if (!memo || typeof memo !== 'string') {
        return res.status(400).json({ error: 'Invalid or missing memo field' });
    }
    try {
        const address = zetaClient.call("", Buffer.from(memo, "hex"));
        res.json({ address });
    } catch (error) {
        console.error('Error processing commit:', error);
        res.status(500).json({ error: 'Internal server error' });
    }
});
```
Additionally, please clarify the purpose of the empty string argument in
zetaClient.call()
.
20-28:
⚠️ Potential issue
Enhance error handling, validate parameters, and reconsider
zetaClient
reinitialization in/reveal
route.The route correctly extracts parameters, but several improvements are necessary:
- Implement proper error handling and parameter validation.
- Remove the
console.log
statement, which appears to be for debugging.- Reconsider the necessity of reinitializing
zetaClient
after each request, as this may impact performance.Consider applying the following changes:
app.post('/reveal', (req: Request, res: Response) => { const { txn, idx, amount, feeRate, to } = req.body; if (!txn || !idx || !amount || !feeRate || !to) { return res.status(400).json({ error: 'Missing required parameters' }); } try { const rawHex = zetaClient.buildRevealTxn(to, { txn, idx }, Number(amount), feeRate).toString("hex"); res.json({ rawHex }); } catch (error) { console.error('Error processing reveal:', error); res.status(500).json({ error: 'Internal server error' }); } });Additionally, please explain the rationale behind reinitializing
zetaClient
after each request. If it's unnecessary, consider removing this line: `zetaClient = ZetaBtcClient.regtest();`

zetaclient/testdata/solana/chain_901_inbound_tx_result_MS3MPLN7hkbyCZFwKqXcg8fmEvQMD74fN6Ps2LSWXJoRxPW5ehaxBorK9q1JFVbqnAvu9jXm6ertj7kT7HpYw1j.json (1)
63-64: 🛠️ Refactor suggestion
Consider using a newer transaction version if applicable.
The transaction version is set to 0, which typically indicates a legacy transaction format in Solana. If this transaction is part of new development or updates, it may be beneficial to consider using a newer transaction version. Newer versions often provide additional features, optimizations, or improved compatibility with recent Solana updates.
zetaclient/chains/evm/signer/gas.go (1)
95-100: 🛠️ Refactor suggestion
Enhance clarity and documentation for the gas price adjustment logic.
The addition of this check is a prudent measure to ensure that the priority fee does not exceed the gas price. However, to improve code maintainability and clarity, consider the following enhancements:
- Improve the warning message to be more descriptive:
```go
logger.Warn().
	Uint64("cctx.initial_gas_price", gasPrice.Uint64()).
	Uint64("cctx.initial_priority_fee", priorityFee.Uint64()).
	Msg("Gas price is less than priority fee. Adjusting priority fee to match gas price.")
```
- Add a comment explaining the rationale behind this change:
```go
// Ensure priority fee doesn't exceed gas price, which is invalid in EVM chains.
// This situation might occur due to rapid gas price fluctuations or misconfiguration.
if gasPrice.Cmp(priorityFee) == -1 {
	// ... (existing code)
}
```
- Consider adding a constant or configuration parameter for the minimum allowed difference between gas price and priority fee, allowing for more flexible adjustments in the future (see the sketch after this comment).
Would you like me to provide a complete refactored version of this function incorporating these suggestions?
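For illustration only, a minimal standalone sketch of such a clamp, assuming a hypothetical `minPriorityFeeGap` parameter; the name and value are not taken from the existing signer code:

```go
package main

import (
	"fmt"
	"math/big"
)

// minPriorityFeeGap is a hypothetical minimum difference (in wei) to keep
// between gasPrice and priorityFee; purely illustrative.
var minPriorityFeeGap = big.NewInt(1)

// clampPriorityFee caps priorityFee so it never exceeds gasPrice minus the gap.
func clampPriorityFee(gasPrice, priorityFee *big.Int) *big.Int {
	maxPriority := new(big.Int).Sub(gasPrice, minPriorityFeeGap)
	if maxPriority.Sign() < 0 {
		maxPriority = big.NewInt(0)
	}
	if priorityFee.Cmp(maxPriority) > 0 {
		return maxPriority
	}
	return priorityFee
}

func main() {
	fmt.Println(clampPriorityFee(big.NewInt(100), big.NewInt(150))) // 99
	fmt.Println(clampPriorityFee(big.NewInt(100), big.NewInt(50)))  // 50
}
```

If per-chain tuning is needed, the same clamp could read its gap from chain configuration instead of a package-level value.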
e2e/runner/v2_setup_evm.go (1)
107-110: 💡 Codebase verification
Missing Documentation Updates for Legacy Support
The verification process did not find any updated documentation mentioning "legacy support" in the `./pkg/contracts` or `./e2e` directories.

Please ensure that the function documentation is updated to include:
- Purpose and Impact: Explain the rationale behind enabling legacy support and how it affects the ERC20Custody contract's behavior.
- Potential Risks or Considerations: Outline any risks associated with legacy support and any measures taken to mitigate them.
- Usage Guidelines: Provide instructions or guidelines on how to utilize the legacy support feature effectively.
🔗 Analysis chain
Clarification needed on legacy support implications.
The addition of legacy support in the ERC20Custody contract is a significant change. While the implementation is correct, the implications of this change are not immediately clear from the code.
Please provide the following:
- A brief explanation of the purpose and impact of enabling legacy support.
- Any potential risks or considerations associated with this change.
- Update the function documentation to reflect this new functionality.
To ensure comprehensive documentation, please run the following script:
Also applies to: 112-112
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
```bash
#!/bin/bash
# Description: Check for updated documentation in relevant files

# Test: Search for updated documentation mentioning legacy support
rg --type go "legacy support" ./pkg/contracts
rg --type go "legacy support" ./e2e
```
Length of output: 279
x/observer/keeper/hooks.go (4)
15-21: 🛠️ Refactor suggestion
Enhance error handling and logging in `AfterValidatorRemoved`
While the addition of error logging improves observability, consider the following enhancements:
- Return the error instead of nil to allow upstream callers to handle it.
- Provide more context in the error log, such as the validator address being removed.
Consider applying the following changes:
```diff
 func (h Hooks) AfterValidatorRemoved(ctx sdk.Context, _ sdk.ConsAddress, valAddr sdk.ValAddress) error {
 	err := h.k.CleanObservers(ctx, valAddr)
 	if err != nil {
-		ctx.Logger().Error("Error cleaning observer set", "error", err)
+		ctx.Logger().Error("Error cleaning observer set after validator removal", "error", err, "validator", valAddr.String())
+		return err
 	}
 	return nil
 }
```
39-45: 🛠️ Refactor suggestion
Improve error handling and logging in `BeforeValidatorSlashed`
The addition of error logging enhances observability. However, consider the following improvements:
- Return the error instead of nil to allow proper handling by upstream callers.
- Enhance the error log with more context, such as the validator address being slashed and the slashing fraction.
Apply the following changes to improve error handling and logging:
```diff
 func (h Hooks) BeforeValidatorSlashed(ctx sdk.Context, valAddr sdk.ValAddress, fraction sdk.Dec) error {
 	err := h.k.CleanSlashedValidator(ctx, valAddr, fraction)
 	if err != nil {
-		ctx.Logger().Error("Error cleaning observer set", "error", err)
+		ctx.Logger().Error("Error cleaning slashed validator", "error", err, "validator", valAddr.String(), "fraction", fraction.String())
+		return err
 	}
 	return nil
 }
```
23-29: 🛠️ Refactor suggestion
Improve error handling and logging in `AfterValidatorBeginUnbonding`
The addition of error logging enhances observability. However, consider the following improvements:
- Propagate the error to upstream callers instead of suppressing it.
- Enhance the error log with more context, such as the validator address beginning unbonding.
Apply the following changes to improve error handling and logging:
```diff
 func (h Hooks) AfterValidatorBeginUnbonding(ctx sdk.Context, _ sdk.ConsAddress, valAddr sdk.ValAddress) error {
 	err := h.k.CheckAndCleanObserver(ctx, valAddr)
 	if err != nil {
-		ctx.Logger().Error("Error cleaning observer set", "error", err)
+		ctx.Logger().Error("Error checking and cleaning observer after validator begin unbonding", "error", err, "validator", valAddr.String())
+		return err
 	}
 	return nil
 }
```
31-37: 🛠️ Refactor suggestion
Enhance error handling and logging in `AfterDelegationModified`
While the addition of error logging is beneficial, consider the following enhancements:
- Propagate the error to allow proper handling by upstream callers.
- Provide more context in the error log, including the delegator and validator addresses involved.
Implement the following changes to improve error handling and logging:
```diff
 func (h Hooks) AfterDelegationModified(ctx sdk.Context, delAddr sdk.AccAddress, valAddr sdk.ValAddress) error {
 	err := h.k.CheckAndCleanObserverDelegator(ctx, valAddr, delAddr)
 	if err != nil {
-		ctx.Logger().Error("Error cleaning observer set", "error", err)
+		ctx.Logger().Error("Error checking and cleaning observer delegator after delegation modification", "error", err, "delegator", delAddr.String(), "validator", valAddr.String())
+		return err
 	}
 	return nil
 }
```
cmd/zetae2e/config/localnet.yml (1)
44-59: 💡 Codebase verification
Add
localnet.yml
to.gitignore
to Protect Sensitive InformationThe
cmd/zetae2e/config/localnet.yml
file contains sensitive information, including private keys, and is currently not listed in.gitignore
. This oversight poses a significant security risk as the file could be inadvertently committed to the repository.Please take the following actions:
- Add
cmd/zetae2e/config/localnet.yml
to the.gitignore
file to prevent it from being tracked by Git.- Review the repository’s commit history to ensure that
localnet.yml
has not been previously committed. If it has, follow the necessary steps to remove it from the history securely.- Implement secure key management practices, such as using environment variables or dedicated secret management tools, to handle sensitive data in configuration files.
🔗 Analysis chain
Approve the addition of new test accounts with a security consideration.
The new user accounts for v2 operations and revert scenarios have been correctly added to the
additional_accounts
section, maintaining consistency with the existing structure. This addition enhances the testing capabilities for various scenarios.However, it is crucial to emphasize the importance of securing this configuration file, as it contains sensitive information such as private keys. While this is acceptable for local testing environments, please ensure that:
- This file is never committed to version control systems.
- Access to this file is strictly limited to authorized personnel.
- For production environments, consider using secure key management systems or environment variables to manage sensitive data.
To ensure this file is properly excluded from version control, please run the following command:
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash # Verify that the localnet.yml file is listed in .gitignore grep -q "cmd/zetae2e/config/localnet.yml" .gitignore && echo "File is properly ignored" || echo "WARNING: File is not ignored in .gitignore"Length of output: 147
🧰 Tools
🪛 Gitleaks
47-47: Detected a Generic API Key, potentially exposing access to various services and sensitive operations.
(generic-api-key)
51-51: Detected a Generic API Key, potentially exposing access to various services and sensitive operations.
(generic-api-key)
55-55: Detected a Generic API Key, potentially exposing access to various services and sensitive operations.
(generic-api-key)
59-59: Detected a Generic API Key, potentially exposing access to various services and sensitive operations.
(generic-api-key)
.github/workflows/publish-release.yml (1)
111-111: 🛠️ Refactor suggestion
Consider addressing shellcheck recommendations for improved script robustness.
While the current implementation is functional, addressing the following shellcheck recommendations could enhance the script's robustness:
- Replace `cat` with direct input redirection.
- Use double quotes to prevent potential globbing and word splitting issues.
Consider applying the following changes:
```diff
-UPGRADE_HANDLER_MAJOR_VERSION=$(cat app/setup_handlers.go | grep "const releaseVersion" | cut -d ' ' -f4 | tr -d '"' | cut -d '.' -f 1 | tr -d '\n')
+UPGRADE_HANDLER_MAJOR_VERSION=$(grep "const releaseVersion" app/setup_handlers.go | cut -d ' ' -f4 | tr -d '"' | cut -d '.' -f 1 | tr -d '\n')
-if [ ${USER_INPUT_VERSION} != $UPGRADE_HANDLER_MAJOR_VERSION ]; then
+if [ "${USER_INPUT_VERSION}" != "${UPGRADE_HANDLER_MAJOR_VERSION}" ]; then
```
Committable suggestion was skipped due to low confidence.
🧰 Tools
🪛 actionlint
111-111: shellcheck reported issue in this script: SC2002:style:1:37: Useless cat. Consider 'cmd < file | ..' or 'cmd file | ..' instead
(shellcheck)
111-111: shellcheck reported issue in this script: SC2086:info:5:6: Double quote to prevent globbing and word splitting
(shellcheck)
111-111: shellcheck reported issue in this script: SC2086:info:5:31: Double quote to prevent globbing and word splitting
(shellcheck)
contrib/localnet/bitcoin-sidecar/js/src/script.ts (3)
14-20:
⚠️ Potential issue
Rename the `new` method to avoid using a reserved keyword

Using `new` as a method name can cause confusion and potential issues since `new` is a reserved keyword in TypeScript. It's advisable to rename the method to a more descriptive name such as `create` or `fromPublicKey`.

Apply the following diff to rename the method:
```diff
 export class ScriptBuilder {
     private script: Stack;

     private constructor(initialScript: Stack) {
         this.script = initialScript;
     }

-    public static new(publicKey: Buffer): ScriptBuilder {
+    public static create(publicKey: Buffer): ScriptBuilder {
         const stack = [
             toXOnly(publicKey),
             opcodes.OP_CHECKSIG,
         ];
         return new ScriptBuilder(stack);
     }
```
Remember to update all references to this method accordingly.
45-52: 🛠️ Refactor suggestion
Encapsulate `chunkBuffer` as a private method within `ScriptBuilder`

Since `chunkBuffer` is only used within the `ScriptBuilder` class, consider moving it as a private method inside the class. This encapsulates functionality and adheres to object-oriented principles.

Apply the following changes:
```diff
 export class ScriptBuilder {
     /* ... */
+    private chunkBuffer(buffer: Buffer, chunkSize: number): Buffer[] {
+        const chunks = [];
+        for (let i = 0; i < buffer.length; i += chunkSize) {
+            const chunk = buffer.slice(i, i + chunkSize);
+            chunks.push(chunk);
+        }
+        return chunks;
+    }

     public pushData(data: Buffer) {
         /* ... */
-        const chunks = chunkBuffer(data, MAX_SCRIPT_ELEMENT_SIZE);
+        const chunks = this.chunkBuffer(data, MAX_SCRIPT_ELEMENT_SIZE);
         /* ... */
     }
     /* ... */
 }

-function chunkBuffer(buffer: Buffer, chunkSize: number): Buffer[] {
-    const chunks = [];
-    for (let i = 0; i < buffer.length; i += chunkSize) {
-        const chunk = buffer.slice(i, i + chunkSize);
-        chunks.push(chunk);
-    }
-    return chunks;
-}
```
Committable suggestion was skipped due to low confidence.
27-35: 🛠️ Refactor suggestion
Optimize script pushing by combining opcodes
When pushing multiple opcodes onto the script stack, it's more efficient to push them as an array in a single call rather than multiple arguments. This enhances code clarity and performance.
Apply this change:
```diff
 this.script.push(
-    opcodes.OP_FALSE,
-    opcodes.OP_IF
+    ...[
+        opcodes.OP_FALSE,
+        opcodes.OP_IF
+    ]
 );
 /* ... */
 for (const chunk of chunks) {
     this.script.push(chunk);
 }
```
Alternatively, consider combining the pushes:
```diff
 this.script.push(
-    opcodes.OP_FALSE,
-    opcodes.OP_IF
-);
+    ...[
+        opcodes.OP_FALSE,
+        opcodes.OP_IF,
+        ...chunks,
+        opcodes.OP_ENDIF
+    ]
 );
-/* Existing loop */
-for (const chunk of chunks) {
-    this.script.push(chunk);
-}
-this.script.push(opcodes.OP_ENDIF);
```
This approach reduces the number of
`push` calls and consolidates the script construction.

Committable suggestion was skipped due to low confidence.
e2e/e2etests/test_extract_bitcoin_inscription_memo.go (5)
43-47:
⚠️ Potential issue
Passing Duplicate Transactions

Both `dummyCoinbaseTxn` and `rawtx` reference the same transaction. Passing them as separate entries may cause redundant processing.

If the intention is to process multiple distinct transactions, adjust the code accordingly. Otherwise, pass a single instance:
```diff
-		[]btcjson.TxRawResult{*dummyCoinbaseTxn, *rawtx},
+		[]btcjson.TxRawResult{*rawtx},
```
41-41:
⚠️ Potential issue
Incorrect Logger Usage

The logging statement uses `r.Logger.Info` incorrectly. In `zerolog`, the correct pattern is to chain methods after `Info()`.

Update the logging statement:
- r.Logger.Info("obtained reveal txn id %s", txid) + r.Logger.Info().Msgf("obtained reveal txn id %s", txid)Alternatively, to include structured logging:
+ r.Logger.Info().Str("txid", txid).Msg("obtained reveal txn id")
43-43:
⚠️ Potential issue
Misleading Variable Name `dummyCoinbaseTxn`

Assigning `rawtx` to `dummyCoinbaseTxn` may be misleading since `rawtx` is not a coinbase transaction.

Consider renaming the variable to reflect its actual content:
```diff
-	dummyCoinbaseTxn := rawtx
+	transaction := rawtx
```
If a coinbase transaction is intended, please ensure you're assigning the correct transaction.
34-34:
⚠️ Potential issue
Unhandled Error from `InscribeToTSSFromDeployerWithMemo`

The function `InscribeToTSSFromDeployerWithMemo` may return an error that is not being checked. It's important to handle errors to ensure the test behaves as expected.

Modify the code to handle the error:
```diff
-	txid := r.InscribeToTSSFromDeployerWithMemo(amount, utxos, memo)
+	txid, err := r.InscribeToTSSFromDeployerWithMemo(amount, utxos, memo)
+	require.NoError(r, err)
```
Committable suggestion was skipped due to low confidence.
27-27:
⚠️ Potential issue
Undefined Function `parseFloat`

The function `parseFloat` is used but not defined in the provided context. This will lead to a compilation error.

Apply this diff to fix the issue:
+ import "strconv" ... - amount := parseFloat(r, args[0]) + amount, err := strconv.ParseFloat(args[0], 64) + require.NoError(r, err)This change uses the standard
`strconv.ParseFloat` function to parse the amount and properly handles any potential error.

Committable suggestion was skipped due to low confidence.
app/encoding.go (1)
33-50: 🛠️ Refactor suggestion
Consolidate interface registrations to enhance maintainability
The consecutive calls to `RegisterInterfaces` for each module can be streamlined to improve code maintainability and readability. By iterating over a slice of registration functions, you can reduce repetition and make future additions simpler.

Consider applying the following refactor:
```diff
 func MakeEncodingConfig() params.EncodingConfig {
 	//encodingConfig := params.MakeEncodingConfig()
 	encodingConfig := evmenc.MakeConfig(ModuleBasics)
 	registry := encodingConfig.InterfaceRegistry

-	cryptocodec.RegisterInterfaces(registry)
-	authtypes.RegisterInterfaces(registry)
-	authz.RegisterInterfaces(registry)
-	banktypes.RegisterInterfaces(registry)
-	stakingtypes.RegisterInterfaces(registry)
-	slashingtypes.RegisterInterfaces(registry)
-	upgradetypes.RegisterInterfaces(registry)
-	distrtypes.RegisterInterfaces(registry)
-	evidencetypes.RegisterInterfaces(registry)
-	crisistypes.RegisterInterfaces(registry)
-	evmtypes.RegisterInterfaces(registry)
-	ethermint.RegisterInterfaces(registry)
-	authoritytypes.RegisterInterfaces(registry)
-	crosschaintypes.RegisterInterfaces(registry)
-	emissionstypes.RegisterInterfaces(registry)
-	fungibletypes.RegisterInterfaces(registry)
-	observertypes.RegisterInterfaces(registry)
-	lightclienttypes.RegisterInterfaces(registry)
+	interfaceRegistrations := []func(types.InterfaceRegistry){
+		cryptocodec.RegisterInterfaces,
+		authtypes.RegisterInterfaces,
+		authz.RegisterInterfaces,
+		banktypes.RegisterInterfaces,
+		stakingtypes.RegisterInterfaces,
+		slashingtypes.RegisterInterfaces,
+		upgradetypes.RegisterInterfaces,
+		distrtypes.RegisterInterfaces,
+		evidencetypes.RegisterInterfaces,
+		crisistypes.RegisterInterfaces,
+		evmtypes.RegisterInterfaces,
+		ethermint.RegisterInterfaces,
+		authoritytypes.RegisterInterfaces,
+		crosschaintypes.RegisterInterfaces,
+		emissionstypes.RegisterInterfaces,
+		fungibletypes.RegisterInterfaces,
+		observertypes.RegisterInterfaces,
+		lightclienttypes.RegisterInterfaces,
+	}
+
+	for _, register := range interfaceRegistrations {
+		register(registry)
+	}

 	return encodingConfig
 }
```
app/setup_handlers.go (3)
12-12: 🛠️ Refactor suggestion
Consider centralizing the release version constant
Hardcoding `releaseVersion` as `"v20"` may lead to maintenance challenges when updating to future versions. It's advisable to centralize the version definition, perhaps by using a global constant or configuration, to ensure consistency across the codebase.
54-56: 🛠️ Refactor suggestion
Utilize a centralized version constant in upgrade handlers
Using the hardcoded `releaseVersion` in the `SetUpgradeHandler` function may lead to discrepancies if version values are updated elsewhere. Refactoring to use a centralized version constant ensures that all parts of the application reference the same version value, enhancing maintainability.
74-74: 🛠️ Refactor suggestion
Ensure consistent version comparisons across the application
The conditional check on line 74 compares `upgradeInfo.Name` with the hardcoded `releaseVersion`. To prevent potential bugs from version mismatches, consider using a centralized constant for version comparison or implementing a method to retrieve the current version dynamically.

x/fungible/keeper/msg_server_update_gateway_contract.go (3)
58-63:
⚠️ Potential issue
Correct the error message to reflect the actual function called

The error message references `updateSystemContractAddress`, which does not match the function `CallUpdateGatewayAddress` being called. This inconsistency may lead to confusion during debugging. Update the error message to accurately reflect the function name.

Apply this diff to correct the error message:
```diff
 		return nil, cosmoserrors.Wrapf(
 			err,
-			"failed to call updateSystemContractAddress for ZRC20 (%s)",
+			"failed to call updateGatewayAddress for ZRC20 (%s)",
 			fcoin.Zrc20ContractAddress,
 		)
```
50-54: 🛠️ Refactor suggestion
Improve ZRC20 contract address validation

Similarly, when iterating over `foreignCoins`, enhance the validation of `zrc20Addr` by using `ethcommon.IsHexAddress(fcoin.Zrc20ContractAddress)`. This ensures that only valid hexadecimal addresses are processed, preventing potential issues during address conversion.

Apply this diff to improve address validation:
```diff
+	if !ethcommon.IsHexAddress(fcoin.Zrc20ContractAddress) {
+		k.Logger(ctx).Error("invalid zrc20 contract address", "address", fcoin.Zrc20ContractAddress)
+		continue
+	}
 	zrc20Addr := ethcommon.HexToAddress(fcoin.Zrc20ContractAddress)
-	if zrc20Addr == (ethcommon.Address{}) {
-		k.Logger(ctx).Error("invalid zrc20 contract address", "address", fcoin.Zrc20ContractAddress)
-		continue
-	}
```
28-34: 🛠️ Refactor suggestion
Enhance address validation using `IsHexAddress`

The current validation checks if `gatewayAddr` is the zero value of `ethcommon.Address`, which may not catch all invalid addresses. To improve robustness, use `ethcommon.IsHexAddress(msg.NewGatewayContractAddress)` to validate the address format before conversion. This ensures that only properly formatted hexadecimal addresses are accepted.

Apply this diff to enhance the address validation:
```diff
+	if !ethcommon.IsHexAddress(msg.NewGatewayContractAddress) {
+		return nil, cosmoserrors.Wrapf(
+			sdkerrors.ErrInvalidAddress,
+			"invalid gateway contract address (%s)",
+			msg.NewGatewayContractAddress,
+		)
+	}
 	gatewayAddr := ethcommon.HexToAddress(msg.NewGatewayContractAddress)
-	if gatewayAddr == (ethcommon.Address{}) {
-		return nil, cosmoserrors.Wrapf(
-			sdkerrors.ErrInvalidAddress,
-			"invalid gateway contract address (%s)",
-			msg.NewGatewayContractAddress,
-		)
-	}
```
e2e/runner/inscription.go (6)
117-117:
⚠️ Potential issue
Correct the misleading comment

The comment indicates accessing the "address" field, but the code returns `response.RawHex`. This may cause confusion.

Update the comment to accurately reflect the returned value:
-// Access the "address" field +// Return the raw hexadecimal transaction📝 Committable suggestion
103-106:
⚠️ Potential issue
Check HTTP response status code before processing

As with the previous function, ensure that `resp.StatusCode` is checked to confirm a successful response before reading and parsing the response body.

Add a status code check:
```diff
 resp, err := r.client.Do(req)
 if err != nil {
 	return "", errors.Wrap(err, "cannot send reveal to sidecar")
 }
+if resp.StatusCode != http.StatusOK {
+	return "", fmt.Errorf("received non-OK HTTP status: %s", resp.Status)
+}
 defer resp.Body.Close()
```
80-80:
⚠️ Potential issue
Avoid precision loss when converting amount
Converting
amount
fromfloat64
toint
after multiplication may lead to precision loss due to floating-point inaccuracies. Since precise amounts are critical in financial transactions, it's advisable to handle amounts using integer values representing the smallest currency unit (e.g., satoshis).Modify the function to accept
amount
as anint64
representing satoshis:-func (r *InscriptionBuilder) GenerateRevealTxn(to string, txnHash string, idx int, amount float64) (string, error) { +func (r *InscriptionBuilder) GenerateRevealTxn(to string, txnHash string, idx int, amount int64) (string, error) { // Inside the function: - Amount: int(amount * 100000000), + Amount: amount,This change ensures accurate handling of monetary values without precision loss.
Committable suggestion was skipped due to low confidence.
61-65:
⚠️ Potential issue
Check HTTP response status code before processing
Currently, the code does not verify the HTTP response status code after making the request. It's important to check
resp.StatusCode
to ensure the request was successful before decoding the response body.Consider adding a check for
resp.StatusCode
and handling non-200 responses appropriately:resp, err := r.client.Do(req) if err != nil { return "", errors.Wrap(err, "cannot send to sidecar") } +if resp.StatusCode != http.StatusOK { + return "", fmt.Errorf("received non-OK HTTP status: %s", resp.Status) +} defer resp.Body.Close()📝 Committable suggestion
33-33: 🛠️ Refactor suggestion
Initialize http.Client with timeouts
Using a default
http.Client
without timeouts can lead to hanging requests if the server is unresponsive, potentially causing resource leaks.Initialize the
http.Client
with appropriate timeouts:import ( // existing imports + "time" ) type InscriptionBuilder struct { sidecarURL string - client http.Client + client *http.Client } +func NewInscriptionBuilder(sidecarURL string) *InscriptionBuilder { + return &InscriptionBuilder{ + sidecarURL: sidecarURL, + client: &http.Client{ + Timeout: 15 * time.Second, + }, + } +}This ensures that HTTP requests will time out after a specified duration, improving the robustness of your application.
📝 Committable suggestion
70-70: 🛠️ Refactor suggestion
Use a logging library instead of fmt.Print
Using
fmt.Print
for logging is not recommended in production code. It lacks features like log levels and structured logging.Replace
fmt.Print
with the standardlog
package or a structured logging library:-import "fmt" +import "log" ... -fmt.Print("raw commit response ", response.Address) +log.Printf("raw commit response: %s", response.Address)This approach provides better control over logging output and integrates well with logging systems.
Committable suggestion was skipped due to low confidence.
contrib/localnet/scripts/start-upgrade-orchestrator.sh (1)
42-42: 🛠️ Refactor suggestion
Make
UPGRADE_NAME
configurable for better maintainability.Hardcoding
UPGRADE_NAME
to"v20"
reduces the script's flexibility. Consider parameterizingUPGRADE_NAME
via an environment variable or a script argument. This approach enhances maintainability and facilitates future upgrades without modifying the script.Apply this modification to make
UPGRADE_NAME
configurable:-UPGRADE_NAME="v20" +UPGRADE_NAME=${UPGRADE_NAME:-"v20"}📝 Committable suggestion
e2e/runner/v2_setup_zeta.go (1)
78-80:
⚠️ Potential issue
Refine the function comment to adhere to GoDoc conventions
The function comment should start with the function name and be written in the third person singular verb form. This enhances clarity and maintains consistency with GoDoc standards.
Apply this diff to improve the comment:
```diff
-// UpdateChainParamsV2Contracts update the erc20 custody contract and gateway address in the chain params
-// this operation is used when transitioning to new smart contract architecture where a new ERC20 custody contract is deployed
+// UpdateChainParamsV2Contracts updates the ERC20 custody contract and gateway address in the chain parameters.
+// This operation is used when transitioning to a new smart contract architecture where a new ERC20 custody contract is deployed.
 func (r *E2ERunner) UpdateChainParamsV2Contracts() {
```
cmd/zetae2e/local/v2.go (1)
15-72: 🛠️ Refactor suggestion
Consider refactoring 'startV2Tests' to reduce code duplication
The
startV2Tests
function contains multiple calls tov2TestRoutine
with similar parameters. Refactoring this code to iterate over a slice or map of test configurations can improve maintainability and reduce repetition.zetaclient/chains/bitcoin/rpc/rpc.go (3)
139-145: 🛠️ Refactor suggestion
Optimize input transaction retrieval to improve performance
Currently,
GetRawTxByHash
is called for every input transaction. If multiplevin
entries reference the same transaction, this results in redundant RPC calls, which can impact performance. Consider caching the results of previously fetched transactions to avoid unnecessary RPC calls:Apply this refactor to cache previous transactions:
prevTxCache := make(map[string]*btcutil.Tx) for _, vin := range rawResult.Vin { prevTx, exists := prevTxCache[vin.Txid] if !exists { prevTx, err = GetRawTxByHash(rpcClient, vin.Txid) if err != nil { return 0, 0, errors.Wrapf(err, "failed to get previous tx: %s", vin.Txid) } prevTxCache[vin.Txid] = prevTx } if int(vin.Vout) >= len(prevTx.MsgTx().TxOut) { return 0, 0, fmt.Errorf("invalid Vout index %d for tx %s", vin.Vout, vin.Txid) } totalInputValue += prevTx.MsgTx().TxOut[vin.Vout].Value }This approach reduces the number of RPC calls by reusing transactions that have already been fetched.
139-145:
⚠️ Potential issueEnsure
vin.Vout
index is within bounds to prevent panicsIn the loop over transaction inputs, accessing
prevTx.MsgTx().TxOut[vin.Vout]
without verifying the index may lead to an index out of range panic ifvin.Vout
exceeds the length ofprevTx.MsgTx().TxOut
. To enhance robustness, add a bounds check before accessing the output:Apply this diff to add the bounds check:
for _, vin := range rawResult.Vin { prevTx, err := GetRawTxByHash(rpcClient, vin.Txid) if err != nil { return 0, 0, errors.Wrapf(err, "failed to get previous tx: %s", vin.Txid) } + if int(vin.Vout) >= len(prevTx.MsgTx().TxOut) { + return 0, 0, fmt.Errorf("invalid Vout index %d for tx %s", vin.Vout, vin.Txid) + } totalInputValue += prevTx.MsgTx().TxOut[vin.Vout].Value }📝 Committable suggestion
159-162: 🛠️ Refactor suggestion
Handle unlikely negative fee scenario explicitly
While the comment notes that a negative fee "never happens," it's prudent to handle this case explicitly, possibly with more context. Consider logging additional information or using a custom error type to aid in debugging if this scenario occurs.
Apply this diff to enhance error handling:
if fee < 0 { // unlikely but should be handled - return 0, 0, fmt.Errorf("got negative fee: %d", fee) + return 0, 0, fmt.Errorf("got negative fee: %d (totalInputValue: %d, totalOutputValue: %d, txid: %s)", fee, totalInputValue, totalOutputValue, rawResult.Txid) }This provides more context, which can be valuable for troubleshooting unexpected negative fees.
📝 Committable suggestion
x/fungible/keeper/msg_server_update_gateway_contract_test.go (1)
43-63: 🛠️ Refactor suggestion
Handle potential errors in
queryZRC20Gateway
function.The
queryZRC20Gateway
helper function lacks handling for potential errors that might occur during the EVM call or result unpacking. Whilerequire.NoError
will fail the test if an error occurs, it's advisable to handle these errors gracefully to improve test readability and maintainability.Consider refactoring the function to return an error:
-func queryZRC20Gateway(contract common.Address) string { +func queryZRC20Gateway(contract common.Address) (string, error) { abi, err := zrc20.ZRC20MetaData.GetAbi() - require.NoError(t, err) + if err != nil { + return "", err + } res, err := k.CallEVM( ctx, *abi, types.ModuleAddressEVM, contract, keeper.BigIntZero, nil, false, false, "gatewayAddress", ) - require.NoError(t, err) + if err != nil { + return "", err + } unpacked, err := abi.Unpack("gatewayAddress", res.Ret) - require.NoError(t, err) + if err != nil { + return "", err + } address, ok := unpacked[0].(common.Address) - require.True(t, ok) + if !ok { + return "", fmt.Errorf("failed to cast unpacked result to common.Address") + } return address.Hex(), nil }This allows the calling function to handle the error appropriately and provides clearer error messages.
contrib/localnet/bitcoin-sidecar/js/src/client.ts (5)
17-18:
⚠️ Potential issue: Use 'string' instead of 'String' for type annotations
The types
String
should be replaced with the primitive type string
. Using string
ensures consistency and avoids the potential issues associated with boxed primitives. Apply this diff to resolve the issue:
```diff
-export type Address = String;
-export type BtcAddress = String;
+export type Address = string;
+export type BtcAddress = string;
 ...
-export type BtcTxnHash = String;
+export type BtcTxnHash = string;
```
Also applies to: 21-21
121-121:
⚠️ Potential issue: Remove unnecessary semicolon
There is an extra semicolon after the constructor call, which is unnecessary and should be removed to maintain clean and readable code.
Apply this diff:
```diff
-this.psbt = new Psbt({ network });;
+this.psbt = new Psbt({ network });
```
127-127: 🛠️ Refactor suggestion
Rename method to follow naming conventions
The method
with_commit_tx
should be renamed to withCommitTx
to adhere to JavaScript and TypeScript naming conventions, which favor camelCase for method names. Apply this diff:
```diff
-public with_commit_tx(to: string, commitTxn: BtcInput, commitAmount: number, feeRate: number): RevealTxnBuilder {
+public withCommitTx(to: string, commitTxn: BtcInput, commitAmount: number, feeRate: number): RevealTxnBuilder {
```
Ensure to update all references to this method accordingly.
176-182: 🛠️ Refactor suggestion
Optimize fee estimation without signing transactions
In the
estimateFee
method, signing and finalizing the cloned PSBT purely for fee estimation is inefficient. There are more efficient approaches to estimate the transaction size without signing or finalizing.Consider calculating the estimated virtual size based on known sizes of inputs and outputs, or use methods that do not require signing:
- cloned.signAllInputs(this.key); - cloned.finalizeAllInputs(); - const size = cloned.extractTransaction().virtualSize(); + const size = cloned.getVirtualSize();This approach leverages
getVirtualSize()
to obtain the transaction size without the need to sign or finalize, improving performance and reducing unnecessary computational overhead.📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.// should have a way to avoid signing but just providing mocked signautre const size = cloned.getVirtualSize(); return size * feeRate; }
81-81:
⚠️ Potential issueUse cryptographically secure random number generation
Using
randombytes
from therandombytes
package may not provide cryptographically secure randomness required for key generation. It is recommended to use Node.js's built-incrypto
module'srandomBytes
function instead.Apply this diff:
-import randomBytes from "randombytes"; +import { randomBytes } from "crypto"; ... -const rng = randomBytes; -... -const internalKey = bip32.fromSeed(rng(64), this.network); +const internalKey = bip32.fromSeed(randomBytes(64), this.network);This change ensures that the seed for BIP32 key generation uses a secure source of randomness.
Committable suggestion was skipped due to low confidence.
zetaclient/chains/bitcoin/observer/witness.go (3)
19-20: 🛠️ Refactor suggestion
Improve documentation comment for clarity
The added note provides valuable guidance but can be more precise. It is recommended to specify the exact error conditions under which the caller should rescan the transaction, such as errors from
GetSenderAddressByVin
. This will enhance the readability and maintainability of the code.
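For instance, the note could spell the condition out along these lines (the function name and wording below are only a suggestion, following the review's mention of GetSenderAddressByVin):

```go
// Note: GetBtcEventWithWitness may fail transiently, e.g. when
// GetSenderAddressByVin cannot fetch the previous transaction of the first
// input from the RPC node. In that case the caller should NOT drop the
// inbound; it should keep the transaction and rescan it in a later pass.
```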
46-57: 🛠️ Refactor suggestion
Refactor complex conditional logic for readability
The condition in the
if
statement combines multiple checks, which may reduce readability. Consider extracting the condition into a well-named boolean variable or a separate function to clarify its purpose.Example:
```go
isDepositorFeeV2Applicable := netParams.Name == chaincfg.TestNet3Params.Name ||
	(netParams.Name == chaincfg.MainNetParams.Name && blockNumber >= bitcoin.DynamicDepositorFeeHeightV2)
if isDepositorFeeV2Applicable {
	depositorFee, err = bitcoin.CalcDepositorFeeV2(client, &tx, netParams)
	// ...
}
```
72-77: 🛠️ Refactor suggestion
Review logging levels to prevent excessive verbosity
The log messages indicating the successful extraction of memos are set at the
Info
level. In production environments, this may lead to excessive logging. Consider setting these messages to the Debug
level unless they are essential for monitoring.
e2e/runner/accounting.go (2)
177-177: 🛠️ Refactor suggestion
Compare Addresses Directly Instead of Hex Strings
For improved efficiency and clarity, compare the
common.Address
values directly rather than comparing their hexadecimal string representations.Apply this diff to compare addresses directly:
```diff
-if r.ERC20CustodyAddr.Hex() != r.ERC20CustodyV2Addr.Hex() {
+if r.ERC20CustodyAddr != r.ERC20CustodyV2Addr {
```
173-173:
⚠️ Potential issueAvoid Shared Reference When Initializing
custodyFullBalance
Currently,
custodyFullBalance
points to the samebig.Int
instance ascustodyBalance
. ModifyingcustodyFullBalance
later could unintentionally affectcustodyBalance
. To prevent this, create a newbig.Int
instance with the same value.Apply this diff to safely initialize
custodyFullBalance
:-custodyFullBalance := custodyBalance +custodyFullBalance := new(big.Int).Set(custodyBalance)📝 Committable suggestion
e2e/runner/v2_migration.go (3)
66-81: 🛠️ Refactor suggestion
Refactor repetitive code in
upgradeZRC20s
function. The
upgradeZRC20s
function contains repetitive code blocks for upgrading each ZRC20 token. Refactoring this code into a loop over a slice of token configurations can enhance readability and maintainability.Consider refactoring the function as follows:
func (r *E2ERunner) upgradeZRC20s() { // Obtain chain IDs evmChainID, err := r.EVMClient.ChainID(r.Ctx) require.NoError(r, err) btcChainID := big.NewInt(r.GetBitcoinChainID()) solChainID := big.NewInt(902) // Define a slice of ZRC20 configurations zrc20Configs := []struct { name string addr common.Address caller zrc20Caller chainID *big.Int coinType uint8 }{ {"ETH", r.ETHZRC20Addr, r.ETHZRC20, evmChainID, uint8(coin.CoinType_Gas)}, {"ERC20", r.ERC20ZRC20Addr, r.ERC20ZRC20, evmChainID, uint8(coin.CoinType_ERC20)}, {"BTC", r.BTCZRC20Addr, r.BTCZRC20, btcChainID, uint8(coin.CoinType_Gas)}, {"SOL", r.SOLZRC20Addr, r.SOLZRC20, solChainID, uint8(coin.CoinType_Gas)}, } // Loop through configurations and upgrade each ZRC20 for _, config := range zrc20Configs { r.Logger.Infof("Upgrading %s ZRC20", config.name) r.upgradeZRC20(config.addr, config.caller, config.chainID, config.coinType) } }
178-178:
⚠️ Potential issueAvoid using
Int64()
when checking for zero balance. In line 178, using
balance.Int64()
to compare with zero can lead to incorrect results ifbalance
exceeds the range of anint64
, causing an overflow. To safely check if the balance is zero, usebalance.Sign()
, which returns-1
for negative numbers,0
for zero, and1
for positive numbers.Apply this diff to fix the comparison:
-require.NotEqual(r, int64(0), balance.Int64()) +require.NotEqual(r, 0, balance.Sign())📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.require.NotEqual(r, 0, balance.Sign())
23-23:
⚠️ Potential issueCorrect the use of
big.NewInt
with large integers.In line 23, the call to
big.NewInt(1e18)
is incorrect because1e18
is a floating-point constant, andbig.NewInt
expects anint64
argument. This can lead to loss of precision or an overflow error. To accurately represent1e18
, consider usingSetString
to initialize abig.Int
from a decimal string.Apply this diff to fix the issue:
-oneThousand := big.NewInt(0).Mul(big.NewInt(1e18), big.NewInt(1000)) +oneEth, success := new(big.Int).SetString("1000000000000000000", 10) +require.True(r, success, "Failed to set big.Int from string") +oneThousand := new(big.Int).Mul(oneEth, big.NewInt(1000))📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.oneEth, success := new(big.Int).SetString("1000000000000000000", 10) require.True(r, success, "Failed to set big.Int from string") oneThousand := new(big.Int).Mul(oneEth, big.NewInt(1000))
zetaclient/orchestrator/bootstrap.go (3)
10-10: 🛠️ Refactor suggestion
Re-evaluate the necessity of introducing
github.com/onrik/ethrpc
as a new dependency.The addition of
ethrpc2 "github.com/onrik/ethrpc"
introduces an external dependency. Consider utilizing existing functionality provided by thego-ethereum
packages to minimize dependencies and maintain consistency within the codebase.
302-303:
⚠️ Potential issue: Implement error handling for
evmJSONRPCClient
initialization. While
ethrpc2.NewEthRPC
may not return an error, it's prudent to ensure the client is correctly instantiated to prevent potential nil pointer dereferences or runtime errors. Verify the client’s initialization status after creation.
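A minimal sketch of such a guard (assuming the onrik/ethrpc constructor, which returns a client without an error; the wrapper and its error messages are illustrative, not the actual bootstrap code):

```go
package main

import (
	"errors"
	"fmt"

	"github.com/onrik/ethrpc"
)

// newEVMJSONRPCClient fails fast on an obviously bad endpoint instead of
// letting a misconfigured client surface later as a runtime error.
func newEVMJSONRPCClient(endpoint string) (*ethrpc.EthRPC, error) {
	if endpoint == "" {
		return nil, errors.New("empty EVM JSON-RPC endpoint")
	}
	client := ethrpc.NewEthRPC(endpoint)
	if client == nil { // defensive; NewEthRPC is not expected to return nil
		return nil, fmt.Errorf("unable to create JSON-RPC client for %q", endpoint)
	}
	return client, nil
}

func main() {
	client, err := newEVMJSONRPCClient("http://localhost:8545")
	fmt.Println(client != nil, err)
}
```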
307-309: 🛠️ Refactor suggestion
Assess the necessity of both
evmClient
andevmJSONRPCClient
in observer initialization.The
NewObserver
function now accepts bothevmClient
andevmJSONRPCClient
as parameters. Evaluate if both clients are required, or if their functionalities overlap. Consolidating them could simplify the observer's initialization and reduce complexity.zetaclient/chains/bitcoin/fee.go (3)
24-53: 🛠️ Refactor suggestion
Consolidate constants for better organization
The introduction of numerous constants enhances the clarity of transaction size and fee calculations. However, consider grouping related constants into distinct
const
blocks or organizing them into a struct or nested constants for improved maintainability and readability.
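For example, related values could live in separate const blocks under a shared comment (names and values below are illustrative, not the actual constants in fee.go):

```go
package bitcoin

// Transaction component sizes in vBytes (illustrative values).
const (
	bytesPerInput      = 41
	bytesPerOutput     = 31
	bytesPerTxOverhead = 10
)

// Fee-rate defaults in sat/vByte (illustrative values).
const (
	minRelayFeeRate    = 1
	defaultTestnetRate = 10
)
```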
258-280:
⚠️ Potential issue: Ensure comprehensive error handling in
CalcDepositorFeeV2
The
CalcDepositorFeeV2
function effectively calculates the depositor fee based on the transaction's fee rate. To enhance robustness:
- Verify that all possible network parameters are correctly handled, including testnets and any future networks.
- Enhance error messages to provide more context, aiding in debugging if issues arise.
Would you like assistance in extending unit tests to cover edge cases for different network parameters?
282-326: 🛠️ Refactor suggestion
⚠️ Potential issueImprove error resilience in
GetRecentFeeRate
In the
GetRecentFeeRate
function, certain errors could interrupt the fee rate retrieval process. Specifically:
- Immediate return on errors when fetching block hashes or block details may cause the function to fail due to transient issues.
Suggestion:
- Implement error handling that logs the error and continues with the next block instead of terminating. This approach increases the robustness of the fee estimation process.
Apply the following changes to enhance error handling:
for i := int64(0); i < feeRateCountBackBlocks; i++ { // get the block hash, err := rpcClient.GetBlockHash(blockNumber - i) if err != nil { - return 0, err + // Log the error and continue to the next block + continue } block, err := rpcClient.GetBlockVerboseTx(hash) if err != nil { - return 0, err + // Log the error and continue to the next block + continue } // compute the average fee rate of the block and take the higher rate avgFeeRate, err := CalcBlockAvgFeeRate(block, netParams) if err != nil { - return 0, err + // Log the error and continue to the next block + continue } if avgFeeRate > highestRate { highestRate = avgFeeRate } } // Use default fee rate if unable to determine from recent blocks if highestRate == 0 { highestRate = defaultTestnetFeeRate }This modification ensures that transient errors do not prevent the function from attempting to retrieve fee rates from other blocks.
zetaclient/chains/solana/observer/inbound.go (1)
288-294:
⚠️ Potential issue: Ensure Robust Error Handling When Parsing the Signer Address
In the
ParseInboundAsDeposit
function, when the signer address cannot be parsed, the code logs the error and returnsnil, nil
, effectively skipping the instruction. This approach may lead to missed inbound events or silent failures, which is undesirable in a production environment where accurate event processing is critical.Consider returning the error to propagate it up the call stack, ensuring that parsing failures are properly handled. For example:
// get the sender address (skip if unable to parse signer address) sender, err := ob.GetSignerDeposit(tx, &instruction) if err != nil { - ob.Logger(). - Inbound.Err(err). - Msgf("unable to get signer for sig %s instruction %d", tx.Signatures[0], instructionIndex) - return nil, nil + return nil, errors.Wrapf(err, "unable to get signer for sig %s instruction %d", tx.Signatures[0], instructionIndex) }📝 Committable suggestion
e2e/runner/bitcoin.go (1)
307-308: 🛠️ Refactor suggestion
Make sidecarURL configurable for better flexibility
Currently, the sidecarURL is hardcoded as "http://bitcoin-node-sidecar:8000". Consider making this URL configurable through environment variables or configuration files to enhance flexibility and adaptability across different environments.
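One way to do that, sketched with a hypothetical environment variable name (not something the repository currently defines):

```go
package main

import (
	"fmt"
	"os"
)

// sidecarURL returns the Bitcoin sidecar endpoint, overridable via an
// environment variable so localnet and CI can point elsewhere.
func sidecarURL() string {
	if url := os.Getenv("BTC_SIDECAR_URL"); url != "" { // hypothetical variable
		return url
	}
	return "http://bitcoin-node-sidecar:8000" // current hardcoded default
}

func main() {
	fmt.Println(sidecarURL())
}
```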
zetaclient/chains/evm/observer/observer.go (1)
64-66: 💡 Codebase verification
Some
NewObserver
invocations do not match the updated signatureSeveral instances of the
NewObserver
function still utilize the old signature, passing parameters that no longer align with the updated function definition. Please update the following locations to ensure consistency and prevent potential compilation errors:
zetaclient/orchestrator/bootstrap.go
zetaclient/chains/solana/observer/outbound_test.go
zetaclient/chains/solana/observer/inbound_test.go
zetaclient/chains/evm/signer/signer_test.go
zetaclient/chains/evm/observer/observer_test.go
zetaclient/chains/bitcoin/observer/observer_test.go
zetaclient/chains/base/observer_test.go
🔗 Analysis chain
Update 'NewObserver' function signature to enhance flexibility
The
NewObserver
function now accepts achain chains.Chain
and anevmJSONRPC interfaces.EVMJSONRPCClient
as parameters, replacing the previousevmCfg config.EVMConfig
. This change promotes better abstraction and flexibility by directly passing the chain object and the JSON RPC client.Ensure that all invocations of
NewObserver
throughout the codebase are updated accordingly to match the new signature to prevent any compilation errors.You can verify all usages of
NewObserver
have been updated by running the following script:Also applies to: 76-76
🏁 Scripts executed
The following scripts were executed for the analysis:
Script:
#!/bin/bash # Description: Find all usages of 'NewObserver' to confirm they match the new function signature. rg --type go 'NewObserver\(' -A 5Length of output: 11712
zetaclient/chains/bitcoin/observer/inbound.go (3)
466-468:
⚠️ Potential issuePrevent logging of sensitive transaction data
Logging the raw
ScriptPubKey.Hex
may expose sensitive information. For security purposes, consider omitting this data or logging a sanitized version to prevent potential misuse.Apply this diff to modify the log statement:
logger.Error(). Err(err). - Msgf("GetBtcEventWithoutWitness: error decoding OP_RETURN memo: %s", vout1.ScriptPubKey.Hex) + Msg("GetBtcEventWithoutWitness: error decoding OP_RETURN memo")📝 Committable suggestion
476-476: 🛠️ Refactor suggestion
Handle unexpected empty transaction inputs explicitly
Encountering a transaction with no inputs (
len(tx.Vin) == 0
) is unexpected. To improve robustness, consider handling this case explicitly or providing additional context in the error message.Add more context to the error message:
if len(tx.Vin) == 0 { // should never happen - return nil, fmt.Errorf("GetBtcEventWithoutWitness: no input found for inbound: %s", tx.Txid) + return nil, fmt.Errorf("GetBtcEventWithoutWitness: transaction %s has no inputs, possibly malformed transaction", tx.Txid) }📝 Committable suggestion
441-453: 🛠️ Refactor suggestion
⚠️ Potential issue: Avoid modifying input parameter
depositorFee
Modifying the input parameter
depositorFee
withinGetBtcEventWithoutWitness
can lead to confusion and unintended side effects. It's recommended to use a new local variable to store the recalculated depositor fee for better code clarity and maintainability.Apply this diff to use a local variable for the depositor fee:
if netParams.Name == chaincfg.TestNet3Params.Name || (netParams.Name == chaincfg.MainNetParams.Name && blockNumber >= bitcoin.DynamicDepositorFeeHeightV2) { - depositorFee, err = bitcoin.CalcDepositorFeeV2(rpcClient, &tx, netParams) + newDepositorFee, err := bitcoin.CalcDepositorFeeV2(rpcClient, &tx, netParams) if err != nil { return nil, errors.Wrapf(err, "error calculating depositor fee V2 for inbound: %s", tx.Txid) } + depositorFee = newDepositorFee }📝 Committable suggestion
Makefile (2)
296-296:
⚠️ Potential issue: Ensure consistent version references for
OLD_VERSION
.In line 296,
OLD_VERSION
is set to'release/v19'
, while in line 301, it is set to'https://github.com/zeta-chain/node/releases/download/v19.1.1'
. For consistency and to prevent potential mismatches during the build process, consider using the same version format in both instances.Apply this diff to unify the version references:
- $(DOCKER) build -t zetanode:old -f Dockerfile-localnet --target old-runtime-source --build-arg OLD_VERSION='release/v19' . + $(DOCKER) build -t zetanode:old -f Dockerfile-localnet --target old-runtime-source --build-arg OLD_VERSION='https://github.com/zeta-chain/node/releases/download/v19.1.1' .Also applies to: 301-301
249-250:
⚠️ Potential issueCorrect grammatical errors in comments for clarity.
The comments contain grammatical errors that may hinder readability. Correcting them will improve understanding.
Apply this diff to fix the grammatical errors:
-# this test upgrades from v18 and execute the v2 contracts migration process -# this tests is part of upgrade test part because it should run the upgrade from v18 to fully replicate the upgrade process +# This test upgrades from v18 and executes the v2 contracts migration process. +# This test is part of the upgrade tests because it should run the upgrade from v18 to fully replicate the upgrade process.📝 Committable suggestion
cmd/zetae2e/local/local.go (2)
239-243:
⚠️ Potential issue: Handle Errors from
RunV2Migration
MethodThe
RunV2Migration()
function may return an error that is not currently being handled. For robust error management, it's advisable to check for errors and respond appropriately.Suggested change:
// run the v2 migration if testV2Migration { - deployerRunner.RunV2Migration() + if err := deployerRunner.RunV2Migration(); err != nil { + logger.Print("❌ V2 migration failed: %v", err) + os.Exit(1) + } }📝 Committable suggestion
366-368:
⚠️ Potential issueAlign Code with Comment Regarding
UpdateChainParamsV2Contracts
The comment indicates that
UpdateChainParamsV2Contracts()
should not be executed whentestV2Migration
is true, as it is already run in the migration process. However, without a conditional check, this function will run regardless of the flag's value.Suggested change:
// note: not run in testV2Migration because it is already run in the migration process - deployerRunner.UpdateChainParamsV2Contracts() + if !testV2Migration { + deployerRunner.UpdateChainParamsV2Contracts() + }This adjustment ensures that
UpdateChainParamsV2Contracts()
is not executed during the migration test, aligning the code behavior with the comment's intent.📝 Committable suggestion
contrib/localnet/scripts/start-zetacored.sh (1)
257-268: 🛠️ Refactor suggestion
Consider refactoring repeated code into a loop for maintainability
The code segment adds multiple genesis accounts by repeating similar commands. Refactoring this repetitive code into a loop will enhance maintainability and make it easier to manage account additions in the future.
Here's a suggested refactor:
-# v2 ether tester - address=$(yq -r '.additional_accounts.user_v2_ether.bech32_address' /root/config.yml) - zetacored add-genesis-account "$address" 100000000000000000000000000azeta -# v2 erc20 tester - address=$(yq -r '.additional_accounts.user_v2_erc20.bech32_address' /root/config.yml) - zetacored add-genesis-account "$address" 100000000000000000000000000azeta -# v2 ether revert tester - address=$(yq -r '.additional_accounts.user_v2_ether_revert.bech32_address' /root/config.yml) - zetacored add-genesis-account "$address" 100000000000000000000000000azeta -# v2 erc20 revert tester - address=$(yq -r '.additional_accounts.user_v2_erc20_revert.bech32_address' /root/config.yml) - zetacored add-genesis-account "$address" 100000000000000000000000000azeta +# Array of account keys and their descriptions +declare -A accounts=( + ["user_v2_ether"]="v2 ether tester" + ["user_v2_erc20"]="v2 erc20 tester" + ["user_v2_ether_revert"]="v2 ether revert tester" + ["user_v2_erc20_revert"]="v2 erc20 revert tester" +) +# Loop over the accounts and add them to genesis +for key in "${!accounts[@]}"; do + echo "# ${accounts[$key]}" + address=$(yq -r ".additional_accounts.${key}.bech32_address" /root/config.yml) + zetacored add-genesis-account "$address" 100000000000000000000000000azeta +doneThis refactored code reduces duplication and simplifies the process of adding new accounts.
zetaclient/chains/bitcoin/rpc/rpc_live_test.go (3)
81-102: 🛠️ Refactor suggestion
Refactor the
getMempoolSpaceTxsByBlock
function to handle errors more elegantly.Consider refactoring the
getMempoolSpaceTxsByBlock
function to handle errors more gracefully:
- Use
require.NoError
to handle errors and fail the test immediately if an error occurs.- Return early if an error occurs to avoid unnecessary nesting.
- Use
fmt.Errorf
to provide more context for the error messages.Here's a suggested refactor:
func getMempoolSpaceTxsByBlock( t *testing.T, client *rpcclient.Client, blkNumber int64, testnet bool, ) (*chainhash.Hash, []testutils.MempoolTx, error) { blkHash, err := client.GetBlockHash(blkNumber) require.NoError(t, err, "failed to get block hash for block %d", blkNumber) mempoolTxs, err := testutils.GetBlockTxs(context.Background(), blkHash.String(), testnet) require.NoError(t, err, "failed to get mempool.space txs for block %d", blkNumber) return blkHash, mempoolTxs, nil }
119-153: 🛠️ Refactor suggestion
Refactor the
LiveTest_FilterAndParseIncomingTx
test to use table-driven testing.Consider refactoring the
LiveTest_FilterAndParseIncomingTx
test to use table-driven testing. This will make the test more readable, maintainable, and easier to extend with additional test cases.Here's a suggested refactor:
func LiveTest_FilterAndParseIncomingTx(t *testing.T) { testCases := []struct { name string blockHash string expectedLength int expectedValues []struct { value float64 toAddress string memoBytes []byte fromAddress string blockNumber uint64 txHash string } }{ { name: "incoming tx", blockHash: "0000000000000032cb372f5d5d99c1ebf4430a3059b67c47a54dd626550fb50d", expectedLength: 1, expectedValues: []struct { value float64 toAddress string memoBytes []byte fromAddress string blockNumber uint64 txHash string }{ { value: 0.0001, toAddress: "tb1qsa222mn2rhdq9cruxkz8p2teutvxuextx3ees2", memoBytes: []byte{0x44, 0x53, 0x52, 0x52, 0x31, 0x52, 0x6d, 0x44, 0x43, 0x77, 0x57, 0x6d, 0x78, 0x71, 0x59, 0x32, 0x30, 0x31, 0x2f, 0x54, 0x4d, 0x74, 0x73, 0x4a, 0x64, 0x6d, 0x41, 0x3d}, fromAddress: "tb1qyslx2s8evalx67n88wf42yv7236303ezj3tm2l", blockNumber: 2406185, txHash: "889bfa69eaff80a826286d42ec3f725fd97c3338357ddc3a1f543c2d6266f797", }, }, }, } for _, tc := range testCases { t.Run(tc.name, func(t *testing.T) { client, err := createRPCClient(chains.BitcoinTestnet.ChainId) require.NoError(t, err) hash, err := chainhash.NewHashFromStr(tc.blockHash) require.NoError(t, err) block, err := client.GetBlockVerboseTx(hash) require.NoError(t, err) inbounds, err := observer.FilterAndParseIncomingTx( client, block.Tx, uint64(block.Height), "tb1qsa222mn2rhdq9cruxkz8p2teutvxuextx3ees2", log.Logger, &chaincfg.TestNet3Params, 0.0, ) require.NoError(t, err) require.Len(t, inbounds, tc.expectedLength) for i, expected := range tc.expectedValues { require.Equal(t, expected.value, inbounds[i].Value) require.Equal(t, expected.toAddress, inbounds[i].ToAddress) require.Equal(t, expected.memoBytes, inbounds[i].MemoBytes) require.Equal(t, expected.fromAddress, inbounds[i].FromAddress) require.Equal(t, expected.blockNumber, inbounds[i].BlockNumber) require.Equal(t, expected.txHash, inbounds[i].TxHash) } }) } }
157-181: 🛠️ Refactor suggestion
Refactor the
LiveTest_FilterAndParseIncomingTx_Nop
test to use table-driven testing.Consider refactoring the
LiveTest_FilterAndParseIncomingTx_Nop
test to use table-driven testing. This will make the test more readable, maintainable, and easier to extend with additional test cases.Here's a suggested refactor:
func LiveTest_FilterAndParseIncomingTx_Nop(t *testing.T) { testCases := []struct { name string blockHash string }{ { name: "no incoming tx", blockHash: "000000000000002fd8136dbf91708898da9d6ae61d7c354065a052568e2f2888", }, } for _, tc := range testCases { t.Run(tc.name, func(t *testing.T) { client, err := createRPCClient(chains.BitcoinTestnet.ChainId) require.NoError(t, err) hash, err := chainhash.NewHashFromStr(tc.blockHash) require.NoError(t, err) block, err := client.GetBlockVerboseTx(hash) require.NoError(t, err) inbounds, err := observer.FilterAndParseIncomingTx( client, block.Tx, uint64(block.Height), "tb1qsa222mn2rhdq9cruxkz8p2teutvxuextx3ees2", log.Logger, &chaincfg.TestNet3Params, 0.0, ) require.NoError(t, err) require.Empty(t, inbounds) }) } }zetaclient/orchestrator/orchestrator.go (1)
392-402: 🛠️ Refactor suggestion
Simplify the logic for handling pending CCTXs.
The code block for handling pending CCTXs can be simplified by removing the redundant checks for empty CCTX lists before resolving signers and observers. Since the
cctxList
is already available, you can directly check its length and continue if it's empty.Here's a suggested refactor:
cctxList := cctxMap[chainID] metrics.PendingTxsPerChain. WithLabelValues(chain.Name()). Set(float64(len(cctxList))) if len(cctxList) == 0 { continue } // update chain parameters for signer and chain observer signer, err := oc.resolveSigner(app, chainID) if err != nil { oc.logger.Error().Err(err). Msgf("runScheduler: unable to resolve signer for chain %d", chainID) continue } ob, err := oc.resolveObserver(app, chainID) if err != nil { oc.logger.Error().Err(err). Msgf("runScheduler: resolveObserver failed for chain %d", chainID) continue } if !app.IsOutboundObservationEnabled() { continue } // ...zetaclient/chains/bitcoin/observer/inbound_test.go (1)
681-682: 🛠️ Refactor suggestion
Use
require.Nil(t, event)
for clarityReplace
require.Equal(t, (*observer.BTCInboundEvent)(nil), event)
withrequire.Nil(t, event)
to improve code clarity and follow idiomatic Go testing practices.- require.Equal(t, (*observer.BTCInboundEvent)(nil), event) + require.Nil(t, event)📝 Committable suggestion
pkg/ticker/ticker_test.go (2)
150-152: 🛠️ Refactor suggestion
Avoid hardcoding line numbers in test assertions to enhance maintainability
Hardcoding line numbers in test assertions can make tests fragile, as any changes to the file will alter line numbers and potentially cause false negatives. To improve the resilience of your tests, consider asserting on patterns that are independent of line numbers.
Apply this diff to modify the assertion:
- assert.ErrorContains(t, err, "ticker_test.go:142") + assert.Contains(t, err.Error(), "ticker_test.go")This change verifies that the error message includes the filename without depending on the exact line number.
154-176: 🛠️ Refactor suggestion
Avoid hardcoding line numbers in test assertions to enhance maintainability
Similar to the previous comment, hardcoding the line number in the assertion can lead to brittle tests. It's better to avoid relying on specific line numbers in error messages.
Apply this diff to modify the assertion:
- assert.ErrorContains(t, err, "ticker_test.go:162") + assert.Contains(t, err.Error(), "ticker_test.go")This adjustment ensures the test checks for the presence of the filename in the error message without depending on line numbers.
Restoring branch for cherry picking
Description
Force priorityFee=gasPrice if priorityFee > gasPrice (EIP-1559)
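A minimal sketch of that clamp (standalone illustration using go-ethereum-style big.Int values, not the actual zetaclient signer code):

```go
package main

import (
	"fmt"
	"math/big"
)

// capPriorityFee caps the EIP-1559 tip at the gas price: some chains (e.g.
// Polygon) can report a suggested priority fee above the gas price, which
// would otherwise make outbound signing fail.
func capPriorityFee(gasPrice, priorityFee *big.Int) *big.Int {
	if priorityFee.Cmp(gasPrice) > 0 {
		return new(big.Int).Set(gasPrice)
	}
	return new(big.Int).Set(priorityFee)
}

func main() {
	gasPrice := big.NewInt(30_000_000_000)    // 30 gwei
	priorityFee := big.NewInt(45_000_000_000) // 45 gwei, higher than gasPrice
	fmt.Println(capPriorityFee(gasPrice, priorityFee)) // prints 30000000000
}
```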
Fixes Polygon outbound 5963
cctx: https://zetachain.blockpi.network/lcd/v1/public/zeta-chain/crosschain/cctx/137/5963
Summary by CodeRabbit
New Features
Bug Fixes
Refactor
Documentation