
Shared sequencer integration for fuel-core 0.40.0 #2451

Open

wants to merge 7 commits into base: release/v0.40.1
Conversation

@xgreenx (Collaborator) commented Nov 23, 2024

Description

This PR duplicates #1922, but on top of fuel-core 0.40.0.
Since the current master is not ready to be released, but we want this feature in the testnet, we will create a minor release with the added off-chain logic that posts data to the SS.

All changes are the same as in the original PR, except that reqwest is bumped to 0.12, because it includes a fix important for the SS.

Before requesting review

  • I have reviewed the code myself

@xgreenx xgreenx self-assigned this Nov 23, 2024
@xgreenx xgreenx requested a review from a team November 23, 2024 14:49
@xgreenx xgreenx mentioned this pull request Nov 23, 2024
bin/fuel-core/src/cli/run/shared_sequencer.rs
crates/services/shared-sequencer/src/config.rs
crates/services/shared-sequencer/src/lib.rs

let block = self.tendermint()?.block(height).await?;

for item in block.block.data.iter().skip(1) {
Contributor

Why do we skip one?

Collaborator Author

@Dentosal added it, so I don't know =) I will remove this function for now, since it is not used.

crates/services/shared-sequencer/src/service.rs
Contributor

@AurelienFT AurelienFT left a comment

Thank you for the comments :)

pub tx_bytes: String,
}

pub async fn estimate_transaction(
Member

Do we want any retry logic here? Can be a follow-up PR.

Collaborator Author

The current implementation doesn't care if we are unable to submit the blob. We just try to do that, and if it fails, that's okay.

In the future (follow-up issue), we need to worry about it and guarantee that all blobs are posted and there are no gaps.
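For reference, the retry logic the reviewer suggests could be sketched roughly as below. This is a hypothetical std-only helper, not part of the PR; the name `retry_with_backoff`, the attempt count, and the delays are all illustrative:

```rust
use std::thread::sleep;
use std::time::Duration;

/// Hypothetical retry helper (not in the PR): retry a fallible
/// operation with exponential backoff, giving up after `max_attempts`.
fn retry_with_backoff<T, E>(
    max_attempts: u32,
    base_delay: Duration,
    mut op: impl FnMut() -> Result<T, E>,
) -> Result<T, E> {
    let mut attempt = 0;
    loop {
        match op() {
            Ok(value) => return Ok(value),
            Err(err) => {
                attempt += 1;
                if attempt >= max_attempts {
                    return Err(err);
                }
                // Sleep base_delay, 2*base_delay, 4*base_delay, ...
                sleep(base_delay * 2u32.pow(attempt - 1));
            }
        }
    }
}

fn main() {
    // Simulate an endpoint that fails twice before succeeding.
    let mut calls = 0;
    let result = retry_with_backoff(5, Duration::from_millis(1), || {
        calls += 1;
        if calls < 3 { Err("temporarily unavailable") } else { Ok(calls) }
    });
    assert_eq!(result, Ok(3));
}
```

In the real async service this would wrap the RPC call with `tokio` timers instead of blocking sleeps; the follow-up issue mentioned above would also need to persist unposted blobs so gaps can be backfilled.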


impl Client {
    /// Create a new shared sequencer client from config.
    pub async fn new(endpoints: Endpoints, topic: [u8; 32]) -> anyhow::Result<Self> {
Member

@rymnc rymnc Nov 24, 2024

I think it should also accept the expected client_id, so that we don't misconfigure with the wrong network/chain.

On L68 we can perform the check.

Collaborator Author

What is the client_id? Do you mean chain_id? I want to keep the current implementation as simple as possible.

Plus, if the RPC is malicious, it can lie about its chain_id.
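A minimal sketch of the reviewer's suggestion, for illustration only: compare the chain id we expect from configuration with the one the RPC node reports, and fail fast on a mismatch. The name `check_chain_id` and the id strings are hypothetical, and, per the point above, this only catches misconfiguration, since a malicious RPC can simply report the expected value:

```rust
/// Hypothetical guard (names are illustrative, not from the PR):
/// reject a node whose reported chain id differs from the configured one.
fn check_chain_id(expected: &str, reported: &str) -> Result<(), String> {
    if expected == reported {
        Ok(())
    } else {
        Err(format!(
            "chain id mismatch: expected `{expected}`, node reports `{reported}`"
        ))
    }
}

fn main() {
    assert!(check_chain_id("expected-chain", "expected-chain").is_ok());
    assert!(check_chain_id("expected-chain", "other-chain").is_err());
}
```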

    .make_payload(timeout_height, fee, signer, account, order, topic, data)
    .await?;

let r = self.tendermint()?.broadcast_tx_sync(payload).await?;
Member

@rymnc rymnc Nov 24, 2024

Maybe we should have some retry logic here?

Collaborator Author

The same logic applies here: #2451 (comment)


/// Sign a blob of data
pub async fn sign(&self, data: &[u8]) -> anyhow::Result<Signature> {
    self.sign_message(Message::new(data)).await
Member

@rymnc rymnc Nov 24, 2024

Perhaps we should add a check here for the length of the message?
We should return an error if the payload has length 0; its behaviour is undefined in the specs of the elliptic curves, AFAIU.

Collaborator Author

The Message performs hashing inside, so it would just be the hash of an empty input.
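If the guard were wanted anyway, it could be a one-line pre-check before signing. This is a hypothetical sketch with a std-only error type; the name `validate_payload` is not from the PR:

```rust
/// Hypothetical pre-check (not in the PR): refuse to sign an empty
/// payload instead of silently signing the hash of empty input.
fn validate_payload(data: &[u8]) -> Result<&[u8], String> {
    if data.is_empty() {
        Err("refusing to sign an empty payload".to_string())
    } else {
        Ok(data)
    }
}

fn main() {
    assert!(validate_payload(b"blob bytes").is_ok());
    assert!(validate_payload(b"").is_err());
}
```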


let k256_public_key =
    k256::PublicKey::from_public_key_der(cached_public_key_bytes)?;
Ok(Some(k256_public_key.into()))
Member

q: why don't we just store the parsed public key in the enum variant instead of its raw form? We wouldn't have to recalculate it every time verifying_key() is called.

As I understand the code, make_payload() leads to this function being called repeatedly.

Collaborator Author

It is more a question for @Dentosal =) Performance is not critical in make_payload because we call this function twice every 12 seconds.
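For reference, the parse-once caching the reviewer suggests could look roughly like this sketch. `ParsedKey` and `Signer` are stand-ins invented for illustration; in the real code the cached value would be a `k256::PublicKey` parsed from the DER bytes:

```rust
use std::sync::OnceLock;

/// Stand-in for the parsed key type (`k256::PublicKey` in the real code).
#[derive(Debug, PartialEq)]
struct ParsedKey(Vec<u8>);

/// Hypothetical holder: keep the raw DER bytes plus a lazily parsed,
/// cached form, so `verifying_key()` parses at most once.
struct Signer {
    raw_der: Vec<u8>,
    parsed: OnceLock<ParsedKey>,
}

impl Signer {
    fn new(raw_der: Vec<u8>) -> Self {
        Self { raw_der, parsed: OnceLock::new() }
    }

    /// Parse on the first call, return the cached value afterwards. The
    /// real parse would be `k256::PublicKey::from_public_key_der(..)`.
    fn verifying_key(&self) -> &ParsedKey {
        self.parsed.get_or_init(|| ParsedKey(self.raw_der.clone()))
    }
}

fn main() {
    let signer = Signer::new(vec![0x30, 0x59]);
    let first: *const ParsedKey = signer.verifying_key();
    let second: *const ParsedKey = signer.verifying_key();
    // Same cached allocation both times: parsing happened only once.
    assert_eq!(first, second);
}
```

As the author notes, with two calls every 12 seconds the re-parse is cheap either way, so this is a readability trade-off more than a performance fix.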

Member

@rymnc rymnc left a comment

first pass :)

3 participants