Custom transactions from JDC removed from mempool during update #868
Conversation
This PR needs to be rebased on dev
Force-pushed from 815f09a to 409a8e6
Bencher: 🚨 3 ALERTS: Threshold Boundary Limits exceeded! (benchmark results on the public perf page)
Bencher: 🚨 1 ALERT: Threshold Boundary Limit exceeded! (benchmark results on the public perf page)
Bencher: benchmark results posted (no alerts)
Bencher: 🚨 2 ALERTS: Threshold Boundary Limits exceeded! (benchmark results on the public perf page)
@lorbax I have made the required changes. I welcome your thoughts.
Needs a rebase and a change to the code logic
Force-pushed from 409a8e6 to 61d617d
@NonsoAmadi10 would you be willing to work on #955 as well? We are trying to implement a practice in the SV2 development community where every bug coming from protocol edge cases gets mapped into MG tests; this is strategically important for the robustness of SRI. Here's a link to a GitHub Discussion on this topic: #944. We also have some video tutorials teaching how to write MG tests:
Sure, I would love to have a go at it. Sign me up!
CI seems broken on rustfmt. Please run `cargo +nightly fmt` on all crates included in the CI errors, then commit and push again.
I have formatted the code now, @plebhash.
Needs some minor fixes
Force-pushed from 3678e84 to 2a05c03
Force-pushed from 2a05c03 to 695b8bf
Force-pushed from 2c25b3d to 5870671
Please update the title to follow the commit guidelines.
@rrybarczyk I don't quite understand.
I think she was referring to this document, which is linked in our repository. In this specific case, the PR title should be named in a way that captures the fix you did, instead of just reusing the name of the issue.
Force-pushed from 4700da5 to 1ef967b
@rrybarczyk @GitGab19 I have done that and even squashed my commits. Please review.
for short_id in transactions_to_remove {
    let result = mempool.safe_lock(|mempool_| -> Result<(), Error> {
        // Try to manage this unwrap; we use the .ok_or() method to return the proper error
        let short_ids_map = mempool_
Do we need to call mempool.safe_lock in each iteration? Couldn't we retrieve short_ids_map before the for loop and just use it?
This was feedback from lorbax that I implemented and modified. If you have a better approach, I can compare it with that.
@jbesraa is right. The better approach is to get short_ids_map once and for all outside the loop, namely

let short_ids_map = mempool.safe_lock(|a| a.to_short_ids(nonce).unwrap()).unwrap();

right before the

for short_id in transactions_to_remove {
    ...
}

loop. If you do not succeed in a reasonable time, ask me or @jbesraa for help.
That means I would still need to handle the double unwraps, right? @lorbax
Yes, but I suggest first writing the code as simply as possible, testing it, then adding error handling, and then testing it deeply. You should check that the code you introduced does what it is supposed to do. In particular, the old job must be removed from the JDS mempool, and while updating the JDS mempool the fat transactions must not be discarded.
You can do that using testnet4 or a custom signet.
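For reference, here is a minimal, self-contained sketch of the shape being suggested. It uses a plain std::sync::Mutex and placeholder types instead of the real JDS helpers (safe_lock, to_short_ids, the JDS Error enum), so the names, signatures, and error variants below are illustrative assumptions rather than the actual SRI API:

```rust
use std::collections::HashMap;
use std::sync::{Arc, Mutex};

// Stand-ins for the real JDS types, only to illustrate the shape of the fix.
type ShortId = u64;
type Txid = [u8; 32];

#[derive(Debug)]
enum Error {
    LockPoisoned,    // placeholder for the real lock-failure variant
    MissingShortIds, // placeholder for "no short-id map for this nonce"
}

struct JdsMempool {
    mempool: HashMap<Txid, Vec<u8>>,
}

impl JdsMempool {
    // Placeholder for the real `to_short_ids`: maps short ids back to txids.
    fn to_short_ids(&self, _nonce: u64) -> Option<HashMap<ShortId, Txid>> {
        Some(HashMap::new())
    }
}

fn clear_declared_transactions(
    mempool: &Arc<Mutex<JdsMempool>>,
    transactions_to_remove: &[ShortId],
    nonce: u64,
) -> Result<(), Error> {
    // Lock once, build the short-id map, then do all removals before releasing,
    // instead of re-locking on every loop iteration and unwrapping twice.
    let mut guard = mempool.lock().map_err(|_| Error::LockPoisoned)?;
    let short_ids_map = guard.to_short_ids(nonce).ok_or(Error::MissingShortIds)?;
    for short_id in transactions_to_remove {
        if let Some(txid) = short_ids_map.get(short_id) {
            guard.mempool.remove(txid);
        }
    }
    Ok(())
}
```

Whether the short-id map should be built while holding the lock (as in this sketch) or outside it is exactly the contention trade-off raised in the next comment.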
I'm worried that to_short_ids is very expensive; I'm not sure it's really better to lock the mutex for all that time. BTW, we should have a benchmark for cases where there is contention for the lock.
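This is not a substitute for the Bencher/criterion benchmarks in CI, but a toy, self-contained sketch of the contention effect being described (all names here are made up; nothing is taken from the JDS code):

```rust
use std::sync::{Arc, Mutex};
use std::thread;
use std::time::{Duration, Instant};

// Toy contention check: compare holding the lock across an "expensive" step
// (standing in for to_short_ids) versus doing that step outside the lock,
// while another thread hammers the same mutex.
fn main() {
    for hold_lock_during_work in [true, false] {
        let shared = Arc::new(Mutex::new(0u64));
        let contender = {
            let shared = Arc::clone(&shared);
            thread::spawn(move || {
                let start = Instant::now();
                let mut acquisitions = 0u64;
                while start.elapsed() < Duration::from_millis(200) {
                    *shared.lock().unwrap() += 1;
                    acquisitions += 1;
                }
                acquisitions
            })
        };

        // Simulates the JDS update path running a few times.
        for _ in 0..20 {
            if hold_lock_during_work {
                let _guard = shared.lock().unwrap();
                thread::sleep(Duration::from_millis(5)); // "expensive" work under the lock
            } else {
                thread::sleep(Duration::from_millis(5)); // same work outside the lock
                let _guard = shared.lock().unwrap();
            }
        }

        let acquisitions = contender.join().unwrap();
        println!("lock held during work: {hold_lock_during_work}, contender acquisitions: {acquisitions}");
    }
}
```

The contender thread gets noticeably fewer lock acquisitions in the first configuration, which is the effect a real benchmark under contention would quantify.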
This should look into the newly received DeclareMiningJob and remove only the txs that are in the old one but not in the new one. Even better would be to have a refcount for each tx and decrement it every time the tx is in the old job and not in the new one (this is because we have multiple downstreams). When we reach 0 we remove the tx.
Something like this: pub mempool: HashMap<Txid, Option<(Transaction, u64)>>
> This should look into the newly received DeclareMiningJob and remove only the txs that are in the old one but not in the new one. Even better would be to have a refcount for each tx and decrement it every time the tx is in the old job and not in the new one (this is because we have multiple downstreams). When we reach 0 we remove the tx.

I implemented this suggestion here: plebhash@d5ec026
> Something like this: pub mempool: HashMap<Txid, Option<(Transaction, u64)>>

Could we take advantage of the token or another property in JobDeclaratorDownstream in order to do that?
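As a rough, self-contained illustration of that reference-counting idea (placeholder Txid and Transaction types, no locking; in the real JDS the mempool sits behind safe_lock and uses the rust-bitcoin types):

```rust
use std::collections::{HashMap, HashSet};

// Placeholder types; the JDS would use the rust-bitcoin Txid and Transaction.
type Txid = [u8; 32];
type Transaction = Vec<u8>;

// Reference-counted mempool entries, as suggested above: the u64 counts how
// many currently declared jobs (one per downstream) still reference the tx.
struct JdsMempool {
    mempool: HashMap<Txid, Option<(Transaction, u64)>>,
}

impl JdsMempool {
    // Called when a downstream replaces its declared job: decrement the count of
    // every tx that is in the old job but not in the new one, and drop the entry
    // once no declared job references it anymore.
    fn on_job_update(&mut self, old_job_txids: &[Txid], new_job_txids: &HashSet<Txid>) {
        for txid in old_job_txids {
            if new_job_txids.contains(txid) {
                continue; // still referenced by the new job, keep it
            }
            let drop_entry = match self.mempool.get_mut(txid) {
                Some(Some((_, count))) => {
                    *count = count.saturating_sub(1);
                    *count == 0
                }
                _ => false,
            };
            if drop_entry {
                self.mempool.remove(txid);
            }
        }
    }
}
```

Each downstream's previously declared job would supply old_job_txids when its new DeclareMiningJob arrives, which is where the token (or another JobDeclaratorDownstream property) could come into play.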
@NonsoAmadi10 could you please rebase and update the branch? I'll help with the outstanding issues.
Hey @NonsoAmadi10, what's the status here? Do you need any input or help unblocking anything?
@plebhash, @jbesraa told me to rebase and update the branch and that he would help with the outstanding issues. I think that was why I halted. Other than that, I intended to close the open conversations here. I have time this month to work on this.
Thanks for the update! @jbesraa is busy with other activities, so I'll help you with this PR. I'll take a couple of days to study the entire PR history and then I'll get back to you.
- ..to `TryFrom<(u8, &'a mut [u8])> for PoolMessages`
- Add droppable methods for buffer pool and codec
- bug: clear fat transactions from mempool before new mining jobs are declared
- rebase and change the code logic
- get the list of ids of declared transactions from the old declared_mining_job, and remove exactly those from the mempool
- chore: format using nightly
- chore: add a clear_mempool_transaction function
- chore: format files
- fetch tx data and insert both key and data if key does not exist in jds mempool
- modify functions based on PR feedback
- format file
- chore: refactor cleared mining jobs
- format file
- chore: restructure the jds mempool
Modify the update_mempool function so the JDS mempool saves transactions declared by the downstream and gets rid of the others. Create a clear_declared_mining_job function to remove old fat transactions before a new mining job is declared. Handle every unwrap by returning the proper error type and error message.
Force-pushed from 2ed981c to 5c7d0e3
Hey @NonsoAmadi10, I see the comments have been marked as resolved, but the requested changes don't seem to be in place. Could you please check?
match mempool_.mempool.remove(&txid) {
    Some(transaction) => {
        debug!("Fat transaction {:?} in job with request id {:?} removed from mempool", transaction, mining_job.request_id);
        info!("Fat transaction {:?} in job with request id {:?} removed from mempool", txid, mining_job.request_id);
Can you remove this info log?
okay, I will
- Move mempool lock outside the loop to reduce contention
- Compare old and new mining jobs to avoid unnecessary removals
- Modify transaction counters in-place for better efficiency
- Remove unnecessary cloning and HashMap operations
- Improve error handling and logging
- Address feedback from @jbesraa, @lorbax, @plebhash and @Fi3
@plebhash @lorbax @jbesraa @Fi3 @Shourya742 I have taken the feedback from the previous unresolved conversations, plus a huge help from @plebhash's commit. Here is what I did: optimized the clear_declared_mining_job function.
I don't know if what I did addresses it or if it has introduced added complexity. I would welcome your feedback.
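For the "compare old and new mining jobs" part, here is a small sketch of the set-difference step (ShortTxId is a stand-in for the short transaction id format carried in DeclareMiningJob; the real code works with the SRI types):

```rust
use std::collections::HashSet;

// Stand-in for the short tx id format carried in DeclareMiningJob.
type ShortTxId = [u8; 6];

// Compare the previously declared job with the new one and return only the
// short ids that are no longer referenced, so the mempool is not touched for
// transactions the new job still needs.
fn short_ids_to_remove(old_job: &[ShortTxId], new_job: &[ShortTxId]) -> Vec<ShortTxId> {
    let keep: HashSet<&ShortTxId> = new_job.iter().collect();
    old_job
        .iter()
        .filter(|id| !keep.contains(id))
        .copied()
        .collect()
}
```

Only the ids returned here would then go through the reference-count decrement sketched earlier in the thread.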
Thank you so much for the work here, but right now our plan is to refactor the
What does this PR do?