
Vardiff currently unsupported #1229

Open
rrybarczyk opened this issue Oct 18, 2024 · 13 comments

Comments

@rrybarczyk
Collaborator

Supporting variable difficulty in the mining proxy and pool is a needed feature for potential users. Previously, #251 was opened but was subsequently closed without addressing it. I do not remember why.

@pavlenex
Collaborator

According to @GitGab19, variable difficulty is included with tProxy. Since this feedback comes from an early adopter, we should prioritize this feature when tackling the pool role.

@plebhash
Collaborator

plebhash commented Oct 25, 2024

I remember that @rrybarczyk had previously mentioned the idea of modularizing the vardiff logic, so that it is not statically baked into tProxy.

this makes a lot of sense, as ideally we need to be able to easily integrate vardiff as a feature into applications as needed
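
as a purely illustrative sketch (hypothetical names, not an agreed design), a modular vardiff could start from a small trait that any role plugs in:

// Hypothetical interface for pluggable vardiff logic; tProxy, the pool,
// or tests could each supply their own implementation.
pub trait Vardiff {
    /// Record a share accepted at the current difficulty.
    fn on_share(&mut self, timestamp_secs: u64);

    /// Return a new difficulty if an adjustment is due, else None.
    fn maybe_adjust(&mut self, now_secs: u64) -> Option<f64>;
}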


as we tackle this, we should be very strategic about test-driven development

the main challenge around this kind of feature is its statistical nature, which makes it very prone to non-deterministic behavior, especially as different platforms (with different computational capacity) are used to emulate the incoming hashrate

we have some valuable lessons learned from the MG CI of tProxy, and more insight can be found in #1208

@plebhash
Collaborator

plebhash commented Oct 25, 2024

not directly related, but somewhat correlated to this topic, is the need to modularize the translation logic, which could fit into a potential approach for cgminer integration, as laid out here: plebhash/cgminer#1 (comment)

depending on which direction this effort goes, we should also consider creating a separate issue for isolating translation logic

@AkulovOleg

Hi, guys!
We need vardiff too.
It would be nice if we could prioritize this feature.

@pavlenex
Collaborator

pavlenex commented Nov 6, 2024

@rrybarczyk Since we have this feature request from two adopters, we should prioritize it, as agreed on the last dev call. I'll leave it up to you to decide when you'll have capacity to tackle this one. Not sure if we can get it into the 1.2.0 milestone (cc @plebhash @GitGab19)?

@plebhash
Collaborator

plebhash commented Nov 6, 2024

> @rrybarczyk Since we have this feature request from two adopters, we should prioritize it, as agreed on the last dev call. [...]

I don't have any objections to the idea of starting work on this right away (although I don't feel I will be able to implement this myself in the near future, I'm happy to help with the review process).

My only remark is that I really feel we should make sure this functionality is properly tested for the long run.

Unfortunately, we still have some moving parts under the Integration Test Framework (ITF).

Solid unit testing (which does not depend on the ITF) can cover a lot of ground, but I think some scenarios should be covered with the ITF as well.

So if Vardiff is finished before the ITF, I would advocate for a later dev cycle around Vardiff that adds integration tests for it.

Overall, I feel it would be a bad strategy if we just did Vardiff and moved on to other things without leaving solid CI around it.

@rrybarczyk
Collaborator Author

Agree on testing. Will get started and work with the team on making sure everything is tested up to standard before launching. And if the integration test framework becomes a serious blocker, we can perhaps work out a beta-beta release for users to test out, with the understanding that testing is not completely there. That may help identify early bugs as well.

@plebhash
Collaborator

plebhash commented Nov 19, 2024

@rrybarczyk during the review process of #1254 we just picked up a bug in the unit tests of tProxy:

fn measure_hashrate(duration_secs: u64) -> f64 {
    let mut share = generate_random_80_byte_array();
    let start_time = Instant::now();
    let mut hashes: u64 = 0;
    let duration = Duration::from_secs(duration_secs);
    while start_time.elapsed() < duration {
        for _ in 0..10000 {
            hash(&mut share);
            hashes += 1;
        }
    }
    let elapsed_secs = start_time.elapsed().as_secs_f64();
    hashes as f64 / elapsed_secs
}

fn hash(share: &mut [u8; 80]) -> Target {
    let nonce: [u8; 8] = share[0..8].try_into().unwrap();
    let mut nonce = u64::from_le_bytes(nonce);
    nonce += 1;
    share[0..8].copy_from_slice(&nonce.to_le_bytes());
    let hash = Sha256::digest(&share).to_vec();
    let hash: U256<'static> = hash.try_into().unwrap();
    hash.into()
}

this is estimating hashrate for sha256, not sha256d!

as we modularize vardiff, it's important to avoid propagating this further
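
for reference, a minimal sketch of a sha256d variant, assuming the sha2 crate; the repo's U256/Target conversion is swapped for a plain [u8; 32] to keep the example self-contained:

use sha2::{Digest, Sha256};

// Hypothetical sha256d counterpart of hash() above; returns the raw
// 32-byte digest instead of the repo's Target type.
fn hash_sha256d(share: &mut [u8; 80]) -> [u8; 32] {
    let nonce: [u8; 8] = share[0..8].try_into().unwrap();
    let mut nonce = u64::from_le_bytes(nonce);
    nonce += 1;
    share[0..8].copy_from_slice(&nonce.to_le_bytes());
    // sha256d = SHA-256 applied twice, as Bitcoin does for block headers
    let first = Sha256::digest(&share[..]);
    let second = Sha256::digest(first);
    let mut out = [0u8; 32];
    out.copy_from_slice(&second);
    out
}

note that each call now does two hashing passes, so a hashrate measured this way is roughly half of the single-sha256 figure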

@Fi3
Collaborator

Fi3 commented Nov 19, 2024

> @rrybarczyk during the review process of #1254 we just picked up a bug in the unit tests of tProxy: [...] this is estimating hashrate for sha256, not sha256d!

Does not seem like a bug to me.

@Fi3
Collaborator

Fi3 commented Nov 19, 2024

I mean, I'm ok with using the double sha everywhere, but I don't think the test is bugged in how we calculate the hashrate. I would love it if it were, since that test never works, but I don't see how it can impact the test.

@plebhash
Collaborator

plebhash commented Nov 19, 2024

I see what you're saying @Fi3, thanks for pointing that out. Indeed, it's important to clarify this.

This inaccuracy in measure_hashrate doesn't severely impact this specific unit test's behavior, because the mock_mine function is also based on sha256 instead of sha256d.

So we can argue that in the specific context of those unit tests, "one mistake corrects the other", in a sense.


The context where this discussion started was #1254, where this measure_hashrate implementation was used as a reference.

There, we need to initialize tProxy for integration tests.

We call measure_hashrate to learn the available CPU miner hashpower (in terms of sha256d, not sha256). This helps bootstrap tProxy with the correct initial parameters for vardiff.

We could run integration tests on very different platforms (e.g. a local development machine with high CPU hashpower, or GitHub CI runners with low CPU hashpower).

Doing the hashrate calculation based on sha256 could result in broken integration tests. This specific comment should give you some insight into how important this is in the context of integration tests.
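
To make the stakes concrete, here is a rough, hypothetical sketch (not the actual #1254 code) of turning a measured hashrate into an initial vardiff difficulty, using the standard approximation that difficulty-1 work is about 2^32 hashes:

// Hypothetical helper: derive an initial difficulty from a measured
// sha256d hashrate (hashes/sec) and a desired share rate (shares/min).
fn initial_difficulty(hashrate_hs: f64, target_shares_per_min: f64) -> f64 {
    let hashes_per_share = hashrate_hs * 60.0 / target_shares_per_min;
    (hashes_per_share / 2f64.powi(32)).max(f64::MIN_POSITIVE)
}

Since the result scales linearly with the measured rate, a sha256-based measurement (roughly double the true sha256d rate) roughly doubles the bootstrapped difficulty, which can starve slow CI runners of shares.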


My comment about sha256 vs sha256d on this issue is just a warning so that we avoid propagating this mistake (it already happened to @jbesraa on #1254 when he used those tProxy unit tests as a reference).

@Fi3
Collaborator

Fi3 commented Nov 19, 2024

@plebhash I don't know what you need, but maybe this can be useful: https://github.com/stratum-mining/stratum/blob/main/roles/test-utils/mining-device/src/lib/mod.rs#L622

@plebhash
Collaborator

The pool is currently using a hardcoded share_per_min:

let share_per_min = 1.0;
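
For comparison, a minimal, hypothetical adjustment step that such a target could feed (illustrative names, not the pool's actual API):

// Hypothetical vardiff step: rescale difficulty so the observed share
// rate converges toward target_share_per_min, clamped to damp noise.
fn adjust_difficulty(
    current_difficulty: f64,
    shares_submitted: u32,
    window_secs: f64,
    target_share_per_min: f64,
) -> f64 {
    let actual_share_per_min = shares_submitted as f64 / (window_secs / 60.0);
    let scaled = current_difficulty * (actual_share_per_min / target_share_per_min);
    scaled.clamp(current_difficulty / 4.0, current_difficulty * 4.0)
}

The 4x clamp bounds are arbitrary here; real implementations typically also enforce a minimum difficulty.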
