
Block cache with an infinite stream option #687

Merged: 2 commits merged into main from cache_stream on Aug 24, 2023
Conversation

@blt (Collaborator) commented Aug 22, 2023

What does this PR do?

This commit makes two major changes to the block cache. The most important is that the block cache's main CPU cost -- the construction of Block instances -- is now done in a separate OS thread, outside the tokio runtime. This enables the second change: infinite streams of Blocks. It is now possible for users to construct an unending stream of Block instances that does not loop. We maintain a cache of constructed Blocks, up to a maximum total byte size, to minimize any potential latency impact.
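A minimal sketch of the shape of this design -- a dedicated OS thread feeding a bounded channel whose backlog serves as the warm-block cache -- might look like the following. All names here are illustrative rather than lading's actual implementation, and the sketch caps the cache by block count where the real cache caps total bytes.

use std::sync::mpsc::{sync_channel, Receiver};
use std::thread;

// A block of pre-rendered payload bytes.
struct Block {
    bytes: Vec<u8>,
}

// Build `Block`s forever on a dedicated OS thread. The bounded channel
// holds up to `capacity` finished blocks; once it is full the builder
// blocks, so readers on the tokio side always find warm blocks without
// paying the construction cost themselves.
fn spawn_block_builder(capacity: usize) -> Receiver<Block> {
    let (tx, rx) = sync_channel::<Block>(capacity);
    thread::spawn(move || {
        let mut state = 1u64;
        loop {
            // Stand-in for the real, CPU-heavy block construction.
            state = state.wrapping_mul(6364136223846793005).wrapping_add(1);
            let block = Block { bytes: state.to_le_bytes().to_vec() };
            if tx.send(block).is_err() {
                break; // receiver dropped; shut the builder down
            }
        }
    });
    rx
}

fn main() {
    let rx = spawn_block_builder(16);
    for block in rx.iter().take(3) {
        println!("got a block of {} bytes", block.bytes.len());
    }
}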

Additional Notes

Configuration is changed, but not in a backward-incompatible way.

Related issues

REF SMP-664

@blt requested a review from a team August 22, 2023 01:23
@github-actions

Regression Detector Results

Run ID: 14811a98-a780-4047-b96f-8758c61625ec
Baseline: b6b94f5
Comparison: 7a632ca
Total lading-target CPUs: 4

Explanation

A regression test is an integrated performance test for lading-target in a repeatable rig, with varying configuration for lading-target. What follows is a statistical summary of a brief lading-target run for each configuration across the SHAs given above. The goal of these tests is to determine quickly whether lading-target performance is changed, and to what degree, by a pull request.

Because a target's optimization goal performance in each experiment will vary somewhat each time it is run, we can only estimate mean differences in optimization goal relative to the baseline target. We express these differences as a percentage change relative to the baseline target, denoted "Δ mean %". These estimates are made to a precision that balances accuracy and cost control. We represent this precision as a 90.00% confidence interval denoted "Δ mean % CI": there is a 90.00% chance that the true value of "Δ mean %" is in that interval.

We decide whether a change in performance is a "regression" -- a change worth investigating further -- if both of the following two criteria are true:

  1. The estimated |Δ mean %| ≥ 5.00%. This criterion intends to answer the question "Does the estimated change in mean optimization goal performance have a meaningful impact on your customers?". We assume that when |Δ mean %| < 5.00%, the impact on your customers is not meaningful. We also assume that a performance change in optimization goal is worth investigating whether it is an increase or decrease, so long as the magnitude of the change is sufficiently large.

  2. Zero is not in the 90.00% confidence interval "Δ mean % CI" about "Δ mean %". This statement is equivalent to saying that there is at least a 90.00% chance that the mean difference in optimization goal is not zero. This criterion intends to answer the question, "Is there a statistically significant difference in mean optimization goal performance?". It also means there is no more than a 10.00% chance this criterion reports a statistically significant difference when the true difference in mean optimization goal is zero -- a "false positive". We assume you are willing to accept a 10.00% chance of inaccurately detecting a change in performance when no true difference exists.

The table below, if present, lists those experiments that have experienced a statistically significant change in mean optimization goal performance between baseline and comparison SHAs with 90.00% confidence OR have been detected as newly erratic. Negative values of "Δ mean %" mean that baseline is faster, whereas positive values of "Δ mean %" mean that comparison is faster. Results that do not exhibit more than a ±5.00% change in their mean optimization goal are discarded. An experiment is erratic if its coefficient of variation is greater than 0.1. The abbreviated table will be omitted if no interesting change is observed.
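Concretely, the two criteria above plus the erratic check compose into a single decision per experiment. The sketch below is illustrative; the field names are assumptions, not lading's actual types.

/// One experiment's summary statistics, as described above.
struct Experiment {
    delta_mean_pct: f64,     // "Δ mean %"
    ci_low: f64,             // lower bound of "Δ mean % CI"
    ci_high: f64,            // upper bound of "Δ mean % CI"
    coeff_of_variation: f64, // erratic if > 0.1
}

impl Experiment {
    /// Criterion 1: the estimated change is large enough to matter.
    fn large_enough(&self) -> bool {
        self.delta_mean_pct.abs() >= 5.0
    }

    /// Criterion 2: zero lies outside the 90.00% confidence interval.
    fn significant(&self) -> bool {
        self.ci_low > 0.0 || self.ci_high < 0.0
    }

    /// Listed in the table if both criteria hold, or if newly erratic.
    fn is_interesting(&self) -> bool {
        (self.large_enough() && self.significant()) || self.coeff_of_variation > 0.1
    }
}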

No interesting changes in experiment optimization goals with confidence ≥ 90.00% and |Δ mean %| ≥ 5.00%.

Fine details of change detection per experiment.
experiment | goal | Δ mean % | Δ mean % CI | confidence
blackhole_from_apache_common_http | ingress throughput | +0.22 | [+0.17, +0.27] | 100.00%
apache_common_http_both_directions_this_doesnt_make_sense | ingress throughput | -0.01 | [-0.03, +0.02] | 30.09%

@goxberry (Contributor)

This enables the second change: infinite streams of Blocks. It is now possible for users to construct an unending stream of Block instances that does not loop.

I suspect the period of the PRNG used would affect how long it would take before Blocks begin to repeat, though for this application, it seems unlikely we'd exhaust that period unless it were quite small (e.g., for a period of 2^64 outputs at 4 bytes each, generating 10^9 bytes/second, around 2,300 years; I'm guessing that the 1,024 channels aren't generating 10^9 bytes/second each).
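The estimate above is easy to check mechanically; here is a one-off back-of-the-envelope calculation under the stated assumptions (2^64 PRNG outputs, 4 bytes per output, 10^9 bytes/second consumed):

fn main() {
    let period_outputs = 2f64.powi(64); // PRNG period, in outputs
    let bytes_per_output = 4.0;         // 4 bytes per draw
    let rate = 1e9;                     // bytes consumed per second

    let seconds = period_outputs * bytes_per_output / rate;
    let years = seconds / (365.25 * 24.0 * 3600.0);
    println!("~{years:.0} years to exhaust the period"); // prints ~2339
}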

@blt (Collaborator, Author) commented Aug 22, 2023

This enables the second change: infinite streams of Blocks. It is now possible for users to construct an unending stream of Block instances that does not loop.

I suspect the period of the PRNG used would affect how long it would take before Blocks begin to repeat, though for this application, it seems unlikely we'd exhaust that period unless it were quite small (e.g., for a period of 2^64 outputs at 4 bytes each, generating 10^9 bytes/second, around 2,300 years; I'm guessing that the 1,024 channels aren't generating 10^9 bytes/second each).

Ah yeah, maybe better language would be something like "do not repeat from a finite pool of Block instances of known size". You are right that we're playing the odds here and some values will repeat, but not nearly as much as if you generate 1 GB of Block instances and loop over that.

Comment on lines +260 to +264
payload::Config::TraceAgent(enc) => {
    let ta = match enc {
        payload::Encoding::Json => payload::TraceAgent::json(&mut rng),
        payload::Encoding::MsgPack => payload::TraceAgent::msg_pack(&mut rng),
    };
Contributor

Could this be refactored so the configs are handled inside the relevant payload implementation? (Is there a future benefit to having access to the payload configs here?)
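
One shape the suggested refactor might take: push the encoding dispatch down into the payload module so the generator never matches on Encoding. The types below are stubs standing in for lading's real payload types; this is a sketch of the idea, not the project's code.

// Stubs standing in for lading's payload types.
pub enum Encoding {
    Json,
    MsgPack,
}

pub struct TraceAgent;

impl TraceAgent {
    fn json<R>(_rng: &mut R) -> Self {
        TraceAgent
    }

    fn msg_pack<R>(_rng: &mut R) -> Self {
        TraceAgent
    }

    /// Single entry point: the payload interprets its own config,
    /// so the generator no longer needs access to it.
    pub fn from_config<R>(enc: &Encoding, rng: &mut R) -> Self {
        match enc {
            Encoding::Json => Self::json(rng),
            Encoding::MsgPack => Self::msg_pack(rng),
        }
    }
}

fn main() {
    let mut rng = 0u64; // placeholder for a real RNG
    let _ta = TraceAgent::from_config(&Encoding::Json, &mut rng);
}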

Collaborator Author

I think it could be, and I started in on that, but I felt it was too big a change for this PR. On that subject, I find it awkward that the generator creates the Cache at all, but I do find it desirable to maintain backward compatibility with our configs.

Comment on lines +77 to +79
/// Whether to use a fixed or streaming block cache
#[serde(default = "crate::block::default_cache_method")]
pub block_cache_method: block::CacheMethod,
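
For context, a CacheMethod enum with a serde default along these lines would make the knob optional in existing configs. The variant names and the default are assumed from the discussion below, not copied from lading's source.

use serde::Deserialize;

#[derive(Debug, Clone, Copy, Deserialize)]
#[serde(rename_all = "snake_case")]
pub enum CacheMethod {
    /// Pre-build a pool of blocks once and loop over it.
    Fixed,
    /// Build blocks continuously on a separate OS thread.
    Streaming,
}

/// Referenced by `#[serde(default = "crate::block::default_cache_method")]`:
/// configs that omit `block_cache_method` keep the old fixed behavior,
/// which is what keeps the change backward compatible.
pub fn default_cache_method() -> CacheMethod {
    CacheMethod::Fixed
}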
Contributor

This kind of seems like it should be set automatically by the payload; would that be possible?

Collaborator Author

I think it could be, but I hadn't considered that. Right now it's the Cache that maintains the difference; the payload is really just a mechanism -- to the Cache -- for getting a Block. My sense was that we would drive toward streaming being on by default and leave fixed as an option for setups that are especially CPU conscious, so users would rarely see this setting.

I'm not sure we have enough information to say whether it's appropriate to stream or be fixed automatically, based on payload settings. Maybe I'm missing something you're seeing?

Contributor

Hmm, I see. I buy that. We'll have some mismatch if we introduce any streaming-only payloads. We can handle that when we need to, though.

Collaborator Author

To my mind the payloads should be unaware of how the generator chooses to use them. That's been our approach so far. I don't see that you can make a streaming-only payload, or at least I cannot imagine one.

@github-actions

Regression Detector Results

Run ID: 2e4c86a8-2f60-4380-a95e-897bbcf3967a
Baseline: b6b94f5
Comparison: 8c09706
Total lading-target CPUs: 4


No interesting changes in experiment optimization goals with confidence ≥ 90.00% and |Δ mean %| ≥ 5.00%.

Fine details of change detection per experiment.
experiment | goal | Δ mean % | Δ mean % CI | confidence
apache_common_http_both_directions_this_doesnt_make_sense | ingress throughput | +0.57 | [+0.54, +0.59] | 100.00%
blackhole_from_apache_common_http | ingress throughput | +0.08 | [+0.03, +0.13] | 95.97%

@blt merged commit 569fdb0 into main Aug 24, 2023
24 of 25 checks passed
@blt deleted the cache_stream branch August 24, 2023 17:25