
Convert DogStatsD payload to use string pool #676

Merged: blt merged 5 commits into main from dogstatsd_string_pool on Aug 17, 2023

Conversation

@blt (Collaborator) commented Aug 16, 2023

What does this PR do?

This commit begins the process of converting the dogstatsd payload generation to use the new string pool, introduced in #675. I have only partially converted the event sub-payload -- note that tagsets have not been switched yet -- and this alone shows a 6% improvement to `dogstatsd_setup` and a 5% to 74% improvement to `dogstatsd_all`. The technique shows promise, and its benefit grows as the total bytes emitted scales up.

EDIT: By the final commit we're capping `_all` out at 1 Gb/s, up from ~300 Mb/s.

Of note: I will eventually need to convert the `Generator` trait to emit a type with a lifetime. Since I can't do that incrementally, the `generate` function in select areas temporarily does not come from the trait. Resolving this means converting all the payloads to non-copying implementations, which will take some time and is outside the scope of this PR.
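For readers who haven't seen #675, the shape of the idea is roughly the following. This is a minimal sketch with hypothetical names, not lading's actual pool API: the pool owns one large backing allocation and payload generation borrows sub-slices from it instead of allocating a fresh `String` per field.

```rust
/// Minimal string-pool sketch (hypothetical names, not lading's API).
struct Pool {
    bytes: String, // one large, immutable backing allocation
}

impl Pool {
    fn new(backing: String) -> Self {
        Pool { bytes: backing }
    }

    /// Borrow a sub-slice of the pool: no allocation, no copy.
    /// Returns `None` if the range is out of bounds.
    fn of_size(&self, offset: usize, len: usize) -> Option<&str> {
        self.bytes.get(offset..offset + len)
    }
}

fn main() {
    let pool = Pool::new("abcdefghij".repeat(1024));
    // An event field borrows from the pool rather than owning a String.
    let title: &str = pool.of_size(0, 8).expect("in bounds");
    println!("event title borrowed from pool: {title}");
}
```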

Related issues

REF SMP-664

This commit begins the process of converting the dogstatsd payload generation to
use the new string pool, introduced in #675. I have only partially converted the
event sub-payload -- note that tagsets have not been switched yet -- and this
shows a 6% improvement to `dogstatsd_setup` and a 5% to 74% improvement to
`dogstatsd_all`. The technique shows promise, and its benefit grows as the total
bytes emitted scales up.

Of note: I will eventually need to convert the `Generator` trait to emit a type
with a lifetime. Since I can't do that incrementally, the `generate` function in
select areas temporarily does not come from the trait. I'll resolve this as part
of the work here.
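A sketch of the trait direction described above, with assumed signatures (the current `Generator` trait does not look like this yet): `generate` borrows from the generator, so the emitted payload can hold pooled `&str`s rather than owned `String`s.

```rust
/// Assumed shape of the future trait: the output type carries a
/// lifetime tied to the generator itself.
trait Generator<'a> {
    type Output: 'a;
    fn generate(&'a self) -> Self::Output;
}

/// A payload type with a lifetime: its fields borrow pooled strings.
struct Event<'a> {
    title: &'a str,
    text: &'a str,
}

struct EventGenerator {
    pool: Vec<String>, // stand-in for the string pool
}

impl<'a> Generator<'a> for EventGenerator {
    type Output = Event<'a>;
    fn generate(&'a self) -> Event<'a> {
        // No clones: the event borrows straight from the pool.
        Event {
            title: self.pool[0].as_str(),
            text: self.pool[1].as_str(),
        }
    }
}

fn main() {
    let generator = EventGenerator {
        pool: vec!["deploy finished".to_string(), "all good".to_string()],
    };
    let event = generator.generate();
    println!("{} | {}", event.title, event.text);
}
```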

Signed-off-by: Brian L. Troutwine <[email protected]>
@github-actions

Regression Detector Results

Run ID: 426b95a2-f8ad-486d-afe9-41d04da1a59b
Baseline: d113370
Comparison: 6d2731e
Total lading-target CPUs: 4

Explanation

A regression test is an integrated performance test for lading-target in a repeatable rig, with varying configuration for lading-target. What follows is a statistical summary of a brief lading-target run for each configuration across the SHAs given above. The goal of these tests is to determine quickly whether lading-target performance is changed, and to what degree, by a pull request.

Because a target's optimization goal performance in each experiment will vary somewhat each time it is run, we can only estimate mean differences in optimization goal relative to the baseline target. We express these differences as a percentage change relative to the baseline target, denoted "Δ mean %". These estimates are made to a precision that balances accuracy and cost control. We represent this precision as a 90.00% confidence interval denoted "Δ mean % CI": there is a 90.00% chance that the true value of "Δ mean %" is in that interval.

We decide whether a change in performance is a "regression" -- a change worth investigating further -- if both of the following two criteria are true (a short code sketch of this decision rule follows the list):

  1. The estimated |Δ mean %| ≥ 5.00%. This criterion intends to answer the question "Does the estimated change in mean optimization goal performance have a meaningful impact on your customers?". We assume that when |Δ mean %| < 5.00%, the impact on your customers is not meaningful. We also assume that a performance change in optimization goal is worth investigating whether it is an increase or decrease, so long as the magnitude of the change is sufficiently large.

  2. Zero is not in the 90.00% confidence interval "Δ mean % CI" about "Δ mean %". This statement is equivalent to saying that there is at least a 90.00% chance that the mean difference in optimization goal is not zero. This criterion intends to answer the question, "Is there a statistically significant difference in mean optimization goal performance?". It also means there is no more than a 10.00% chance this criterion reports a statistically significant difference when the true difference in mean optimization goal is zero -- a "false positive". We assume you are willing to accept a 10.00% chance of inaccurately detecting a change in performance when no true difference exists.
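Read together, the two criteria form a simple predicate. A minimal Rust sketch of that decision rule, for illustration only (this is not the detector's actual code):

```rust
/// Decide whether an experiment's change counts as a "regression" per
/// the two criteria above. Illustrative sketch, not the detector's code.
fn is_regression(delta_mean_pct: f64, ci_low: f64, ci_high: f64) -> bool {
    // Criterion 1: |Δ mean %| ≥ 5.00 -- large enough to matter.
    let large_enough = delta_mean_pct.abs() >= 5.0;
    // Criterion 2: zero lies outside the 90% CI -- statistically
    // significant with at most a 10% false-positive chance.
    let significant = !(ci_low <= 0.0 && 0.0 <= ci_high);
    large_enough && significant
}

fn main() {
    // +0.07 with CI [+0.02, +0.12]: significant, but far below 5%.
    assert!(!is_regression(0.07, 0.02, 0.12));
    // A hypothetical -6.2 with CI [-7.0, -5.4]: flagged as a regression.
    assert!(is_regression(-6.2, -7.0, -5.4));
    println!("decision rule behaves as described");
}
```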

The table below, if present, lists those experiments that have experienced a statistically significant change in mean optimization goal performance between baseline and comparison SHAs with 90.00% confidence OR have been detected as newly erratic. Negative values of "Δ mean %" mean that baseline is faster, whereas positive values of "Δ mean %" mean that comparison is faster. Results that do not exhibit more than a ±5.00% change in their mean optimization goal are discarded. An experiment is erratic if its coefficient of variation is greater than 0.1. The abbreviated table will be omitted if no interesting change is observed.
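The erratic check above compares the sample coefficient of variation against a 0.1 threshold. A small sketch, assuming at least two samples and a nonzero mean:

```rust
/// Sample coefficient of variation: standard deviation over mean.
/// Sketch only; assumes at least two samples and a nonzero mean.
fn coefficient_of_variation(samples: &[f64]) -> f64 {
    let n = samples.len() as f64;
    let mean = samples.iter().sum::<f64>() / n;
    let variance =
        samples.iter().map(|x| (x - mean).powi(2)).sum::<f64>() / (n - 1.0);
    variance.sqrt() / mean
}

fn main() {
    // Tight samples: CV well under the 0.1 erratic threshold.
    let steady = [100.0, 101.0, 99.5, 100.2];
    assert!(coefficient_of_variation(&steady) <= 0.1);
    // Widely scattered samples: flagged erratic.
    let erratic = [100.0, 60.0, 140.0, 80.0];
    assert!(coefficient_of_variation(&erratic) > 0.1);
}
```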

No interesting changes in experiment optimization goals with confidence ≥ 90.00% and |Δ mean %| ≥ 5.00%.

Fine details of change detection per experiment.
| experiment | goal | Δ mean % | Δ mean % CI | confidence |
| --- | --- | --- | --- | --- |
| blackhole_from_apache_common_http | ingress throughput | +0.07 | [+0.02, +0.12] | 93.15% |
| apache_common_http_both_directions_this_doesnt_make_sense | ingress throughput | +0.00 | [-0.02, +0.03] | 14.66% |

Signed-off-by: Brian L. Troutwine <[email protected]>
@github-actions

Regression Detector Results

Run ID: 2a312524-b510-4918-a5d9-5a88bb2f35cb
Baseline: d113370
Comparison: 3e66017
Total lading-target CPUs: 4


No interesting changes in experiment optimization goals with confidence ≥ 90.00% and |Δ mean %| ≥ 5.00%.

Fine details of change detection per experiment.
| experiment | goal | Δ mean % | Δ mean % CI | confidence |
| --- | --- | --- | --- | --- |
| blackhole_from_apache_common_http | ingress throughput | +0.26 | [+0.21, +0.31] | 100.00% |
| apache_common_http_both_directions_this_doesnt_make_sense | ingress throughput | -0.06 | [-0.10, -0.03] | 99.03% |

This commit improves the string pool spread in dogstatsd. I have yet to adjust
the metric generation to avoid cloning -- I'll do that in the next commit -- but
setup is improved by 34% and `_all` anywhere from 52% down to 5%. I suspect that
if I can get the generation side to not clone we'll improve on the high end.

Signed-off-by: Brian L. Troutwine <[email protected]>
@github-actions

Regression Detector Results

Run ID: d3d83036-05c6-46b2-9536-c74cc6b80051
Baseline: d113370
Comparison: 85f682e
Total lading-target CPUs: 4


No interesting changes in experiment optimization goals with confidence ≥ 90.00% and |Δ mean %| ≥ 5.00%.

Fine details of change detection per experiment.
| experiment | goal | Δ mean % | Δ mean % CI | confidence |
| --- | --- | --- | --- | --- |
| blackhole_from_apache_common_http | ingress throughput | -0.06 | [-0.11, -0.01] | 86.39% |
| apache_common_http_both_directions_this_doesnt_make_sense | ingress throughput | -0.52 | [-0.55, -0.49] | 100.00% |

This commit makes an explicit `Template` that is used to generate a full
`Metric<'a>`. Note the new lifetime: we avoid much of the cloning now. At the
top end this hits 1 Gb/s in the `_all` benchmark.
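A hedged sketch of that shape, with assumed names and fields rather than the actual lading types: the template stores indices and numbers, and instantiating it borrows strings from the pool, so building a `Metric<'a>` allocates nothing.

```rust
/// Assumed template shape: owns no string data, only an index into
/// the pool plus the numeric payload.
struct Template {
    name_idx: usize,
    value: f64,
}

/// The generated metric borrows everything string-like.
struct Metric<'a> {
    name: &'a str,
    value: f64,
    tags: &'a [&'a str],
}

impl Template {
    /// Fill the template from the pool; no clones, no allocations.
    fn instantiate<'a>(&self, pool: &'a [String], tags: &'a [&'a str]) -> Metric<'a> {
        Metric {
            name: pool[self.name_idx].as_str(),
            value: self.value,
            tags,
        }
    }
}

fn main() {
    let pool = vec!["kafka.consumer_lag".to_string()];
    let tags = ["env:prod", "service:web"];
    let template = Template { name_idx: 0, value: 3.25 };
    let metric = template.instantiate(&pool, &tags);
    println!("{}:{}|g|#{}", metric.name, metric.value, metric.tags.join(","));
}
```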

Signed-off-by: Brian L. Troutwine <[email protected]>
After instrumentation it appears that lading_rev spends 85% of its time in
`fmt::write`, implying that if we want to go faster we'll need to make coercion
into strings cheaper. Excellent result.
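One plausible direction for that follow-up, not something this PR implements: bypass the `core::fmt` machinery for numeric fields with a specialized formatter such as the external `ryu` crate, writing bytes straight into the payload buffer instead of going through `format!`-style string construction.

```rust
// Requires the external `ryu` crate (assumed dependency for this sketch).
// Appends a "name:value|g" gauge line without touching `fmt::write`.
fn write_metric(buf: &mut Vec<u8>, name: &str, value: f64) {
    buf.extend_from_slice(name.as_bytes());
    buf.push(b':');
    // ryu formats the float into a stack buffer; no heap allocation.
    let mut float_buf = ryu::Buffer::new();
    buf.extend_from_slice(float_buf.format(value).as_bytes());
    buf.extend_from_slice(b"|g");
}

fn main() {
    let mut out = Vec::with_capacity(64);
    write_metric(&mut out, "kafka.consumer_lag", 3.25);
    assert_eq!(out, b"kafka.consumer_lag:3.25|g");
}
```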

Signed-off-by: Brian L. Troutwine <[email protected]>
@blt blt requested review from scottopell and a team August 16, 2023 23:55
@blt blt marked this pull request as ready for review August 16, 2023 23:55
@github-actions

Regression Detector Results

Run ID: c28cf8d5-c672-4a52-ad53-4138aa5ec9e9
Baseline: d113370
Comparison: a0f97c1
Total lading-target CPUs: 4


No interesting changes in experiment optimization goals with confidence ≥ 90.00% and |Δ mean %| ≥ 5.00%.

Fine details of change detection per experiment.
| experiment | goal | Δ mean % | Δ mean % CI | confidence |
| --- | --- | --- | --- | --- |
| blackhole_from_apache_common_http | ingress throughput | -0.04 | [-0.09, +0.02] | 62.65% |
| apache_common_http_both_directions_this_doesnt_make_sense | ingress throughput | -0.09 | [-0.12, -0.06] | 99.97% |

@blt blt merged commit 8863b09 into main Aug 17, 2023
18 checks passed
@blt blt deleted the dogstatsd_string_pool branch August 17, 2023 16:39