[chore] [deltatocumulative]: linear histograms #36486
base: main
Conversation
expands the linear architecture to do exponential and fixed-width histograms.
switch dp := any(dp).(type) {
case pmetric.NumberDataPoint:
    state := any(state).(pmetric.NumberDataPoint)
    data.Number{NumberDataPoint: state}.Add(data.Number{NumberDataPoint: dp})
case pmetric.HistogramDataPoint:
    state := any(state).(pmetric.HistogramDataPoint)
    data.Histogram{HistogramDataPoint: state}.Add(data.Histogram{HistogramDataPoint: dp})
case pmetric.ExponentialHistogramDataPoint:
    state := any(state).(pmetric.ExponentialHistogramDataPoint)
    data.ExpHistogram{DataPoint: state}.Add(data.ExpHistogram{DataPoint: dp})
}
This refactor effectively eliminates the need for the data package, as we no longer rely on type characteristics.
I'll refactor datapoint addition in a future PR, making this part more clear, maybe like this:
var add data.Aggregator = new(data.Add)
switch into := any(state).(type) {
case pmetric.NumberDataPoint:
    add.Numbers(into, dp)
case pmetric.HistogramDataPoint:
    add.Histograms(into, dp)
}
I did a first pass only through the benchmark. I totally understand my comments are nitpicks, I'm just sharing my personal preference when it comes to code style.
From our conversations I understand you prefer a more "declarative" style, but to me it makes the code way harder to read, since I expect things to run in the order they are written. Scrolling up and down several times until I finally understand what it does makes the code less readable in my opinion.
Again, not a blocker!
// compile-time assertions that each of these types satisfies the Any interface
var (
    _ Any = Sum{}
    _ Any = Gauge{}
    _ Any = ExpHistogram{}
    _ Any = Histogram{}
    _ Any = Summary{}
)
Just curious, why do we need to do this?
func next[
    T interface{ DataPoints() Ps },
    Ps interface {
        At(int) P
        Len() int
    },
    P interface {
        Timestamp() pcommon.Timestamp
        SetStartTimestamp(pcommon.Timestamp)
        SetTimestamp(pcommon.Timestamp)
    },
](sel func(pmetric.Metric) T) func(m pmetric.Metric) {
I do not understand any of this 😭. What are we trying to accomplish here? What is next supposed to do in the benchmark?
I don't mind code duplication if it makes the code more readable 😬
if err := sdktest.Test(tel(b.N), st.tel.reader); err != nil {
    b.Fatal(err)
}
Is this a benchmark or a test? I'm unsure if I'm missing something, but it seems you're trying to do both...?
Is it an option to split them to make the code easier to understand?
Trying to accomplish everything at once also makes the code more fragile, since a single future mistake breaks everything at once.
ts := pcommon.NewTimestampFromTime(now.Add(time.Minute))

cases := []Case{{
    name: "sums",
Is there a reason to split sums, histograms, and exponential histograms into separate benchmarks? Are those metric types expected to be split by separate deltatocumulative processors in real-world scenarios?
Your PR description only shows results for sums, so I'm not sure if this was an intentional split or you just forgot to benchmark the rest.
run := func(b *testing.B, proc consumer.Metrics, cs Case) {
    md := pmetric.NewMetrics()
    ms := md.ResourceMetrics().AppendEmpty().ScopeMetrics().AppendEmpty().Metrics()
    for i := range metrics {
        m := ms.AppendEmpty()
        m.SetName(strconv.Itoa(i))
        cs.fill(m)
    }

    b.ReportAllocs()
    b.ResetTimer()
    b.StopTimer()

    ctx := context.Background()
    for range b.N {
        for i := range ms.Len() {
            cs.next(ms.At(i))
        }
        req := pmetric.NewMetrics()
        md.CopyTo(req)

        b.StartTimer()
        err := proc.ConsumeMetrics(ctx, req)
        b.StopTimer()
        require.NoError(b, err)
    }
}
Is there any special reason to transform this into a function? We're not reusing the code anywhere, so why not just put this inside the b.Run loop?
type Case struct {
    name string
    fill func(m pmetric.Metric)
    next func(m pmetric.Metric)
}
If we remove the abstraction of run, we could also move this closer to where it's used.
Description
Finishes work started in #35048
That PR introduced a less complex processor architecture, but only used it for Sums.
Back then I was not sure of the best way to do it for multiple data types, as generics seemed to introduce a lot of complexity regardless of usage.
Since then I did a lot of perf analysis, and due to the way Go compiles generics (see gcshapes), we do not really gain anything at runtime from using them, given method calls are still dynamic.
This implementation uses regular Go interfaces and a good old type switch in the hot path (ConsumeMetrics), which lowers mental complexity quite a lot imo.
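For illustration, here is a minimal standalone sketch of that idea. The types and the accumulate function below are made up for this example only; the actual hot path switches on pmetric data point types, as shown in the diff above.

package main

import "fmt"

// illustrative stand-ins for delta data points; the processor itself operates
// on pmetric.NumberDataPoint, pmetric.HistogramDataPoint, etc.
type numberDP struct{ value float64 }

type histogramDP struct {
    count   uint64
    sum     float64
    buckets []uint64
}

// accumulate adds the incoming delta dp into the stored cumulative state of
// the same type. A single type switch keeps dispatch explicit in the hot path
// instead of hiding it behind generic indirection.
func accumulate(state, dp any) {
    switch dp := dp.(type) {
    case *numberDP:
        state.(*numberDP).value += dp.value
    case *histogramDP:
        s := state.(*histogramDP)
        s.count += dp.count
        s.sum += dp.sum
        for i, c := range dp.buckets {
            s.buckets[i] += c
        }
    }
}

func main() {
    state := &numberDP{value: 1}
    accumulate(state, &numberDP{value: 2})
    fmt.Println(state.value) // 3
}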
The value of the new architecture is backed up by the following benchmark:
Testing
This is a refactor, existing tests pass unaltered.
Documentation
not needed