Consider caching/resolving metric metadata in the same way as contexts. #262
Labels

- area/core: Core functionality, event model, etc.
- effort/intermediate: Involves changes that can be worked on by non-experts but might require guidance.
- type/meta: Things that can't be neatly categorized and/or aren't yet fully-formed ideas/thoughts.
Context
Currently, metric events are composed of three main parts: context, values, and metadata. The context covers the metric's identity: its name and tags. The values, naturally, hold the actual data points of the metric. Finally, the metadata covers the more intangible/niche aspects of a metric, such as origin-related information.
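As a rough illustration of this three-part composition, here's a minimal sketch in Rust. All field names and types here are hypothetical (they won't match the real struct, nor its 224-byte size); only the overall shape is what's being described:

```rust
use std::sync::Arc;

/// The metric's identity: name plus tags. Contexts are already
/// cached/resolved, so this holds cheap shared handles.
struct Context {
    name: Arc<str>,
    tags: Arc<[String]>,
}

/// The actual data points of the metric.
enum MetricValues {
    Counter(f64),
    Gauge(f64),
    Distribution(Vec<f64>),
}

/// Intangible/niche details, such as origin-related information.
/// Stored inline today, contributing to the overall struct size.
struct Metadata {
    origin: Option<String>,
}

/// Every metric event carries all three parts inline.
struct Metric {
    context: Context,
    values: MetricValues,
    metadata: Metadata,
}

fn main() {
    // The real `Metric` is roughly 224 bytes; this sketch won't match
    // that exactly, but `size_of` is how such a figure is measured.
    println!("Metric is {} bytes", std::mem::size_of::<Metric>());
}
```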
These three pieces result in a `Metric` struct that is somewhat heavyweight: roughly 224 bytes. Crucially, this means that every metric event we shuttle through a topology consumes at least 224 bytes of memory, which can add up meaningfully when there is a high number of active metric contexts, as each active context corresponds to at least one live `Metric` instance. Practically speaking, holding 250K active metric contexts would require around 55MB of memory for the individual `Metric` objects tied to those contexts.

As the metadata of a metric generally doesn't change after ingestion -- it's derived from incoming metric tags, information specific to the network socket the metric arrives on, and so on -- it is potentially ripe for caching/resolving in the same way that contexts themselves are cached/resolved. Taking the same approach could shrink `Metric` down to around 96 bytes. Based on the above example, this would mean only needing around 24MB of memory to hold 250K individual `Metric` objects, which is a meaningful reduction.
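One way this could look, sketched under the assumption of a simple intern-style cache (all names here are hypothetical, not the codebase's actual API): identical metadata gets deduplicated behind a shared handle, so each `Metric` would store only a pointer-sized `Arc<Metadata>` instead of the full metadata inline.

```rust
use std::collections::HashMap;
use std::sync::Arc;

/// Hypothetical metadata; the real fields in the codebase will differ.
#[derive(Clone, PartialEq, Eq, Hash)]
struct Metadata {
    origin: Option<String>,
}

/// Intern-style cache for metadata, analogous to how contexts are
/// already cached/resolved. Names here are illustrative only.
struct MetadataResolver {
    cache: HashMap<Metadata, Arc<Metadata>>,
}

impl MetadataResolver {
    fn new() -> Self {
        Self { cache: HashMap::new() }
    }

    /// Returns a shared handle to an existing identical entry, caching
    /// the metadata on first sight. `Metric` would then hold the
    /// pointer-sized `Arc<Metadata>` instead of the metadata inline.
    fn resolve(&mut self, metadata: Metadata) -> Arc<Metadata> {
        if let Some(existing) = self.cache.get(&metadata) {
            return Arc::clone(existing);
        }
        let handle = Arc::new(metadata.clone());
        self.cache.insert(metadata, Arc::clone(&handle));
        handle
    }
}

fn main() {
    let mut resolver = MetadataResolver::new();
    let a = resolver.resolve(Metadata { origin: Some("dogstatsd".into()) });
    let b = resolver.resolve(Metadata { origin: Some("dogstatsd".into()) });
    // Both metrics share one allocation instead of each carrying a copy.
    assert!(Arc::ptr_eq(&a, &b));
}
```

With this approach, the per-`Metric` cost of metadata drops to one shared handle plus amortized storage in the cache, which is roughly where a shrink like the ~224-byte to ~96-byte reduction described above would come from.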