feat(parser.prometheusremotewrite, serializer.prometheusremotewrite): Native histogram support end-to-end #16121
Changes from all commits
@@ -9,13 +9,16 @@ import (

        "github.com/prometheus/common/model"
        "github.com/prometheus/prometheus/prompb"

        "github.com/gogo/protobuf/proto"

        "github.com/influxdata/telegraf"
        "github.com/influxdata/telegraf/metric"
        "github.com/influxdata/telegraf/plugins/parsers"
    )

    type Parser struct {
        DefaultTags map[string]string

        KeepNativeHistogramsAtomic bool `toml:"keep_native_histograms_atomic"`
    }

    func (p *Parser) Parse(buf []byte) ([]telegraf.Metric, error) {

Review thread on the new "github.com/gogo/protobuf/proto" import:

Reviewer: Why do you need this instead of the upstream and well maintained …?
Author: This is what Prometheus uses to serialize their protobuf in remote write.
Author: I just found out the rationale for them to use this protobuf lib: prometheus/prometheus#14668.
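To make the import discussion concrete, here is a minimal, hypothetical round-trip of a remote-write payload using the gogo proto package. The prompb types are generated with gogo/protobuf, which is why proto.Marshal and proto.Unmarshal from that module work on them directly. The metric name, value, and timestamp below are invented, and a real wire payload is additionally snappy-compressed.

    package main

    import (
        "fmt"

        "github.com/gogo/protobuf/proto"
        "github.com/prometheus/prometheus/prompb"
    )

    func main() {
        // Build a tiny write request; this is roughly what a remote-write client
        // sends and what the parser's Parse() receives after decompression.
        req := &prompb.WriteRequest{
            Timeseries: []prompb.TimeSeries{{
                Labels:  []prompb.Label{{Name: "__name__", Value: "http_request_duration_seconds"}},
                Samples: []prompb.Sample{{Value: 42, Timestamp: 1700000000000}},
            }},
        }

        // prompb is gogo-generated, so gogo's proto package marshals it directly.
        buf, err := proto.Marshal(req)
        if err != nil {
            panic(err)
        }

        // ...and decodes it back the same way.
        var decoded prompb.WriteRequest
        if err := proto.Unmarshal(buf, &decoded); err != nil {
            panic(err)
        }
        fmt.Println(len(decoded.Timeseries), "time series")
    }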
@@ -61,42 +64,57 @@ func (p *Parser) Parse(buf []byte) ([]telegraf.Metric, error) {

    for _, hp := range ts.Histograms {
        if hp.Timestamp > 0 {
            t = time.Unix(0, hp.Timestamp*1000000)
        }
        if p.KeepNativeHistogramsAtomic {
            // If keeping it atomic, we serialize the histogram into one single Telegraf metric
            // For now we keep the histogram as a serialized proto
            // Another option is to convert it to multi-field Telegraf metric
            serialized, err := proto.Marshal(&hp)
            if err != nil {
                return nil, fmt.Errorf("failed to marshal histogram: %w", err)
            }
            fields := map[string]any{
                metricName: string(serialized),
            }
            m := metric.New("prometheus_remote_write", tags, fields, t, telegraf.Histogram)
            metrics = append(metrics, m)
        } else {
            h := hp.ToFloatHistogram()

            fields := map[string]any{
                metricName + "_sum": h.Sum,
            }
            m := metric.New("prometheus_remote_write", tags, fields, t)
            metrics = append(metrics, m)

            fields = map[string]any{
                metricName + "_count": h.Count,
            }
            m = metric.New("prometheus_remote_write", tags, fields, t)
            metrics = append(metrics, m)

            count := 0.0
            iter := h.AllBucketIterator()
            for iter.Next() {
                bucket := iter.At()

                count = count + bucket.Count
                fields = map[string]any{
                    metricName: count,
                }

                localTags := make(map[string]string, len(tags)+1)
                localTags[metricName+"_le"] = fmt.Sprintf("%g", bucket.Upper)
                for k, v := range tags {
                    localTags[k] = v
                }

                m := metric.New("prometheus_remote_write", localTags, fields, t)
                metrics = append(metrics, m)
            }
        }
    }

Comment thread on lines +71 to +80 (the atomic serialization branch above):

Reviewer: You really should go for the …
Author: Totally agree, I wanted to do that too, but it's actually not straightforward.
Author: In particular, the problematic fields are: …
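Since the thread above is cut off, the following is only a guess at the alternative being discussed: a sketch of converting a native histogram into a single multi-field Telegraf metric, assuming that the variable-length span and bucket slices are the "problematic fields". The function name, field names, and the JSON encoding of the slices are all invented for illustration and are not part of the PR.

    package prometheusremotewrite

    import (
        "encoding/json"

        "github.com/prometheus/prometheus/model/histogram"
    )

    // nativeHistogramToFields is a hypothetical alternative to storing the raw
    // serialized proto: flatten the scalar parts of the native histogram into
    // one multi-field metric. The repeated span/bucket slices have no natural
    // flat representation, so they are JSON-encoded here purely as an example.
    func nativeHistogramToFields(h *histogram.FloatHistogram) (map[string]any, error) {
        fields := map[string]any{
            "schema":         int64(h.Schema),
            "count":          h.Count,
            "sum":            h.Sum,
            "zero_threshold": h.ZeroThreshold,
            "zero_count":     h.ZeroCount,
        }
        for name, v := range map[string]any{
            "positive_spans":   h.PositiveSpans,
            "positive_buckets": h.PositiveBuckets,
            "negative_spans":   h.NegativeSpans,
            "negative_buckets": h.NegativeBuckets,
        } {
            encoded, err := json.Marshal(v)
            if err != nil {
                return nil, err
            }
            fields[name] = string(encoded)
        }
        return fields, nil
    }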
Another review thread:

Reviewer: Can we please use metric_version, similar to what is done in the prometheus parser?
Author: Switching metric_version in the prometheus parser seems to have implications beyond just how histograms are parsed. On the prometheusremotewrite side, the current implementation looks equivalent to v2; I could emulate a v1 here, where atomic parsing of both classic and native histograms would be implemented.
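Purely as a sketch of what that suggestion could look like (none of this is in the PR; the type, field, and method names are invented): a metric_version option on this parser alongside the existing flag, with v1 standing for the atomic representation of both classic and native histograms and v2 for the current per-field and per-bucket expansion.

    package prometheusremotewrite

    // parserOptions is a hypothetical configuration shape, by analogy with the
    // prometheus input plugin's metric_version option. It is not part of the PR.
    type parserOptions struct {
        MetricVersion              int  `toml:"metric_version"`
        KeepNativeHistogramsAtomic bool `toml:"keep_native_histograms_atomic"`
    }

    // histogramsAtomic decides which histogram representation to emit.
    func (o parserOptions) histogramsAtomic() bool {
        // metric_version = 1 would imply atomic histograms; otherwise fall back
        // to the dedicated flag introduced in this PR.
        return o.MetricVersion == 1 || o.KeepNativeHistogramsAtomic
    }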