avoid duplicates at PromQL level #11
Currently the cputime query is anyway somewhat redundant with the endtime, starttime, and cores queries. It is good to have a cross check, though it takes a bit more time to run the extra queries.
With the updates for KSM 2.0 #22
Shortly after redeploying KSM to trigger the multi-instance issue, this now gives the error "many-to-many matching not allowed: matching labels must be unique on one side". Or maybe I didn't test it correctly before.
Here is an example of the same pod record, duplicated for each KSM instance that Prom collected the record from:
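A hypothetical illustration of the duplication (the metric name is a real KSM metric, but the instance addresses, pod names, and value are made up here):

```
kube_pod_completion_time{exported_pod="myjob-1234", instance="10.0.0.12:8080", pod="kube-state-metrics-7d4b9-abcde"}  1612345678
kube_pod_completion_time{exported_pod="myjob-1234", instance="10.0.0.34:8080", pod="kube-state-metrics-7d4b9-fghij"}  1612345678
```

The `exported_pod` label is identical in both series; only the `instance` and `pod` labels, which come from the scrape of each KSM replica, differ.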
The 'instance' and 'pod' labels identify the KSM instance and need to be filtered out to avoid mismatch errors ("many-to-many matching not allowed: matching labels must be unique on one side") in the situation where KSM restarts (or runs more than one replica).
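For reference, the two matching constructs discussed below look roughly like this (`metric_a` and `metric_b` are placeholders, not the project's actual series):

```
# drop specific labels from the matching set
metric_a * ignoring(instance, pod) metric_b

# or restrict matching to a single label
metric_a * on(exported_pod) metric_b
```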
This works for the one side of the CPU query:
But no matter what I tried, I could not get a vector result from the other side; this is where the mismatch error comes from:
I don't understand that because 'on (exported_pod)' should mean only the exported_pod label is used for matching.
Particularly odd: both of these queries work, each ignoring just one label:
But ignoring both labels causes a many-to-many matching error, just like using on(exported_pod):
And this works but it has problematic duplicate entries:
So that is how it works currently, and it relies on the rearrange function ignoring duplicates.
It might be preferable to avoid duplicates at the PromQL level instead of in the Python code. However, the Prometheus queries are subject to complex vagaries and occasional syntax changes, so deduplicating in Python may be safer.
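If the PromQL-level route were taken, one possible pattern is to collapse the duplicate series (one per KSM replica) with an aggregation before matching, so that each `exported_pod` appears only once on each side. A rough sketch, using real KSM metric names but not the project's actual queries:

```
# deduplicate each side, then subtract to get elapsed wall time per pod
  max by (exported_pod) (kube_pod_completion_time)
- on (exported_pod)
  max by (exported_pod) (kube_pod_created)
```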