
Investigate options for controlling downsampling #77

Open
Miles-Garnsey opened this issue Aug 12, 2022 · 0 comments

At present, when Cassandra produces metrics, they are pushed into MCAC. MCAC then writes the metrics out to files for consumption by collectd.

This leaves MCAC with no way to downsample metrics in clusters that have a large number of tables. Ultimately, this can lead to excess resource consumption and further issues downstream (e.g. pushing too many metrics into collectd can overwhelm it).

This ticket tracks work to investigate adding a configurable throttling mechanism at the MCAC level, so that users can downsample metric scrapes to avoid overloading collectd.

Some options for configuring such a mechanism are as follows:

  1. Set a maximum number of metrics collected per scrape and adaptively drop samples to ensure it isn't breached.
  2. Drop a fixed percentage of samples.
  3. Aggregate samples in some way instead of simply dropping them.
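As a rough illustration of what options 1 and 2 could look like, here is a minimal sketch in Python. All names and parameters (`downsample`, `max_per_scrape`, `drop_fraction`) are hypothetical and do not correspond to any existing MCAC API:

```python
import random

def downsample(samples, max_per_scrape=None, drop_fraction=0.0, rng=None):
    """Hypothetical downsampler for a single scrape's worth of samples.

    drop_fraction implements option 2: drop a fixed percentage of samples.
    max_per_scrape implements option 1: cap the total and thin evenly
    so the cap is never breached.
    """
    rng = rng or random.Random(0)  # seeded here only for reproducibility
    # Option 2: randomly drop a fixed fraction of samples.
    kept = [s for s in samples if rng.random() >= drop_fraction]
    # Option 1: if still over the cap, thin with an even stride so
    # coverage is spread across the whole sample set (e.g. all tables).
    if max_per_scrape is not None and len(kept) > max_per_scrape:
        stride = len(kept) / max_per_scrape
        kept = [kept[int(i * stride)] for i in range(max_per_scrape)]
    return kept
```

Option 3 (aggregation) is not sketched here, since the right aggregate (sum, mean, max) would depend on the metric type.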
Miles-Garnsey self-assigned this Aug 12, 2022