
chore(outputs): Fix line-length in READMEs (#16079)
Co-authored-by: Dane Strandboge <[email protected]>
srebhan and DStrand1 authored Oct 25, 2024
1 parent 22b153a commit 13d053f
Showing 13 changed files with 249 additions and 211 deletions.
14 changes: 9 additions & 5 deletions README.md
@@ -1,6 +1,9 @@
# ![tiger](assets/TelegrafTigerSmall.png "tiger") Telegraf

[![GoDoc](https://img.shields.io/badge/doc-reference-00ADD8.svg?logo=go)](https://godoc.org/github.com/influxdata/telegraf) [![Docker pulls](https://img.shields.io/docker/pulls/library/telegraf.svg)](https://hub.docker.com/_/telegraf/) [![Go Report Card](https://goreportcard.com/badge/github.com/influxdata/telegraf)](https://goreportcard.com/report/github.com/influxdata/telegraf) [![Circle CI](https://circleci.com/gh/influxdata/telegraf.svg?style=svg)](https://circleci.com/gh/influxdata/telegraf)
[![GoDoc](https://img.shields.io/badge/doc-reference-00ADD8.svg?logo=go)](https://godoc.org/github.com/influxdata/telegraf)
[![Docker pulls](https://img.shields.io/docker/pulls/library/telegraf.svg)](https://hub.docker.com/_/telegraf/)
[![Go Report Card](https://goreportcard.com/badge/github.com/influxdata/telegraf)](https://goreportcard.com/report/github.com/influxdata/telegraf)
[![Circle CI](https://circleci.com/gh/influxdata/telegraf.svg?style=svg)](https://circleci.com/gh/influxdata/telegraf)

Telegraf is an agent for collecting, processing, aggregating, and writing
metrics, logs, and other arbitrary data.
@@ -76,13 +79,14 @@ Also, join us on our [Community Slack](https://influxdata.com/slack) or
[Community Forums](https://community.influxdata.com/) if you have questions or
comments for our engineering teams.

If you are completely new to Telegraf and InfluxDB, you can also enroll for free at
[InfluxDB university](https://www.influxdata.com/university/) to take courses to
learn more.
If you are completely new to Telegraf and InfluxDB, you can also enroll for free
at [InfluxDB university](https://www.influxdata.com/university/) to take courses
to learn more.

## ℹ️ Support

[![Slack](https://img.shields.io/badge/slack-join_chat-blue.svg?logo=slack)](https://www.influxdata.com/slack) [![Forums](https://img.shields.io/badge/discourse-join_forums-blue.svg?logo=discourse)](https://community.influxdata.com/)
[![Slack](https://img.shields.io/badge/slack-join_chat-blue.svg?logo=slack)](https://www.influxdata.com/slack)
[![Forums](https://img.shields.io/badge/discourse-join_forums-blue.svg?logo=discourse)](https://community.influxdata.com/)

Please use the [Community Slack](https://influxdata.com/slack) or
[Community Forums](https://community.influxdata.com/) if you have questions or
19 changes: 11 additions & 8 deletions plugins/outputs/cloudwatch/README.md
@@ -46,7 +46,8 @@ See the [CONFIGURATION.md][CONFIGURATION.md] for more details.

## Amazon Credentials
## Credentials are loaded in the following order
## 1) Web identity provider credentials via STS if role_arn and web_identity_token_file are specified
## 1) Web identity provider credentials via STS if role_arn and
## web_identity_token_file are specified
## 2) Assumed credentials via STS if role_arn is specified
## 3) explicit credentials from 'access_key' and 'secret_key'
## 4) shared profile from 'profile'
@@ -75,15 +76,17 @@ See the [CONFIGURATION.md][CONFIGURATION.md] for more details.
## Namespace for the CloudWatch MetricDatums
namespace = "InfluxData/Telegraf"

## If you have a large amount of metrics, you should consider to send statistic
## values instead of raw metrics which could not only improve performance but
## also save AWS API cost. If enable this flag, this plugin would parse the required
## CloudWatch statistic fields (count, min, max, and sum) and send them to CloudWatch.
## You could use basicstats aggregator to calculate those fields. If not all statistic
## fields are available, all fields would still be sent as raw metrics.
## If you have a large number of metrics, consider sending statistic values
## instead of raw metrics. This can not only improve performance but also
## reduce AWS API cost. If this flag is enabled, the plugin parses the
## required CloudWatch statistic fields (count, min, max, and sum) and sends
## them to CloudWatch. You can use the basicstats aggregator to calculate
## those fields. If not all statistic fields are available, all fields are
## still sent as raw metrics.
# write_statistics = false

## Enable high resolution metrics of 1 second (if not enabled, standard resolution are of 60 seconds precision)
## Enable high-resolution metrics with 1-second precision (if not enabled,
## the standard resolution of 60 seconds is used)
# high_resolution_metrics = false
```
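
To make use of `write_statistics`, the required statistic fields have to be
present on the metrics. Below is a minimal sketch (the period, region, and
stats values are assumptions) pairing the `basicstats` aggregator with this
output:

```toml
# Hypothetical pairing of the basicstats aggregator with the CloudWatch
# output; adjust period and region to your environment.
[[aggregators.basicstats]]
  period = "30s"
  drop_original = true
  ## Produce the fields the CloudWatch output expects for statistic sets.
  stats = ["count", "min", "max", "sum"]

[[outputs.cloudwatch]]
  region = "us-east-1"
  namespace = "InfluxData/Telegraf"
  ## Send statistic sets instead of raw metric values.
  write_statistics = true
```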

19 changes: 11 additions & 8 deletions plugins/outputs/cloudwatch/sample.conf
@@ -5,7 +5,8 @@

## Amazon Credentials
## Credentials are loaded in the following order
## 1) Web identity provider credentials via STS if role_arn and web_identity_token_file are specified
## 1) Web identity provider credentials via STS if role_arn and
## web_identity_token_file are specified
## 2) Assumed credentials via STS if role_arn is specified
## 3) explicit credentials from 'access_key' and 'secret_key'
## 4) shared profile from 'profile'
@@ -34,13 +35,15 @@
## Namespace for the CloudWatch MetricDatums
namespace = "InfluxData/Telegraf"

## If you have a large amount of metrics, you should consider to send statistic
## values instead of raw metrics which could not only improve performance but
## also save AWS API cost. If enable this flag, this plugin would parse the required
## CloudWatch statistic fields (count, min, max, and sum) and send them to CloudWatch.
## You could use basicstats aggregator to calculate those fields. If not all statistic
## fields are available, all fields would still be sent as raw metrics.
## If you have a large number of metrics, consider sending statistic values
## instead of raw metrics. This can not only improve performance but also
## reduce AWS API cost. If this flag is enabled, the plugin parses the
## required CloudWatch statistic fields (count, min, max, and sum) and sends
## them to CloudWatch. You can use the basicstats aggregator to calculate
## those fields. If not all statistic fields are available, all fields are
## still sent as raw metrics.
# write_statistics = false

## Enable high resolution metrics of 1 second (if not enabled, standard resolution are of 60 seconds precision)
## Enable high-resolution metrics with 1-second precision (if not enabled,
## the standard resolution of 60 seconds is used)
# high_resolution_metrics = false
37 changes: 21 additions & 16 deletions plugins/outputs/cloudwatch_logs/README.md
@@ -7,9 +7,12 @@ This plugin will send logs to Amazon CloudWatch.
This plugin uses a credential chain for Authentication with the CloudWatch Logs
API endpoint. In the following order the plugin will attempt to authenticate.

1. Web identity provider credentials via STS if `role_arn` and `web_identity_token_file` are specified
1. Assumed credentials via STS if `role_arn` attribute is specified (source credentials are evaluated from subsequent rules).
The `endpoint_url` attribute is used only for Cloudwatch Logs service. When fetching credentials, STS global endpoint will be used.
1. Web identity provider credentials via STS if `role_arn` and
`web_identity_token_file` are specified
1. Assumed credentials via STS if the `role_arn` attribute is specified
   (source credentials are evaluated from subsequent rules). The
   `endpoint_url` attribute is used only for the CloudWatch Logs service;
   when fetching credentials, the STS global endpoint is used.
1. Explicit credentials from `access_key`, `secret_key`, and `token` attributes
1. Shared profile from `profile` attribute
1. [Environment Variables][1]
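
As a rough sketch of the first option in this chain, web-identity credentials
could be configured like this (the ARN, token path, and region are
placeholders):

```toml
[[outputs.cloudwatch_logs]]
  region = "us-east-1"
  ## Placeholder role and projected service-account token file.
  role_arn = "arn:aws:iam::123456789012:role/telegraf-writer"
  web_identity_token_file = "/var/run/secrets/eks.amazonaws.com/serviceaccount/token"
  log_group = "my-group-name"
  log_stream = "tag:location"
```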
@@ -56,7 +59,8 @@ See the [CONFIGURATION.md][CONFIGURATION.md] for more details.

## Amazon Credentials
## Credentials are loaded in the following order
## 1) Web identity provider credentials via STS if role_arn and web_identity_token_file are specified
## 1) Web identity provider credentials via STS if role_arn and
## web_identity_token_file are specified
## 2) Assumed credentials via STS if role_arn is specified
## 3) explicit credentials from 'access_key' and 'secret_key'
## 4) shared profile from 'profile'
@@ -74,30 +78,31 @@ See the [CONFIGURATION.md][CONFIGURATION.md] for more details.

## Endpoint to make request against, the correct endpoint is automatically
## determined and this option should only be set if you wish to override the
## default.
## ex: endpoint_url = "http://localhost:8000"
## default, e.g. endpoint_url = "http://localhost:8000"
# endpoint_url = ""

## CloudWatch log group. Must be created in AWS CloudWatch Logs upfront!
## For example, you can specify the name of the k8s cluster here to group logs from all cluster in oine place
## For example, you can specify the name of the k8s cluster here to group logs
## from all clusters in one place
log_group = "my-group-name"

## Log stream in log group
## Either log group name or reference to metric attribute, from which it can be parsed:
## tag:<TAG_NAME> or field:<FIELD_NAME>. If log stream is not exist, it will be created.
## Since AWS is not automatically delete logs streams with expired logs entries (i.e. empty log stream)
## you need to put in place appropriate house-keeping (https://forums.aws.amazon.com/thread.jspa?threadID=178855)
## Either the log group name or a reference to a metric attribute from which
## it can be parsed: tag:<TAG_NAME> or field:<FIELD_NAME>. If the log stream
## does not exist, it will be created. Since AWS does not automatically
## delete log streams with expired log entries (i.e. empty log streams), you
## need to put appropriate house-keeping in place
## (https://forums.aws.amazon.com/thread.jspa?threadID=178855)
log_stream = "tag:location"

## Source of log data - metric name
## specify the name of the metric, from which the log data should be retrieved.
## I.e., if you are using docker_log plugin to stream logs from container, then
## specify log_data_metric_name = "docker_log"
## Specify the name of the metric from which the log data should be
## retrieved, e.g. if you are using the docker_log plugin to stream logs from
## containers, specify log_data_metric_name = "docker_log"
log_data_metric_name = "docker_log"

## Specify from which metric attribute the log data should be retrieved:
## tag:<TAG_NAME> or field:<FIELD_NAME>.
## I.e., if you are using docker_log plugin to stream logs from container, then
## specify log_data_source = "field:message"
## E.g., if you are using the docker_log plugin to stream logs from
## containers, specify log_data_source = "field:message"
log_data_source = "field:message"
```
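
Putting the last few options together, a hedged end-to-end sketch that streams
container logs via the `docker_log` input and forwards them with this output
(the endpoint, region, and tag name are assumptions):

```toml
[[inputs.docker_log]]
  ## Assumed local Docker daemon socket.
  endpoint = "unix:///var/run/docker.sock"

[[outputs.cloudwatch_logs]]
  region = "us-east-1"
  log_group = "my-group-name"
  ## Assumes the docker_log input tags its metrics with "container_name".
  log_stream = "tag:container_name"
  log_data_metric_name = "docker_log"
  log_data_source = "field:message"
```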
28 changes: 15 additions & 13 deletions plugins/outputs/cloudwatch_logs/sample.conf
@@ -12,7 +12,8 @@

## Amazon Credentials
## Credentials are loaded in the following order
## 1) Web identity provider credentials via STS if role_arn and web_identity_token_file are specified
## 1) Web identity provider credentials via STS if role_arn and
## web_identity_token_file are specified
## 2) Assumed credentials via STS if role_arn is specified
## 3) explicit credentials from 'access_key' and 'secret_key'
## 4) shared profile from 'profile'
@@ -30,29 +31,30 @@

## Endpoint to make request against, the correct endpoint is automatically
## determined and this option should only be set if you wish to override the
## default.
## ex: endpoint_url = "http://localhost:8000"
## default, e.g. endpoint_url = "http://localhost:8000"
# endpoint_url = ""

## CloudWatch log group. Must be created in AWS CloudWatch Logs upfront!
## For example, you can specify the name of the k8s cluster here to group logs from all cluster in oine place
## For example, you can specify the name of the k8s cluster here to group logs
## from all clusters in one place
log_group = "my-group-name"

## Log stream in log group
## Either log group name or reference to metric attribute, from which it can be parsed:
## tag:<TAG_NAME> or field:<FIELD_NAME>. If log stream is not exist, it will be created.
## Since AWS is not automatically delete logs streams with expired logs entries (i.e. empty log stream)
## you need to put in place appropriate house-keeping (https://forums.aws.amazon.com/thread.jspa?threadID=178855)
## Either the log group name or a reference to a metric attribute from which
## it can be parsed: tag:<TAG_NAME> or field:<FIELD_NAME>. If the log stream
## does not exist, it will be created. Since AWS does not automatically
## delete log streams with expired log entries (i.e. empty log streams), you
## need to put appropriate house-keeping in place
## (https://forums.aws.amazon.com/thread.jspa?threadID=178855)
log_stream = "tag:location"

## Source of log data - metric name
## specify the name of the metric, from which the log data should be retrieved.
## I.e., if you are using docker_log plugin to stream logs from container, then
## specify log_data_metric_name = "docker_log"
## Specify the name of the metric from which the log data should be
## retrieved, e.g. if you are using the docker_log plugin to stream logs from
## containers, specify log_data_metric_name = "docker_log"
log_data_metric_name = "docker_log"

## Specify from which metric attribute the log data should be retrieved:
## tag:<TAG_NAME> or field:<FIELD_NAME>.
## I.e., if you are using docker_log plugin to stream logs from container, then
## specify log_data_source = "field:message"
## E.g., if you are using the docker_log plugin to stream logs from
## containers, specify log_data_source = "field:message"
log_data_source = "field:message"
16 changes: 10 additions & 6 deletions plugins/outputs/logzio/README.md
@@ -41,12 +41,16 @@ to use them.

### Required parameters

* `token`: Your Logz.io token, which can be found under "settings" in your account.
Your Logz.io `token`, which can be found under "settings" in your account, is
required.

### Optional parameters

* `check_disk_space`: Set to true if Logz.io sender checks the disk space before adding metrics to the disk queue.
* `disk_threshold`: If the queue_dir space crosses this threshold (in % of disk usage), the plugin will start dropping logs.
* `drain_duration`: Time to sleep between sending attempts.
* `queue_dir`: Metrics disk path. All the unsent metrics are saved to the disk in this location.
* `url`: Logz.io listener URL.
- `check_disk_space`: Set to true to have the Logz.io sender check the disk
  space before adding metrics to the disk queue.
- `disk_threshold`: If the queue_dir space crosses this threshold
(in % of disk usage), the plugin will start dropping logs.
- `drain_duration`: Time to sleep between sending attempts.
- `queue_dir`: Metrics disk path. All the unsent metrics are saved to the disk
in this location.
- `url`: Logz.io listener URL.
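
A rough sketch combining these parameters (the token is a placeholder and the
listener URL is an assumption):

```toml
[[outputs.logzio]]
  ## Required: your Logz.io account token (placeholder value).
  token = "your-logz-io-token"

  ## Optional parameters from the list above; all values are illustrative.
  # url = "https://listener.logz.io:8071"
  # drain_duration = "3s"
  # check_disk_space = true
  # disk_threshold = 98
  # queue_dir = "/tmp/logzio-metrics"
```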
11 changes: 6 additions & 5 deletions plugins/outputs/opentelemetry/README.md
@@ -75,9 +75,12 @@ the `[output.opentelemetry.coralogix]` section.

There, you can find the required setting to interact with the server.

- The `private_key` is your Private Key, which you can find in Settings > Send Your Data.
- The `application`, is your application name, which will be added to your metric attributes.
- The `subsystem`, is your subsystem, which will be added to your metric attributes.
- The `private_key` is your Private Key, which you can find in
`Settings > Send Your Data`.
- The `application` is your application name, which will be added to your
  metric attributes.
- The `subsystem` is your subsystem name, which will be added to your metric
  attributes.

More information in the
[Getting Started page](https://coralogix.com/docs/guide-first-steps-coralogix/).
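
A minimal sketch of these settings (the endpoint and all values are
placeholders; the sub-table name is assumed):

```toml
[[outputs.opentelemetry]]
  ## Assumed Coralogix ingestion endpoint; check your account's region.
  service_address = "ingress.coralogix.com:443"

  ## Assumed sub-table name for the Coralogix dialect settings.
  [outputs.opentelemetry.coralogix]
    ## Placeholder values from Settings > Send Your Data.
    private_key = "my-private-key"
    application = "my-application"
    subsystem = "my-subsystem"
```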
@@ -103,7 +106,5 @@ data is interpreted as:
Also see the [OpenTelemetry input plugin](../../inputs/opentelemetry/README.md).

[schema]: https://github.com/influxdata/influxdb-observability/blob/main/docs/index.md

[implementation]: https://github.com/influxdata/influxdb-observability/tree/main/influx2otel

[repo]: https://github.com/influxdata/influxdb-observability
24 changes: 13 additions & 11 deletions plugins/outputs/postgresql/README.md
@@ -53,7 +53,7 @@ to use them.
## Non-standard parameters:
## pool_max_conns (default: 1) - Maximum size of connection pool for parallel (per-batch per-table) inserts.
## pool_min_conns (default: 0) - Minimum size of connection pool.
## pool_max_conn_lifetime (default: 0s) - Maximum age of a connection before closing.
## pool_max_conn_lifetime (default: 0s) - Maximum connection age before closing.
## pool_max_conn_idle_time (default: 0s) - Maximum idle time of a connection before closing.
## pool_health_check_period (default: 0s) - Duration between health checks on idle connections.
# connection = ""
@@ -91,8 +91,9 @@ to use them.
# ]

## Templated statements to execute when adding columns to a table.
## Set to an empty list to disable. Points containing tags for which there is no column will be skipped. Points
## containing fields for which there is no column will have the field omitted.
## Set to an empty list to disable. Points containing tags for which there is
## no column will be skipped. Points containing fields for which there is no
## column will have the field omitted.
# add_column_templates = [
# '''ALTER TABLE {{ .table }} ADD COLUMN IF NOT EXISTS {{ .columns|join ", ADD COLUMN IF NOT EXISTS " }}''',
# ]
@@ -103,25 +104,26 @@ to use them.
# ]

## Templated statements to execute when adding columns to a tag table.
## Set to an empty list to disable. Points containing tags for which there is no column will be skipped.
## Set to an empty list to disable. Points containing tags for which there is
## no column will be skipped.
# tag_table_add_column_templates = [
# '''ALTER TABLE {{ .table }} ADD COLUMN IF NOT EXISTS {{ .columns|join ", ADD COLUMN IF NOT EXISTS " }}''',
# ]

## The postgres data type to use for storing unsigned 64-bit integer values (Postgres does not have a native
## unsigned 64-bit integer type).
## The postgres data type to use for storing unsigned 64-bit integer values
## (Postgres does not have a native unsigned 64-bit integer type).
## The value can be one of:
## numeric - Uses the PostgreSQL "numeric" data type.
## uint8 - Requires pguint extension (https://github.com/petere/pguint)
# uint64_type = "numeric"

## When using pool_max_conns>1, and a temporary error occurs, the query is retried with an incremental backoff. This
## controls the maximum backoff duration.
## When using pool_max_conns > 1 and a temporary error occurs, the query is
## retried with an incremental backoff. This controls the maximum backoff
## duration.
# retry_max_backoff = "15s"

## Approximate number of tag IDs to store in in-memory cache (when using tags_as_foreign_keys).
## This is an optimization to skip inserting known tag IDs.
## Each entry consumes approximately 34 bytes of memory.
## Approximate number of tag IDs to store in in-memory cache (when using
## tags_as_foreign_keys). This is an optimization to skip inserting known
## tag IDs. Each entry consumes approximately 34 bytes of memory.
# tag_cache_size = 100000

## Enable & set the log level for the Postgres driver.
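
For context, a minimal sketch (placeholder connection string; all values are
illustrative) combining a few of the options discussed above:

```toml
[[outputs.postgresql]]
  ## Placeholder connection string; adjust host, user, and database.
  connection = "host=localhost user=telegraf dbname=metrics"

  ## Store tags in a separate table referenced via foreign keys and cache
  ## known tag IDs to skip redundant inserts.
  # tags_as_foreign_keys = true
  # tag_cache_size = 100000

  ## Store unsigned 64-bit integers using the PostgreSQL "numeric" type.
  # uint64_type = "numeric"

  ## Maximum backoff duration for retries when pool_max_conns > 1.
  # retry_max_backoff = "15s"
```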
