Merge pull request #16965 from RamanaReddy8801/Updated-Druid-Documentation

Updated Apache Druid Documentation
bradleycamacho authored Apr 17, 2024
2 parents c3f6a27 + a6321b0 commit 1d5bc23
Showing 1 changed file with 51 additions and 50 deletions.
@@ -35,7 +35,7 @@ To use the Apache Druid integration, you need to first [install the infrastructu
<Step>
## Expose Druid metrics using Prometheus Emitter [#Expose]

-1. Add `prometheus.emitter` to the end of the extensions load list in your `apache-druid-24.0.0/conf/druid/single-server/micro-quickstart/_common/common.runtime.properties` file:
+1. Add `prometheus.emitter` to the end of the extensions load list in your `apache-druid-$version/conf/druid/single-server/micro-quickstart/_common/common.runtime.properties` file:

```yml
druid.extensions.loadList=["druid-hdfs-storage", "druid-kafka-indexing-service", "druid-datasketches", "druid-multi-stage-query", "prometheus-emitter"]
@@ -57,7 +57,7 @@ To use the Apache Druid integration, you need to first [install the infrastructu
<tbody>
<tr>
<td>
-`PATH/TO/broker.runtime.properties`
+`PATH/TO/broker/runtime.properties`
</td>
<td>
```yml
@@ -72,7 +72,7 @@ To use the Apache Druid integration, you need to first [install the infrastructu
</tr>
<tr>
<td>
-`PATH/TO/Coordinator-Overlord.runtime.properties`
+`PATH/TO/coordinator-overlord/runtime.properties`
</td>
<td>
```yml
@@ -87,7 +87,7 @@ To use the Apache Druid integration, you need to first [install the infrastructu
</tr>
<tr>
<td>
-`PATH/TO/Historical.runtime.properties`
+`PATH/TO/historical/runtime.properties`
</td>
<td>
```yml
@@ -102,7 +102,7 @@ To use the Apache Druid integration, you need to first [install the infrastructu
</tr>
<tr>
<td>
-`PATH/TO/MiddleManager.runtime.properties`
+`PATH/TO/middleManager/runtime.properties`
</td>
<td>
```yml
@@ -117,7 +117,7 @@ To use the Apache Druid integration, you need to first [install the infrastructu
</tr>
<tr>
<td>
-`PATH/TO/Router.runtime.properties`
+`PATH/TO/router/runtime.properties`
</td>
<td>
```yml
@@ -139,14 +139,14 @@ To use the Apache Druid integration, you need to first [install the infrastructu
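The collapsed `yml` cells in the table above hold the per-service emitter settings, which are hidden in this view. As a hedged sketch only (the key names come from the prometheus-emitter extension's exporter strategy; the port value is an assumption chosen to line up with the first scrape URL configured later on this page), a broker's `runtime.properties` additions might look like:

```yml
# Emit metrics through the prometheus-emitter extension. The "exporter"
# strategy exposes a /metrics endpoint that nri-prometheus can scrape.
druid.emitter=prometheus
druid.emitter.prometheus.strategy=exporter
# One port per Druid service; 19091 here is an assumption matching the
# first target URL in the nri-prometheus config below.
druid.emitter.prometheus.port=19091
```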

1. Run the following commands to create a folder named `prometheus-emitter` within the `extensions` folder directory of your Apache Druid setup:
```shell
-cd apache-druid-29.0.0/extensions/
+cd apache-druid-$version/extensions/
```
```shell
sudo mkdir prometheus-emitter
```
2. Navigate to the druid download directory and run the following command to create jar files that the server calls on start up:
```shell
-java \
+sudo java \
-cp "lib/*" \
-Ddruid.extensions.directory="extensions" \
-Ddruid.extensions.hadoopDependenciesDir="hadoop-dependencies" \
@@ -167,60 +167,61 @@ To use the Apache Druid integration, you need to first [install the infrastructu
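Steps 1 and 2 above create the extension folder and pull its jars (the full `pull-deps` invocation is truncated in this view). As a rough sketch of the layout those steps should leave behind — using a temp directory as a stand-in for the real `apache-druid-$version` install, with a placeholder jar name:

```shell
# Mimic the expected on-disk result in a scratch directory: after the
# pull-deps tool succeeds, jars live under extensions/prometheus-emitter/.
root=$(mktemp -d)
mkdir -p "$root/extensions/prometheus-emitter"
touch "$root/extensions/prometheus-emitter/prometheus-emitter.jar"  # placeholder name
ls "$root/extensions"
```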

```yml
integrations:
  - name: nri-prometheus
    config:
      # When standalone is set to false nri-prometheus requires an infrastructure agent to work and send data. Defaults to true
      standalone: false

      # When running with infrastructure agent emitters will have to include infra-sdk
      emitters: infra-sdk

      # The name of your cluster. It's important to match other New Relic products to relate the data.
-     cluster_name: "prometheus-druid"
+     cluster_name: "Apache-druid"

      targets:
        - description: Secure etcd example
-         urls: ["https://<YOUR_HOST_IP>:19091/metrics",https://<YOUR_HOST_IP>:19092/metrics", https://<YOUR_HOST_IP>:19093/metrics",https://<YOUR_HOST_IP>:19094/metrics",https://<YOUR_HOST_IP>:19095/metrics" ]
+         urls: ["http://<YOUR_HOST_IP>:19091/metrics","http://<YOUR_HOST_IP>:19092/metrics", "http://<YOUR_HOST_IP>:19093/metrics","http://<YOUR_HOST_IP>:19094/metrics","http://<YOUR_HOST_IP>:19095/metrics"]
          # tls_config:
          #   ca_file_path: "/etc/etcd/etcd-client-ca.crt"
          #   cert_file_path: "/etc/etcd/etcd-client.crt"
          #   key_file_path: "/etc/etcd/etcd-client.key"

      # Whether the integration should run in verbose mode or not. Defaults to false.
      verbose: false

      # Whether the integration should run in audit mode or not. Defaults to false.
      # Audit mode logs the uncompressed data sent to New Relic. Use this to log all data sent.
      # It does not include verbose mode. This can lead to a high log volume, use with care.
      audit: false

      # The HTTP client timeout when fetching data from endpoints. Defaults to "5s" if it is not set.
      # This timeout in seconds is passed as well as a X-Prometheus-Scrape-Timeout-Seconds header to the exporters
      # scrape_timeout: "5s"

      # Length in time to distribute the scraping from the endpoints. Default to "30s" if it is not set.
      scrape_duration: "5s"

      # Number of worker threads used for scraping targets.
      # For large clusters with many (>400) endpoints, slowly increase until scrape
      # time falls between the desired `scrape_duration`.
      # Increasing this value too much will result in huge memory consumption if too
      # many metrics are being scraped.
      # Default: 4
      # worker_threads: 4

      # Whether the integration should skip TLS verification or not. Defaults to false.
      insecure_skip_verify: false

    timeout: 10s
```
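Before restarting the agent, it can help to stage the file and sanity-check it. A minimal sketch under stated assumptions (the final `integrations.d` destination path is the agent's conventional location, not confirmed by this page; a temp file stands in for it here):

```shell
# Stage a trimmed copy of the config in a temp file and confirm the key
# required for agent-managed operation (standalone: false) is present
# before copying the real file into /etc/newrelic-infra/integrations.d/.
tmp=$(mktemp)
cat > "$tmp" <<'EOF'
integrations:
  - name: nri-prometheus
    config:
      standalone: false
      emitters: infra-sdk
      cluster_name: "Apache-druid"
EOF
grep -c 'standalone: false' "$tmp"
```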
</Step>
<Step>
## Forward Druid logs to New Relic
-1. Create a file named logging.yml in the infrastrucutre agent directory:
+1. Edit the log file named `logging.yml` located at the following path:

```shell
-touch /etc/newrelic-infra/logging.d/logging.yml
+cd /etc/newrelic-infra/logging.d
```

2. Add the following snippet to the `logging.yml` file:
@@ -238,7 +239,7 @@ Use the instructions in our [infrastructure agent docs](/docs/infrastructure/ins
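The snippet referenced in step 2 is collapsed in this view. As a hedged sketch of the usual shape of an infrastructure-agent `logging.yml` (the source name, log path, and `logtype` value here are assumptions for a quickstart install, not taken from this page):

```yml
logs:
  - name: druid-logs
    # Path is an assumption; point it at your Druid install's log directory.
    file: /path/to/apache-druid-$version/log/*.log
    attributes:
      logtype: apache-druid
```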

```shell
sudo systemctl restart newrelic-infra.service
```
</Step>
<Step>

@@ -248,7 +249,7 @@ Once you've completed the setup above, you can view your metrics using our pre-b

1. Go to **[one.newrelic.com](https://one.newrelic.com/) > + Add data**.
2. Click on the **Dashboards** tab.
-3. In the search box, type `Apache-druid`.
+3. In the search box, type `Apache druid`.
4. Select it and click **Install**.

To instrument the Apache Druid quickstart and to see metrics and alerts, you can also follow our [Apache Druid quickstart page](https://newrelic.com/instant-observability/apache-druid) by clicking on the `Install now` button.
