DOC Review (#39)
* pages edited with doc concerns

Signed-off-by: dishanktiwari2501 <[email protected]>

* Datadog section

Signed-off-by: dishanktiwari2501 <[email protected]>

* prometheus section

Signed-off-by: dishanktiwari2501 <[email protected]>

* prometheus section

Signed-off-by: dishanktiwari2501 <[email protected]>

* changes updated

Signed-off-by: dishanktiwari2501 <[email protected]>

* changes made as suggested

Signed-off-by: dishanktiwari2501 <[email protected]>

* updated the product name

Signed-off-by: dishanktiwari2501 <[email protected]>

* updated the product name

Signed-off-by: dishanktiwari2501 <[email protected]>

* updated the docs

Signed-off-by: dishanktiwari2501 <[email protected]>

* updated the docs

Signed-off-by: dishanktiwari2501 <[email protected]>

---------

Signed-off-by: dishanktiwari2501 <[email protected]>
dishanktiwari2501 authored Apr 30, 2024
1 parent 2fa5e21 commit 928c667
Showing 32 changed files with 2,423 additions and 1,944 deletions.
5 changes: 2 additions & 3 deletions Contributing.md
@@ -15,7 +15,7 @@ All types of contributions are encouraged and valued. See the [Table of Contents

Chef Automate HA provides reliability, efficiency, and productivity, built on redundancy and failover. It helps address significant issues such as service failure and zone failure. Please refer to the public [documentation](https://docs.chef.io/automate/ha/) of Automate HA for more information.

This project is a documentation-only repository that provides guided steps on how to build and integrate Monitoring, Alerting, and Centralized logging tools with Chef Automate HA. Based on our analysis, we have selected a few tools that we recommend.


## I Want To Contribute
@@ -32,8 +32,7 @@ This section guides you on how to suggest enhancements in this documentation.
- Make sure that you have tried the Automate HA product for your usage and are well aware of its use cases, support, and functionality.
- Read the existing Monitoring, Alerting, and Logging [documentation](https://github.com/chef/monitoring-integration-automate/blob/Adding-Contributing.md/Whitepaper_AutomateHA_Monitoring_and_Alerting.md) of this repo carefully and check whether your topic is already covered.
- Perform a [search](/issues) to see if the enhancement has already been suggested. If it has, add a comment to the existing issue instead of opening a new one.
- Find out whether your idea fits with the scope and aims of the project.

#### How Do I Submit a Good Documentation Enhancement Suggestion?

179 changes: 98 additions & 81 deletions ELK/ELK-installation-and-configuration.md
@@ -1,136 +1,153 @@
# ELK (Elasticsearch, Logstash, and Kibana) Integration for Chef Automate HA

## Introduction to ELK Stack

ELK Stack is open-source software that allows the search and visualization of logs generated by systems. ELK Stack has three primary components:

1. **Elasticsearch**: A search engine that stores all collected logs.

1. **Logstash**: A data processing component that sends incoming logs to Elasticsearch.

1. **Kibana**: A web interface for visualization and searching of logs within Elasticsearch.

In addition, **Filebeat** is used to push logs from Chef Automate HA nodes to Logstash.

Installation of the ELK stack can be done in various ways depending on the organizational needs and requirements; this documentation focuses on one of them. Refer to the [Elastic site](https://elastic.co) for details on system requirements, sizing, and configuration of the ELK Stack.

## Install Prerequisites

1. Dependency: ELK requires Java. If it is not already installed, install it with the following command:

```sh
sudo apt-get install openjdk-8-jdk
```
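
   You can confirm that Java is available before proceeding; the check below only assumes the OpenJDK package above installed successfully.

```sh
# Prints the installed Java version if the dependency is in place
java -version
```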

1. Download and install the public signing key:

```sh
wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo gpg --dearmor -o /usr/share/keyrings/elasticsearch-keyring.gpg
```

1. Install the apt-transport-https package so that APT can use the Elastic repository:

```sh
sudo apt-get install apt-transport-https
```

1. Save the repository definition to /etc/apt/sources.list.d/elastic-8.x.list:

```sh
echo "deb [signed-by=/usr/share/keyrings/elasticsearch-keyring.gpg] https://artifacts.elastic.co/packages/8.x/apt stable main" | sudo tee /etc/apt/sources.list.d/elastic-8.x.list
```

## ELK Installation

Please follow the steps below to install the Elastic Stack.

### Elasticsearch Installation and Configuration

1. Install the Elasticsearch Debian package with:

```sh
sudo apt-get update && sudo apt-get install elasticsearch
```

1. Configure the Elasticsearch settings:

```sh
sudo nano /etc/elasticsearch/elasticsearch.yml
```

* Uncomment the port setting and add the port number (for example, 9200).
* Uncomment "network.host" and set the correct IP.
* If required, add cluster and node settings, or set "discovery.type: single-node". A sample of these settings is shown below the screenshot.

![Elastic-configure](images/Elastic-configure.png)
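
As a reference, a minimal set of elasticsearch.yml entries might look like the following sketch; the IP address and the single-node discovery setting are assumptions to adapt to your environment.

```sh
# /etc/elasticsearch/elasticsearch.yml (sample values, adjust for your environment)
network.host: 10.0.0.5        # assumption: replace with this node's IP
http.port: 9200               # the port uncommented above
discovery.type: single-node   # only for a single-node setup; omit in a cluster
```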

1. Start the Elasticsearch services:

```sh
sudo systemctl daemon-reload
sudo systemctl start elasticsearch.service
sudo systemctl enable elasticsearch.service
```
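
To verify that Elasticsearch is up, you can query its HTTP endpoint; this assumes the port configured above and a node reachable as localhost.

```sh
# Returns the node name, cluster name, and version details if Elasticsearch is running
curl http://localhost:9200
```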

For other ways to install Elasticsearch, please follow the reference at [Elasticsearch-installation](https://www.elastic.co/guide/en/elasticsearch/reference/current/install-elasticsearch.html).

### Kibana Installation and Configuration

* To install Kibana, run the following:

```sh
sudo apt-get install kibana
sudo nano /etc/kibana/kibana.yml   # configure Kibana as described below
sudo systemctl start kibana.service
sudo systemctl enable kibana.service
```

* Uncomment the port setting and add the port number for Kibana (for example, 5601).

* Uncomment and add the correct "server.host" for Kibana.

* Uncomment and add the correct "elasticsearch.hosts" entry so Kibana can reach Elasticsearch. A sample of these settings is shown below the screenshot.

![Elastic-configure](images/Kibana-configure.png)
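
As an illustration, the relevant kibana.yml entries might look like this sketch; the host values are assumptions to adapt to your environment.

```sh
# /etc/kibana/kibana.yml (sample values, adjust for your environment)
server.port: 5601                               # the Kibana port uncommented above
server.host: "10.0.0.5"                         # assumption: replace with the Kibana host IP
elasticsearch.hosts: ["http://localhost:9200"]  # assumption: replace with your Elasticsearch endpoint
```

Once Kibana is running, the web interface should be reachable in a browser at the configured server.host and port.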

For other ways to install Kibana, please follow the reference at [Kibana-installation](https://www.elastic.co/guide/en/kibana/current/install.html).

### Logstash Installation and Configuration

* To install Logstash, run the following:

```sh
sudo apt-get install logstash
sudo systemctl start logstash.service
sudo systemctl enable logstash.service
```

### Configuration of Logstash

1. Create a configuration file to allow Filebeat to communicate with Logstash.

```sh
sudo nano /etc/logstash/conf.d/chef-beats-input.conf
```

1. Enter the following in the chef-beats-input.conf file to allow Filebeat to send logs to Logstash over TCP port 5044.

```sh
# Read input from Filebeat on Chef Automate HA nodes by listening on port 5044,
# to which Filebeat will send the data
input {
  beats {
    port => "5044"
  }
}
filter {
  # If a log line contains 'hab', tag that entry as Chef Automate HA
  if [message] =~ "hab" {
    grok {
      match => ["message", "^(hab)"]
      add_tag => ["Chef Automate HA"]
    }
  }
}
output {
  stdout {
    codec => rubydebug
  }
  # Send parsed log events to Elasticsearch
  elasticsearch {
    hosts => ["localhost:9200"]
  }
}
```
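
Before restarting, you can optionally validate the pipeline file; this assumes the standard Logstash install path on Debian-based systems.

```sh
# Parses the configuration and exits, reporting any syntax errors
sudo /usr/share/logstash/bin/logstash --config.test_and_exit -f /etc/logstash/conf.d/chef-beats-input.conf
```
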
1. Restart the Logstash service:

```sh
sudo systemctl restart logstash.service
```
For other ways to install Logstash, follow the reference on the [Logstash Installation](https://www.elastic.co/guide/en/logstash/current/installing-logstash.html) page.
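
The Filebeat side of this pipeline is configured on the Chef Automate HA nodes themselves. As a minimal sketch, assuming Filebeat 8.x and illustrative log paths, each node's filebeat.yml would point its Logstash output at port 5044 opened above:

```sh
# /etc/filebeat/filebeat.yml on each Chef Automate HA node (sample values)
filebeat.inputs:
  - type: filestream
    paths:
      - /var/log/*.log                   # assumption: adjust to the Automate HA log paths you want to ship
output.logstash:
  hosts: ["<logstash-server-ip>:5044"]   # assumption: replace with your Logstash host
```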