This repository has been archived by the owner on Sep 30, 2023. It is now read-only.

Merge pull request #90 from madflojo/develop
Ready for 2017.01-beta release
madflojo authored Jan 7, 2017
2 parents 34c2b7f + 0c8cb79 commit fc1160b
Showing 22 changed files with 690 additions and 112 deletions.
100 changes: 100 additions & 0 deletions CONTRIBUTING.md
@@ -0,0 +1,100 @@
# Contributing to Automatron

Community contributions are essential to the growth of Automatron. Both code and documentation contributions are not only welcomed, they are encouraged. The following guidelines will explain how to get started with contributing to this project.

## Accept our Contributor License Agreement

Before starting to contribute to Automatron, please review and accept our [Contributor License Agreement](https://goo.gl/forms/44vauc2jjlNlln2t1).

## Core vs. Plugins

Contributing to the Core platform and contributing Plugins have two different sets of guidelines and requirements. The sections below explain the basic concepts of contributing each type of functionality.

### Core

Automatron follows a pluggable architecture with the majority of features being provided by plugins located within the `plugins/` directory. This allows us to keep the Core framework minimal and simple.

At this time there are four primary core components of Automatron.

* `discovery` - This component is used to launch node Discovery plugins, which serve the purpose of finding new nodes to monitor.
* `runbooks` - The Runbooks component is used to parse and update the monitoring and actioning "rules" applied to monitored nodes.
* `monitoring` - The Monitoring component is used to schedule the defined checks as well as launch and execute them.
* `actioning` - The Actioning component listens for events generated by checks and performs the actions specified within Runbooks.
These components are written in Python and as such should follow basic Python development practices.

### Plugins

Where the Automatron Core provides the monitoring and actioning framework, the functional features are all provided by Automatron Plugins. Plugins are the fastest way to add features to Automatron. As such, it is suggested that new contributors start by adding a plugin before changing core functionality.

#### Plugin Types

At this time there are 6 types of Plugins.

* `actions` - Executables used to perform corrective actions.
* `checks` - Executables used to check system health (Nagios compatible).
* `datastores` - Python modules used by Automatron Core to access datastores.
* `discovery` - Python modules used by Automatron to automatically detect new monitoring targets.
* `logging` - Python modules used to provide custom logging mechanisms to Automatron Core.
* `vetting` - Executables used to identify `facts` for discovered monitoring targets.

While `datastores`, `logging`, and `discovery` plugins are Python modules, `actions`, `checks`, and `vetting` plugins are simply executables.
Python is the preferred language; however, these plugins may also be written in Perl or Bash. While other languages are accepted, it is important to ensure the required capabilities are available across as many platforms as possible. When writing plugins, do consider the availability of functionality on generic systems.
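Because `checks` plugins are Nagios compatible, they communicate state purely through their exit code (`0` OK, `1` WARNING, `2` CRITICAL). As an illustration only (the metric, thresholds, and function name below are hypothetical and not part of Automatron), a minimal check written in Python might look like this:

```python
#!/usr/bin/env python
''' Minimal Nagios-style check sketch: exit 0 (OK), 1 (WARNING), or 2 (CRITICAL) '''

import os


def check_disk_usage(path, warn=80, crit=90):
    ''' Return a Nagios exit code based on the percent disk usage of path '''
    stat = os.statvfs(path)
    used_pct = 100 * (1 - float(stat.f_bavail) / stat.f_blocks)
    if used_pct >= crit:
        print("CRITICAL - {0:.0f}% used on {1}".format(used_pct, path))
        return 2
    if used_pct >= warn:
        print("WARNING - {0:.0f}% used on {1}".format(used_pct, path))
        return 1
    print("OK - {0:.0f}% used on {1}".format(used_pct, path))
    return 0


if __name__ == '__main__':
    # A real check would terminate with the check's exit code, e.g.:
    # sys.exit(check_disk_usage('/'))
    check_disk_usage('/')
```

The one-line status message plus the exit code is all the framework needs; anything following this convention, whether Python, Perl, or Bash, can serve as a `checks` plugin.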

## Contribution Workflow & Requirements

Automatron follows a workflow very similar to the GitHub flow.

Pull Requests for new features should be opened against the `develop` branch. It is recommended to create a feature branch on your local repository from the `develop` branch to avoid merge conflicts and ease the integration process.

```console
$ git checkout develop
$ git checkout -b new-feature
```

Periodically, the `develop` branch will be merged into the `master` branch to start the process of creating a new release. Prior to merging changes into `master`, a release branch will be created for the previous release base.

When opening a Pull Request for a bug fix, if the fix is for the current release, the Pull Request should be opened to the `master` branch. If the fix is for a previous release, the Pull Request should be opened to the release specific branch.

If the bug fix should also be incorporated with the `develop` branch a second Pull Request should be opened to the `develop` branch.

### Tests are required for Core and some Plugins

Any Pull Requests for the Automatron core code base should include applicable `unit`, `integration` and `functional` tests. Automatron uses Coveralls to ensure code coverage does not decrease with each Pull Request.

While not strictly enforced, plugins should also include tests where applicable. In some cases it may not be possible to provide `unit` or `integration` tests for plugins. In these cases it is recommended to create `functional` tests.
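As a sketch of what such a `functional` test could look like (the `run_check` helper and the `true`/`false` stand-in commands are illustrative assumptions, not part of Automatron's test suite), one can simply execute the check and assert on its Nagios-style exit code:

```python
''' Functional test sketch: run a check executable and verify its exit code '''

import subprocess


def run_check(cmd):
    ''' Execute a check command and return its exit code '''
    proc = subprocess.Popen(cmd, shell=True,
                            stdout=subprocess.PIPE,
                            stderr=subprocess.PIPE)
    proc.communicate()
    return proc.returncode


def test_check_exits_ok():
    ''' A healthy check should exit 0 (OK); "true" stands in for a real check '''
    assert run_check("true") == 0


def test_check_exits_critical():
    ''' A failing check should exit non-zero; "false" stands in for a failing check '''
    assert run_check("false") != 0


if __name__ == '__main__':
    test_check_exits_ok()
    test_check_exits_critical()
    print("All functional tests passed")
```

Testing the plugin as a black box this way sidesteps the need for `unit` tests when the plugin is a standalone executable rather than a Python module.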

### Documentation is required

Documentation of new functionality is important to increase the adoption of Automatron. As such, you may be asked to provide documentation for functionality created by your Pull Request. This is especially true for newly submitted plugins, as a plugin must be documented in order for users to adopt it.

Documentation is just as important as new functionality; as such, documentation-based pull requests are encouraged. For ideas on current gaps, please reference our [documentation board](https://github.com/madflojo/automatron/projects/1).

## Developer environment

To ease the development and testing experience of Automatron, a `docker-compose` environment has been created and is included within the repository.

To launch a local instance of Automatron, simply execute the following `docker-compose` command.

```console
$ sudo docker-compose up --build automatron
```

If you wish to execute tests, you can do so by running the following `docker-compose` command.

```console
$ sudo docker-compose up --build test
```

To test documentation updates, you can launch an `mkdocs` container as well.

```console
$ sudo docker-compose up --build mkdocs
```

To wipe and reset the `docker-compose` environment, simply run the following.

```console
$ sudo docker-compose kill automatron redis
$ sudo docker-compose rm automatron redis tests mkdocs
```
5 changes: 4 additions & 1 deletion Dockerfile
@@ -5,7 +5,10 @@ RUN apt-get update --fix-missing && \
     python-pip \
     python-dev \
     nmap \
-    curl
+    curl \
+    libffi-dev \
+    build-essential \
+    libssl-dev
 ADD requirements.txt /
 RUN pip install --upgrade setuptools
 RUN pip install -r /requirements.txt
24 changes: 17 additions & 7 deletions config/config.yml.example
@@ -27,24 +27,34 @@ discovery: # Discovery Configurations
   upload_path: /tmp/
   vetting_interval: 30
   plugins:
-    ## Start web service for HTTP GET or POST requests
-    webping:
+    webping: # Web Service for HTTP GET or POST requests
       ip: 0.0.0.0
       port: 20000
-    ## Use NMAP to find new hosts
-    # nmap:
+    # nmap: # NMAP Scanning for new hosts
     #   target: 10.0.0.1/8
     #   flags: -sP
     #   interval: 40
-    ## Query DigitalOcean
-    # digitalocean:
+    # digitalocean: # Query DO's API
     #   url: https://api.digitalocean.com/v2
     #   api_key: example
     #   interval: 60
+    # aws: # Query AWS' API
+    #   aws_access_key_id: example
+    #   aws_secret_access_key: example
+    #   interval: 60
+    #   filter:
+    #     - PublicIpAddress
+    #     - PrivateIpAddress
+    # linode:
+    #   url: https://api.linode.com
+    #   api_key: example
+    #   interval: 60

 datastore: # Datastore Configurations
   ## Default Datastore Engine
   engine: redis
   ## Datastore Specific configuration
   plugins:
     ## Redis
     redis:
8 changes: 8 additions & 0 deletions docker-compose.yml
@@ -9,6 +9,14 @@ services:
   test:
     build: .
     command: python /tests.py
+  mkdocs:
+    image: thinkcube/mkdocs
+    volumes:
+      - ./:/automatron
+    ports:
+      - 8000:8000
+    working_dir: /automatron
+    command: mkdocs serve -a 0.0.0.0:8000
   coverage:
     build: .
     command: coverage run tests.py && coverage report
2 changes: 1 addition & 1 deletion docs/index.md
@@ -90,7 +90,7 @@ actions:

 ## Next Steps

-* Follow our quick start guide: [Automatron in 10 minutes](Automatron-in-10-minutes)
+* Follow our quick start guide: [Automatron in 10 minutes](automatron-in-10-minutes)
 * Check out [example Runbooks](https://github.com/madflojo/automatron/tree/master/config/runbooks/examples) for automating common tasks
 * Read our [Runbook Reference](Runbooks) documentation to better understand the anatomy of a Runbook
 * Deploy a [Docker container](https://hub.docker.com/r/madflojo/automatron/) instance of Automatron
24 changes: 24 additions & 0 deletions docs/plugins/discovery/aws.md
@@ -0,0 +1,24 @@
The `aws` Discovery plugin is used to discover new instances on Amazon Web Services. This plugin will periodically check AWS and add all identified instances to the "potential targets" queue.

## Configuration

This plugin does require some configuration in Automatron's master configuration file `config.yml`.

```yaml
discovery:
plugins:
aws:
aws_access_key_id: example
aws_secret_access_key: example
interval: 60
filter:
- PublicIpAddress
- PrivateIpAddress
```
The `aws` plugin requires four configuration items.

* `aws_access_key_id` - This is the Access Key ID used to authenticate with AWS
* `aws_secret_access_key` - This is the secret key used to authenticate with AWS
* `interval` - This is the frequency to query AWS' API
* `filter` - This is used to define whether Public or Private IP addresses are used for target identification
4 changes: 3 additions & 1 deletion docs/plugins/discovery/digitalocean.md
@@ -8,11 +8,13 @@ This plugin does require some configuration in Automatron's master configuration
 discovery:
   plugins:
     digitalocean:
+      url: http://example.com
       api_key: example
       interval: 60
 ```
 The `digitalocean` plugin requires three configuration items.

-* `api_key` - This is the Digital Ocean API key.
+* `url` - This is the URL of Digital Ocean's API
+* `api_key` - This is the Digital Ocean API key
 * `interval` - This is the frequency to query Digital Ocean's API
20 changes: 20 additions & 0 deletions docs/plugins/discovery/linode.md
@@ -0,0 +1,20 @@
The `linode` Discovery plugin is used to discover new Linode servers. This plugin will periodically perform an HTTP GET request against Linode's API. All servers identified are then added to the "potential targets" queue.

## Configuration

This plugin does require some configuration in Automatron's master configuration file `config.yml`.

```yaml
discovery:
plugins:
linode:
url: http://example.com
api_key: example
interval: 60
```
The `linode` plugin requires three configuration items.

* `url` - This is the URL to Linode's API
* `api_key` - This is the Linode API key
* `interval` - This is the frequency to query Linode's API
18 changes: 18 additions & 0 deletions docs/plugins/discovery/roster.md
@@ -0,0 +1,18 @@
The `roster` Discovery plugin is used to discover new hosts via the Automatron base configuration file. This plugin allows users to simply specify hosts within the main configuration file `config/config.yml`.

## Configuration

This plugin requires configuration in Automatron's master configuration file `config.yml`.

```yaml
discovery:
plugins:
roster:
hosts:
- 10.0.0.1
- 10.0.0.3
```
The `roster` plugin requires one configuration item.

* `hosts` - A list of target hosts.
7 changes: 5 additions & 2 deletions mkdocs.yml
@@ -17,9 +17,12 @@ pages:
   - Plugins:
     - Index: plugins.md
     - Discovery:
-      - nmap: plugins/discovery/nmap.md
-      - Web Ping: plugins/discovery/webping.md
+      - AWS: plugins/discovery/aws.md
       - Digital Ocean: plugins/discovery/digitalocean.md
+      - Linode: plugins/discovery/linode.md
+      - Network Map (nmap): plugins/discovery/nmap.md
+      - Roster file: plugins/discovery/roster.md
+      - Web Ping: plugins/discovery/webping.md
     - Checks:
       - Network:
         - Ping: plugins/checks/network/ping.md
61 changes: 61 additions & 0 deletions plugins/discovery/aws/__init__.py
@@ -0,0 +1,61 @@
''' AWS discovery plugin '''

import time
from core.discover import BaseDiscover
import core.logs
import boto3


class Discover(BaseDiscover):
    ''' Main Discover Class '''

    def start(self):
        ''' Start Discovery '''
        logs = core.logs.Logger(config=self.config, proc_name="discovery.aws")
        logger = logs.getLogger()
        logger = logs.clean_handlers(logger)
        logger.info("Getting hosts from AWS")

        while True:
            # Setup IP List
            ip_addrs = []

            try:
                # Connect to AWS
                session = boto3.session.Session(
                    aws_access_key_id=self.config['discovery']['plugins']['aws']['aws_access_key_id'],
                    aws_secret_access_key=self.config['discovery']['plugins']['aws']['aws_secret_access_key'])
                # Get Regions then connect to each and list instances
                for region in session.get_available_regions('ec2'):
                    ec2 = session.client("ec2", region)
                    data = ec2.describe_instances()
                    for reservation in data['Reservations']:
                        for instance in reservation['Instances']:
                            # Check if filter should be public or private IP's
                            if 'filter' in self.config['discovery']['plugins']['aws']:
                                ip_types = self.config['discovery']['plugins']['aws']['filter']
                            else:  # Default to both
                                ip_types = ['PrivateIpAddress', 'PublicIpAddress']
                            # Get IP's and append to list
                            for ip_type in ip_types:
                                if ip_type in instance:
                                    ip_addrs.append(instance[ip_type])
            except Exception as e:
                logger.debug("Failed to query AWS: {0}".format(e))

            # Process found IP's
            for ip in ip_addrs:
                if self.dbc.new_discovery(ip=ip):
                    logger.debug("Added host {0} to discovery queue".format(ip))
                else:
                    logger.debug("Failed to add host {0} to discovery queue".format(ip))

            logger.debug("Found {0} hosts".format(len(ip_addrs)))
            if "unit_testing" in self.config.keys():
                # Break out of loop for unit testing
                break
            else:
                time.sleep(self.config['discovery']['plugins']['aws']['interval'])
        # Return true for unit testing
        return True
8 changes: 4 additions & 4 deletions plugins/discovery/digitalocean/__init__.py
@@ -14,10 +14,10 @@ def start(self):
         logs = core.logs.Logger(config=self.config, proc_name="discovery.digitalocean")
         logger = logs.getLogger()
         logger = logs.clean_handlers(logger)
-        logger.info("Getting hosts from DigitalOcean")
+        logger.debug("Getting hosts from DigitalOcean")

         # Define DO information for API Request
-        url = "https://api.digitalocean.com/v2/droplets"
+        url = "{0}/droplets".format(self.config['discovery']['plugins']['digitalocean']['url'])
         headers = {
             'Content-Type': 'application/json',
             'Authorization': 'Bearer {0}'.format(

@@ -49,7 +49,7 @@ def start(self):
                 for ip_type in ['v4', 'v6']:
                     for interface in droplet['networks'][ip_type]:
                         if interface['type'] == "public":
-                            logger.info("Found host: {0}".format(interface['ip_address']))
+                            logger.debug("Found host: {0}".format(interface['ip_address']))
                             ip_addrs.append(interface['ip_address'])
             else:
                 logger.warn("Unable to query DigitalOcean API: HTTP Response {0}".format(r.status_code))

@@ -60,7 +60,7 @@ def start(self):
                 else:
                     logger.debug("Failed to add host {0} to discovery queue".format(ip))

-            logger.debug("Found {0} hosts".format(len(ip_addrs)))
+            logger.info("Found {0} hosts".format(len(ip_addrs)))
             if "unit_testing" in self.config.keys():
                 # Break out of loop for unit testing
                 break