Update AWX docs
beeankha committed May 9, 2019
1 parent c7fe840 commit 3719666
Showing 4 changed files with 199 additions and 220 deletions.
16 changes: 8 additions & 8 deletions docs/clustering.md
@@ -52,23 +52,23 @@ The current standalone instance configuration doesn't change for a 3.1+ deploy.
hostA
hostB
hostC
[instance_group_east]
hostB
hostC
[instance_group_west]
hostC
hostD
```

The `database` group remains for specifying an external Postgres database. If the database host is provisioned separately, this group should be empty:
```
[tower]
hostA
hostB
hostC
[database]
hostDB
```
@@ -86,7 +86,7 @@ The current standalone instance configuration doesn't change for a 3.1+ deploy.
* There are various new fields for RabbitMQ (see the combined example after this list):
- `rabbitmq_port=5672` - RabbitMQ is installed on each instance and is not optional; it is also not possible to externalize it. It is
  possible to configure which port it listens on, and this setting controls that.
- `rabbitmq_vhost=tower` - Tower configures a rabbitmq virtualhost to isolate itself. This controls that setting.
- `rabbitmq_username=tower` and `rabbitmq_password=tower` - Each instance's RabbitMQ, and the Tower service on each instance, will be
  configured with these values. This is similar to our other uses of usernames/passwords.
- `rabbitmq_cookie=<somevalue>` - This value is unused in a standalone deployment but is critical for clustered deployments.
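
  Taken together, these settings end up in the install inventory file. A minimal sketch (values are illustrative, not prescribed defaults) combining them with the groups shown earlier:

  ```
  [tower]
  hostA
  hostB
  hostC

  [all:vars]
  rabbitmq_port=5672
  rabbitmq_vhost=tower
  rabbitmq_username=tower
  rabbitmq_password=tower
  rabbitmq_cookie=some_shared_secret
  ```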
@@ -162,7 +162,7 @@ When a job is scheduled to run on an "isolated" instance:
- a static inventory file
- pexpect passwords
- environment variables
- the `ansible`/`ansible-playbook` command invocation, i.e.,
`bwrap ... ansible-playbook -i /path/to/inventory /path/to/playbook.yml -e ...`

* Once the metadata has been rsynced to the isolated host, the "controller
@@ -320,7 +320,7 @@ Ideally a regular user of Tower should not notice any semantic difference to the
pointing out the differences in how the system behaves.

When a job is submitted from the API interface it gets pushed into the Celery queue on RabbitMQ. A single RabbitMQ instance is the responsible master for
individual queues, but each Tower instance will connect to and receive jobs from that queue using a fair scheduling algorithm. Any instance on the cluster is
just as likely to receive the work and execute the task. If an instance fails while executing jobs, then the work is marked as permanently failed.

If a cluster is divided into separate Instance Groups then the behavior is similar to the cluster as a whole. If two instances are assigned to a group then
@@ -430,7 +430,7 @@ When verifying acceptance we should ensure the following statements are true
a) instances are shared between groups
b) instances are isolated to particular groups
Organizations, Inventories, and Job Templates should be variously assigned to one or many groups and jobs should execute
in those groups in preferential order as resources are available.

## Performance Testing

103 changes: 22 additions & 81 deletions docs/inventory_plugins.md
@@ -1,120 +1,61 @@
# Transition to Ansible Inventory Plugins
Inventory updates change from using scripts, which are vendored as executable Python scripts, to using dynamically generated YAML files that conform to the specifications of the `auto` inventory plugin and are then parsed by their respective inventory plugin.
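
As a rough illustration (the exact file contents AWX generates vary by source and version; the options below are from the `openstack` plugin), a generated file is simply a YAML document whose `plugin` key tells the `auto` plugin which inventory plugin should parse it:

```
# e.g. handed to ansible-inventory as openstack.yml
plugin: openstack
expand_hostvars: true
fail_on_errors: false
```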

The major organizational change is that the inventory plugins are part of the Ansible core distribution, whereas the same logic used to be a part of AWX source.

## Prior Background for Transition

AWX used to maintain logic that parsed `.ini` inventory file contents, in addition to interpreting the JSON output of scripts, re-calling with the `--host` option in cases where the `_meta.hostvars` key was not provided.
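
For reference, a script that does provide `_meta.hostvars` returns everything in a single `--list` call, so no per-host `--host` invocations are needed; a minimal sketch of that output shape:

```
{
    "webservers": {
        "hosts": ["host1.example.com"],
        "vars": {"http_port": 80}
    },
    "_meta": {
        "hostvars": {
            "host1.example.com": {"ansible_host": "10.0.0.5"}
        }
    }
}
```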

### Switch to Ansible Inventory

The CLI entry point `ansible-inventory` was introduced in Ansible 2.4. In Tower 3.2, inventory imports began running this command as an intermediary between the inventory and the import's logic for saving content to the database. Using `ansible-inventory` eliminates the need to maintain source-specific logic, relying on Ansible's code instead. This also allows us to count on a consistent data structure output by `ansible-inventory`. There are many valid structures that a script can provide, but the output from `ansible-inventory` will always be the same, so the AWX logic to parse the content is simplified. This is why even scripts must be run through the `ansible-inventory` CLI.
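
In other words, regardless of whether the source is a script or an inventory file, the import step shells out to something along the lines of the following (shown with the common flags, not necessarily AWX's exact invocation):

```
ansible-inventory -i /path/to/source --list
```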

Along with this switchover, a backported version of `ansible-inventory` was provided that supported Ansible versions 2.2 and 2.3.

### Removal of Backport

In AWX 3.0.0 (and Tower 3.5), the backport of `ansible-inventory` was removed, and support for using custom virtual environments was added. This set the minimum version of Ansible necessary to run _any_ inventory update to 2.4.

## Inventory Plugin Versioning

Beginning in Ansible 2.5, inventory sources in Ansible started migrating away from "contrib" scripts (meaning they lived in the contrib folder) to the inventory plugin model.

In AWX 4.0.0 (and Tower 3.5), inventory source types start to switch over to plugins, provided that sufficient compatibility is in place for the version of Ansible present in the local virtualenv.

To see in which version the plugin transition happens for a given source, see `awx/main/models/inventory.py` and look for the source name as a subclass of `PluginFileInjector`; there should be an `initial_version`, which is the first version that testing deemed to have sufficient parity in the content its inventory plugin returns. For example, `openstack` will begin using the inventory plugin in Ansible version 2.8. If you run an openstack inventory update with Ansible 2.7.x or lower, it will use the script.

### Sunsetting the scripts

Eventually, it is intended that all source types will have moved to plugins. For any given source, after the `initial_version` for plugin use is higher than the lowest supported Ansible version, the script can be removed and the logic for script credential injection will also be removed.

For example, after AWX no longer supports Ansible 2.7, the script `awx/plugins/openstack_inventory.py` will be removed.

## Changes to Expect in Imports

An effort was made to keep imports working in the exact same way after the switchover. However, the inventory plugins are a fundamental rewrite, and many elements of the default behavior have changed, some of them in backward-incompatible ways. Because of this, what you get via an inventory import will be a superset of what you get from the script but will not match the default behavior you would get from the inventory plugin on the CLI.

Because inventory plugins add additional variables, if you downgrade Ansible, you should turn on `overwrite` and `overwrite_vars` to get rid of stale variables (and potentially groups) no longer returned by the import.
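
For example (host, credentials, and ID below are placeholders), these flags can be enabled on an existing inventory source through the API:

```
curl -u admin:password -X PATCH \
  -H "Content-Type: application/json" \
  -d '{"overwrite": true, "overwrite_vars": true}' \
  https://awx.example.com/api/v2/inventory_sources/42/
```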

### Changes for Compatibility

Programmatically generated examples of the inventory file syntax used in updates (with dummy data) can be found in `awx/main/tests/data/inventory/scripts`; these demonstrate the inventory file syntax used to restore old behavior from the inventory scripts.

#### hostvar keys and values

More hostvars will appear if the inventory plugins are used. To maintain backward compatibility, the old names are added back where they have the same meaning as a variable returned by the plugin. New names are not removed.

A small number of hostvars will be lost because of general deprecation needs.

#### Host names

In many cases, the host names will change. In all cases, accurate host tracking will still be maintained via the host `instance_id`. (after: https://github.com/ansible/awx/pull/3362)

## How do I write my own Inventory File?

If you do not want any of this compatibility-related functionality, then you can add an SCM inventory source that points to your own file. You can also apply a credential of a `managed_by_tower` type to that inventory source that matches the credential you are using, as long as that is not `gce` or `openstack`.

All other sources provide _secrets_ via environment variables, so this can be re-used without any problems for SCM-based inventory, and your inventory file can be used securely to specify non-sensitive configuration details such as the `keyed_groups` to provide or the hostvars to construct.
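
A sketch of what such a user-maintained file might look like, using the `aws_ec2` plugin as an example (the plugin name, options, and path are illustrative, not something AWX generates for you):

```
# inventories/aws_ec2.yml in your SCM project
plugin: aws_ec2
regions:
  - us-east-1
keyed_groups:
  - prefix: tag
    key: tags
compose:
  ansible_host: public_ip_address
```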

## Notes on Technical Implementation of Injectors

@@ -132,21 +73,21 @@ inventory updates. The following fields from the
- `instance_filters`
- `group_by`

The way these data are applied to the environment (including files and
environment vars) is highly dependent on the specific source.

With plugins, the inventory file may reference files that contain secrets
from the credential. With scripts, typically an environment variable
will reference a filename that contains a ConfigParser-format file with
parameters for the update, possibly including fields from the credential.
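
Schematically (the variable name, filename, and options below are hypothetical, since each script defines its own), that arrangement looks like:

```
# environment: FOO_INI_PATH=/tmp/awx_123_abc/foo.ini
[foo]
some_update_parameter = value
username = <value taken from the credential>
```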

**Caution:** Please do not put secrets from the credential into the
inventory file for the plugin. Right now there appears to be no need to do
this, and by using environment variables to specify secrets, this keeps
open the possibility of showing the inventory file contents to the user
as a later enhancement.

Logic for setting up inventory updates using both plugins and scripts lives in the
inventory injector class, specific to the source type.

Any credentials which are not source-specific will use the generic