This repository has been archived by the owner on Oct 16, 2023. It is now read-only.

Merge pull request #19 from cloudnautique/fix_readmes_pin_nginx_version
Fix readmes pin nginx version
cloudnautique authored Aug 9, 2022
2 parents 48acbdb + cebd532 commit 072e1d7
Showing 5 changed files with 51 additions and 47 deletions.
37 changes: 18 additions & 19 deletions mariadb/README.md
@@ -12,21 +12,21 @@ This Acorn provides a multi-node galera cluster.

`acorn run [MARIADB_GALERA_IMAGE]`

This will create a three node cluster with a default database acorn.
This will create a three-node cluster with a default database named `acorn`.

If needed, you can get the username and root password from the generated secrets.
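
A hedged example (the secret name here is hypothetical; generated secret names carry a unique suffix, and `acorn secret expose` is used the same way in the Redis README below):

```shell
# Hypothetical secret name; prints the generated credentials.
acorn secret expose root-credentials-<uid>
```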

## Production considerations

By default this will start a single instance of MariaDB with 1 replica on a 10GB volume from the default storage class. In a production setting you will want to customize this, along with the size and storage class of the backup volumes.
By default, this will start a single instance of MariaDB with 1 replica on a 10GB volume from the default storage class. In a production setting, you will want to customize this, along with the size and storage class of the backup volumes.
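
A sketch of what that customization might look like; the `-v` volume flag syntax and the volume name are assumptions to verify against `acorn run --help`:

```shell
# Hypothetical: request a 100G data volume instead of the 10GB default.
acorn run -v mariadb-data-0,size=100G [MARIADB_GALERA_IMAGE]
```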

#### TODOs

Document how to mount volumes and custom types.

* Add a way to reset root password
* Add a way to reset the root password
* Add a way to pass in custom backup scripts
* Add clean up of older backups.. also limit the number kept.
* Add clean-up of older backups; also limit the number kept.

## Available options

@@ -51,19 +51,19 @@ Ports: mariadb-0:3306/tcp

### Accessing mariadb

By default the Acorn creates a single replica which can be accessed via the `mariadb-0` service.
By default, the Acorn creates a single replica that can be accessed via the `mariadb-0` service.

If you are going to run in an active-active state with multiple r/w replicas, you will want to expose the `mariadb` service and access it through a load balancer.

### Adding replicas

By default the MariaDB chart starts a single r/w replica. In production settings you would typically want more then one replica running. Users have two options with this chart. One method is to add additional passive followers to the primary server. When one of these passive replicas fail or experience an outage nothing happens to the running primary server. If the primary r/w replica fails then service will be down until it is restored.
By default, the MariaDB chart starts a single r/w replica. In production settings, you would typically want more than one replica running. Users have two options with this chart. One method is to add additional passive followers to the primary server. When one of these passive replicas fails or experiences an outage, nothing happens to the running primary server. If the primary r/w replica fails, service will be down until it is restored.

Alternatively, the Acorn can configure the replicas to run in an active-active state with multiple replicas able to perform r/w operations.

#### Active-Passive replication

If you would like to run active-passive then you will need to create a custom yaml file like so:
If you would like to run active-passive, you will need to create a custom YAML file like so:

config.yaml

@@ -79,13 +79,12 @@ replicas:
Then update your deployment:
`acorn update [APP-NAME] --custom-mariadb-config @config.yaml --replicas 2`

This will startup a second replica that can be used for backups, and read-only access.
This will start up a second replica that can be used for backups and read-only access.

#### Active-Active replication

Galera clusters have a quorem algorithm to prevent split brain scenarios. Ideally clusters run with an odd number of replicas.

By default there are three replicas running and there shouldn't be less. This allows for 1 replica to fail and still serve data. Additional replicas can be added by updating the application to the total number of replicas desired in the end state.
Galera clusters have a quorum algorithm to prevent split-brain scenarios, so clusters should run with an odd number of replicas.
Three replicas run by default and there shouldn't be fewer; this allows one replica to fail while the cluster still serves data. Additional replicas can be added by updating the application to the total number of replicas desired in the end state.

`acorn update [APP-NAME] --replicas 5`

@@ -155,7 +154,7 @@ Here is an example of how you could do daily backups:
If you would like to add backups to an already running cluster, you can do:
`acorn update [APP-NAME] --backup-schedule "0 0 * * *"`
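
The schedule is a standard five-field cron expression: `"0 0 * * *"` fires at midnight every day, while, for example, `"0 3 * * 0"` would take a weekly backup on Sundays at 03:00.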

Backups are run from pod that will mount both the data volume from the `mariadb-0` replica and a separate backup volume. The job uses `mariabackup` to perform the backup of the database cluster.
Backups are run from a pod that will mount both the data volume from the `mariadb-0` replica and a separate `mariadb-backup-vol` volume. The job uses the `mariabackup` user to perform the backup of the database cluster.

#### Listing available backups

@@ -205,14 +204,14 @@
```yaml
config_block:
  key: "value"
```
So to pass or update a setting in the `mysqld` configuration block create a config.yaml with the content:
So, to pass or update a setting in the `mysqld` configuration block, create a config.yaml with the content:

```yaml
mysqld:
  max_connections: 1024
```

You can set per-replica configurations if needed by placing the configurations under the `replica` top level key. Each node, specified in `mariadb-\(i)` where `i` is the replica number, can have custom configuration per config block.
You can set per-replica configurations if needed by placing the configurations under the `replica` top-level key. Each node, specified in `mariadb-\(i)` where `i` is the replica number, can have a custom configuration per config block.

```yaml
mysqld:
  ...
```
@@ -225,9 +224,9 @@ replicas:

Then run/update the app like so:

`acorn run [MARIADB_GALERA_IMAGE] --custom-mariadb-confg @config.yaml`
`acorn run [MARIADB_GALERA_IMAGE] --custom-mariadb-config @config.yaml`

This will get merged with the configuration defined in the Acorn. the defaul config block can be found [here](https://github.com/acorn-io/acorn-library/blob/main/mariadb-galera/Acornfile#L207).
This will get merged with the configuration defined in the Acorn. The default config block can be found [here](https://github.com/acorn-io/acorn-library/blob/main/mariadb-galera/Acornfile#L207).

Some of the configuration values cannot be changed.

@@ -247,7 +246,7 @@ The clusters will come up as expected after this.

### Active-Active recovery from shutdown/quorum loss

When a cluster is completely shutdown, or has lost a majority of the nodes you need to follow a series of manual steps to recover.
When a cluster is completely shut down or has lost a majority of the nodes, you need to follow a series of manual steps to recover.

1.) Update the deployment with the `acorn update [APP-NAME] --recovery` flag.

@@ -260,8 +259,8 @@
```
mariadb-1-7d977b8fb8-f8lwx/mariadb-1: 2022-06-17 23:57:17 0 [Note] WSREP: Recovered position: 8d5f1139-ee97-11ec-b8ef-7359029eaa77:3
mariadb-2-7f49689648-6h7kf/mariadb-2: 2022-06-17 23:57:18 0 [Note] WSREP: Recovered position: 8d5f1139-ee97-11ec-b8ef-7359029eaa77:3
```

3.) Find the node with the highest position value. In this case we can use `mariadb-1` or `mariadb-2` since they are both at 3.
3.) Find the node with the highest position value. In this case, we can use `mariadb-1` or `mariadb-2` since they are both at 3.

4.) Update the app so that `acorn update [APP-NAME] --recovery --force-recover --boot-strap-index 2`. We are using `2` because it is the most advanced. If the containers have come up and you do not see "failed to update grastate.data" then the app is ready to update.
4.) Update the app with `acorn update [APP-NAME] --recovery --force-recover --boot-strap-index 2`. We are using `2` because it is the most advanced. If the containers have come up and you do not see `failed to update grastate.data`, then the app is ready to update.

5.) `acorn update [APP-NAME] --recovery=false --force-recover=false`. This will cause the containers to restart, and the new boot-strap-index node will start the cluster.
4 changes: 2 additions & 2 deletions nginx/Acornfile
@@ -10,7 +10,7 @@ args: {
}

containers: nginx: {
image: "nginx"
image: "nginx:1.23-alpine"
scale: args.replicas
ports: expose: "80:80/http"

@@ -25,7 +25,7 @@ containers: nginx: {
if args.gitRepo != "" {
sidecars: {
git: {
image: "alpine/git:v2.34.2"
image: "alpine/git:v2.36.2"
init: true
dirs: {
"/var/www/html": "volume://site-data"
12 changes: 8 additions & 4 deletions nginx/README.md
@@ -15,7 +15,7 @@ This will clone content from this site into the HTML root directory and serve it

To expose this service via ingress:

`acorn run -d my-app.example.com:nginx [IMAGE] --git-repo ...`
`acorn run -p my-app.example.com:nginx [IMAGE] --git-repo ...`

### Available options

@@ -35,15 +35,15 @@ Ports: nginx:80/http
### Configure custom server blocks

Create a custom secret with the keys equal to the name of the file to place in `/etc/nginx/conf.d/`.
The content should be a base64 encoded nginx server block.
The content should be a base64 encoded Nginx server block.
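
One way to create such a secret; the names are illustrative, and base64-encoding the file contents yourself is an assumption to verify:

```shell
# Encode a server block and store it under a key matching the
# filename it should get in /etc/nginx/conf.d/.
acorn secret create my-server-blocks --data default.conf="$(base64 < default.conf)"
```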

When running the acorn:

`acorn run -s my-server-blocks:nginx-server-blocks [IMAGE] ...`

### Configure the base configuration

Create a custom secret with that has a data key `template` with the full content of the nginx.conf file to be used.
Create a custom secret that has a data key `template` with the full content of the nginx.conf file to be used.
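
A hedged sketch of creating that secret (names are illustrative; whether the value must also be base64 encoded, as with the server blocks above, is an assumption to verify):

```shell
# Store the full nginx.conf under the required `template` key.
acorn secret create my-nginx-conf --data template="$(base64 < nginx.conf)"
```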

When running the acorn pass in the secret name:

@@ -53,6 +53,10 @@ When running the acorn pass in the secret name:

Create a secret with the ssh keys to use. The keys must already be trusted by the remote repository. You can create the secret like:

`kubectl create secret -n acorn-redis generic my-ssh-keys --from-file=/Users/me/.ssh/id_rsa`
`acorn secret create my-ssh-keys --file=/Users/me/.ssh/id_rsa`

When you run the acorn, bind in the secret:

```shell
acorn run -s my-ssh-keys:git-clone-ssh-keys [IMAGE]
```
43 changes: 22 additions & 21 deletions redis/README.md
@@ -1,23 +1,25 @@
# Redis Acorn

---
This Acorn deploys Redis in a single leader with multiple followers or in Redis Cluster configuration.
This Acorn deploys Redis with a single leader and multiple followers.

NOTE: Redis Cluster and Sentinel support is still WIP.

## Quick start

To quickly deploy a replicated Redis setup simply run the acorn:
To quickly deploy a single Redis instance, simply run the acorn:

`acorn run <REDIS_IMG>`

This will create a single Redis server and a single read only replica.
This will create a single Redis server.

Auth will be set up, and you can obtain the password under the token via:
`acorn secret expose redis-auth-<uid>`

If you set the value in the env var REDISCLI_AUTH, `redis-cli` will automatically pick it up.
`export REDISCLI_AUTH=<value>`

You can connect to the Redis instance via the `redis-cli -h <lb-ip>` if the env var above is set you will automatically be logged in, otherwise you need to `AUTH <secret value>`
You can connect to the Redis instance via `redis-cli -h <lb-ip>`. If the env var above is set, you will automatically be logged in; otherwise, you need to `AUTH <secret value>`.
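
Putting those steps together (the host and secret value are placeholders):

```shell
# Value comes from `acorn secret expose redis-auth-<uid>` above.
export REDISCLI_AUTH=<secret value>
# With REDISCLI_AUTH set, redis-cli authenticates automatically.
redis-cli -h <lb-ip> ping   # expect: PONG
```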

### Available options

@@ -27,22 +29,21 @@
```
Secrets: redis-auth, redis-leader-config, redis-user-data, redis-follower-conf
Container: redis-0, redis-follower-0
Ports: redis-0:6379/tcp, redis-follower-0:6379/tcp

--redis-follower-config string User provided configuration for leader and cluster servers
--redis-leader-config string User provided configuration for leader and cluster servers
--redis-leader-count int Redis leader replica count. Setting this value 3 and above will configure Redis cluster.
--redis-password string Sets the requirepass value otherwise one is randomly generated
--redis-replica-count int Redis replicas per leader. To run in stand alone set to 0
--follower-config string User provided configuration for leader and cluster servers
--leader-config string User provided configuration for leader and cluster servers
--leader int Redis leader replica count. Setting this value 3 and above will configure Redis cluster.
--replica int Redis replicas per leader. To run in stand alone set to 0
```

## Advanced Usage

### Stand-alone/Dev mode

You can run in stand alone mode with only a single read-write instance by setting the `--redis-replica-count` to `0`.
You can run in stand-alone mode with only a single read-write instance by setting the `--replica-count` to `0`.
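
For example, using the flag name as it appears in the examples below:

`acorn run <REDIS_IMG> --replica-count 0`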

### Custom configuration

Custom configuration can be provided for leaders and follower node types. The passed in configuration will be merged with the Acorn values. The configuration data can be passed in via `yaml` or `cue` file. It should be in the form of `key: value` pairs.
Custom configuration can be provided for leader and follower node types. The passed-in configuration will be merged with the Acorn values. The configuration data can be passed in via a `yaml` or `cue` file. It should be in the form of `key: value` pairs.

For example, redis-test.yaml:

@@ -53,9 +53,9 @@
```yaml
...
save: "1800 1 150 50 60 10000"
```
Can be passed like:
`acorn run <IMAGE> --redis-leader-config @redis-test.yaml --redis-replica-count 0`
`acorn run <IMAGE> --leader-config @redis-test.yaml --replica-count 0`

This will merge with the predefined redis config. There are some values that can not be overriden:
This will merge with the predefined Redis config. Some values cannot be overridden:

#### All Server Roles

@@ -83,25 +84,25 @@ appendonly

### Adding additional replicas

When running in leader/follower mode you can add additional read-only replicas if needed. Update the app with `--redis-replica-count <total>`
When running in leader/follower mode, you can add additional read-only replicas if needed. Update the app with `--replica-count <total>`.

### Running in cluster mode

To run in cluster mode, you will need to determine how many primary and how many replicas you would like to run. You will need a minimum of 3 leader nodes to setup the cluster. Then you can specify how many replicas to back up each leader. A simple cluster with redundancy can be deployed as follows:
To run in cluster mode, you will need to determine how many primaries and how many replicas you would like to run. You will need a minimum of 3 leader nodes to set up the cluster. Then you can specify how many replicas back up each leader. A simple cluster with redundancy can be deployed as follows:

`acorn run <REDIS_IMAGE> --redis-leader-count 3 --redis-replica-count 1`
`acorn run <REDIS_IMAGE> --leader-count 3 --replica-count 1`

This will create a cluster with three nodes, each backed up by a single replica. This will deploy 6 containers in total. Every time you scale up a leader, you will also scale up a replica.

#### Adding additional nodes

To add additional nodes, simply change the scale of the `--redis-leader-count` to a higher number.
`acorn update --image [REDIS] [APP_NAME] --redis-leader-count 4 --redis-replica-count 1`
To add additional nodes, simply set `--leader-count` to a higher number.
`acorn update --image [REDIS] [APP_NAME] --leader-count 4 --replica-count 1`

This will add an additional leader and replica (assuming there were 3 leaders previously). These new pods will be added to the cluster one as a leader and the other a replica of that new leader. The cluster will automatically be rebalanced once the new leader has been added.
This will add an additional leader and replica (assuming there were 3 leaders previously). These new pods will be added to the cluster, one as a leader and the other as a replica of that new leader. The cluster will automatically be rebalanced once the new leader has been added.

#### Removing nodes

Before removing nodes from the redis cluster you must first empty them. Nodes will be removed in descending order. Nodes are named redis-[LEADER]-[FOLLOWER] so the highest leader and all followers will be removed on the scale down operation. **Note** During normal redis cluster operations leaders and followers might switch roles. This process requires manual intervention to detach the replicas and empty any leaders. Once this is done, you can scale down the cluster.
Before removing nodes from the Redis cluster, you must first empty them. Nodes will be removed in descending order. Nodes are named redis-[LEADER]-[FOLLOWER], so the highest leader and all followers will be removed on the scale-down operation. **Note:** During normal Redis cluster operations, leaders and followers might switch roles. This process requires manual intervention to detach the replicas and empty any leaders. Once this is done, you can scale down the cluster.

Follow REDIS docs: <https://redis.io/docs/manual/scaling/#removing-a-node> to learn how to empty, reshard and remove nodes.
Follow the Redis docs: <https://redis.io/docs/manual/scaling/#removing-a-node> to learn how to empty, re-shard, and remove nodes.
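
A hedged sketch of those steps using the stock `redis-cli --cluster` helpers (addresses and node IDs are placeholders; see the linked guide for the full procedure):

```shell
# Interactively move all hash slots off the leader being removed.
redis-cli --cluster reshard <any-cluster-node>:6379

# Once the node is empty, remove it from the cluster by its node ID.
redis-cli --cluster del-node <any-cluster-node>:6379 <node-id>
```
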
2 changes: 1 addition & 1 deletion registry/README.md
@@ -98,7 +98,7 @@ s3:
This config blob is using data from the secret `user-secret-data`. This should be populated ahead of time:

`kubectl create secret generic my-data --type opaque --from-literal=s3accesskey=myaccesskey --from-literal=s3secretkey=mysecretkey`
`acorn secret create my-data --data s3accesskey=myaccesskey --data s3secretkey=mysecretkey`

To consume this as part of the deployment, run:
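
The exact command is collapsed in this view, but a sketch following the secret-binding pattern used in the Nginx README might look like:

`acorn run -s my-data:user-secret-data [REGISTRY_IMAGE]`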

