
Commit

Change to demo helm chart repo
bashofmann committed Sep 1, 2021
1 parent de304a3 commit bc94fd2
Showing 107 changed files with 6,324 additions and 1,587 deletions.
42 changes: 42 additions & 0 deletions .github/workflows/pr.yaml
@@ -0,0 +1,42 @@
name: Lint and Test Charts

on: pull_request

jobs:
  lint-test:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v2
        with:
          fetch-depth: 0

      - name: Set up Helm
        uses: azure/setup-helm@v1
        with:
          version: v3.5.4

      - uses: actions/setup-python@v2
        with:
          python-version: 3.7

      - name: Set up chart-testing
        uses: helm/[email protected]

      - name: Run chart-testing (list-changed)
        id: list-changed
        run: |
          changed=$(ct list-changed)
          if [[ -n "$changed" ]]; then
            echo "::set-output name=changed::true"
          fi

      - name: Run chart-testing (lint)
        run: ct lint

      #
      # - name: Create kind cluster
      #   uses: helm/[email protected]
      #   if: steps.list-changed.outputs.changed == 'true'
      #
      # - name: Run chart-testing (install)
      #   run: ct install
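The `list-changed` step above only flags a pull request for the (currently commented-out) install step when `ct list-changed` prints any changed charts. A runnable sketch of that gating pattern, with `ct` replaced by a hypothetical stub so it works without chart-testing installed, looks like this:

```shell
# Hypothetical stand-in for `ct list-changed`, which prints one
# changed chart directory per line (empty output = nothing changed).
list_changed() {
  printf 'charts/demo-app\n'
}

changed=$(list_changed)
if [[ -n "$changed" ]]; then
  # In the workflow this sets a step output that later steps read as
  # steps.list-changed.outputs.changed == 'true'
  echo "::set-output name=changed::true"
fi
```

The install step is skipped entirely for pull requests that touch no charts, which keeps CI fast for docs-only changes.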
37 changes: 37 additions & 0 deletions .github/workflows/release.yml
@@ -0,0 +1,37 @@
name: Release Charts

on:
  push:
    branches:
      - master

jobs:
  release:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v2
        with:
          fetch-depth: 0

      - name: Configure Git
        run: |
          git config user.name "$GITHUB_ACTOR"
          git config user.email "[email protected]"

      - name: Install Helm
        uses: azure/setup-helm@v1
        with:
          version: v3.5.4

      - name: Helm Repos
        run: |
          helm repo add grafana https://grafana.github.io/helm-charts
          helm repo add presslabs https://presslabs.github.io/charts
          helm repo add raphaelmonrouzeau https://raphaelmonrouzeau.github.io/charts/repository/
          helm repo add bitnami https://charts.bitnami.com/bitnami

      - name: Run chart-releaser
        uses: helm/[email protected]
        env:
          CR_TOKEN: "${{ secrets.GITHUB_TOKEN }}"
8 changes: 8 additions & 0 deletions Dockerfile.dapper
@@ -0,0 +1,8 @@
FROM quay.io/helmpack/chart-testing:latest

RUN apk add make

ENV DAPPER_SOURCE /repo
WORKDIR ${DAPPER_SOURCE}

ENTRYPOINT ["make"]
5 changes: 5 additions & 0 deletions Makefile
@@ -0,0 +1,5 @@
lint:
	ct lint

install:
	ct install
50 changes: 13 additions & 37 deletions README.md
@@ -1,46 +1,22 @@
-# Quickstart examples for Rancher
+# Rancher Rodeo Helm Chart Repository
 
-## Summary
+This repository contains Helm charts that are used during Rancher Rodeo webinars and for demonstration purposes.
 
-This repo contains scripts that will allow you to quickly deploy instances for use during a Rancher Rodeo.
+**The Helm charts in this repository are not production ready and are meant for demonstration purposes only!**
 
-The contents aren't intended for production but are here to get you up and running quickly during the rodeo session, either with DO, AWS, or Vagrant.
-
-## DO / AWS Quickstart
-
-The `do` folder and `aws` folder each contain Terraform scripts to stand up an instance for the Rancher Server and a configurable number of instances for the Kubernetes nodes. By default this number is set to one but can be set with `count_agent_all_nodes` in the `terraform.tfvars` file.
-
-### How to use
-
-- Clone this repository and go into the corresponding subfolder for your provider
-- Move the file `terraform.tfvars.example` to `terraform.tfvars` and edit (see inline explanation)
-- Run `terraform init`
-- Run `terraform apply`
-
-When provisioning has finished you will have instances that you can use to deploy Rancher Server and Kubernetes.
-
-### How to Remove
-
-To remove the VMs that have been deployed, run `terraform destroy --force`.
-
-**Please be aware that you will be responsible for the usage charges with Digital Ocean and Amazon Web Services.**
-
-## Vagrant Quickstart
-
-The `vagrant` folder contains the configuration to deploy a single VM for the Rancher Server and one or more VMs for the Kubernetes cluster. By default this number is set to one but can be changed by adjusting `count` under `node` in `config.yaml`.
-
-If you set `rodeo` to `false` in `config.yaml`, the installation will provision a complete Rancher Server and Kubernetes cluster all at once. Use this to redeploy a Vagrant cluster quickly.
-
-### How to Use
-
-The prerequisites for this are [vagrant](https://www.vagrantup.com) and [virtualbox](https://www.virtualbox.org), installed on the PC you intend to run it on, and 6GB of free memory.
+Helm CLI:
 
-- Clone this repository and go into the `vagrant` subfolder
-- Edit `config.yaml` to set any values for the installation
-- Run `vagrant up`
+```shell
+helm repo add rodeo https://rancher.github.io/rodeo
+```
 
-When provisioning is finished the Rancher Server will be available via SSH at `172.22.101.101` and the nodes will be available on sequential IPs starting at `172.22.101.111`. If you set `rodeo` to `false`, the Rancher Server UI will be available at https://172.22.101.101/.
+Rancher:
 
-### How to Remove
+* Go to the Apps Marketplace in Rancher
+* Add a new Chart Repository to the HTTP(S) URL `https://rancher.github.io/rodeo` without authentication
 
-To remove the VMs that have been deployed, run `vagrant destroy -f`. The configuration uses linked clones, so if you want to destroy the origin instance, open the VirtualBox Manager and remove the base `ubuntu-xenial` instance left behind.
+### How to Contribute
+
+Create a pull request to the master branch. Make sure to bump the chart version in `Chart.yaml`. Once the pull request is merged, a GitHub Actions workflow will automatically build and package a new release of the changed charts and publish them to the chart repository.
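The contribution rule above — bump the chart version on every change — can be illustrated with a minimal `Chart.yaml`; the chart name, description, and version numbers here are hypothetical:

```yaml
# charts/demo-app/Chart.yaml (hypothetical chart in this repository)
apiVersion: v2
name: demo-app
description: Demo chart used during Rancher Rodeo webinars
# chart-releaser only packages and publishes versions it has not seen
# before, so every merged change needs a new version here.
version: 0.2.0        # bumped from 0.1.0 for this change
appVersion: "1.0.0"
```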
107 changes: 0 additions & 107 deletions aws/files/userdata_agent

This file was deleted.

66 changes: 0 additions & 66 deletions aws/files/userdata_server

This file was deleted.

