Commit 22b970d

Fixed some doc issues and trying an automatic release of charts

Signed-off-by: Carlos Ravelo <[email protected]>
gandazgul committed Dec 7, 2020
1 parent 0239f08 commit 22b970d

Showing 5 changed files with 87 additions and 13 deletions.
74 changes: 74 additions & 0 deletions .github/workflows/release.yaml
@@ -0,0 +1,74 @@
name: Release Charts

on:
  push:
    branches:
      - master
    paths:
      - "charts/**"

jobs:
  pre-release:
    runs-on: ubuntu-latest
    timeout-minutes: 5
    steps:
      - name: Block concurrent releases
        uses: softprops/turnstyle@v1
        with:
          continue-after-seconds: 180
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}

  release:
    needs: pre-release
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v2
        with:
          fetch-depth: 0

      - name: Configure Git
        run: |
          git config user.name "$GITHUB_ACTOR"
          git config user.email "[email protected]"

      - name: Install Helm
        uses: azure/setup-helm@v1
        with:
          version: v3.4.0

      - name: Run chart-releaser
        uses: helm/[email protected]
        with:
          charts_repo_url: https://gandazgul.github.io/k8s-infrastructure/helmrepo/
        env:
          CR_TOKEN: "${{ secrets.GITHUB_TOKEN }}"

  # Update the generated timestamp in the index.yaml
  # needed until https://github.com/helm/chart-releaser/issues/90
  # or helm/chart-releaser-action supports this
  post-release:
    needs: release
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v2
        with:
          ref: "gh-pages"
          fetch-depth: 0

      - name: Configure Git
        run: |
          git config user.name "$GITHUB_ACTOR"
          git config user.email "[email protected]"

      - name: Commit and push timestamp updates
        run: |
          if [[ -f index.yaml ]]; then
            export generated_date=$(date --utc +%FT%T.%9NZ)
            sed -i -e "s/^generated:.*/generated: \"$generated_date\"/" index.yaml
            git add index.yaml
            git commit -sm "Update generated timestamp [ci-skip]" || exit 0
            git push
          fi
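The `post-release` step's timestamp rewrite can be tried locally before it runs in CI. A minimal sketch, where the `index.yaml` contents are a made-up stand-in for the file chart-releaser publishes (GNU `date` assumed for the `%9N` nanosecond format):

```shell
#!/usr/bin/env bash
set -euo pipefail

# Stand-in index.yaml with a stale `generated:` field (structure assumed,
# not copied from the real repo)
cat > index.yaml <<'EOF'
apiVersion: v1
entries: {}
generated: "2020-01-01T00:00:00.000000000Z"
EOF

# The same substitution the workflow runs: replace the whole
# `generated:` line with a fresh UTC nanosecond-precision timestamp
generated_date=$(date --utc +%FT%T.%9NZ)
sed -i -e "s/^generated:.*/generated: \"$generated_date\"/" index.yaml

grep '^generated:' index.yaml
```

The `|| exit 0` on `git commit` in the workflow serves a similar safety role: the job still succeeds when the index is unchanged and there is nothing to commit.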
3 changes: 0 additions & 3 deletions .idea/infrastructure.iml → .idea/k8s-infrastructure.iml


2 changes: 1 addition & 1 deletion .idea/modules.xml


2 changes: 1 addition & 1 deletion README.md
@@ -16,7 +16,7 @@ created that pods can use as volumes. There's a k8s cron job included to make di
## My Home Setup

A small business server running as a master node and worker. I plan to add at least one other
-node to learn to manage a "cluster" and to try and automate node onboarding. I've tested the manual node onboarding with VMs and it works well. Look at this script [https://github.com/gandazgul/k8s-infrastructure/blob/master/k8s-config/2-configK8SNode.sh]()
+node to learn to manage a "cluster" and to try to automate node on-boarding. I've tested the manual node on-boarding with VMs, and it works well. Look at this script [https://github.com/gandazgul/k8s-infrastructure/blob/master/k8s-config/2-configK8SNode.sh]()

## Helm repo

19 changes: 11 additions & 8 deletions index.md
@@ -3,7 +3,7 @@
This is a collection of scripts to deploy kubernetes on Fedora. Tested on Fedora 31.

It's also a collection of helm charts that I developed or customized (See [Repo](#helm-repo)), as well as [helmfiles](https://github.com/roboll/helmfile/)
-to deploy all of the supported applications.
+to deploy all the supported applications.

The storage is handled with PersistenceVolumes mapped to mount points on the host and pre-existing claims
created that pods can use as volumes. There's a k8s cron job included to make differential backups between the main mount point and the backup one.
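The hostPath-backed storage described above can be sketched as a PersistentVolume plus a matching pre-existing claim; every name, path, and size below is illustrative, not taken from the repo:

```yaml
# Illustrative only: a PersistentVolume mapped to a host mount point,
# and a pre-created claim that pods can reference as a volume.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: media-pv
spec:
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /mnt/main/media
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: media-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Gi
```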
@@ -16,7 +16,9 @@ created that pods can use as volumes. There's a k8s cron job included to make di
## My Home Setup

A small business server running as a master node and worker. I plan to add at least one other
-node to learn to manage a "cluster" and to try and automate node onboarding. I've tested the manual node onboarding with VMs and it works well. Look at this script [https://github.com/gandazgul/k8s-infrastructure/blob/master/k8s-config/2-configK8SNode.sh]()
+node to learn to manage a "cluster" and to try to automate node on-boarding. I've tested the
+manual node on-boarding with VMs, and it works well.
+Look at this script [https://github.com/gandazgul/k8s-infrastructure/blob/master/k8s-config/2-configK8SNode.sh]()

## Helm repo

@@ -37,16 +39,17 @@ By following these steps you will install a fully functioning kubernetes master
points as I like them
4. Copy the scripts over `scp -r ./k8s-config fedora-ip:~/`
5. `ssh fedora-ip`
-6. Run `~/k8s-config/2-configK8SMaster` - This will install K8s and configure the master to run pods, it will also install
+6. Run your modified `~/k8s-config/1-fedoraPostInstall.sh`
+7. Then run `~/k8s-config/2-configK8SMaster` - This will install K8s and configure the master to run pods, it will also install
Flannel network plugin
* Wait for the flannel for your architecture to show `1` in all columns then press ctrl+c
-7. If something fails, you can reset with `sudo kubeadm reset`, delete kubeadminit.lock and try again, all of the
+8. If something fails, you can reset with `sudo kubeadm reset`, delete kubeadminit.lock and try again, all the
scripts are safe to re-run.
-Verify Kubelet that is running with `sudo systemctl status kubelet`
-Once Flannel is working:
-8. Install Storage, Helm, etc. run `3-installStorageAndHelm.sh`
+9. Verify Kubelet that is running with `sudo systemctl status kubelet`
+Once Flannel is working, and you verified kubelet:
+10. Install Storage, Helm, etc. run `3-installStorageAndHelm.sh`
This will install a hostpath auto provisioner for quick testing of new pod configs, it will also install the helm
-client with the tiller and diff plugins.
+client with the plugins.
9. Verify kubectl works: (NOTE: Kubectl does not need sudo, it will fail with sudo)
* `kubectl get nodes` ← gets all nodes, you should see your node listed and `Ready`
* `kubectl get all --all-namespaces` ← shows everything that’s running in kubernetes
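The node check in the verification step can be scripted. A small sketch (the helper name is made up; it parses the STATUS column of `kubectl get nodes --no-headers` output):

```shell
# check_nodes_ready: reads `kubectl get nodes --no-headers` output on stdin
# and succeeds only when every node's STATUS column is exactly "Ready"
check_nodes_ready() {
  awk '$2 != "Ready" { bad = 1 } END { exit bad }'
}

# Example with canned output instead of a live cluster:
printf 'node1   Ready    master   5d   v1.19.4\n' | check_nodes_ready \
  && echo "all nodes Ready"
```

On a real cluster, pipe the live output in: `kubectl get nodes --no-headers | check_nodes_ready` (no sudo needed, as noted above).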