#!sh
# On all nodes (masters and workers)

# Add the Docker apt repository
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
$(lsb_release -cs) \
stable"

# Add the Kubernetes apt repository
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
cat << EOF | sudo tee /etc/apt/sources.list.d/kubernetes.list
deb https://apt.kubernetes.io/ kubernetes-xenial main
EOF

# Install pinned versions of Docker and the Kubernetes tools, then hold them at those versions
sudo apt-get update
sudo apt-get install -y docker-ce=18.06.1~ce~3-0~ubuntu kubelet=1.13.5-00 kubeadm=1.13.5-00 kubectl=1.13.5-00
sudo apt-mark hold docker-ce kubelet kubeadm kubectl

# Let iptables see bridged traffic (needed for kube-proxy and the CNI plugin)
echo "net.bridge.bridge-nf-call-iptables=1" | sudo tee -a /etc/sysctl.conf
sudo sysctl -p
exit
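Optionally, a quick sanity check on each node that the pinned versions landed (plain version commands, nothing cluster-specific):
docker --version           # should report 18.06.1-ce
kubeadm version -o short   # should report v1.13.5
kubectl version --client --short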
On master 1 (additional masters are covered in the multi-master setup below):
# sudo kubeadm init --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address 192.168.50.2 --apiserver-cert-extra-sans="192.168.50.2"
sudo kubeadm init --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address $NODEIP --apiserver-cert-extra-sans="$NODEIP"
# maybe extra options? sudo kubeadm init --apiserver-cert-extra-sans="192.168.50.10" --node-name k8s-master

# Make the kubelet register with the host-only IP rather than the NAT interface
sudo sed -i "s/^KUBELET_KUBEADM_ARGS=/KUBELET_KUBEADM_ARGS=--node-ip=$NODEIP /" /var/lib/kubelet/kubeadm-flags.env
sudo systemctl daemon-reload && sudo systemctl restart kubelet

# Set up kubectl for this user and export the kubeconfig to the host via the /vagrant share
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
cp $HOME/.kube/config /vagrant/myconfig

# Install flannel, pinned to the host-only interface enp0s8
curl https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml | sed 's/--kube-subnet-mgr/--kube-subnet-mgr\n        - --iface=enp0s8/' > mykube-flannel.yml
kubectl apply -f mykube-flannel.yml
#kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
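The copy in /vagrant/myconfig can also be used from the host, assuming the default /vagrant synced folder and a kubectl install on the host:
kubectl --kubeconfig ./myconfig get nodes   # run from the Vagrant project directory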
Just wait for the pods to start; there is no need to reboot or to restart the master services.
The workers need to join before all of the pods will start. Watch progress with:
kubectl get pods --all-namespaces -w
multi-master setup
# Had to upgrade to 1.14.3-00 for --experimental-upload-certs to be accepted
sudo apt-get install -y docker-ce=18.06.1~ce~3-0~ubuntu kubelet=1.14.3-00 kubeadm=1.14.3-00 kubectl=1.14.3-00 --allow-change-held-packages
sudo systemctl stop kubelet
sudo kubeadm init --config=/vagrant/kubeadm-config.yml --experimental-upload-certs
# Point the kubelet at the host-only IP again, as in the single-master setup
sudo sed -i "s/^KUBELET_KUBEADM_ARGS=/KUBELET_KUBEADM_ARGS=--node-ip=$NODEIP /" /var/lib/kubelet/kubeadm-flags.env
sudo systemctl daemon-reload && sudo systemctl restart kubelet
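A minimal sketch of what /vagrant/kubeadm-config.yml might contain for kubeadm 1.14 (the controlPlaneEndpoint 192.168.50.20:6443 is a placeholder for the haproxy address below; the pod CIDR matches the flannel setup above):
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
kubernetesVersion: v1.14.3
controlPlaneEndpoint: "192.168.50.20:6443"
networking:
  podSubnet: 10.244.0.0/16
The other masters then join as control-plane nodes using the join command that kubeadm init prints, roughly of the form (token, hash and certificate key all come from that output):
sudo kubeadm join 192.168.50.20:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash> --experimental-control-plane --certificate-key <key>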
workers
To add a new worker, get the cluster join command on the master with:
kubeadm token create --print-join-command
Run the join command on the worker with sudo.
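The printed command is roughly of the form (endpoint, token and hash come from the output above):
sudo kubeadm join <api-endpoint>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>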
As the workers are multi-homed hosts like the masters, we also need to specify the IP the kubelet will use to communicate with the cluster:
https://github.com/kubernetes/kubeadm/issues/203
The Vagrantfile includes a script that adds the correct kubelet IP address to ~vagrant/bash_profile, so the env variable $NODEIP holds the correct address.
sudo sed -i "s/^KUBELET_KUBEADM_ARGS=/KUBELET_KUBEADM_ARGS=--node-ip=$NODEIP /" /var/lib/kubelet/kubeadm-flags.env
sudo systemctl daemon-reload && sudo systemctl restart kubelet
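To confirm the kubelet picked up the right address, check the INTERNAL-IP column - it should show the host-only 192.168.50.x address rather than the NAT interface's address:
kubectl get nodes -o wide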
haproxy for access to multi-master setups.
It is also configured to act as a router from the nodes' network to the outside network. Note that the nodes have a default route via their NAT interface, which covers everything not directly attached or routed via the haproxy router.
sh /vagrant/haproxy_setup
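The contents of /vagrant/haproxy_setup are not shown here; a rough sketch of the routing part described above, assuming the outside interface is enp0s3 and the nodes network is 192.168.50.0/24:
sudo apt-get update && sudo apt-get install -y haproxy
echo "net.ipv4.ip_forward=1" | sudo tee -a /etc/sysctl.conf
sudo sysctl -p
sudo iptables -t nat -A POSTROUTING -s 192.168.50.0/24 -o enp0s3 -j MASQUERADE   # assumption: enp0s3 is the NAT/outside interface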
Append the following to /etc/haproxy/haproxy.cfg:
frontend k8s-api
    bind 0.0.0.0:6443
    mode tcp
    option tcplog
    default_backend k8s-api

backend k8s-api
    mode tcp
    # option tcplog
    option tcp-check
    balance roundrobin
    default-server inter 10s downinter 5s rise 2 fall 2 slowstart 60s maxconn 250 maxqueue 256 weight 100
    server 192.168.50.2 192.168.50.2:6443 check
    # server 192.168.50.3 192.168.50.3:6443 check
    # server 192.168.50.4 192.168.50.4:6443 check
Then restart haproxy:
sudo systemctl restart haproxy
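To check the path through the load balancer, hit the API server's version endpoint via the haproxy address (192.168.50.20 is the same placeholder used for controlPlaneEndpoint above):
curl -k https://192.168.50.20:6443/version   # any API server response here shows haproxy is forwarding to a live master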