If you plan to install multiple clusters, you may need to raise the inotify limits first:
sudo sysctl fs.inotify.max_user_watches=524288
sudo sysctl fs.inotify.max_user_instances=512
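These sysctl calls only last until the next reboot. If you want the limits to persist, you can optionally write them to a sysctl drop-in file; a minimal sketch (the file name 99-kind-inotify.conf is just illustrative):
# Persist the inotify limits across reboots (file name is illustrative)
echo "fs.inotify.max_user_watches=524288" | sudo tee /etc/sysctl.d/99-kind-inotify.conf
echo "fs.inotify.max_user_instances=512" | sudo tee -a /etc/sysctl.d/99-kind-inotify.conf
sudo sysctl --system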
Because Kind runs its cluster nodes as Docker containers, Docker must be installed on the host. Run the following:
sudo apt update && sudo apt install docker.io -y
You should have an output similar to:
Hit:1 http://us-west-2.ec2.archive.ubuntu.com/ubuntu jammy InRelease
Get:2 http://us-west-2.ec2.archive.ubuntu.com/ubuntu jammy-updates InRelease [114 kB]
...
No VM guests are running outdated hypervisor (qemu) binaries on this host.
Allow the ubuntu user to run Docker without sudo:
sudo usermod -aG docker $USER && newgrp docker
and restart the web terminal daemon so the change takes effect in all terminal sessions:
sudo service ttyd restart
You should notice the web terminal is restarted.
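To confirm the ubuntu user can now talk to the Docker daemon without sudo, you can optionally run a quick check:
# Should print the (possibly empty) container table without a permission error
docker ps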
Run the following commands to install Kind:
curl -Lo ./kind https://kind.sigs.k8s.io/dl/v0.15.0/kind-linux-amd64
chmod +x ./kind
sudo mv ./kind /usr/bin/kind
You should have an output similar to:
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 97 100 97 0 0 509 0 --:--:-- --:--:-- --:--:-- 510
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
100 6716k 100 6716k 0 0 6658k 0 0:00:01 0:00:01 --:--:-- 26.6M
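You can optionally confirm the kind binary is on the PATH and executable:
kind version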
Kubernetes provides a command line tool named kubectl
for communicating with a Kubernetes cluster's control plane, using the Kubernetes API. For more information please refer to https://kubernetes.io/docs/reference/kubectl/.
Run the following to install kubectl:
sudo snap install kubectl --classic
You should have an output similar to:
kubectl 1.25.1 from Canonical✓ installed
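You can optionally confirm the client works:
kubectl version --client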
Prepare the cluster configuration file:
tee kind-config.yaml <<EOF
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
- role: worker
- role: worker
EOF
In this lab we install a Kind cluster running Kubernetes v1.23. Run the following command to deploy the cluster:
kind create cluster --name demo --config kind-config.yaml --image="kindest/node:v1.23.10@sha256:f047448af6a656fae7bc909e2fab360c18c487ef3edc93f06d78cdfd864b2d12"
You should have an output similar to:
Creating cluster "demo" ...
✓ Ensuring node image (kindest/node:v1.23.10) 🖼
✓ Preparing nodes 📦 📦 📦
✓ Writing configuration 📜
✓ Starting control-plane 🕹️
✓ Installing CNI 🔌
✓ Installing StorageClass 💾
✓ Joining worker nodes 🚜
Set kubectl context to "kind-demo"
You can now use your cluster with:
kubectl cluster-info --context kind-demo
Not sure what to do next? 😅 Check out https://kind.sigs.k8s.io/docs/user/quick-start/
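Before moving on, you can optionally verify that the cluster has one control-plane node and two worker nodes:
kubectl get nodes -o wide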
To deploy Calisti SMM you need the SMM CLI. Download the SMM CLI archive from https://calisti.app/download.
Then extract the archive and make the binary executable:
tar -xvf smm_1.10.0_linux_amd64.tar
chmod +x ./smm
and restart the web terminal daemon so the changes take effect in all terminal sessions:
sudo service ttyd restart
Check the CLI version:
./smm --version
You should have an output similar to:
Service Mesh Manager CLI version 1.10.0 (1d17b3310) built on 2022-08-04T20:15:29Z
You need to activate SMM with the registration key/password generated on the Calisti Download Center at https://calisti.app/download:
SMM_REGISTRY_PASSWORD=REDACTED ./smm activate \
--host=registry.eticloud.io \
--prefix=smm \
--user=REDACTED
You should have an output similar to:
? Are you sure to use the following context? kind-demo (API Server: https://127.0.0.1:40343) Yes
✓ validate-kubeconfig ❯ checking cluster reachability...
✓ Configuring docker image access for Service Mesh Manager.
✓ If you want to change these settings later, please use the 'registry' and 'activate' commands
We will install SMM with the Dashboard Web UI exposed. Prepare the configuration file:
tee enable-dashboard-expose.yaml <<EOF
spec:
  smm:
    exposeDashboard:
      meshGateway:
        enabled: true
    auth:
      forceUnsecureCookies: true
      mode: anonymous
EOF
Install SMM on the cluster with the config file for the Dashboard:
./smm --non-interactive install -a --additional-cp-settings ./enable-dashboard-expose.yaml --cluster-name kind-demo
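The installation takes a few minutes. While you wait, you can optionally watch the SMM components come up in the smm-system namespace (press Ctrl+C to stop watching):
kubectl get pods -n smm-system -w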
Check the installation with:
./smm istio cluster status
You should have an output similar to:
? Are you sure to use the following context? kind-demo (API Server: https://127.0.0.1:42005) Yes
✓ validate-kubeconfig ❯ checking cluster reachability...
logged in as kubernetes-admin
Clusters
---
Name Type Provider Regions Version Distribution Status Message
kind-demo Local kind [] v1.23.10 KIND Ready
ControlPlanes
---
Cluster Name Version Trust Domain Pods Proxies
kind-demo cp-v113x.istio-system 1.13.5 [cluster.local] [istiod-cp-v113x-6db79d7d4d-lwsqk.istio-system] 22/22
Since version 0.13.0, MetalLB is configured via CRs; the original ConfigMap-based configuration no longer works. Run the following command to install MetalLB:
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.13.5/config/manifests/metallb-native.yaml
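Before configuring MetalLB, it can help to wait until its pods are ready; a minimal sketch using kubectl wait (the app=metallb label is assumed to match the upstream manifests):
kubectl wait --namespace metallb-system \
  --for=condition=ready pod \
  --selector=app=metallb \
  --timeout=90s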
We will set up MetalLB using the layer 2 protocol. We need to give MetalLB a range of IP addresses that it controls, and this range must be on the Docker kind network. Check the network's address range:
docker network inspect -f '{{.IPAM.Config}}' kind
You should have an output similar to:
[{172.18.0.0/16 172.18.0.1 map[]} {fc00:f853:ccd:e793::/64 map[]}]
We notice the CIDR used by Docker is 172.18.0.0/16, so we can allocate the range 172.18.255.200-172.18.255.250 to MetalLB.
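If you prefer to extract the IPv4 subnet programmatically, here is a small sketch (it assumes the first IPAM entry of the kind network is the IPv4 one):
KIND_CIDR=$(docker network inspect kind -f '{{(index .IPAM.Config 0).Subnet}}')
echo ${KIND_CIDR}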
Let's create the config file with the following command:
tee metallb-l2.yaml <<EOF
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: default
  namespace: metallb-system
spec:
  addresses:
  - 172.18.255.200-172.18.255.250
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: default
  namespace: metallb-system
EOF
Then apply the configuration:
kubectl apply -f metallb-l2.yaml
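You can optionally confirm the resources were accepted (these are the plural names of the MetalLB CRDs):
kubectl get ipaddresspools -n metallb-system
kubectl get l2advertisements -n metallb-system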
Install Caddy, which we will use as a reverse proxy in front of the dashboard:
sudo apt install -y debian-keyring debian-archive-keyring apt-transport-https
curl -1sLf 'https://dl.cloudsmith.io/public/caddy/stable/gpg.key' | sudo gpg --dearmor -o /usr/share/keyrings/caddy-stable-archive-keyring.gpg
curl -1sLf 'https://dl.cloudsmith.io/public/caddy/stable/debian.deb.txt' | sudo tee /etc/apt/sources.list.d/caddy-stable.list
sudo apt update
sudo apt install caddy
Set the ingressip variable to the EXTERNAL-IP address of the smm-ingressgateway-external service:
ingressip=$(kubectl get svc smm-ingressgateway-external -n smm-system -o jsonpath="{.status.loadBalancer.ingress[0].ip}")
echo $ingressip
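If the variable is empty, the service has not been assigned an external IP yet. A small optional sanity check (the expected value comes from the MetalLB pool, e.g. 172.18.255.200):
if [ -z "${ingressip}" ]; then echo "smm-ingressgateway-external has no EXTERNAL-IP yet, check MetalLB"; fi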
Configure the reverse proxy:
caddy reverse-proxy --from :8083 --to ${ingressip}:80 &
You should have an output similar to:
eti-lab> 2022/09/24 09:40:15.349 WARN admin admin endpoint disabled
2022/09/24 09:40:15.350 INFO tls.cache.maintenance started background certificate maintenance {"cache": "0xc00033e9a0"}
2022/09/24 09:40:15.350 INFO tls cleaning storage unit {"description": "FileStorage:/home/ubuntu/.local/share/caddy"}
2022/09/24 09:40:15.350 INFO tls finished cleaning storage units
2022/09/24 09:40:15.350 INFO http.log server running {"name": "proxy", "protocols": ["h1", "h2", "h3"]}
Caddy proxying http://:8083 -> 172.18.255.200:80
We now have a proxy configured between port 8083 on the host and port 80 on the load balancer IP address.
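You can optionally check that the proxy answers locally (the exact HTTP status code may vary depending on the dashboard's redirects):
curl -I http://localhost:8083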
You can now open the dashboard in your browser.
Use the token generated by the following command:
./smm login
Install the demo application:
./smm demoapp install
Check the Istio control plane and note the network name it uses:
kubectl get istiocontrolplanes -A
You should have an output similar to:
NAMESPACE NAME MODE NETWORK STATUS MESH EXPANSION EXPANSION GW IPS ERROR AGE
istio-system cp-v113x ACTIVE network1 Available true ["172.18.255.200"] 16m
Istio is using the network1 network name, so set the WorkloadGroup’s network setting to network1, too.
Scale the analytics-v1 Deployment down to zero to tear down its Pods:
kubectl scale deployment -n smm-demo analytics-v1 --replicas=0
Check that there are no analytics-v1 Pods left:
kubectl get pods -n smm-demo | grep analytics
Create the workload group definition with network1:
tee workload-analytics.yaml <<EOF
apiVersion: networking.istio.io/v1alpha3
kind: WorkloadGroup
metadata:
  labels:
    app: analytics
    version: v0
  name: analytics-v0
  namespace: smm-demo
spec:
  metadata:
    labels:
      app: analytics
      version: v0
  probe:
    httpGet:
      path: /
      host: 127.0.0.1
      port: 8080
      scheme: HTTP
  template:
    network: network1
    ports:
      http: 8080
      grpc: 8082
      tcp: 8083
    serviceAccount: default
EOF
Deploy the workload group:
kubectl apply -f workload-analytics.yaml
You should have an output similar to:
workloadgroup.networking.istio.io/analytics-v0 created
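You can optionally confirm the resource exists:
kubectl get workloadgroups -n smm-demo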
Configure the mTLS settings. Prepare a PeerAuthentication resource that sets PERMISSIVE mTLS for the analytics workload:
tee permissive-mtls.yaml <<EOF
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: analytics
  namespace: smm-demo
spec:
  mtls:
    mode: PERMISSIVE
  selector:
    matchLabels:
      app: analytics
      version: v0
EOF
Apply mTLS peer authentication:
kubectl apply -f permissive-mtls.yaml
You should have an output similar to:
peerauthentication.security.istio.io/analytics created
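You can optionally confirm the policy exists:
kubectl get peerauthentications -n smm-demo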
Run a systemd-enabled Ubuntu container as a simulated "VM":
docker run -d --rm --privileged --cgroupns=host --name systemd-ubuntu --network=kind --tmpfs /tmp --tmpfs /run --tmpfs /run/lock -v /sys/fs/cgroup:/sys/fs/cgroup jrei/systemd-ubuntu
Install the required packages inside the container:
docker exec -t systemd-ubuntu bash -c "apt-get -y update && apt-get install -y curl iptables iproute2 sudo"
Store the container name in a variable:
VM_NAME=systemd-ubuntu
Open a bash session on the simulated "VM":
docker exec -it ${VM_NAME} bash
Start a fake analytics service inside the "VM":
# inside container
mkdir /empty_dir
cat <<EOF > /lib/systemd/system/smm-demo-analytics.service
[Unit]
Description=SMM demo app - analytics
After=network-online.target
[Service]
ExecStart=/usr/bin/bash -c "cd /empty_dir && /usr/bin/python3 -m http.server 8080"
KillMode=process
Restart=on-failure
[Install]
WantedBy=multi-user.target
EOF
ln -s /lib/systemd/system/smm-demo-analytics.service /etc/systemd/system/multi-user.target.wants/
systemctl start smm-demo-analytics.service
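Still inside the container, you can optionally check that the fake service is running and answering on port 8080 (curl was installed earlier):
systemctl status smm-demo-analytics.service --no-pager
curl -s -o /dev/null -w "%{http_code}\n" http://127.0.0.1:8080/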
Ensure the package download cache is on the same filesystem as the target:
# still inside container
mkdir /tmp/smm-agent
rm -rf /var/cache/smm-agent
ln -s /tmp/smm-agent /var/cache/smm-agent
exit
Get the container's IP address on the Docker network:
NODEIP=$(docker exec -t ${VM_NAME} bash -c "ip a show dev eth0" | awk '$1 == "inet" {gsub(/\/.*$/, "", $2); print $2}')
echo $NODEIP
Get the ingress gateway IP address for Calisti:
INGRESS_IP=$(kubectl get svc smm-ingressgateway-external -n smm-system -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
echo $INGRESS_IP
Get the bearer token for the default service account in the smm-demo namespace:
SA_NAMESPACE=smm-demo
SA_SERVICEACCOUNT=default
SA_SECRET_NAME=$(kubectl get serviceaccount $SA_SERVICEACCOUNT -n $SA_NAMESPACE -o json | jq -r '.secrets[0].name')
BEARER_TOKEN=$(kubectl get secret -n $SA_NAMESPACE ${SA_SECRET_NAME} -o json | jq -r '.data.token | @base64d')
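A quick optional sanity check that the token was extracted (prints only the first characters of the JWT):
echo "${BEARER_TOKEN}" | cut -c1-20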
Download and install the smm-agent in the VM:
docker exec -t ${VM_NAME} bash -c "curl http://${INGRESS_IP}/get/smm-agent | bash"
Configure the smm-agent:
docker exec -t ${VM_NAME} bash -c "smm-agent set workload-group smm-demo analytics-v0"
docker exec -t ${VM_NAME} bash -c "smm-agent set node-ip ${NODEIP}"
docker exec -t ${VM_NAME} bash -c "smm-agent set bearer-token ${BEARER_TOKEN}"
Start the smm-agent reconciliation:
docker exec -t ${VM_NAME} bash -c "smm-agent reconcile"
You should have an output similar to:
✓ reconciling host operating system
✓ uuid not present in config file, new uuid is generated uuid=0327b100-a008-4364-a2b3-7d1777bb5b9a
✓ configuration loaded config=/etc/smm/agent.yaml
✓ install-pilot-agent ❯ downloading and installing OS package component=pilot-agent, platform={linux amd64 deb 0xc000116630}
✓ install-pilot-agent ❯ downloader reconciles with exponential backoff downloader={pilot-agent {linux amd64 deb 0xc000116630} true 0xc000694850}
✓ install-pilot-agent/download-pilot-agent/download-cache-dir ❯ create directory path=/var/cache/smm-agent/downloads, mode=0
✓ install-pilot-agent/download-pilot-agent ❯ downloading package from SMM component=pilot-agent, platform={linux amd64 deb 0xc000116630}
✓ install-pilot-agent/install-os-package ❯ installing debian package package=/tmp/smm-agent.2724887957
✓ install-pilot-agent/install-os-package/dpkg ❯ executing command command=dpkg, args=[-i /tmp/smm-agent.2724887957], timeout=5m0s
✓ install-pilot-agent/install-os-package/dpkg ❯ command executed successfully command=dpkg, args=[-i /tmp/smm-agent.2724887957], stdout=Selecting previously unselected package istio-sidecar.
(Reading database ... 9289 files and directories currently installed.)
Preparing to unpack /tmp/smm-agent.2724887957 ...
Unpacking istio-sidecar (1.13.5) ...
Setting up istio-sidecar (1.13.5) ...
, stderr=
✓ update-host-files ❯ updating host files gateway_addresses=map[istiod.istio-system.svc:[172.18.255.200]]
✓ update-host-files/file ❯ checking content changes filename=/etc/hosts
✓ update-host-files/file ❯ checking file stats changes mode=420, user_name=root, group_name=root
✓ update-host-files/file ❯ comparing to existing file's stats existing_mode=420, existing_user=root, existing_group=root
✓ update-host-files/file ❯ replacing target file with temp file temp_filename=/etc/.smm-agent-temp.82718693, target_filename=/etc/hosts
✗ update-host-files/file ❯ replacing target file with temp file: rename /etc/.smm-agent-temp.82718693 /etc/hosts: device or resource busy temp_filename=/etc/.smm-agent-temp.82718693, target_filename=/etc/hosts
✓ update-host-files/file ❯ Attempting direct file write target_filename=/etc/hosts, content=127.0.0.1 localhost
::1 localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
172.18.0.5 70c83c449172
fc00:f853:ccd:e793::5 70c83c449172
172.18.255.200 istiod.istio-system.svc
127.0.0.1 70c83c449172
✓ update-host-files/file ❯ file is updated successfully filename=/etc/hosts
✓ update-cluster-env/cluster-env-dir ❯ create directory path=/var/lib/istio/envoy, mode=0
✓ update-cluster-env/file ❯ checking content changes filename=/var/lib/istio/envoy/cluster.env
✓ update-cluster-env/file ❯ checking file stats changes mode=420, user_name=root, group_name=root
✓ update-cluster-env/file ❯ replacing target file with temp file temp_filename=/var/lib/istio/envoy/.smm-agent-temp.1795122235, target_filename=/var/lib/istio/envoy/cluster.env
✓ update-cluster-env/file ❯ file is updated successfully filename=/var/lib/istio/envoy/cluster.env
✓ update-root-cert/root-cert-dir ❯ create directory path=/etc/certs, mode=0
✓ update-root-cert/file ❯ checking content changes filename=/etc/certs/root-cert.pem
✓ update-root-cert/file ❯ checking file stats changes mode=420, user_name=istio-proxy, group_name=istio-proxy
✓ update-root-cert/file ❯ replacing target file with temp file temp_filename=/etc/certs/.smm-agent-temp.2198813229, target_filename=/etc/certs/root-cert.pem
✓ update-root-cert/file ❯ file is updated successfully filename=/etc/certs/root-cert.pem
✓ update-mesh-config/mesh-config-dir ❯ create directory path=/etc/istio/config, mode=0
✓ update-mesh-config/file ❯ checking content changes filename=/etc/istio/config/mesh
✓ update-mesh-config/file ❯ checking file stats changes mode=420, user_name=istio-proxy, group_name=istio-proxy
✓ update-mesh-config/file ❯ comparing to existing file's stats existing_mode=420, existing_user=istio-proxy, existing_group=istio-proxy
✓ update-mesh-config/file ❯ replacing target file with temp file temp_filename=/etc/istio/config/.smm-agent-temp.63478690, target_filename=/etc/istio/config/mesh
✓ update-mesh-config/file ❯ file is updated successfully filename=/etc/istio/config/mesh
✓ update-bearer-token/bearer-token-dir ❯ create directory path=/var/run/secrets/kubernetes.io/serviceaccount, mode=0
✓ update-bearer-token/file ❯ checking content changes filename=/var/run/secrets/kubernetes.io/serviceaccount/token
✓ update-bearer-token/file ❯ checking file stats changes mode=420, user_name=istio-proxy, group_name=istio-proxy
✓ update-bearer-token/file ❯ replacing target file with temp file temp_filename=/var/run/secrets/kubernetes.io/serviceaccount/.smm-agent-temp.3965915508, target_filename=/var/run/secrets/kubernetes.io/serviceaccount/token
✓ update-bearer-token/file ❯ file is updated successfully filename=/var/run/secrets/kubernetes.io/serviceaccount/token
✓ systemd-ensure-node-exporter-running/systemctl/show ❯ showing current service state args=[show -p SubState --value smm-node-exporter]
✓ systemd-ensure-node-exporter-running/systemctl/show ❯ showing current service state args=[show -p ActiveState --value smm-node-exporter]
✓ systemd-ensure-node-exporter-running/systemctl/is-enabled ❯ checking if the service is enabled args=[is-enabled smm-node-exporter]
✓ systemd-ensure-node-exporter-running ❯ current service states sub_state=dead, active_state=inactive, is_enabled=disabled
✓ systemd-ensure-node-exporter-running/systemctl ❯ enabling service args=[smm-node-exporter]
✓ systemd-ensure-node-exporter-running/systemctl/enable ❯ executing command command=systemctl, args=[enable smm-node-exporter], timeout=5m0s
✓ systemd-ensure-node-exporter-running/systemctl/enable ❯ command executed successfully command=systemctl, args=[enable smm-node-exporter], stdout=, stderr=Created symlink /etc/systemd/system/multi-user.target.wants/smm-node-exporter.service -> /lib/systemd/system/smm-node-exporter.service.
✓ systemd-ensure-node-exporter-running/systemctl ❯ starting service args=[smm-node-exporter]
✓ systemd-ensure-node-exporter-running/systemctl/start ❯ executing command command=systemctl, args=[start smm-node-exporter], timeout=5m0s
✓ systemd-ensure-node-exporter-running/systemctl/start ❯ command executed successfully command=systemctl, args=[start smm-node-exporter], stdout=, stderr=
✓ install-smm-agent ❯ downloading and installing OS package component=smm-agent, platform={linux amd64 deb 0xc000116630}
✓ install-smm-agent ❯ downloader reconciles with exponential backoff downloader={smm-agent {linux amd64 deb 0xc000116630} true 0xc000128f10}
✓ install-smm-agent/download-smm-agent/download-cache-dir ❯ create directory path=/var/cache/smm-agent/downloads, mode=0
✓ install-smm-agent/download-smm-agent ❯ downloading package from SMM component=smm-agent, platform={linux amd64 deb 0xc000116630}
✓ install-smm-agent/install-os-package ❯ installing debian package package=/tmp/smm-agent.804078050
✓ install-smm-agent/install-os-package/dpkg ❯ executing command command=dpkg, args=[-i /tmp/smm-agent.804078050], timeout=5m0s
✓ install-smm-agent/install-os-package/dpkg ❯ command executed successfully command=dpkg, args=[-i /tmp/smm-agent.804078050], stdout=(Reading database ... 9306 files and directories currently installed.)
Preparing to unpack /tmp/smm-agent.804078050 ...
Unpacking smm-agent (1.10.0~snapshot.202208041444-SNAPSHOT-1d17b3310) over (1.10.0~snapshot.202208041444-SNAPSHOT-1d17b3310) ...
Setting up smm-agent (1.10.0~snapshot.202208041444-SNAPSHOT-1d17b3310) ...
, stderr=
✓ restart-after-smm-agent-upgrade/systemctl ❯ restarting service args=[smm-agent]
✓ restart-after-smm-agent-upgrade/systemctl/restart ❯ executing command command=systemctl, args=[restart smm-agent], timeout=5m0s
✓ restart-after-smm-agent-upgrade/systemctl/restart ❯ command executed successfully command=systemctl, args=[restart smm-agent], stdout=, stderr=
✓ systemd-ensure-smm-agent-running/systemctl/show ❯ showing current service state args=[show -p SubState --value smm-agent]
✓ systemd-ensure-smm-agent-running/systemctl/show ❯ showing current service state args=[show -p ActiveState --value smm-agent]
✓ systemd-ensure-smm-agent-running/systemctl/is-enabled ❯ checking if the service is enabled args=[is-enabled smm-agent]
✓ systemd-ensure-smm-agent-running ❯ current service states sub_state=running, active_state=active, is_enabled=enabled
✓ systemd-ensure-istio-running/systemctl/show ❯ showing current service state args=[show -p SubState --value istio]
✓ systemd-ensure-istio-running/systemctl/show ❯ showing current service state args=[show -p ActiveState --value istio]
✓ systemd-ensure-istio-running/systemctl/is-enabled ❯ checking if the service is enabled args=[is-enabled istio]
✓ systemd-ensure-istio-running ❯ current service states sub_state=dead, active_state=inactive, is_enabled=disabled
✓ systemd-ensure-istio-running/systemctl ❯ enabling service args=[istio]
✓ systemd-ensure-istio-running/systemctl/enable ❯ executing command command=systemctl, args=[enable istio], timeout=5m0s
✓ systemd-ensure-istio-running/systemctl/enable ❯ command executed successfully command=systemctl, args=[enable istio], stdout=, stderr=Created symlink /etc/systemd/system/multi-user.target.wants/istio.service -> /lib/systemd/system/istio.service.
✓ systemd-ensure-istio-running/systemctl ❯ starting service args=[istio]
✓ systemd-ensure-istio-running/systemctl/start ❯ executing command command=systemctl, args=[start istio], timeout=5m0s
✓ systemd-ensure-istio-running/systemctl/start ❯ command executed successfully command=systemctl, args=[start istio], stdout=, stderr=
✓ wait-for-registration ❯ waiting for workloadentry to be registered timeout=300, interval=5
✓ wait-for-registration ❯ health check fails, waiting for workloadentry to be successfully registered...
✓ wait-for-registration ❯ workloadentry is registered successfully
✓ changes were made to the host operating system
✓ reconciled host operating system
Check on the Calisti dashboard that the workload entry is registered.
Terminate the "VM":
docker stop ${VM_NAME}
Check the Calisti dashboard again.
Notes on running systemd in a container:
- On RHEL, starting the service can fail because systemd cannot connect to the bus:
[root@b8700f04bbc0 /]# systemctl start smm-demo-analytics.service
Failed to connect to bus: No such file or directory
- On Ubuntu, the Docker container exits if the --cgroupns=host flag is not used and read-write access to the /sys/fs/cgroup volume is not allowed.