Add Consul interdomain example. Nsc to workload connectivity #6490

2 changes: 2 additions & 0 deletions examples/nsm_consul/.gitignore
@@ -0,0 +1,2 @@
!**/kustomization.yaml
!**/patch-*.yaml
70 changes: 70 additions & 0 deletions examples/nsm_consul/README.md
@@ -0,0 +1,70 @@
# NSM + Consul interdomain example over kind clusters

This example shows how Consul can be used over NSM.


## Requirements

- [Load balancer](./loadbalancer)
- [Interdomain DNS](./dns)
- [Interdomain spire](./spire)
- [Interdomain nsm](./nsm)


## Run

Install Consul on the second cluster:
```bash
brew tap hashicorp/tap
brew install hashicorp/tap/consul-k8s
consul-k8s install -config-file=helm-consul-values.yaml -set global.image=hashicorp/consul:1.12.0 --kubeconfig=$KUBECONFIG2
```
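
Optionally verify that the Consul pods are running before proceeding (a quick check; the `app=consul` label is set by the Consul Helm chart, and listing across all namespaces avoids assuming the install namespace):
```bash
kubectl --kubeconfig=$KUBECONFIG2 get pods -A -l app=consul
```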

### Verify NSM + Consul

Install the network service on the second cluster:
```bash
kubectl --kubeconfig=$KUBECONFIG2 apply -f networkservice.yaml
```
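
As a sanity check, you can confirm that the `NetworkService` was registered (the CRD's plural name is `networkservices`, assuming the NSM CRDs were installed during the interdomain setup):
```bash
kubectl --kubeconfig=$KUBECONFIG2 get networkservices -n nsm-system
```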

Start the `alpine` Network Service Mesh client on the first cluster:

```bash
kubectl --kubeconfig=$KUBECONFIG1 apply -f client/client.yaml
```
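
Wait for the client pod to become ready (using the `app: alpine-nsc` label from `client/client.yaml`):
```bash
kubectl --kubeconfig=$KUBECONFIG1 wait --for=condition=ready --timeout=1m pod -l app=alpine-nsc
```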

Create a Kubernetes service for the Network Service Mesh endpoint:
```bash
kubectl --kubeconfig=$KUBECONFIG2 apply -f service.yaml
```

Start the `auto-scale` Network Service Mesh endpoint:
```bash
kubectl --kubeconfig=$KUBECONFIG2 apply -k nse-auto-scale
```
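
The supplier spawns proxy endpoints on demand, so there is nothing to connect to until a client requests the service; you can still check that the supplier itself is running (assuming the `app: nse-supplier-k8s` label used by the upstream deployment):
```bash
kubectl --kubeconfig=$KUBECONFIG2 get pods -n nsm-system -l app=nse-supplier-k8s
```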

Install the `static-server` Consul workload on the second cluster:
```bash
kubectl --kubeconfig=$KUBECONFIG2 apply -f server/static-server.yaml
```
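
Wait for the Consul workload to come up before testing connectivity (assuming the `app: static-server` label from the standard Consul `static-server` example):
```bash
kubectl --kubeconfig=$KUBECONFIG2 wait --for=condition=ready --timeout=2m pod -l app=static-server
```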

Verify the connection from the Network Service Mesh client to the Consul server:
```bash
kubectl --kubeconfig=$KUBECONFIG1 exec -it alpine-nsc -- apk add curl
kubectl --kubeconfig=$KUBECONFIG1 exec -it alpine-nsc -- curl 172.16.1.2:8080
```

You should see a "hello world" response.

## Cleanup


```bash
kubectl --kubeconfig=$KUBECONFIG2 delete deployment static-server
kubectl --kubeconfig=$KUBECONFIG2 delete -k nse-auto-scale
kubectl --kubeconfig=$KUBECONFIG1 delete -f client/client.yaml
kubectl --kubeconfig=$KUBECONFIG2 delete -f networkservice.yaml
consul-k8s uninstall --kubeconfig=$KUBECONFIG2 -auto-approve=true -wipe-data=true
kubectl --kubeconfig=$KUBECONFIG2 delete pods --all
kind delete clusters cluster-1 cluster-2
```
16 changes: 16 additions & 0 deletions examples/nsm_consul/client/client.yaml
@@ -0,0 +1,16 @@
---
apiVersion: v1
kind: Pod
metadata:
  name: alpine-nsc
  labels:
    app: alpine-nsc
  annotations:
    networkservicemesh.io: kernel://autoscale-consul-proxy@my.cluster2/nsm-1?app=alpine-nsc
spec:
  containers:
    - name: alpine-nsc
      image: alpine:3.15.0
      imagePullPolicy: IfNotPresent
      stdin: true
      tty: true
157 changes: 157 additions & 0 deletions examples/nsm_consul/dns/README.md
@@ -0,0 +1,157 @@
# Set up DNS for two clusters

This example shows a simple way to configure two k8s clusters so that each can resolve the other's services.
This step can be skipped if the clusters were set up with an external DNS.

## Run

Expose the DNS service of the first cluster:
```bash
kubectl --kubeconfig=$KUBECONFIG1 expose service kube-dns -n kube-system --port=53 --target-port=53 --protocol=TCP --name=exposed-kube-dns --type=LoadBalancer
```

Wait for an IP address to be assigned (note: the command should print an IP address; if it does not, repeat this step):
```bash
kubectl --kubeconfig=$KUBECONFIG1 get services exposed-kube-dns -n kube-system -o go-template='{{index (index (index (index .status "loadBalancer") "ingress") 0) "ip"}}'
ip1=$(kubectl --kubeconfig=$KUBECONFIG1 get services exposed-kube-dns -n kube-system -o go-template='{{index (index (index (index .status "loadBalancer") "ingress") 0) "ip"}}')
if [[ $ip1 == *"no value"* ]]; then
  ip1=$(kubectl --kubeconfig=$KUBECONFIG1 get services exposed-kube-dns -n kube-system -o go-template='{{index (index (index (index .status "loadBalancer") "ingress") 0) "hostname"}}')
  ip1=$(dig +short $ip1 | head -1)
fi
echo Selected externalIP: $ip1 for cluster1
```

Expose the DNS service of the second cluster:
```bash
kubectl --kubeconfig=$KUBECONFIG2 expose service kube-dns -n kube-system --port=53 --target-port=53 --protocol=TCP --name=exposed-kube-dns --type=LoadBalancer
```

Wait for an IP address to be assigned (note: the command should print an IP address; if it does not, repeat this step):
```bash
kubectl --kubeconfig=$KUBECONFIG2 get services exposed-kube-dns -n kube-system -o go-template='{{index (index (index (index .status "loadBalancer") "ingress") 0) "ip"}}'
ip2=$(kubectl --kubeconfig=$KUBECONFIG2 get services exposed-kube-dns -n kube-system -o go-template='{{index (index (index (index .status "loadBalancer") "ingress") 0) "ip"}}')
if [[ $ip2 == *"no value"* ]]; then
  ip2=$(kubectl --kubeconfig=$KUBECONFIG2 get services exposed-kube-dns -n kube-system -o go-template='{{index (index (index (index .status "loadBalancer") "ingress") 0) "hostname"}}')
  ip2=$(dig +short $ip2 | head -1)
fi
echo Selected externalIP: $ip2 for cluster2
```

Add DNS forwarding from cluster1 to cluster2:
```bash
cat > configmap.yaml <<EOF
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        errors
        health {
            lameduck 5s
        }
        ready
        kubernetes cluster.local in-addr.arpa ip6.arpa {
            pods insecure
            fallthrough in-addr.arpa ip6.arpa
            ttl 30
        }
        k8s_external my.cluster1
        prometheus :9153
        forward . /etc/resolv.conf {
            max_concurrent 1000
        }
        loop
        reload 5s
    }
    my.cluster2:53 {
        forward . ${ip2}:53 {
            force_tcp
        }
    }
EOF
kubectl --kubeconfig=$KUBECONFIG1 apply -f configmap.yaml
cat > custom-configmap.yaml <<EOF
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns-custom
  namespace: kube-system
data:
  server.override: |
    k8s_external my.cluster2
  proxy1.server: |
    my.cluster2:53 {
        forward . ${ip2}:53 {
            force_tcp
        }
    }
EOF

kubectl --kubeconfig=$KUBECONFIG1 apply -f custom-configmap.yaml
```

Add DNS forwarding from cluster2 to cluster1:
```bash
cat > configmap.yaml <<EOF
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        errors
        health {
            lameduck 5s
        }
        ready
        kubernetes cluster.local in-addr.arpa ip6.arpa {
            pods insecure
            fallthrough in-addr.arpa ip6.arpa
            ttl 30
        }
        k8s_external my.cluster2
        prometheus :9153
        forward . /etc/resolv.conf {
            max_concurrent 1000
        }
        loop
        reload 5s
    }
    my.cluster1:53 {
        forward . ${ip1}:53 {
            force_tcp
        }
    }
EOF
kubectl --kubeconfig=$KUBECONFIG2 apply -f configmap.yaml
cat > custom-configmap.yaml <<EOF
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns-custom
  namespace: kube-system
data:
  server.override: |
    k8s_external my.cluster1
  proxy1.server: |
    my.cluster1:53 {
        forward . ${ip1}:53 {
            force_tcp
        }
    }
EOF
kubectl --kubeconfig=$KUBECONFIG2 apply -f custom-configmap.yaml
```
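
To check the forwarding end to end, you can resolve a cluster-2 service from cluster-1 through the `my.cluster2` zone (a sketch using the `exposed-kube-dns` service created above; `k8s_external` answers for services that have an external IP):
```bash
kubectl --kubeconfig=$KUBECONFIG1 run dns-test --rm -it --image=busybox:1.35 --restart=Never -- \
  nslookup exposed-kube-dns.kube-system.my.cluster2
```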

## Cleanup

```bash
kubectl --kubeconfig=$KUBECONFIG1 delete service -n kube-system exposed-kube-dns
kubectl --kubeconfig=$KUBECONFIG2 delete service -n kube-system exposed-kube-dns
```

10 changes: 10 additions & 0 deletions examples/nsm_consul/helm-consul-values.yaml
@@ -0,0 +1,10 @@
---
global:
  name: consul
  datacenter: dc1
server:
  replicas: 1
connectInject:
  enabled: true
  transparentProxy:
    defaultEnabled: false
6 changes: 6 additions & 0 deletions examples/nsm_consul/kind-cluster-config.yaml
@@ -0,0 +1,6 @@
---
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
  - role: control-plane
  - role: worker
73 changes: 73 additions & 0 deletions examples/nsm_consul/loadbalancer/README.md
@@ -0,0 +1,73 @@
# Kubernetes load balancer

Before starting with installation, make sure you meet all the [requirements](https://metallb.universe.tf/#requirements). In particular, you should pay attention to network addon [compatibility](https://metallb.universe.tf/installation/clouds/).

If you’re trying to run MetalLB on a cloud platform, you should also look at the cloud compatibility page and make sure your cloud platform can work with MetalLB (most cannot).

There are three supported ways to install MetalLB: using plain Kubernetes manifests, using Kustomize, or using Helm.

## Run

Apply MetalLB to the first cluster:
```bash
if [[ ! -z $CLUSTER1_CIDR ]]; then
  kubectl --kubeconfig=$KUBECONFIG1 apply -f https://raw.githubusercontent.com/metallb/metallb/v0.12.1/manifests/namespace.yaml
  kubectl --kubeconfig=$KUBECONFIG1 apply -f https://raw.githubusercontent.com/metallb/metallb/v0.12.1/manifests/metallb.yaml
  cat > metallb-config.yaml <<EOF
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - $CLUSTER1_CIDR
EOF
  kubectl --kubeconfig=$KUBECONFIG1 apply -f metallb-config.yaml
  kubectl --kubeconfig=$KUBECONFIG1 wait --for=condition=ready --timeout=5m pod -l app=metallb -n metallb-system
fi
```

Apply MetalLB to the second cluster:
```bash
if [[ ! -z $CLUSTER2_CIDR ]]; then
  kubectl --kubeconfig=$KUBECONFIG2 apply -f https://raw.githubusercontent.com/metallb/metallb/v0.12.1/manifests/namespace.yaml
  kubectl --kubeconfig=$KUBECONFIG2 apply -f https://raw.githubusercontent.com/metallb/metallb/v0.12.1/manifests/metallb.yaml
  cat > metallb-config.yaml <<EOF
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - $CLUSTER2_CIDR
EOF
  kubectl --kubeconfig=$KUBECONFIG2 apply -f metallb-config.yaml
  kubectl --kubeconfig=$KUBECONFIG2 wait --for=condition=ready --timeout=5m pod -l app=metallb -n metallb-system
fi
```
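
If MetalLB was installed (i.e. the CIDR variables were set), you can optionally confirm that its controller and speaker pods are running:
```bash
kubectl --kubeconfig=$KUBECONFIG1 get pods -n metallb-system
kubectl --kubeconfig=$KUBECONFIG2 get pods -n metallb-system
```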

## Cleanup

Delete the `metallb-system` namespace from both clusters:

```bash
if [[ ! -z $CLUSTER1_CIDR ]]; then
  kubectl --kubeconfig=$KUBECONFIG1 delete ns metallb-system
fi
```

```bash
if [[ ! -z $CLUSTER2_CIDR ]]; then
  kubectl --kubeconfig=$KUBECONFIG2 delete ns metallb-system
fi
```
18 changes: 18 additions & 0 deletions examples/nsm_consul/networkservice.yaml
@@ -0,0 +1,18 @@
---
apiVersion: networkservicemesh.io/v1
kind: NetworkService
metadata:
  name: autoscale-consul-proxy
  namespace: nsm-system
spec:
  payload: IP
  matches:
    - source_selector:
      fallthrough: true
      routes:
        - destination_selector:
            podName: "{{ .podName }}"
    - source_selector:
      routes:
        - destination_selector:
            any: "true"
1 change: 1 addition & 0 deletions examples/nsm_consul/nse-auto-scale/iptables-map
@@ -0,0 +1 @@
-I PREROUTING 1 -p tcp -i {{ .NsmInterfaceName }} -j DNAT --to-destination 127.0.0.1
20 changes: 20 additions & 0 deletions examples/nsm_consul/nse-auto-scale/kustomization.yaml
@@ -0,0 +1,20 @@
---
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

bases:
  - https://github.com/networkservicemesh/deployments-k8s/apps/nse-supplier-k8s?ref=b4bddacfa45fafb7c15a769a1fc0f319e63d6a8d

patchesStrategicMerge:
  - patch-supplier.yaml

configMapGenerator:
  - name: supplier-pod-template-configmap
    files:
      - pod-template.yaml
  - name: iptables-map
    files:
      - iptables-map

generatorOptions:
  disableNameSuffixHash: true