Simple exercise to simulate how istio-proxy-nse will work #2

edwarnicke opened this issue May 9, 2022 · 1 comment
This proof of concept creates a 'pseudo-NSC' so we can observe the behavior of the approach
when the appropriate iptables rules are added. Its purpose is to demonstrate that the additional iptables rules, added on top of what the istio-proxy normally provides, do in fact allow a workload connected over a vWire to an istio-proxy-nse Pod to communicate as expected. Please note: this does not demonstrate the DNS part of the solution.

Install Istio and Bookinfo example

Follow the istio kind install instructions:

kind create cluster --name istio-testing
kubectl config use-context kind-istio-testing

and then install istio:

istioctl install --set profile=demo -y
kubectl label namespace default istio-injection=enabled
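Before moving on, it may help to verify the install. A minimal check, assuming the demo profile completed normally (pod names will vary by version):

```shell
# istiod plus the ingress/egress gateways should be Running in istio-system.
kubectl get pods -n istio-system

# The default namespace should now carry the istio-injection=enabled label.
kubectl get namespace default --show-labels
```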

Install the sample application:

kubectl apply -f https://raw.githubusercontent.com/istio/istio/release-1.13/samples/bookinfo/platform/kube/bookinfo.yaml
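To confirm the sample is up before proceeding, a quick check along the lines of the upstream Bookinfo instructions (the `app=ratings` label and the "Simple Bookstore App" title come from the Istio sample; adjust if your versions differ):

```shell
# All Bookinfo pods should eventually reach Running with 2/2 containers.
kubectl get pods

# Verify the app serves pages from inside the mesh; the Bookinfo sample's
# productpage title is "Simple Bookstore App".
kubectl exec "$(kubectl get pod -l app=ratings -o jsonpath='{.items[0].metadata.name}')" \
  -c ratings -- curl -sS productpage:9080/productpage | grep -o "<title>.*</title>"
```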

Create a simple privileged ubuntu Pod, and a Service for it

Create ubuntu.yaml:

cat > ubuntu.yaml <<EOF
---
apiVersion: v1
kind: Pod
metadata:
  name: ubuntu
  labels:
    app: ubuntu
spec:
  containers:
    - image: ubuntu
      command:
        - "sleep"
        - "604800"
      imagePullPolicy: IfNotPresent
      name: ubuntu
      securityContext:
        privileged: true
---
apiVersion: v1
kind: Service
metadata:
  name: ubuntu
spec:
  ports:
    - name: http
      port: 80
      protocol: TCP
      targetPort: 8000
  selector:
    app: ubuntu
EOF
kubectl apply -f ./ubuntu.yaml
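Once applied, a sanity check worth doing (this assumes sidecar injection is enabled for the default namespace, as labeled earlier):

```shell
# Wait for the pod, then confirm the sidecar was injected: with injection
# enabled, the container list should include both ubuntu and istio-proxy.
kubectl wait --for=condition=Ready pod/ubuntu --timeout=120s
kubectl get pod ubuntu -o jsonpath='{.spec.containers[*].name}'
```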

Set up the ubuntu server

Get a shell to ubuntu

kubectl exec --stdin --tty ubuntu -- /bin/bash

Install some software:

apt-get update && apt-get install -y iptables netcat iproute2 iputils-ping dnsutils curl python3

Create a secondary netns and a connecting veth pair:

NSM_INTERFACE=veth1
NSM_SRC_IP=10.0.1.2
NSM_DST_IP=10.0.1.1
NSC_NETNS=nsc
ip netns add ${NSC_NETNS}
ip link add ${NSM_INTERFACE} type veth peer name ${NSM_INTERFACE}-peer
ip link set ${NSM_INTERFACE}-peer netns ${NSC_NETNS}
ip addr add ${NSM_DST_IP}/24 dev ${NSM_INTERFACE}
ip link set ${NSM_INTERFACE} up
ip -n ${NSC_NETNS} addr add ${NSM_SRC_IP}/24 dev ${NSM_INTERFACE}-peer
ip -n ${NSC_NETNS} link set ${NSM_INTERFACE}-peer up
ip -n ${NSC_NETNS} link set lo up
ip -n ${NSC_NETNS} route add default via ${NSM_DST_IP}
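A quick way to confirm the veth pair is wired correctly before touching iptables (run inside the ubuntu pod, using the variables set above):

```shell
# The peer end should carry 10.0.1.2/24 inside the nsc netns...
ip -n ${NSC_NETNS} addr show ${NSM_INTERFACE}-peer

# ...and plain IP connectivity should work in both directions.
ip netns exec ${NSC_NETNS} ping -c 3 ${NSM_DST_IP}
ping -c 3 ${NSM_SRC_IP}
```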

Augment the iptables rules:

The ubuntu Pod will already be wired up with iptables rules for Istio.
These additional iptables rules cause Istio to treat things running in ${NSC_NETNS}
as if they were running locally, thus simulating an NSC connecting over a vWire to an istio-proxy-nse.

iptables-legacy -t nat -N NSM_PREROUTE
iptables-legacy -t nat -A NSM_PREROUTE -j ISTIO_REDIRECT
iptables-legacy -t nat -I PREROUTING 1 -p tcp -i ${NSM_INTERFACE} -j NSM_PREROUTE
iptables-legacy -t nat -N NSM_OUTPUT
iptables-legacy -t nat -A NSM_OUTPUT -j DNAT --to-destination ${NSM_SRC_IP}
iptables-legacy -t nat -A OUTPUT -p tcp -s 127.0.0.6 -j NSM_OUTPUT
iptables-legacy -t nat -N NSM_POSTROUTING
iptables-legacy -t nat -A NSM_POSTROUTING -j SNAT --to-source ${NSM_DST_IP}
iptables-legacy -t nat -A POSTROUTING -p tcp -o ${NSM_INTERFACE} -j NSM_POSTROUTING
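To double-check that the rules landed where intended, the new chains can be listed with the same iptables-legacy binary used above:

```shell
# Print the rules in each custom chain, and confirm NSM_PREROUTE sits first
# in PREROUTING so it matches before Istio's own rules.
iptables-legacy -t nat -S NSM_PREROUTE
iptables-legacy -t nat -S NSM_OUTPUT
iptables-legacy -t nat -S NSM_POSTROUTING
iptables-legacy -t nat -L PREROUTING -n --line-numbers
```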

The NSM_OUTPUT rule produces a packet with src=127.0.0.6 and dst=${NSM_SRC_IP}.
For that packet to survive the routing process (so the source can be fixed up in NSM_POSTROUTING),
we need to enable route_localnet on ${NSM_INTERFACE}:

echo 1 >  /proc/sys/net/ipv4/conf/${NSM_INTERFACE}/route_localnet
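The setting can be read back to confirm it took effect (a plain procfs read; route_localnet is a standard per-interface knob on Linux):

```shell
# Should print 1 once enabled.
cat /proc/sys/net/ipv4/conf/${NSM_INTERFACE}/route_localnet
```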

Check to see if the ${NSC_NETNS} can reach istio services

Because we are not passing DNS through yet in this example, we will capture PRODUCT_PAGE_IP directly:

PRODUCT_PAGE_IP=$(dig +short productpage.default.svc.cluster.local)
ip netns exec ${NSC_NETNS} curl -sS ${PRODUCT_PAGE_IP}:9080/productpage | grep -o "<title>.*</title>"

We've now shown that our 'simulated' NSC can in fact reach services over istio.

Check to see if services exposed in ${NSC_NETNS} can be reached via istio.

Now we need to check that we can expose services from our simulated NSC via istio.

Start a simple web server in ${NSC_NETNS} (python3's http.server listens on port 8000 by default, which matches the targetPort of the ubuntu Service):

cat > testfile <<EOF
This is a testfile
EOF
ip netns exec ${NSC_NETNS} python3 -m http.server

In a separate terminal window run (curl hits the ubuntu Service on port 80, which forwards to the web server on port 8000):

kubectl run curl -it --rm --image=osexp2000/ubuntu-with-utils
curl -s -S ubuntu/testfile
exit
@denis-tingaikin

@rejmond As far as I know, we have already tested this successfully. Is that correct?
