This proof of concept creates a 'pseudo-NSC' so that we can observe the behavior of the approach
when the appropriate iptables rules are added. Its purpose is to demonstrate that the additional iptables rules, added on top of what is normally provided by the istio-proxy, do in fact allow a workload connected over a vWire to an istio-proxy-nse Pod to communicate as expected. Please note: this does not demonstrate the DNS part of the solution.
Create a secondary netns and connecting veth pair:
NSM_INTERFACE=veth1
NSM_SRC_IP=10.0.1.2
NSM_DST_IP=10.0.1.1
NSC_NETNS=nsc
ip netns add ${NSC_NETNS}
ip link add ${NSM_INTERFACE} type veth peer name ${NSM_INTERFACE}-peer
ip link set ${NSM_INTERFACE}-peer netns ${NSC_NETNS}
ip addr add ${NSM_DST_IP}/24 dev ${NSM_INTERFACE}
ip link set ${NSM_INTERFACE} up
ip -n ${NSC_NETNS} addr add ${NSM_SRC_IP}/24 dev ${NSM_INTERFACE}-peer
ip -n ${NSC_NETNS} link set ${NSM_INTERFACE}-peer up
ip -n ${NSC_NETNS} link set lo up
ip -n ${NSC_NETNS} route add default via ${NSM_DST_IP}
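As a quick sanity check, the link can be verified from inside the new netns. A minimal sketch, wrapped in a hypothetical helper (the function name `check_vwire` is not from the original; it assumes the variables above, the iputils-ping package installed later in this walkthrough, and root inside the Pod):

```shell
# Hypothetical helper, assuming the NSM_* variables defined above.
# Requires root and iputils-ping inside the ubuntu Pod.
NSC_NETNS=${NSC_NETNS:-nsc}
NSM_DST_IP=${NSM_DST_IP:-10.0.1.1}

check_vwire() {
    # Ping the host side of the veth pair from inside the pseudo-NSC netns.
    ip netns exec "${NSC_NETNS}" ping -c 3 -W 2 "${NSM_DST_IP}"
}
# check_vwire   # uncomment to run inside the ubuntu Pod
```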
Augment iptables Rules:
The ubuntu Pod will already be wired up with iptables rules for Istio.
These additional iptables rules will cause Istio to treat things running in the ${NSC_NETNS}
as if they were running locally, thus simulating an NSC connecting over a vWire to an istio-proxy-nse.
Because the NSM_OUTPUT rule will result in a packet with src=127.0.0.6 and dst=${NSM_SRC_IP},
we need to enable route_localnet for the ${NSM_INTERFACE} so that the packet
survives the routing process and can then be fixed up in NSM_POSTROUTING:
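The original rule set itself is not reproduced here; a minimal sketch of just the route_localnet toggle, wrapped in a hypothetical helper (the function name is an assumption, not from the original; requires root inside the Pod):

```shell
# Assumes NSM_INTERFACE from the setup above; requires root.
NSM_INTERFACE=${NSM_INTERFACE:-veth1}

enable_route_localnet() {
    # Allow 127.0.0.0/8 addresses to be routed on this interface, so the
    # src=127.0.0.6 packet produced by NSM_OUTPUT is not dropped as a
    # martian before NSM_POSTROUTING can rewrite it.
    sysctl -w net.ipv4.conf.${NSM_INTERFACE}.route_localnet=1
}
# enable_route_localnet   # uncomment to run inside the ubuntu Pod
```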
Install Istio and Bookinfo example
Follow the istio kind install instructions
and then install istio
Install the sample application:
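The original commands are not preserved above; a sketch following the standard Istio Bookinfo walkthrough (the release branch in the URL is an assumption — match it to your Istio version):

```shell
# Sketch based on the standard Istio Bookinfo instructions; the release
# branch in the URL is an assumption -- use the one matching your install.
install_bookinfo() {
    # Enable sidecar injection in the default namespace, then deploy Bookinfo.
    kubectl label namespace default istio-injection=enabled
    kubectl apply -f https://raw.githubusercontent.com/istio/istio/release-1.17/samples/bookinfo/platform/kube/bookinfo.yaml
}
# install_bookinfo   # run against your kind cluster
```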
Create a simple ubuntu container with privilege, and a service for it
Create ubuntu.yaml:
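The original manifest is not preserved above; a hypothetical reconstruction of what it needs — a privileged ubuntu Pod kept alive with sleep, plus a Service named `ubuntu` in front of it (the labels, image tag, and ports are assumptions):

```shell
# Hypothetical reconstruction of ubuntu.yaml (labels/image/ports assumed):
# a privileged ubuntu Pod plus a Service named "ubuntu".
cat > ubuntu.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: ubuntu
  labels:
    app: ubuntu
spec:
  containers:
  - name: ubuntu
    image: ubuntu:20.04
    command: ["sleep", "infinity"]
    securityContext:
      privileged: true
---
apiVersion: v1
kind: Service
metadata:
  name: ubuntu
spec:
  selector:
    app: ubuntu
  ports:
  - port: 80
    targetPort: 8080
EOF
# kubectl apply -f ubuntu.yaml
```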
Setup the ubuntu server
Get a shell to ubuntu
kubectl exec --stdin --tty ubuntu -- /bin/bash
Install some software:
apt-get update && apt-get install -y iptables netcat iproute2 iputils-ping dnsutils curl python3
Check to see if the ${NSC_NETNS} can reach Istio services
Because we are not passing DNS yet in this example, we will capture the PRODUCT_PAGE_IP:
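The original commands are not preserved above; one way to sketch this step (the `productpage` service name and port 9080 come from the standard Bookinfo sample; the helper name is an assumption):

```shell
# Sketch: capture the productpage ClusterIP by hand, then curl it from the
# pseudo-NSC netns. productpage:9080 is the standard Bookinfo service/port.
check_productpage() {
    PRODUCT_PAGE_IP=$(kubectl get svc productpage -o jsonpath='{.spec.clusterIP}')
    ip netns exec "${NSC_NETNS:-nsc}" curl -s "http://${PRODUCT_PAGE_IP}:9080/productpage"
}
# check_productpage   # run inside the ubuntu Pod
```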
So we've now shown that our 'simulated' NSCs can in fact address services over Istio.
Check to see if services exposed in the ${NSC_NETNS} can be reached via Istio.
Now we need to check that we can expose services from our simulated NSCs via Istio.
So start a simple webserver in the ${NSC_NETNS}:
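The original command is not preserved above; a minimal sketch using the python3 installed earlier (the port, directory, and `testfile` contents are assumptions; requires root inside the Pod):

```shell
# Sketch: serve a test file from inside the pseudo-NSC netns using the
# python3 installed earlier. Port 8080 and the file contents are assumed.
start_nsc_webserver() {
    ip netns exec "${NSC_NETNS:-nsc}" sh -c \
        'mkdir -p /tmp/www && echo hello > /tmp/www/testfile && cd /tmp/www && python3 -m http.server 8080'
}
# start_nsc_webserver   # run inside the ubuntu Pod; leave it running
```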
In a separate terminal window run:
kubectl run curl -it --rm --image=osexp2000/ubuntu-with-utils
curl -s -S ubuntu/testfile
exit