Citrix k8s node controller is deployed as a pod in the Kubernetes cluster and creates the network between the Kubernetes cluster and the Ingress Citrix ADC.
Note: Citrix k8s node controller currently works only with Flannel as the Container Network Interface (CNI). The scope of Citrix k8s node controller can be extended to other CNIs.
In Kubernetes environments, when you expose services for external access through the Ingress device, you must configure the network between the Kubernetes nodes and the Ingress device appropriately so that traffic can be routed into the cluster. Configuring this network is challenging because pods use private IP addresses assigned by the CNI framework. Without proper network configuration, the Ingress device cannot reach these private IP addresses, and manually configuring the network to ensure such reachability is cumbersome in Kubernetes environments.
Citrix k8s node controller addresses this problem by automating the network configuration between the Kubernetes cluster and the Ingress Citrix ADC.
The following diagram provides the high-level architecture of the Citrix k8s node controller:
The following are the main components of the Citrix k8s node controller:
Ingress Interface

The Ingress Interface component is responsible for the interaction with Citrix ADC through the NITRO REST API. It maintains the NITRO sessions and invokes them when required.

K8s Interface

The K8s Interface component interacts with the Kubernetes API server through the K8s Go client. It ensures the availability of the client and maintains a healthy client session.

Node Watcher

The Node Watcher component monitors node events through the K8s Interface. It responds to node events such as node addition, deletion, or modification with its callback functions.

Input Feeder

The Input Feeder component provides inputs to the Config Decider. Some of the inputs are auto-detected and the rest are taken from the Citrix k8s node controller deployment YAML file.

Config Decider

The Config Decider component takes inputs from both the Node Watcher and the Input Feeder. Using these inputs, it decides the best network automation required between the cluster and Citrix ADC.

Core

The Core component interacts with the Node Watcher and updates the corresponding config engine. It is responsible for starting the best config engine for the corresponding cluster.

Config Maps

The Config Maps component controls the Citrix k8s node controller. It allows you to define how the Citrix k8s node controller automatically creates, applies, and deletes the routing configuration on Citrix ADC.

Citrix k8s node controller monitors node events and establishes a route between the cluster nodes and Citrix ADC using VXLAN. When a new node joins the cluster, Citrix k8s node controller adds a route on the Citrix ADC. Similarly, when a node leaves the cluster, Citrix k8s node controller removes the associated route from the Citrix ADC. Citrix k8s node controller uses a VXLAN overlay between the Kubernetes cluster and Citrix ADC for service routing.
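To make the node-watching flow concrete, the following Go sketch (not the actual controller code) shows how a node watcher built on the Kubernetes Go client (client-go) might react to node add and delete events. The print statements are placeholders for the routing actions; in the real controller, they correspond to NITRO calls that add or remove routes on the Citrix ADC. The informer setup and resync interval shown here are assumptions made for illustration.

```go
// Minimal sketch of a node watcher using client-go. Placeholder prints stand
// in for the NITRO-based route updates that Citrix k8s node controller performs.
package main

import (
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/cache"
)

func main() {
	// In-cluster configuration, as used when the controller runs as a pod.
	cfg, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Shared informer on Node objects; the 30-second resync is an arbitrary choice.
	factory := informers.NewSharedInformerFactory(clientset, 30*time.Second)
	nodeInformer := factory.Core().V1().Nodes().Informer()

	nodeInformer.AddEventHandler(cache.ResourceEventHandlerFuncs{
		AddFunc: func(obj interface{}) {
			node := obj.(*corev1.Node)
			// Placeholder: here the controller would add a route for this
			// node's pod network on the Citrix ADC.
			fmt.Printf("node added: %s (PodCIDR %s)\n", node.Name, node.Spec.PodCIDR)
		},
		DeleteFunc: func(obj interface{}) {
			node, ok := obj.(*corev1.Node)
			if !ok {
				return // may be a cache.DeletedFinalStateUnknown tombstone
			}
			// Placeholder: here the controller would remove the route
			// associated with this node from the Citrix ADC.
			fmt.Printf("node deleted: %s\n", node.Name)
		},
	})

	stop := make(chan struct{})
	defer close(stop)
	factory.Start(stop)
	factory.WaitForCacheSync(stop)
	select {} // keep watching for node events
}
```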
Citrix k8s node controller can be used in the following two ways:
- Inside the cluster - In this configuration, the Citrix k8s node controller is run as a pod.
- Outside the cluster - In this configuration, the Citrix k8s node controller is run as a process.
Important: Citrix recommends the Inside the cluster configuration for production environments and the Outside the cluster configuration for development environments.
Refer to the deployment page for running Citrix k8s node controller as a pod inside the Kubernetes cluster.
Before you deploy the citrix-k8s-node-controller package, ensure that the Go package is installed.
Perform the following:
- Download or clone the citrix-k8s-node-controller package.

- Navigate to the build directory and start the citrix-k8s-node-controller using the following command:

  make run

- Deploy the config map using the following command:

  kubectl apply -f https://raw.githubusercontent.com/citrix/citrix-k8s-node-controller/master/deploy/config_map.yaml
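After these steps, you can confirm that the config map exists by running kubectl get configmap in the namespace where it was created. The Citrix k8s node controller then uses the config map, as described earlier, to create, apply, and delete the routing configuration on Citrix ADC.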
For questions and support, the following channels are available:
Describe the issue in detail, collect the logs, and use the discussion forum to raise the issue.
This project adheres to the Kubernetes Community Code of Conduct. By participating in this project you agree to abide by its terms.