Module 3 - Deploy an AWS EKS cluster using Calico CNI

  1. Create the AWS EKS cluster connected to the two subnets designated for it in the previous step.

    eksctl create cluster \
      --name $CLUSTERNAME \
      --region $REGION \
      --version $K8SVERSION \
      --vpc-public-subnets $SUBNETPUBEKS1AID,$SUBNETPUBEKS1BID \
      --without-nodegroup
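    Cluster creation takes several minutes. If you want to confirm the control plane is ready before continuing, an optional check like the one below (a minimal sketch using the same variables exported earlier in this lab) should return ACTIVE.

    # Optional: confirm the EKS control plane status.
    aws eks describe-cluster \
      --name $CLUSTERNAME \
      --region $REGION \
      --query 'cluster.status' \
      --output text \
      --no-cli-pager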
  2. Uninstall the AWS VPC CNI and install Calico CNI.

    To install Calico CNI, we first need to remove the AWS VPC CNI and then install Calico. For further information about installing Calico CNI on AWS EKS, please refer to the Project Calico documentation.

    Steps

    • Uninstall the AWS VPC CNI

      kubectl delete daemonset -n kube-system aws-node
    • Install Calico CNI

      kubectl create -f https://raw.githubusercontent.com/projectcalico/calico/v3.25.1/manifests/tigera-operator.yaml
    • Create the installation configuration.

      kubectl create -f - <<EOF
      kind: Installation
      apiVersion: operator.tigera.io/v1
      metadata:
        name: default
      spec:
        kubernetesProvider: EKS
        cni:
          type: Calico
        calicoNetwork:
          bgp: Disabled
      EOF
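    Before moving on, you can optionally sanity-check the swap. Note that the operator pod will stay Pending until worker nodes are created in the next step; the resource names below assume the default objects created by the manifests applied above.

    # The AWS VPC CNI daemonset should no longer exist (expect a NotFound error).
    kubectl get daemonset aws-node -n kube-system
    # The Tigera operator deployment is created by tigera-operator.yaml;
    # its pod cannot be scheduled until nodes exist.
    kubectl get deployment tigera-operator -n tigera-operator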
  3. Create the nodegroup and the nodes. Two nodes are enough to demonstrate the concept.

    eksctl create nodegroup $CLUSTERNAME-ng \
      --cluster $CLUSTERNAME \
      --region $REGION \
      --node-type $INSTANCETYPE \
      --nodes 2 \
      --nodes-min 0 \
      --nodes-max 2 \
      --max-pods-per-node 100 \
      --ssh-access \
      --ssh-public-key $KEYPAIRNAME

    After the node group and its nodes are created, the AWS resources should look like the following diagram:

    (Diagram: egress-gateway-v0.0.2-NodeGroups)
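    Optionally, confirm that both nodes joined the cluster and that the operator finished rolling out Calico. The calico-system namespace and the tigerastatus resource used below assume the default operator-based installation performed in the previous step.

    # Nodes should report STATUS Ready once Calico networking is up.
    kubectl get nodes -o wide
    # Calico components created by the operator.
    kubectl get pods -n calico-system
    # Overall Calico status reported by the operator.
    kubectl get tigerastatus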

  4. Install the EBS driver for the EKS cluster.

    # install EBS driver
    kubectl apply -k "github.com/kubernetes-sigs/aws-ebs-csi-driver/deploy/kubernetes/overlays/stable/?ref=release-1.12"
    # check driver pods status
    kubectl get pods -n kube-system -w | grep -i ebs-csi
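    As an alternative to watching the pods, you can wait for the rollouts to finish. The workload names below (ebs-csi-controller and ebs-csi-node) are the ones the stable overlay is expected to create; adjust them if your manifest version differs.

    # Optional: block until the EBS CSI controller and node agents are ready.
    kubectl rollout status deployment/ebs-csi-controller -n kube-system
    kubectl rollout status daemonset/ebs-csi-node -n kube-system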
  5. Attach the tigera-egw-policy to the nodegroup's role.

    • Retrieve the nodegroup role name.

      export NGROLENAME=$(aws eks describe-nodegroup \
        --cluster-name $CLUSTERNAME \
        --nodegroup-name $CLUSTERNAME-ng \
        --query 'nodegroup.nodeRole' \
        --region $REGION \
        --output text \
        --no-cli-pager \
        | awk -F "/" '{print $2}') && echo $NGROLENAME
      # Persist for later sessions in case of disconnection.
      echo export NGROLENAME=$NGROLENAME >> ~/egwLabVars.env
    • Attach the tigera-egw-policy to the nodegroup's role.

      aws iam attach-role-policy \
        --role-name $NGROLENAME \
        --policy-arn $TIGERAEGWPOLICYARN 
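    • Optionally, verify that the policy is now attached to the node role (a quick check using the variables exported above).

      # Should list the tigera-egw-policy among the attached policies.
      aws iam list-attached-role-policies \
        --role-name $NGROLENAME \
        --output table \
        --no-cli-pager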

➡️ Module 4 - Connect the AWS EKS cluster to Calico Cloud

⬅️ Module 2 - Getting Started
↩️ Back to Main