This repository has been archived by the owner on Mar 23, 2024. It is now read-only.

ProductCatalog Canary Deployment (GKE / Istio)

This demo accompanies a GCP Blog Post on managing application deployments with Istio and Stackdriver.

Introduction

In this example, we will learn how to use Istio's traffic-splitting features to perform a canary deployment on Google Kubernetes Engine.

In this sample, productcatalogservice-v2 introduces a 3-second latency into all server requests. We’ll show how to use Stackdriver and Istio together to view the latency difference between the existing productcatalog deployment and the slower v2 deployment.

Setup

Google Cloud Shell is a browser-based terminal that Google provides to interact with your GCP resources. It is backed by a free Compute Engine instance that comes with many useful tools already installed, including everything required to run this demo.

Click the button below to open the demo instructions in your Cloud Shell:

Open in Cloud Shell

Create a GKE Cluster

1. From Cloud Shell, enable the Kubernetes Engine API:

        gcloud services enable container.googleapis.com

2. Create a GKE cluster:

        gcloud beta container clusters create istio-canary \
            --zone=us-central1-f \
            --machine-type=n1-standard-2 \
            --num-nodes=4

3. From the root of this repository, change into the Istio install directory:

        cd common/

4. Install Istio on the cluster:

        ./install_istio.sh

5. Once the cluster is ready, ensure that Istio is running:

        kubectl get pods -n istio-system

        NAME                                   READY   STATUS    RESTARTS   AGE
        grafana-556b649566-fw67z               1/1     Running   0          5m24s
        istio-ingressgateway-fc6c9d9df-nmndg   1/1     Running   0          5m30s
        istio-tracing-7cf5f46848-qksxq         1/1     Running   0          5m24s
        istiod-7b5d6db6b6-b457p                1/1     Running   0          5m48s
        kiali-b4b5b4fb8-hwm42                  1/1     Running   0          5m23s
        prometheus-558b665bb7-5v647            2/2     Running   0          5m23s

Deploy the Sample App

1. Deploy the microservices-demo application, and add a version=v1 label to the productcatalog deployment:

        kubectl apply -f https://raw.githubusercontent.com/GoogleCloudPlatform/microservices-demo/master/release/kubernetes-manifests.yaml
        kubectl apply -f https://raw.githubusercontent.com/GoogleCloudPlatform/microservices-demo/master/release/istio-manifests.yaml
        kubectl delete serviceentry allow-egress-google-metadata
        kubectl delete serviceentry allow-egress-googleapis
        kubectl patch deployments/productcatalogservice -p '{"spec":{"template":{"metadata":{"labels":{"version":"v1"}}}}}'

2. Using kubectl get pods, verify that all pods are Running and Ready.

At this point, ProductCatalog v1 is deployed to the cluster, along with the rest of the demo microservices. You can reach the Hipstershop frontend at the EXTERNAL_IP shown in the output of this command:

    kubectl get svc -n istio-system istio-ingressgateway

Deploy ProductCatalog v2

1. cd into the example directory:

        cd istio-canary-gke/

2. Create an Istio DestinationRule for productcatalogservice:

        kubectl apply -f canary/destinationrule.yaml
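The contents of canary/destinationrule.yaml are not reproduced here, but a DestinationRule that distinguishes the two ProductCatalog versions by their version label would look roughly like this (a sketch; the metadata name is an assumption):

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: productcatalogservice
spec:
  host: productcatalogservice   # the Kubernetes Service name
  subsets:
  - name: v1
    labels:
      version: v1   # matches the label patched onto the v1 deployment
  - name: v2
    labels:
      version: v2   # matches the label on the v2 deployment
```

The subsets defined here are what a VirtualService later references to direct a weighted share of traffic to each version.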
3. Deploy productcatalog v2:

        kubectl apply -f canary/productcatalog-v2.yaml

4. Using kubectl get pods, verify that the v2 pod is Running:

        productcatalogservice-v2-79459dfdff-6qdh4   2/2       Running   0          1m

5. Create an Istio VirtualService to split incoming productcatalog traffic between v1 (75%) and v2 (25%):

        kubectl apply -f canary/vs-split-traffic.yaml
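A 75/25 traffic split in canary/vs-split-traffic.yaml would generally be expressed with weighted routes referencing the DestinationRule subsets; a sketch (the metadata name is an assumption):

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: productcatalogservice
spec:
  hosts:
  - productcatalogservice
  http:
  - route:
    - destination:
        host: productcatalogservice
        subset: v1
      weight: 75   # 75% of requests stay on the stable version
    - destination:
        host: productcatalogservice
        subset: v2
      weight: 25   # 25% of requests are canaried to v2
```

Adjusting the weight fields (which must sum to 100) is how the canary's share of traffic is gradually increased or rolled back.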
6. In a web browser, navigate again to the Hipstershop frontend.

7. Refresh the homepage a few times. You should notice that, periodically, the frontend is slower to load. Let's explore ProductCatalog's latency with Stackdriver.

View traffic splitting in Kiali

1. Open the Kiali dashboard:

        istioctl dashboard kiali &

2. Navigate to Service Graph > namespace: default.

3. Select "Versioned App Graph."

4. In the service graph, zoom in on productcatalogservice. You should see that approximately 25% of productcatalog requests are going to v2.

(screenshot: Kiali versioned app graph)

Observe Latency with Stackdriver

1. Navigate to Stackdriver Monitoring.
2. Create a Stackdriver Workspace for your GCP project (instructions).
3. From your new Stackdriver Workspace, navigate to Resources > Metrics Explorer in the left sidebar.

(screenshot: Stackdriver sidebar)

4. From Metrics Explorer, enter the following parameters on the left side of the window:

    - Resource type: Kubernetes Container
    - Metric: Server Response Latencies (istio.io/service/server/response_latencies)
    - Group by: destination_workload_name
    - Aggregator: 50th percentile

5. In the menu bar of the chart on the right, choose the Line plot type.

6. Once the latency chart renders, you should see productcatalog-v2 as an outlier, with median latencies hovering around 3 seconds. This is the value of EXTRA_LATENCY injected into v2.

(screenshot: Metrics Explorer)

You'll also notice that other services (such as frontend) show irregular latency spikes. This is because the frontend depends on ProductCatalog, and 25% of its requests are routed through the slower v2 deployment.

(screenshot: v2 latency chart)

Rollback

1. Return 100% of productcatalog traffic to v1:

        kubectl apply -f canary/rollback.yaml
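The rollback manifest in canary/rollback.yaml is not shown here; conceptually, it is a VirtualService with a single route sending all traffic to the v1 subset, roughly like this sketch (metadata name assumed):

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: productcatalogservice
spec:
  hosts:
  - productcatalogservice
  http:
  - route:
    - destination:
        host: productcatalogservice
        subset: v1
      weight: 100   # all traffic returns to the stable version
```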
2. Finally, remove v2:

        kubectl delete -f canary/productcatalog-v2.yaml

Cleanup

To avoid incurring additional billing costs, delete the GKE cluster.

    gcloud container clusters delete istio-canary --zone us-central1-f

Learn More