diff --git a/README.md b/README.md index 7b913a8..3b24c07 100644 --- a/README.md +++ b/README.md @@ -3,8 +3,12 @@ For a more in depth best practices guide, go to the solution posted [here](https://cloud.google.com/solutions/jenkins-on-container-engine). ## Introduction -This guide will take you through the steps necessary to continuously deliver your software to end users by leveraging [Google Container Engine](https://cloud.google.com/container-engine/) and [Jenkins](https://jenkins.io) to orchestrate the software delivery pipeline. -If you are not familiar with basic Kubernetes concepts, have a look at [Kubernetes 101](http://kubernetes.io/docs/user-guide/walkthrough/). + +This guide will take you through the steps necessary to continuously deliver +your software to end users by leveraging [Google Container Engine](https://cloud.google.com/container-engine/) +and [Jenkins](https://jenkins.io) to orchestrate the software delivery pipeline. +If you are not familiar with basic Kubernetes concepts, have a look at +[Kubernetes 101](http://kubernetes.io/docs/user-guide/walkthrough/). In order to accomplish this goal you will use the following Jenkins plugins: - [Jenkins Kubernetes Plugin](https://wiki.jenkins-ci.org/display/JENKINS/Kubernetes+Plugin) - start Jenkins build executor containers in the Kubernetes cluster when builds are requested, terminate those containers when builds complete, freeing resources up for the rest of the cluster @@ -18,85 +22,230 @@ In order to deploy the application with [Kubernetes](http://kubernetes.io/) you - [Secrets](http://kubernetes.io/docs/user-guide/secrets/) - secure storage of non public configuration information, SSL certs specifically in our case ## Prerequisites + 1. A Google Cloud Platform Account 1. [Enable the Compute Engine, Container Engine, and Container Builder APIs](https://console.cloud.google.com/flows/enableapi?apiid=compute_component,container,cloudbuild.googleapis.com) ## Do this first -In this section you will start your [Google Cloud Shell](https://cloud.google.com/cloud-shell/docs/) and clone the lab code repository to it. + +In this section you will start your [Google Cloud Shell](https://cloud.google.com/cloud-shell/docs/) +and clone the lab code repository to it. 1. Create a new Google Cloud Platform project: [https://console.developers.google.com/project](https://console.developers.google.com/project) -1. Click the Google Cloud Shell icon in the top-right and wait for your shell to open: +1. Click the Activate Cloud Shell icon in the top-right and wait for your shell to open. - ![](docs/img/cloud-shell.png) + ![](docs/img/cloud-shell.png) - ![](docs/img/cloud-shell-prompt.png) + > If you are prompted with a _Learn more_ message, click __Continue__ to + > finish opening the Cloud Shell. -1. When the shell is open, set your default compute zone: +1. When the shell is open, use the [gcloud](https://cloud.google.com/sdk/) + command line interface tool to set your default compute zone: - ```shell - $ gcloud config set compute/zone us-east1-d - ``` + ![](docs/img/cloud-shell-prompt.png) + + ```shell + gcloud config set compute/zone us-east1-d + ``` + + Output (do not copy): + + ```output + Updated property [compute/zone]. + ``` + +1. Set an environment variable with your project: + + ```shell + export GOOGLE_CLOUD_PROJECT=$(gcloud config get-value project) + ``` + + Output (do not copy): + + ```output + Your active configuration is: [cloudshell-...] + ``` 1. 
Clone the lab repository in your cloud shell, then `cd` into that dir:

-   ```shell
-   $ git clone https://github.com/GoogleCloudPlatform/continuous-deployment-on-kubernetes.git
-   Cloning into 'continuous-deployment-on-kubernetes'...
-   ...
+   ```shell
+   git clone https://github.com/GoogleCloudPlatform/continuous-deployment-on-kubernetes.git
+   ```

-   $ cd continuous-deployment-on-kubernetes
-   ```
+   Output (do not copy):
+
+   ```output
+   Cloning into 'continuous-deployment-on-kubernetes'...
+   ...
+   ```
+
+   ```shell
+   cd continuous-deployment-on-kubernetes
+   ```
+
+## Create a Service Account with permissions
+
+1. Create a service account on Google Cloud Platform (GCP).
+
+   Create a new, dedicated service account; this is the recommended way to
+   avoid granting Jenkins and the cluster more permissions than they need.
+
+   ```shell
+   gcloud iam service-accounts create jenkins-sa \
+       --display-name "jenkins-sa"
+   ```
+
+   Output (do not copy):
+
+   ```output
+   Created service account [jenkins-sa].
+   ```
+
+1. Add the required permissions to the service account, using predefined roles.
+
+   Most of these permissions are related to Jenkins' use of _Cloud Build_ and
+   to storing/retrieving build artifacts in _Cloud Storage_. The service
+   account also needs to allow the Jenkins agent to read from the repo you
+   will create in _Cloud Source Repositories (CSR)_.
+
+   ```shell
+   gcloud projects add-iam-policy-binding $GOOGLE_CLOUD_PROJECT \
+       --member "serviceAccount:jenkins-sa@$GOOGLE_CLOUD_PROJECT.iam.gserviceaccount.com" \
+       --role "roles/viewer"
+
+   gcloud projects add-iam-policy-binding $GOOGLE_CLOUD_PROJECT \
+       --member "serviceAccount:jenkins-sa@$GOOGLE_CLOUD_PROJECT.iam.gserviceaccount.com" \
+       --role "roles/source.reader"
+
+   gcloud projects add-iam-policy-binding $GOOGLE_CLOUD_PROJECT \
+       --member "serviceAccount:jenkins-sa@$GOOGLE_CLOUD_PROJECT.iam.gserviceaccount.com" \
+       --role "roles/storage.admin"
+
+   gcloud projects add-iam-policy-binding $GOOGLE_CLOUD_PROJECT \
+       --member "serviceAccount:jenkins-sa@$GOOGLE_CLOUD_PROJECT.iam.gserviceaccount.com" \
+       --role "roles/storage.objectAdmin"
+
+   gcloud projects add-iam-policy-binding $GOOGLE_CLOUD_PROJECT \
+       --member "serviceAccount:jenkins-sa@$GOOGLE_CLOUD_PROJECT.iam.gserviceaccount.com" \
+       --role "roles/cloudbuild.builds.editor"
+
+   gcloud projects add-iam-policy-binding $GOOGLE_CLOUD_PROJECT \
+       --member "serviceAccount:jenkins-sa@$GOOGLE_CLOUD_PROJECT.iam.gserviceaccount.com" \
+       --role "roles/container.developer"
+   ```
+
+   You can check the permissions added using __IAM & admin__ in Cloud Console,
+   or from the command line as shown at the end of this section.
+
+   ![](docs/img/jenkins_sa_iam.png)
+
+1. Export the service account credentials to a JSON key file in Cloud Shell:
+
+   ```shell
+   gcloud iam service-accounts keys create ~/jenkins-sa-key.json \
+       --iam-account "jenkins-sa@$GOOGLE_CLOUD_PROJECT.iam.gserviceaccount.com"
+   ```
+
+   Output (do not copy):
+
+   ```output
+   created key [...] of type [json] as [/home/.../jenkins-sa-key.json] for [jenkins-sa@myproject.iam.gserviceaccount.com]
+   ```
+
+1. Download the JSON key file to your local machine.
+
+   Click __Download File__ from __More__ on the Cloud Shell toolbar:
+
+   ![](docs/img/download_file.png)
+
+1. Enter the __File path__ as `jenkins-sa-key.json` and click __Download__.
+
+   The file will be downloaded to your local machine for use later.
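+
+As an optional check, you can also list the roles bound to the new service
+account from the command line. This is a quick sketch using standard `gcloud`
+policy-filtering flags; the console view above shows the same information:
+
+```shell
+# Print only the roles granted to jenkins-sa in this project.
+gcloud projects get-iam-policy $GOOGLE_CLOUD_PROJECT \
+    --flatten="bindings[].members" \
+    --filter="bindings.members:jenkins-sa@$GOOGLE_CLOUD_PROJECT.iam.gserviceaccount.com" \
+    --format="value(bindings.role)"
+```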

## Create a Kubernetes Cluster

-You'll use Google Container Engine to create and manage your Kubernetes cluster. Provision the cluster with `gcloud`:
-
-```shell
-gcloud container clusters create jenkins-cd \
---num-nodes 2 \
---machine-type n1-standard-2 \
---scopes "https://www.googleapis.com/auth/source.read_write,cloud-platform" \
---cluster-version 1.12
-```
-Once that operation completes download the credentials for your cluster using the [gcloud CLI](https://cloud.google.com/sdk/):
-```shell
-$ gcloud container clusters get-credentials jenkins-cd
-Fetching cluster endpoint and auth data.
-kubeconfig entry generated for jenkins-cd.
-```
+1. Provision the cluster with `gcloud`:

-Confirm that the cluster is running and `kubectl` is working by listing pods:
+   Use Google Kubernetes Engine (GKE) to create and manage your Kubernetes
+   cluster, named `jenkins-cd`. Use the _service account_ created earlier.

-```shell
-$ kubectl get pods
-No resources found.
-```
-You should see `No resources found.`.
+   ```shell
+   gcloud container clusters create jenkins-cd \
+   --num-nodes 2 \
+   --machine-type n1-standard-2 \
+   --cluster-version 1.13 \
+   --service-account "jenkins-sa@$GOOGLE_CLOUD_PROJECT.iam.gserviceaccount.com"
+   ```
+
+   Output (do not copy):
+
+   ```output
+   NAME        LOCATION    MASTER_VERSION  MASTER_IP     MACHINE_TYPE   NODE_VERSION   NUM_NODES  STATUS
+   jenkins-cd  us-east1-d  1.13.10-gke.7   35.229.29.69  n1-standard-2  1.13.10-gke.7  2          RUNNING
+   ```
+
+1. Once that operation completes, retrieve the credentials for your cluster.
+
+   ```shell
+   gcloud container clusters get-credentials jenkins-cd
+   ```
+
+   Output (do not copy):
+
+   ```output
+   Fetching cluster endpoint and auth data.
+   kubeconfig entry generated for jenkins-cd.
+   ```
+
+1. Confirm that the cluster is running and `kubectl` is working by listing pods:
+
+   ```shell
+   kubectl get pods
+   ```
+
+   Output (do not copy):
+
+   ```output
+   No resources found.
+   ```
+
+   > You would see an error if the cluster was not created, or you did not
+   > have permissions.
+
+1. Add yourself as a cluster administrator in the cluster's RBAC so that you can
+   give Jenkins permissions in the cluster:
+
+   ```shell
+   kubectl create clusterrolebinding cluster-admin-binding --clusterrole=cluster-admin --user=$(gcloud config get-value account)
+   ```
+
+   Output (do not copy):
+
+   ```output
+   Your active configuration is: [cloudshell-...]
+   clusterrolebinding.rbac.authorization.k8s.io/cluster-admin-binding created
+   ```

## Install Helm

-In this lab, you will use Helm to install Jenkins from the Charts repository. Helm is a package manager that makes it easy to configure and deploy Kubernetes applications. Once you have Jenkins installed, you'll be able to set up your CI/CD pipleline.
+In this lab, you will use Helm to install Jenkins with a stable _chart_. Helm
+is a package manager that makes it easy to configure and deploy Kubernetes
+applications. Once you have Jenkins installed, you'll be able to set up your
+CI/CD pipeline.

1. Download and install the helm binary

    ```shell
-   wget https://storage.googleapis.com/kubernetes-helm/helm-v2.14.1-linux-amd64.tar.gz
+   wget https://storage.googleapis.com/kubernetes-helm/helm-v2.14.3-linux-amd64.tar.gz
    ```

1. Unzip the file to your local system:

    ```shell
-   tar zxfv helm-v2.14.1-linux-amd64.tar.gz
+   tar zxfv helm-v2.14.3-linux-amd64.tar.gz
    cp linux-amd64/helm .
    ```

-1. Add yourself as a cluster administrator in the cluster's RBAC so that you can give Jenkins permissions in the cluster:
-
-   ```shell
-   kubectl create clusterrolebinding cluster-admin-binding --clusterrole=cluster-admin --user=$(gcloud config get-value account)
-   ```
-
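+1. Optionally, confirm the client binary runs. This is a quick sanity check;
+   the `--client` flag is a Helm v2 option that prints only the local client
+   version, without contacting Tiller (which is not installed yet):
+
+   ```shell
+   # Only the client version is printed; the server side comes next.
+   ./helm version --client
+   ```
+
1. 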
Grant Tiller, the server side of Helm, the cluster-admin role in your cluster: ```shell @@ -104,98 +253,182 @@ In this lab, you will use Helm to install Jenkins from the Charts repository. He kubectl create clusterrolebinding tiller-admin-binding --clusterrole=cluster-admin --serviceaccount=kube-system:tiller ``` -1. Initialize Helm. This ensures that the server side of Helm (Tiller) is properly installed in your cluster. + Output (do not copy): + + ```output + serviceaccount/tiller created + clusterrolebinding.rbac.authorization.k8s.io/tiller-admin-binding created + ``` + +1. Initialize Helm. This ensures that the server side of Helm (Tiller) is + properly installed in your cluster. ```shell ./helm init --service-account=tiller - ./helm update ``` -1. Ensure Helm is properly installed by running the following command. You should see versions appear for both the server and the client of ```v2.14.1```: + Output (do not copy): + + ```output + ... + Tiller (the Helm server-side component) has been installed into your Kubernetes Cluster. + ... + ``` + +1. Update your local repo with the latest charts. + + ```shell + ./helm repo update + ``` + + Output (do not copy): + + ```output + Hang tight while we grab the latest from your chart repositories... + ...Skip local chart repository + ...Successfully got an update from the "stable" chart repository + Update Complete. + ``` + +1. Ensure Helm is properly installed by running the following command. You + should see versions `v2.14.3` appear for both the server and the client: ```shell ./helm version - Client: &version.Version{SemVer:"v2.14.1", GitCommit:"5270352a09c7e8b6e8c9593002a73535276507c0", GitTreeState:"clean"} - Server: &version.Version{SemVer:"v2.14.1", GitCommit:"5270352a09c7e8b6e8c9593002a73535276507c0", GitTreeState:"clean"} ``` + Output (do not copy): + + ```output + Client: &version.Version{SemVer:"v2.14.3", GitCommit:"0e7f3b6637f7af8fcfddb3d2941fcc7cbebb0085", GitTreeState:"clean"} + Server: &version.Version{SemVer:"v2.14.3", GitCommit:"0e7f3b6637f7af8fcfddb3d2941fcc7cbebb0085", GitTreeState:"clean"} + ``` + + > If you don't see the Server version immediately, wait a few seconds and + > try again. + ## Configure and Install Jenkins -You will use a custom [values file](https://github.com/helm/charts/blob/master/stable/jenkins/values.yaml) to add the GCP specific plugin necessary to use service account credentials to reach your Cloud Source Repository. + +You will use a custom [values file](https://github.com/helm/charts/blob/master/stable/jenkins/values.yaml) +to add the GCP specific plugin necessary to use service account credentials to reach your Cloud Source Repository. 1. Use the Helm CLI to deploy the chart with your configuration set. ```shell - ./helm install -n cd stable/jenkins -f jenkins/values.yaml --version 1.2.2 --wait + ./helm install -n cd stable/jenkins -f jenkins/values.yaml --version 1.7.3 --wait ``` -1. Once that command completes ensure the Jenkins pod goes to the `Running` state and the container is in the `READY` state: + Output (do not copy): + + ```output + ... + For more information on running Jenkins on Kubernetes, visit: + https://cloud.google.com/solutions/jenkins-on-container-engine + ``` + +1. The Jenkins pod __STATUS__ should change to `Running` when it's ready: ```shell - $ kubectl get pods + kubectl get pods + ``` + + Output (do not copy): + + ```output NAME READY STATUS RESTARTS AGE cd-jenkins-7c786475dd-vbhg4 1/1 Running 0 1m ``` - -1. 
Configure the Jenkins service account to be able to deploy to the cluster.

    ```shell
-   $ kubectl create clusterrolebinding jenkins-deploy --clusterrole=cluster-admin --serviceaccount=default:cd-jenkins
+   kubectl create clusterrolebinding jenkins-deploy --clusterrole=cluster-admin --serviceaccount=default:cd-jenkins
+   ```
+
+   Output (do not copy):
+
+   ```output
    clusterrolebinding.rbac.authorization.k8s.io/jenkins-deploy created
    ```

-1. Run the following command to setup port forwarding to the Jenkins UI from the Cloud Shell
+1. Set up port forwarding to the Jenkins UI, from Cloud Shell:

    ```shell
-   export POD_NAME=$(kubectl get pods -l "app.kubernetes.io/component=jenkins-master" -o jsonpath="{.items[0].metadata.name}")
-   kubectl port-forward $POD_NAME 8080:8080 >> /dev/null &
+   export JENKINS_POD_NAME=$(kubectl get pods -l "app.kubernetes.io/component=jenkins-master" -o jsonpath="{.items[0].metadata.name}")
+   kubectl port-forward $JENKINS_POD_NAME 8080:8080 >> /dev/null &
    ```

1. Now, check that the Jenkins Service was created properly:

    ```shell
-   $ kubectl get svc
+   kubectl get svc
+   ```
+
+   Output (do not copy):
+
+   ```output
    NAME               CLUSTER-IP     EXTERNAL-IP   PORT(S)     AGE
    cd-jenkins         10.35.249.67   <none>        8080/TCP    3h
    cd-jenkins-agent   10.35.248.1    <none>        50000/TCP   3h
    kubernetes         10.35.240.1    <none>        443/TCP     9h
    ```

-We are using the [Kubernetes Plugin](https://wiki.jenkins-ci.org/display/JENKINS/Kubernetes+Plugin) so that our builder nodes will be automatically launched as necessary when the Jenkins master requests them.
-Upon completion of their work they will automatically be turned down and their resources added back to the clusters resource pool.
+   This Jenkins configuration is using the [Kubernetes Plugin](https://wiki.jenkins-ci.org/display/JENKINS/Kubernetes+Plugin),
+   so that builder nodes will be automatically launched as necessary when the
+   Jenkins master requests them. Upon completion of the work, the builder nodes
+   will be automatically turned down, and their resources added back to the
+   cluster's resource pool.

-Notice that this service exposes ports `8080` and `50000` for any pods that match the `selector`. This will expose the Jenkins web UI and builder/agent registration ports within the Kubernetes cluster.
-Additionally the `jenkins-ui` services is exposed using a ClusterIP so that it is not accessible from outside the cluster.
+   Notice that this service exposes ports `8080` and `50000` for any pods that
+   match the `selector`. This will expose the Jenkins web UI and builder/agent
+   registration ports within the Kubernetes cluster. Additionally, the `jenkins-ui`
+   service is exposed using a ClusterIP so that it is not accessible from outside
+   the cluster.

## Connect to Jenkins

-1. The Jenkins chart will automatically create an admin password for you. To retrieve it, run:
+1. The Jenkins chart will automatically create an admin password for you. To
+   retrieve it, run:

    ```shell
    printf $(kubectl get secret cd-jenkins -o jsonpath="{.data.jenkins-admin-password}" | base64 --decode);echo
    ```

-2. To get to the Jenkins user interface, click on the Web Preview button![](../docs/img/web-preview.png) in cloud shell, then click “Preview on port 8080”:
+2. To get to the Jenkins user interface, click on the Web Preview
+   button ![](docs/img/web-preview.png) in Cloud Shell, then click
+   **Preview on port 8080**:

    ![](docs/img/preview-8080.png)

-You should now be able to log in with username `admin` and your auto generated password. 
+You should now be able to log in with username `admin` and your auto generated +password. ![](docs/img/jenkins-login.png) ### Your progress, and what's next -You've got a Kubernetes cluster managed by Google Container Engine. You've deployed: + +You've got a Kubernetes cluster managed by GKE. You've deployed: * a Jenkins Deployment * a (non-public) service that exposes Jenkins to its agent containers -You have the tools to build a continuous deployment pipeline. Now you need a sample app to deploy continuously. +You have the tools to build a continuous deployment pipeline. Now you need a +sample app to deploy continuously. ## The sample app -You'll use a very simple sample application - `gceme` - as the basis for your CD pipeline. `gceme` is written in Go and is located in the `sample-app` directory in this repo. When you run the `gceme` binary on a GCE instance, it displays the instance's metadata in a pretty card: + +You'll use a very simple sample application - `gceme` - as the basis for your CD +pipeline. `gceme` is written in Go and is located in the `sample-app` directory +in this repo. When you run the `gceme` binary on a GCE instance, it displays the +instance's metadata in a pretty card: ![](docs/img/info_card.png) -The binary supports two modes of operation, designed to mimic a microservice. In backend mode, `gceme` will listen on a port (8080 by default) and return GCE instance metadata as JSON, with content-type=application/json. In frontend mode, `gceme` will query a backend `gceme` service and render that JSON in the UI you saw above. It looks roughly like this: +The binary supports two modes of operation, designed to mimic a microservice. In +backend mode, `gceme` will listen on a port (8080 by default) and return GCE +instance metadata as JSON, with content-type=application/json. In frontend mode, +`gceme` will query a backend `gceme` service and render that JSON in the UI you +saw above. It looks roughly like this: ``` ----------- ------------ ~~~~~~~~~~~~ ----------- @@ -213,160 +446,306 @@ The binary supports two modes of operation, designed to mimic a microservice. In ``` Both the frontend and backend modes of the application support two additional URLs: -1. `/version` prints the version of the binary (declared as a const in `main.go`) -1. `/healthz` reports the health of the application. In frontend mode, health will be OK if the backend is reachable. +1. `/version` prints the version of the binary (declared as a const in + `main.go`) +1. `/healthz` reports the health of the application. In frontend mode, health + will be OK if the backend is reachable. ### Deploy the sample app to Kubernetes -In this section you will deploy the `gceme` frontend and backend to Kubernetes using Kubernetes manifest files (included in this repo) that describe the environment that the `gceme` binary/Docker image will be deployed to. They use a default `gceme` Docker image that you will be updating with your own in a later section. -You'll have two primary environments - [canary](http://martinfowler.com/bliki/CanaryRelease.html) and production - and use Kubernetes to manage them. +In this section you will deploy the `gceme` frontend and backend to Kubernetes +using Kubernetes manifest files (included in this repo) that describe the +environment that the `gceme` binary/Docker image will be deployed to. They use a +default `gceme` Docker image that you will be updating with your own in a later +section. -> **Note**: The manifest files for this section of the tutorial are in `sample-app/k8s`. 
You are encouraged to open and read each one before creating it per the instructions. +You'll have two primary environments - +[canary](http://martinfowler.com/bliki/CanaryRelease.html) and production - and +use Kubernetes to manage them. -1. First change directories to the sample-app: +> **Note**: The manifest files for this section of the tutorial are in +> `sample-app/k8s`. You are encouraged to open and read each one before creating +> it per the instructions. - ```shell - $ cd sample-app - ``` +1. First change directories to the sample-app, back in __Cloud Shell__: + + ```shell + cd sample-app + ``` 1. Create the namespace for production: - ```shell - $ kubectl create ns production - ``` + ```shell + kubectl create ns production + ``` -1. Create the canary and production Deployments and Services: + Output (do not copy): + + ```output + namespace/production created + ``` + +1. Create the production Deployments for frontend and backend: ```shell - $ kubectl --namespace=production apply -f k8s/production - $ kubectl --namespace=production apply -f k8s/canary - $ kubectl --namespace=production apply -f k8s/services + kubectl --namespace=production apply -f k8s/production + ``` + + Output (do not copy): + + ```output + deployment.extensions/gceme-backend-production created + deployment.extensions/gceme-frontend-production created ``` -1. Scale the production service: +1. Create the canary Deployments for frontend and backend: ```shell - $ kubectl --namespace=production scale deployment gceme-frontend-production --replicas=4 + kubectl --namespace=production apply -f k8s/canary ``` -1. Retrieve the External IP for the production services: **This field may take a few minutes to appear as the load balancer is being provisioned**: + Output (do not copy): - ```shell - $ kubectl --namespace=production get service gceme-frontend - NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE - gceme-frontend LoadBalancer 10.35.254.91 35.196.48.78 80:31088/TCP 1m - ``` + ```output + deployment.extensions/gceme-backend-canary created + deployment.extensions/gceme-frontend-canary created + ``` -1. Confirm that both services are working by opening the frontend external IP in your browser +1. Create the Services for frontend and backend: + + ```shell + kubectl --namespace=production apply -f k8s/services + ``` -1. Open a new Google Cloud Shell terminal by clicking the `+` button to the right of the current terminal's tab, and poll the production endpoint's `/version` URL. Leave this running in the second terminal so you can easily observe rolling updates in the next section: + Output (do not copy): + + ```output + service/gceme-backend created + service/gceme-frontend created + ``` + +1. Scale the production, frontend service: + + ```shell + kubectl --namespace=production scale deployment gceme-frontend-production --replicas=4 + ``` + + Output (do not copy): + + ```output + deployment.extensions/gceme-frontend-production scaled + ``` + +1. Retrieve the External IP for the production services: + + **This field may take a few minutes to appear as the load balancer is being + provisioned** + + ```shell + kubectl --namespace=production get service gceme-frontend + ``` + + Output (do not copy): + + ```output + NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE + gceme-frontend LoadBalancer 10.35.254.91 35.196.48.78 80:31088/TCP 1m + ``` + +1. Confirm that both services are working by opening the frontend `EXTERNAL-IP` + in your browser + + ![](docs/img/blue_gceme.png) + +1. Poll the production endpoint's `/version` URL. 
+ + Open a new **Cloud Shell** terminal by clicking the `+` button to the right + of the current terminal's tab. ```shell - $ export FRONTEND_SERVICE_IP=$(kubectl get -o jsonpath="{.status.loadBalancer.ingress[0].ip}" --namespace=production services gceme-frontend) - $ while true; do curl http://$FRONTEND_SERVICE_IP/version; sleep 1; done + export FRONTEND_SERVICE_IP=$(kubectl get -o jsonpath="{.status.loadBalancer.ingress[0].ip}" --namespace=production services gceme-frontend) + while true; do curl http://$FRONTEND_SERVICE_IP/version; sleep 3; done ``` -1. Return to the first terminal + Output (do not copy): + + ```output + 1.0.0 + 1.0.0 + 1.0.0 + ``` + + You should see that all requests are serviced by v1.0.0 of the application. + + Leave this running in the second terminal so you can easily observe rolling + updates in the next section. + +1. Return to the first terminal/tab in Cloud Shell. ### Create a repository for the sample app source -Here you'll create your own copy of the `gceme` sample app in [Cloud Source Repository](https://cloud.google.com/source-repositories/docs/). -1. Change directories to `sample-app` of the repo you cloned previously, then initialize the git repository. +Here you'll create your own copy of the `gceme` sample app in +[Cloud Source Repository](https://cloud.google.com/source-repositories/docs/). + +1. Initialize the git repository. - **Be sure to replace _REPLACE_WITH_YOUR_PROJECT_ID_ with the name of your Google Cloud Platform project** + Make sure to work from the `sample-app` directory of the repo you cloned previously. ```shell - $ cd sample-app - $ git init - $ git config credential.helper gcloud.sh - $ gcloud source repos create gceme - $ git remote add origin https://source.developers.google.com/p/REPLACE_WITH_YOUR_PROJECT_ID/r/gceme + git init + git config credential.helper gcloud.sh + gcloud source repos create gceme ``` -1. Ensure git is able to identify you: +1. Add a _git remote_ for the new repo in Cloud Source Repositories. ```shell - $ git config --global user.email "YOUR-EMAIL-ADDRESS" - $ git config --global user.name "YOUR-NAME" + git remote add origin https://source.developers.google.com/p/$GOOGLE_CLOUD_PROJECT/r/gceme ``` -1. Add, commit, and push all the files: +1. Ensure git is able to identify you: ```shell - $ git add . - $ git commit -m "Initial commit" - $ git push origin master + git config --global user.email "YOUR-EMAIL-ADDRESS" + git config --global user.name "YOUR-NAME" ``` +1. Add, commit, and push all the files: + + ```shell + git add . + git commit -m "Initial commit" + git push origin master + ``` + + Output (do not copy): + + ```output + To https://source.developers.google.com/p/myproject/r/gceme + * [new branch] master -> master + ``` + ## Create a pipeline -You'll now use Jenkins to define and run a pipeline that will test, build, and deploy your copy of `gceme` to your Kubernetes cluster. You'll approach this in phases. Let's get started with the first. + +You'll now use __Jenkins__ to define and run a pipeline that will test, build, +and deploy your copy of `gceme` to your Kubernetes cluster. You'll approach this +in phases. Let's get started with the first. ### Phase 1: Add your service account credentials -First we will need to configure our GCP credentials in order for Jenkins to be able to access our code repository -1. In the Jenkins UI, Click “Credentials” on the left -1. Click either of the “(global)” links (they both route to the same URL) -1. Click “Add Credentials” on the left -1. 
From the “Kind” dropdown, select “Google Service Account from metadata”
-1. Click “OK”
+First, you will need to configure GCP credentials in order for Jenkins to be
+able to access the code repository:

-You should now see 2 Global Credentials. Make a note of the name of second credentials as you will reference this in Phase 2:
+1. In the **Jenkins UI**, click **Credentials** on the left
+1. Click the **(global)** link
+1. Click **Add Credentials** on the left
+1. From the **Kind** dropdown, select `Google Service Account from private key`
+1. Enter the **Project Name** of your project
+1. Leave **JSON key** selected, and click **Choose File**.
+1. Select the `jenkins-sa-key.json` file downloaded earlier, then click
+   **Open**.

-![](docs/img/jenkins-credentials.png)
+   ![](docs/img/jenkins_creds_safromkey.png)

+1. Click **OK**
+
+You should now see 1 global credential. Make a note of the name of the
+credential, as you will reference this in Phase 2.
+
+![](docs/img/jenkins-credentials.png)

### Phase 2: Create a job

-This lab uses [Jenkins Pipeline](https://jenkins.io/solutions/pipeline/) to define builds as groovy scripts.
-Navigate to your Jenkins UI and follow these steps to configure a Pipeline job (hot tip: you can find the IP address of your Jenkins install with `kubectl get ingress --namespace jenkins`):
+This lab uses [Jenkins Pipeline](https://jenkins.io/solutions/pipeline/) to
+define builds as _Groovy_ scripts.
+
+Navigate to your Jenkins UI and follow these steps to configure a Pipeline job:

-1. Click the “Jenkins” link in the top left of the interface
+1. Click the **Jenkins** link in the top-left toolbar of the UI

1. Click the **New Item** link in the left nav

-1. Name the project **sample-app**, choose the **Multibranch Pipeline** option, then click `OK`
+1. For **item name** use `sample-app`, choose the **Multibranch Pipeline**
+   option, then click **OK**
+
+   ![](docs/img/sample-app.png)

-1. Click `Add Source` and choose `git`
+1. Click **Add source** and choose **git**

-1. Paste the **HTTPS clone URL** of your `sample-app` repo on Cloud Source Repositories into the **Project Repository** field.
-   It will look like: https://source.developers.google.com/p/REPLACE_WITH_YOUR_PROJECT_ID/r/gceme
+1. Paste the **HTTPS clone URL** of your `gceme` repo on Cloud Source
+   Repositories into the **Project Repository** field.
+   It will look like:
+   https://source.developers.google.com/p/[REPLACE_WITH_YOUR_PROJECT_ID]/r/gceme

-1. From the Credentials dropdown select the name of new created credentials from the Phase 1. It should have the format `PROJECT_ID service account`.
+1. From the **Credentials** dropdown, select the name of the credential from
+   Phase 1. It should have the format `PROJECT_ID service account`.

-1. Under 'Scan Multibranch Pipeline Triggers' section, check the 'Periodically if not otherwise run' box and se the 'Interval' value to 1 minute.
+1. Under the **Scan Multibranch Pipeline Triggers** section, check the
+   **Periodically if not otherwise run** box, then set the **Interval** value to
+   `1 minute`.
+
+   ![](docs/img/git-credentials.png)

-1. Click `Save`, leaving all other options with their defaults
-
-   ![](docs/img/clone_url.png)
+1. Click **Save**, leaving all other options with default values.
+
+   A _Branch indexing_ job was kicked off to identify any branches in your
+   repository.
+
+1. Click **Jenkins** > **sample-app** in the top menu. 
+
-A job entitled "Branch indexing" was kicked off to see identify the branches in your repository. If you refresh Jenkins you should see the `master` branch now has a job created for it.
+   You should see the `master` branch now has a job created for it.
+
-The first run of the job will fail until the project name is set properly in the next step.
+   The first run of the job will fail, until the _project name_ is set properly
+   in the `Jenkinsfile` in the next step.
+
+   ![](docs/img/first-build.png)

### Phase 3: Modify Jenkinsfile, then build and test the app

-Create a branch for the canary environment called `canary`
-
+1. Create a branch for the canary environment called `canary`
+
    ```shell
-   $ git checkout -b canary
+   git checkout -b canary
    ```

+   Output (do not copy):
+
-The [`Jenkinsfile`](https://jenkins.io/doc/book/pipeline/jenkinsfile/) is written using the Jenkins Workflow DSL (Groovy-based). It allows an entire build pipeline to be expressed in a single script that lives alongside your source code and supports powerful features like parallelization, stages, and user input.
+   ```output
+   Switched to a new branch 'canary'
+   ```
+
-Modify your `Jenkinsfile` script so it contains the correct project name on line 2.
+   The [`Jenkinsfile`](https://jenkins.io/doc/book/pipeline/jenkinsfile/) is
+   written using the Jenkins Workflow DSL, which is Groovy-based. It allows an
+   entire build pipeline to be expressed in a single script that lives alongside
+   your source code and supports powerful features like parallelization, stages,
+   and user input.
+
-**Be sure to replace _REPLACE_WITH_YOUR_PROJECT_ID_ on line 2 with your project name:**
+1. Update your `Jenkinsfile` script with the correct **PROJECT** environment value.
+
+   **Be sure to replace `REPLACE_WITH_YOUR_PROJECT_ID` with your project name.**
+
-Don't commit the new `Jenkinsfile` just yet. You'll make one more change in the next section, then commit and push them together.
+   Save your changes, but don't commit the new `Jenkinsfile` change just yet.
+   You'll make one more change in the next section, then commit and push them
+   together.

### Phase 4: Deploy a [canary release](http://martinfowler.com/bliki/CanaryRelease.html) to canary
-Now that your pipeline is working, it's time to make a change to the `gceme` app and let your pipeline test, package, and deploy it.
-The canary environment is rolled out as a percentage of the pods behind the production load balancer.
-In this case we have 1 out of 5 of our frontends running the canary code and the other 4 running the production code. This allows you to ensure that the canary code is not negatively affecting users before rolling out to your full fleet.
+
+Now that your pipeline is working, it's time to make a change to the `gceme` app
+and let your pipeline test, package, and deploy it.
+
+The canary environment is rolled out as a percentage of the pods behind the
+production load balancer. In this case we have 1 out of 5 of our frontends
+running the canary code and the other 4 running the production code. This allows
+you to ensure that the canary code is not negatively affecting users before
+rolling out to your full fleet. 
You can use the +[labels](http://kubernetes.io/docs/user-guide/labels/) `env: production` and +`env: canary` in Google Cloud Monitoring in order to monitor the performance of +each version individually. -1. In the `sample-app` repository on your workstation open `html.go` and replace the word `blue` with `orange` (there should be exactly two occurrences): +1. In the `sample-app` repository on your workstation open `html.go` and replace + the word `blue` with `orange` (there should be exactly two occurrences): ```html //snip @@ -376,7 +755,8 @@ You can use the [labels](http://kubernetes.io/docs/user-guide/labels/) `env: pro //snip ``` -1. In the same repository, open `main.go` and change the version number from `1.0.0` to `2.0.0`: +1. In the same repository, open `main.go` and change the version number from + `1.0.0` to `2.0.0`: ```go //snip @@ -384,17 +764,39 @@ You can use the [labels](http://kubernetes.io/docs/user-guide/labels/) `env: pro //snip ``` -1. `git add Jenkinsfile html.go main.go`, then `git commit -m "Version 2"`, and finally `git push origin canary` your change. +1. Push the _version 2_ changes to the repo: -1. When your change has been pushed to the Git repository, navigate to your Jenkins job. Click the "Scan Multibranch Pipeline Now" button. + ```shell + git add Jenkinsfile html.go main.go + ``` - ![](docs/img/first-build.png) + ```shell + git commit -m "Version 2" + ``` + + ```shell + git push origin canary + ``` + +1. Revisit your sample-app in the Jenkins UI. + + Navigate back to your Jenkins `sample-app` job. Notice a canary pipeline + job has been created. + + ![](docs/img/sample_app_master_canary.png) -1. Once the build is running, click the down arrow next to the build in the left column and choose **Console Output**: +1. Follow the canary build output. - ![](docs/img/console.png) + * Click the **Canary** link. + * Click the **#1** link the **Build History** box, on the lower left. + * Click **Console Output** from the left-side menu. + * Scroll down to follow. -1. Track the output for a few minutes and watch for the `kubectl --namespace=production apply...` to begin. When it starts, open the terminal that's polling canary's `/version` URL and observe it start to change in some of the requests: +1. Track the output for a few minutes. + + When you see `Finished: SUCCESS`, open the Cloud Shell terminal that you + left polling `/version` of _canary_. Observe that some requests are now + handled by the _canary_ `2.0.0` version. ``` 1.0.0 @@ -409,24 +811,37 @@ You can use the [labels](http://kubernetes.io/docs/user-guide/labels/) `env: pro 1.0.0 ``` - You have now rolled out that change to a subset of users. + You have now rolled out that change, version 2.0.0, to a **subset** of users. + +1. Continue the rollout, to the rest of your users. -1. Once the change is deployed to canary, you can continue to roll it out to the rest of your users by creating a branch called `production` and pushing it to the Git server: + Back in the other Cloud Shell terminal, create a branch called + `production`, then push it to the Git server. ```shell - $ git checkout master - $ git merge canary - $ git push origin master + git checkout master + git merge canary + git push origin master ``` -1. In a minute or so you should see that the master job in the sample-app folder has been kicked off: - ![](docs/img/production.png) +1. Watch the pipelines in the Jenkins UI handle the change. + + Within a minute or so, you should see a new job in the **Build Queue** and **Build Executor**. -1. 
Clicking on the `master` link will show you the stages of your pipeline as well as pass/fail and timing characteristics. + ![](docs/img/master_build_executor.png) - ![](docs/img/production_pipeline.png) +1. Clicking on the `master` link will show you the stages of your pipeline as + well as pass/fail and timing characteristics. -1. Open the terminal that's polling canary's `/version` URL and observe that the new version (2.0.0) has been rolled out and is serving all requests. + You can see the failed master job #1, and the successful master job #2. + + ![](docs/img/master_two_pipeline.png) + +1. Check the Cloud Shell terminal responses again. + + In Cloud Shell, open the terminal polling canary's `/version` URL and observe + that the new version, `2.0.0`, has been rolled out and is serving all + requests. ``` 2.0.0 @@ -441,82 +856,123 @@ You can use the [labels](http://kubernetes.io/docs/user-guide/labels/) `env: pro 2.0.0 ``` -1. Look at the `Jenkinsfile` in the project to see how the workflow is written. +If you want to understand the pipeline stages in greater detail, you can +look through the `Jenkinsfile` in the `sample-app` project directory. ### Phase 5: Deploy a development branch -Often times changes will not be so trivial that they can be pushed directly to the canary environment. In order to create a development environment from a long lived feature branch -all you need to do is push it up to the Git server and let Jenkins deploy your environment. In this case you will not use a loadbalancer so you'll have to access your application using `kubectl proxy`, -which authenticates itself with the Kubernetes API and proxies requests from your local machine to the service in the cluster without exposing your service to the internet. + +Oftentimes changes will not be so trivial that they can be pushed directly to +the **canary** environment. In order to create a **development** environment, +from a long lived feature branch, all you need to do is push it up to the Git +server. Jenkins will automatically deploy your **development** environment. + +In this case you will not use a loadbalancer, so you'll have to access your +application using `kubectl proxy`. This proxy authenticates itself with the +Kubernetes API and proxies requests from your local machine to the service in +the cluster without exposing your service to the internet. #### Deploy the development branch 1. Create another branch and push it up to the Git server ```shell - $ git checkout -b new-feature - $ git push origin new-feature + git checkout -b new-feature + git push origin new-feature ``` -1. Open Jenkins in your web browser and navigate to the sample-app job. You should see that a new job called "new-feature" has been created and your environment is being created. +1. Open Jenkins in your web browser and navigate back to sample-app. + + You should see that a new job called `new-feature` has been created, + and this job is creating your new environment. + + ![](docs/img/new_feature_job.png) 1. Navigate to the console output of the first build of this new job by: - * Click the `new-feature` link in the job list. - * Click the `#1` link in the Build History list on the left of the page. - * Finally click the `Console Output` link in the left navigation. + * Click the **new-feature** link in the job list. + * Click the **#1** link in the Build History list on the left of the page. + * Finally click the **Console Output** link in the left menu. -1. 
Scroll to the bottom of the console output of the job, and you will see instructions for accessing your environment:
+1. Scroll to the bottom of the console output of the job to see
+   instructions for accessing your environment:

    ```
-   deployment "gceme-frontend-dev" created
+   Successfully verified extensions/v1beta1/Deployment: gceme-frontend-dev
+   AvailableReplicas = 1, MinimumReplicas = 1
+
    [Pipeline] echo
    To access your environment run `kubectl proxy`
    [Pipeline] echo
-   Then access your service via http://localhost:8001/api/v1/proxy/namespaces/new-feature/services/gceme-frontend:80/
+   Then access your service via
+   http://localhost:8001/api/v1/proxy/namespaces/new-feature/services/gceme-frontend:80/
    [Pipeline] }
    ```

#### Access the development branch

-1. Open a new Google Cloud Shell terminal by clicking the `+` button to the right of the current terminal's tab, and start the proxy:
+1. Set up port forwarding to the dev frontend, from Cloud Shell:

    ```shell
-   $ kubectl proxy
+   export DEV_POD_NAME=$(kubectl get pods -n new-feature -l "app=gceme,env=dev,role=frontend" -o jsonpath="{.items[0].metadata.name}")
+   kubectl port-forward -n new-feature $DEV_POD_NAME 8001:80 >> /dev/null &
    ```

-1. Return to the original shell, and access your application via localhost:
+1. Access your application via localhost. The port-forward maps local port
+   `8001` directly to the frontend pod, so use the plain URL:

    ```shell
-   $ curl http://localhost:8001/api/v1/proxy/namespaces/new-feature/services/gceme-frontend:80/
+   curl http://localhost:8001
+   ```
+
+   Output (do not copy):
+
+   ```output
+   <!doctype html>
+   <html>
+   ...
+   <div class="card orange">
+   ...
+   </html>
+ + + ``` -1. You can now push code to the `new-feature` branch in order to update your development environment. + Look through the response output for `"card orange"` that was changed earlier. + +1. You can now push code changes to the `new-feature` branch in order to update + your development environment. -1. Once you are done, merge your `new-feature ` branch back into the `canary` branch to deploy that code to the canary environment: +1. Once you are done, merge your `new-feature ` branch back into the `canary` + branch to deploy that code to the canary environment: ```shell - $ git checkout canary - $ git merge new-feature - $ git push origin canary + git checkout canary + git merge new-feature + git push origin canary ``` -1. When you are confident that your code won't wreak havoc in production, merge from the `canary` branch to the `master` branch. Your code will be automatically rolled out in the production environment: +1. When you are confident that your code won't wreak havoc in production, merge + from the `canary` branch to the `master` branch. Your code will be + automatically rolled out in the production environment: ```shell - $ git checkout master - $ git merge canary - $ git push origin master + git checkout master + git merge canary + git push origin master ``` -1. When you are done with your development branch, delete it from the server and delete the environment in Kubernetes: +1. When you are done with your development branch, delete it from Cloud + Source Repositories, then delete the environment in Kubernetes: ```shell - $ git push origin :new-feature - $ kubectl delete ns new-feature + git push origin :new-feature + kubectl delete ns new-feature ``` ## Extra credit: deploy a breaking change, then roll back -Make a breaking change to the `gceme` source, push it, and deploy it through the pipeline to production. Then pretend latency spiked after the deployment and you want to roll back. Do it! Faster! + +Make a breaking change to the `gceme` source, push it, and deploy it through the +pipeline to production. Then pretend latency spiked after the deployment and you +want to roll back. Do it! Faster! Things to consider: @@ -525,6 +981,11 @@ Things to consider: * Is SRE really what you want to do with your life? ## Clean up -Clean up is really easy, but also super important: if you don't follow these instructions, you will continue to be billed for the Google Container Engine cluster you created. -To clean up, navigate to the [Google Developers Console Project List](https://console.developers.google.com/project), choose the project you created for this lab, and delete it. That's it. +Clean up is really easy, but also super important: if you don't follow these +instructions, you will continue to be billed for the GKE cluster you created. + +To clean up, navigate to the +[Google Developers Console Project List](https://console.developers.google.com/project), +choose the project you created for this lab, and delete it. That's it. 
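+
+If you prefer the command line, an equivalent cleanup from Cloud Shell is to
+delete the project directly (a sketch that assumes you created a dedicated
+project for this lab, since it removes everything in the project):
+
+```shell
+# Schedules the whole lab project, and every resource in it, for deletion.
+gcloud projects delete $GOOGLE_CLOUD_PROJECT
+```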
+ diff --git a/docs/img/blue_gceme.png b/docs/img/blue_gceme.png new file mode 100644 index 0000000..10b5c73 Binary files /dev/null and b/docs/img/blue_gceme.png differ diff --git a/docs/img/console.png b/docs/img/console.png deleted file mode 100644 index d41529f..0000000 Binary files a/docs/img/console.png and /dev/null differ diff --git a/docs/img/download_file.png b/docs/img/download_file.png new file mode 100644 index 0000000..a283089 Binary files /dev/null and b/docs/img/download_file.png differ diff --git a/docs/img/git-credentials.png b/docs/img/git-credentials.png new file mode 100644 index 0000000..7e7f457 Binary files /dev/null and b/docs/img/git-credentials.png differ diff --git a/docs/img/jenkins-credentials.png b/docs/img/jenkins-credentials.png index f8817c8..0eea62b 100644 Binary files a/docs/img/jenkins-credentials.png and b/docs/img/jenkins-credentials.png differ diff --git a/docs/img/jenkins_creds_safromkey.png b/docs/img/jenkins_creds_safromkey.png new file mode 100644 index 0000000..3e86c43 Binary files /dev/null and b/docs/img/jenkins_creds_safromkey.png differ diff --git a/docs/img/jenkins_sa_iam.png b/docs/img/jenkins_sa_iam.png new file mode 100644 index 0000000..2fc4db6 Binary files /dev/null and b/docs/img/jenkins_sa_iam.png differ diff --git a/docs/img/master_build_executor.png b/docs/img/master_build_executor.png new file mode 100644 index 0000000..7f241fd Binary files /dev/null and b/docs/img/master_build_executor.png differ diff --git a/docs/img/master_two_pipeline.png b/docs/img/master_two_pipeline.png new file mode 100644 index 0000000..5024695 Binary files /dev/null and b/docs/img/master_two_pipeline.png differ diff --git a/docs/img/new_feature_job.png b/docs/img/new_feature_job.png new file mode 100644 index 0000000..d135fbc Binary files /dev/null and b/docs/img/new_feature_job.png differ diff --git a/docs/img/production.png b/docs/img/production.png deleted file mode 100644 index 91ad102..0000000 Binary files a/docs/img/production.png and /dev/null differ diff --git a/docs/img/production_pipeline.png b/docs/img/production_pipeline.png deleted file mode 100644 index aace792..0000000 Binary files a/docs/img/production_pipeline.png and /dev/null differ diff --git a/docs/img/sample-app.png b/docs/img/sample-app.png new file mode 100644 index 0000000..ae3ab3a Binary files /dev/null and b/docs/img/sample-app.png differ diff --git a/docs/img/sample_app_master_canary.png b/docs/img/sample_app_master_canary.png new file mode 100644 index 0000000..8348dfe Binary files /dev/null and b/docs/img/sample_app_master_canary.png differ diff --git a/sample-app/Jenkinsfile b/sample-app/Jenkinsfile index 01d1ba5..b2bd8b6 100644 --- a/sample-app/Jenkinsfile +++ b/sample-app/Jenkinsfile @@ -68,8 +68,8 @@ spec: container('kubectl') { // Change deployed image in canary to the one we just built sh("sed -i.bak 's#gcr.io/cloud-solutions-images/gceme:1.0.0#${IMAGE_TAG}#' ./k8s/canary/*.yaml") - step([$class: 'KubernetesEngineBuilder',namespace:'production', projectId: env.PROJECT, clusterName: env.CLUSTER, zone: env.CLUSTER_ZONE, manifestPattern: 'k8s/services', credentialsId: env.JENKINS_CRED, verifyDeployments: false]) - step([$class: 'KubernetesEngineBuilder',namespace:'production', projectId: env.PROJECT, clusterName: env.CLUSTER, zone: env.CLUSTER_ZONE, manifestPattern: 'k8s/canary', credentialsId: env.JENKINS_CRED, verifyDeployments: true]) + step([$class: 'KubernetesEngineBuilder', namespace:'production', projectId: env.PROJECT, clusterName: env.CLUSTER, zone: 
env.CLUSTER_ZONE, manifestPattern: 'k8s/services', credentialsId: env.JENKINS_CRED, verifyDeployments: false]) + step([$class: 'KubernetesEngineBuilder', namespace:'production', projectId: env.PROJECT, clusterName: env.CLUSTER, zone: env.CLUSTER_ZONE, manifestPattern: 'k8s/canary', credentialsId: env.JENKINS_CRED, verifyDeployments: true]) sh("echo http://`kubectl --namespace=production get service/${FE_SVC_NAME} -o jsonpath='{.status.loadBalancer.ingress[0].ip}'` > ${FE_SVC_NAME}") } } @@ -81,8 +81,8 @@ spec: container('kubectl') { // Change deployed image in canary to the one we just built sh("sed -i.bak 's#gcr.io/cloud-solutions-images/gceme:1.0.0#${IMAGE_TAG}#' ./k8s/production/*.yaml") - step([$class: 'KubernetesEngineBuilder',namespace:'production', projectId: env.PROJECT, clusterName: env.CLUSTER, zone: env.CLUSTER_ZONE, manifestPattern: 'k8s/services', credentialsId: env.JENKINS_CRED, verifyDeployments: false]) - step([$class: 'KubernetesEngineBuilder',namespace:'production', projectId: env.PROJECT, clusterName: env.CLUSTER, zone: env.CLUSTER_ZONE, manifestPattern: 'k8s/production', credentialsId: env.JENKINS_CRED, verifyDeployments: true]) + step([$class: 'KubernetesEngineBuilder', namespace:'production', projectId: env.PROJECT, clusterName: env.CLUSTER, zone: env.CLUSTER_ZONE, manifestPattern: 'k8s/services', credentialsId: env.JENKINS_CRED, verifyDeployments: false]) + step([$class: 'KubernetesEngineBuilder', namespace:'production', projectId: env.PROJECT, clusterName: env.CLUSTER, zone: env.CLUSTER_ZONE, manifestPattern: 'k8s/production', credentialsId: env.JENKINS_CRED, verifyDeployments: true]) sh("echo http://`kubectl --namespace=production get service/${FE_SVC_NAME} -o jsonpath='{.status.loadBalancer.ingress[0].ip}'` > ${FE_SVC_NAME}") } } @@ -100,8 +100,8 @@ spec: // Don't use public load balancing for development branches sh("sed -i.bak 's#LoadBalancer#ClusterIP#' ./k8s/services/frontend.yaml") sh("sed -i.bak 's#gcr.io/cloud-solutions-images/gceme:1.0.0#${IMAGE_TAG}#' ./k8s/dev/*.yaml") - step([$class: 'KubernetesEngineBuilder',namespace: "${env.BRANCH_NAME}", projectId: env.PROJECT, clusterName: env.CLUSTER, zone: env.CLUSTER_ZONE, manifestPattern: 'k8s/services', credentialsId: env.JENKINS_CRED, verifyDeployments: false]) - step([$class: 'KubernetesEngineBuilder',namespace: "${env.BRANCH_NAME}", projectId: env.PROJECT, clusterName: env.CLUSTER, zone: env.CLUSTER_ZONE, manifestPattern: 'k8s/dev', credentialsId: env.JENKINS_CRED, verifyDeployments: true]) + step([$class: 'KubernetesEngineBuilder', namespace: "${env.BRANCH_NAME}", projectId: env.PROJECT, clusterName: env.CLUSTER, zone: env.CLUSTER_ZONE, manifestPattern: 'k8s/services', credentialsId: env.JENKINS_CRED, verifyDeployments: false]) + step([$class: 'KubernetesEngineBuilder', namespace: "${env.BRANCH_NAME}", projectId: env.PROJECT, clusterName: env.CLUSTER, zone: env.CLUSTER_ZONE, manifestPattern: 'k8s/dev', credentialsId: env.JENKINS_CRED, verifyDeployments: true]) echo 'To access your environment run `kubectl proxy`' echo "Then access your service via http://localhost:8001/api/v1/proxy/namespaces/${env.BRANCH_NAME}/services/${FE_SVC_NAME}:80/" } diff --git a/tests/scripts/tutorial_setup.sh b/tests/scripts/tutorial_setup.sh new file mode 100755 index 0000000..d73e1fc --- /dev/null +++ b/tests/scripts/tutorial_setup.sh @@ -0,0 +1,128 @@ +#!/bin/bash -xe + +# This script automates the setup and execution of a tutorial to show +# Jenkins, running on a GKE cluster. 
It uses Helm to install Jenkins +# on the GKE cluster. It stops right when the first jobs are running +# in case there are problems with plugins running jobs. + +set -o vi +export EDITOR=vim + +GKE_ZONE=us-east1-d + +# Easy access to pinned versions. +# GKE_VERSION=1.12 +GKE_VERSION=1.13 +# HELM_VERSION=2.14.1 +HELM_VERSION=2.14.3 +# JENKINS_CHART_VERSION=1.2.2 +JENKINS_CHART_VERSION=1.7.3 + +# Get the tutorial code. +git clone https://github.com/GoogleCloudPlatform/continuous-deployment-on-kubernetes.git +cd continuous-deployment-on-kubernetes + +# Create a service account with proper roles. This is more secure and +# preferred over passing scopes to cluster-create, or using the +# compute engine default service account. +gcloud iam service-accounts create jenkins-sa \ + --display-name "jenkins-sa" + +gcloud projects add-iam-policy-binding $GOOGLE_CLOUD_PROJECT \ + --member "serviceAccount:jenkins-sa@$GOOGLE_CLOUD_PROJECT.iam.gserviceaccount.com" \ + --role "roles/viewer" + +gcloud projects add-iam-policy-binding $GOOGLE_CLOUD_PROJECT \ + --member "serviceAccount:jenkins-sa@$GOOGLE_CLOUD_PROJECT.iam.gserviceaccount.com" \ + --role "roles/source.reader" + +gcloud projects add-iam-policy-binding $GOOGLE_CLOUD_PROJECT \ + --member "serviceAccount:jenkins-sa@$GOOGLE_CLOUD_PROJECT.iam.gserviceaccount.com" \ + --role "roles/storage.admin" + +gcloud projects add-iam-policy-binding $GOOGLE_CLOUD_PROJECT \ + --member "serviceAccount:jenkins-sa@$GOOGLE_CLOUD_PROJECT.iam.gserviceaccount.com" \ + --role "roles/storage.objectAdmin" + +gcloud projects add-iam-policy-binding $GOOGLE_CLOUD_PROJECT \ + --member "serviceAccount:jenkins-sa@$GOOGLE_CLOUD_PROJECT.iam.gserviceaccount.com" \ + --role "roles/cloudbuild.builds.editor" + +gcloud projects add-iam-policy-binding $GOOGLE_CLOUD_PROJECT \ + --member "serviceAccount:jenkins-sa@$GOOGLE_CLOUD_PROJECT.iam.gserviceaccount.com" \ + --role "roles/container.developer" + +gcloud iam service-accounts keys create ~/jenkins-sa-key.json \ + --iam-account "jenkins-sa@$GOOGLE_CLOUD_PROJECT.iam.gserviceaccount.com" + +# Set up the GKE cluster. +gcloud config set compute/zone $GKE_ZONE +gcloud container clusters create jenkins-cd \ + --num-nodes 2 \ + --machine-type n1-standard-2 \ + --cluster-version $GKE_VERSION \ + --service-account "jenkins-sa@$GOOGLE_CLOUD_PROJECT.iam.gserviceaccount.com" +gcloud container clusters get-credentials jenkins-cd +kubectl get pods +kubectl create clusterrolebinding cluster-admin-binding --clusterrole=cluster-admin --user=$(gcloud config get-value account) + +# Get and set up for Helm to install Jenkins on the cluster. +wget https://storage.googleapis.com/kubernetes-helm/helm-v$HELM_VERSION-linux-amd64.tar.gz +tar zxfv helm-v$HELM_VERSION-linux-amd64.tar.gz +cp linux-amd64/helm . + +kubectl create serviceaccount tiller --namespace kube-system +kubectl create clusterrolebinding tiller-admin-binding --clusterrole=cluster-admin --serviceaccount=kube-system:tiller +./helm init --service-account=tiller +./helm repo update +sleep 30 +./helm version +./helm install -n cd stable/jenkins -f jenkins/values.yaml --version $JENKINS_CHART_VERSION --wait +kubectl get pods +kubectl create clusterrolebinding jenkins-deploy --clusterrole=cluster-admin --serviceaccount=default:cd-jenkins +kubectl get svc + +# Set up the sample application running on GKE. 
+cd sample-app/ +kubectl create ns production +kubectl --namespace=production apply -f k8s/production +kubectl --namespace=production apply -f k8s/canary +kubectl --namespace=production apply -f k8s/services +kubectl --namespace=production scale deployment gceme-frontend-production --replicas=4 + +# Set up the application repo, which the Jenkins pipeline will watch. +git init +git config credential.helper gcloud.sh +gcloud source repos create gceme +git remote add origin https://source.developers.google.com/p/$GOOGLE_CLOUD_PROJECT/r/gceme +git remote -v +git config --global user.email "$USER@qwiklabs.net" +git config --global user.name "$USER" +git config --global -l +git add . +git commit -m "Initial commit" +git push origin master + +# Waiting for the application external IP to be visible. +kubectl --namespace=production get service gceme-frontend +sleep 50 +kubectl --namespace=production get service gceme-frontend + +# Set up to access the Jenkins UI. +export POD_NAME=$(kubectl get pods -l "app.kubernetes.io/component=jenkins-master" -o jsonpath="{.items[0].metadata.name}") +kubectl port-forward $POD_NAME 8080:8080 >> /dev/null & +printf $(kubectl get secret cd-jenkins -o jsonpath="{.data.jenkins-admin-password}" | base64 --decode);echo + +# Print the path to the repo, to be used by the Jenkins pipeline. +echo "https://source.developers.google.com/p/$GOOGLE_CLOUD_PROJECT/r/gceme" + +# Go ahead and make the first change, to the canary branch. +git checkout -b canary +sed -i -e "s/REPLACE_WITH_YOUR_PROJECT_ID/$GOOGLE_CLOUD_PROJECT/g" ./Jenkinsfile +sed -i -e "s/card blue/card orange/g" ./html.go +sed -i -e "s/1\.0\.0/2\.0\.0/g" ./main.go +git add Jenkinsfile html.go main.go +git commit -m "Version 2" +git push origin canary + +echo Done.
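+
+# Note: this script assumes GOOGLE_CLOUD_PROJECT is set in the environment,
+# as it is by default in Cloud Shell. If you run it elsewhere, set it first:
+#
+#   export GOOGLE_CLOUD_PROJECT=$(gcloud config get-value project)
+#   bash tests/scripts/tutorial_setup.sh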