adding project-15
rumeysakdogan committed Dec 11, 2022
1 parent 15a40e1 commit ebb3369
# Project-15: Deploying MS Architecture with Kubernetes and Terraform

![](images/Project-15.png)

## Pre-requisites

* AWS Account
* Terraform installed locally (you can also create an EC2 instance and install Terraform on it) or use the CFN template
* DockerHub account

### Step-1: Create K8s cluster with Terraform or CloudFormation

#### With Terraform

We will create a one-master, one-worker K8s cluster with the Terraform files under the `terraform-files-to-create-K8s-cluster` directory. From inside this directory, run the commands below:
```sh
terraform init
terraform validate
terraform plan
terraform apply
```

#### With Cloudformation

If we don't want to install Terraform, we can create the K8s cluster using the CloudFormation template under `cfn-template-to-create-K8s-cluster`. Go to the AWS Console and upload this file to create the stack. It can also be done with an AWS CLI command.
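A minimal sketch of the CLI route, assuming the template is saved locally as `cfn-template.yaml` and the stack name is your choice; `--capabilities CAPABILITY_IAM` is required because the template creates IAM resources:
```bash
aws cloudformation create-stack \
  --stack-name k8s-cluster \
  --template-body file://cfn-template.yaml \
  --parameters ParameterKey=KeyPairName,ParameterValue=<your-key-pair> \
  --capabilities CAPABILITY_IAM
```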

### Step-2: Write Dockerfile for WebServer

Go to the `image_for_web_server` directory and create a Dockerfile with the content below.
```dockerfile
FROM python:alpine
COPY . /app
WORKDIR /app
RUN pip install -r requirements.txt
EXPOSE 80
CMD python app.py
```
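Optionally, the image can be smoke-tested locally before it is pushed; a sketch, assuming port 80 is free on your machine and using the same image tag as Step-4 (`phonebook-test` is just a throwaway container name):
```bash
docker build -t <your_dockerhub_account_name>/phonebook-webserver .
docker run --rm -d -p 80:80 --name phonebook-test <your_dockerhub_account_name>/phonebook-webserver
curl -s http://localhost:80
docker stop phonebook-test
```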

### Step-3: Write Dockerfile for ResultServer

Go to the `image_for_result_server` directory and create a Dockerfile with the content below.
```dockerfile
FROM python:alpine
COPY . /app
WORKDIR /app
RUN pip install -r requirements.txt
EXPOSE 80
CMD python app.py
```

### Step-4: Create Docker images and push to DockerHub

#### Create Image
SSH into your K8s master node and move the `image_for_result_server` and `image_for_web_server` folders onto it. (You can use the VSCode Remote SSH extension and drag and drop the files, or create a repo with those files and clone it.)

Go to the `image_for_web_server` directory and run the command below:
```bash
docker build -t <your_dockerhub_account_name>/phonebook-webserver .
```

Then go to the `image_for_result_server` directory and build the resultserver image with the command below:
```bash
docker build -t <your_dockerhub_account_name>/phonebook-resultserver .
```

![](images/images-created.png)

#### Push images

First, log in to your DockerHub account:
```bash
docker login
Username:
Password:
```

Then push your images to DockerHub
```bash
docker push <your_dockerhub_account_name>/phonebook-webserver
docker push <your_dockerhub_account_name>/phonebook-resultserver
```

![](images/images-pushed.png)

### Step-5: Change image names

Go to `resultserver_deployment.yml` and change the image name to the one you pushed to DockerHub:
```yaml
spec:
containers:
- name: result-app
image: rumeysakdogan/phonebook-resultserver
```

Go to `webserver_deployment.yml` and change the image name to the one you pushed to DockerHub:
```yaml
spec:
containers:
- name: result-app
image: rumeysakdogan/phonebook-webserver
```
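Instead of editing the files by hand, the image fields can be rewritten with `sed`. A minimal sketch; the account name and the `/tmp` demo file are placeholders for illustration — in the repo you would run the same `sed` against `resultserver_deployment.yml` and `webserver_deployment.yml`:

```shell
# Assumption for illustration: your DockerHub account name.
DOCKERHUB_USER=rumeysakdogan
# Stand-in for the real manifest; the image line still points at someone else's account.
printf '        image: someuser/phonebook-resultserver\n' > /tmp/resultserver_deployment.yml
# Rewrite the image field in place (GNU sed; on macOS use `sed -i ''`).
sed -i "s|image: .*/phonebook-resultserver|image: ${DOCKERHUB_USER}/phonebook-resultserver|" /tmp/resultserver_deployment.yml
cat /tmp/resultserver_deployment.yml   # now reads: image: rumeysakdogan/phonebook-resultserver
```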

### Step-6: Create secret/configMap

We will create a Secret manifest to store the DB passwords. Before creating the Secret, we need to encode the passwords with base64.

I will use `Clarusway_1` as my `mysql-admin-password` and `R1234r` as my `mysql-root-password`, so I encode them with the commands below and use the encoded values in the Secret file:
```sh
echo -n 'Clarusway_1' | base64
> Q2xhcnVzd2F5XzE=
echo -n 'R1234r' | base64
> UjEyMzRy
```

We can also decode the secrets with the commands below:
```sh
echo -n 'UjEyMzRy' | base64 --decode
echo -n 'Q2xhcnVzd2F5XzE=' | base64 --decode
```
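The encoded values then go into the Secret manifest. A sketch of what `mysql_secret.yaml` could look like — the Secret name and key names here are assumptions, the file in the repo is authoritative:
```yaml
apiVersion: v1
kind: Secret
metadata:
  name: mysql-secret
type: Opaque
data:
  mysql-admin-password: Q2xhcnVzd2F5XzE=
  mysql-root-password: UjEyMzRy
```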

Go to the `kubernetes-manifests/secrets_configMap` directory and create the Secret and ConfigMaps:
```sh
kubectl apply -f mysql_secret.yaml
kubectl apply -f database_configmap.yaml
kubectl apply -f servers_configmap.yaml
```

### Step-7: Create MySQL database

Go to the `kubernetes-manifests/mysql_deployment` directory and create the manifests with the command below:
```sh
kubectl apply -f .
```

### Step-8: Create webserver

Go to the directory under `kubernetes-manifests` that holds the webserver and resultserver manifests, and create them with the command below:
```sh
kubectl apply -f .
```
Now we can see our deployments, services, pods, and replicasets with the `kubectl get all` command:

![](images/kube-apply-complete.png)

### Step-9: Add NodePorts to security group of Worker Node

The resultserver runs on NodePort 30002 and the webserver on NodePort 30001.

We will add these ports to the security group of the Worker Node.
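For reference, a NodePort service pinning the webserver to 30001 would look roughly like this — the service name, selector label, and ports are assumptions based on the deployments above, not the repo's actual manifest:
```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-service
spec:
  type: NodePort
  selector:
    app: web-app
  ports:
    - port: 80
      targetPort: 80
      nodePort: 30001
```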

![](images/nodeports-added-to-sg.png)

### Step-10: Check your application is up and running

Check your application from the browser with the URLs below:

* Webserver: <worker_node_public_ip>:30001

![](images/phonebook-web-server.png)

* Resultserver: <worker_node_public_ip>:30002

![](images/phonebook-resullt-server.png)
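The same check can also be done from a terminal; a sketch using the placeholder IP from above:
```bash
curl -s http://<worker_node_public_ip>:30001
curl -s http://<worker_node_public_ip>:30002
```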
AWSTemplateFormatVersion: 2010-09-09

Description: |
This CloudFormation template creates a Kubernetes cluster on Ubuntu 20.04 EC2 instances.
The cluster comprises one master node and one worker node.
Once the master node is up and running, the worker node automatically joins the cluster.
The Managers security group allows all protocols on all ports from itself and from the Workers.
The Workers security group allows all protocols on all ports from itself and from the Managers.
Both security groups allow SSH (22) connections from anywhere.
The user needs to select an appropriate key name when launching the template.
This template is designed for the us-east-1 (N. Virginia) region. If you work in another region, do not forget to change the instances' ImageId.
Parameters:
KeyPairName:
Description: Enter the name of your Key Pair for SSH connections.
Type: AWS::EC2::KeyPair::KeyName
ConstraintDescription: Must be one of the existing EC2 KeyPair

Resources:
InstanceConnectPolicy:
Type: "AWS::IAM::ManagedPolicy"
Properties:
PolicyDocument: #required
Version: 2012-10-17
Statement:
- Effect: Allow
Action:
- ec2-instance-connect:SendSSHPublicKey
Resource:
- !Sub arn:aws:ec2:${AWS::Region}:${AWS::AccountId}:instance/*
Condition: {"StringEquals": {"ec2:osuser": "ubuntu"}}
- Effect: Allow
Action:
- ec2:DescribeInstances
Resource: "*"

EC2InstanceConnect:
Type: "AWS::IAM::Role"
Properties:
AssumeRolePolicyDocument:
Statement:
- Effect: Allow
Principal:
Service:
- ec2.amazonaws.com
Action:
- 'sts:AssumeRole'
ManagedPolicyArns:
- !Ref InstanceConnectPolicy
EC2ConnectProfile:
Type: "AWS::IAM::InstanceProfile"
Properties:
Roles: #required
- !Ref EC2InstanceConnect
ManagersSecurityGroup:
Type: AWS::EC2::SecurityGroup
Properties:
GroupDescription: Enable SSH for Kube Masters
SecurityGroupIngress:
- IpProtocol: tcp
FromPort: 22
ToPort: 22
CidrIp: 0.0.0.0/0
WorkersSecurityGroup:
Type: AWS::EC2::SecurityGroup
Properties:
GroupDescription: Enable SSH for Kube Workers
SecurityGroupIngress:
- IpProtocol: tcp
FromPort: 22
ToPort: 22
CidrIp: 0.0.0.0/0
ManagersSGIngress1:
Type: "AWS::EC2::SecurityGroupIngress"
Properties:
GroupId: !GetAtt ManagersSecurityGroup.GroupId
IpProtocol: -1 #required
SourceSecurityGroupId: !GetAtt ManagersSecurityGroup.GroupId
ManagersSGIngress2:
Type: "AWS::EC2::SecurityGroupIngress"
Properties:
GroupId: !GetAtt ManagersSecurityGroup.GroupId
IpProtocol: -1 #required
SourceSecurityGroupId: !GetAtt WorkersSecurityGroup.GroupId
WorkersSGIngress1:
Type: "AWS::EC2::SecurityGroupIngress"
Properties:
GroupId: !GetAtt WorkersSecurityGroup.GroupId
IpProtocol: -1 #required
SourceSecurityGroupId: !GetAtt WorkersSecurityGroup.GroupId
WorkersSGIngress2:
Type: "AWS::EC2::SecurityGroupIngress"
Properties:
GroupId: !GetAtt WorkersSecurityGroup.GroupId
IpProtocol: -1 #required
SourceSecurityGroupId: !GetAtt ManagersSecurityGroup.GroupId

KubeMaster1:
Type: AWS::EC2::Instance
Properties:
ImageId: ami-08d4ac5b634553e16
InstanceType: t3a.medium
KeyName: !Ref KeyPairName
IamInstanceProfile: !Ref EC2ConnectProfile
SecurityGroupIds:
- !GetAtt ManagersSecurityGroup.GroupId
Tags:
-
Key: Name
Value: !Sub Kube Master 1st on Ubuntu 20.04 of ${AWS::StackName}
UserData:
Fn::Base64:
!Sub |
#! /bin/bash
apt-get update -y
apt-get upgrade -y
hostnamectl set-hostname kube-master
chmod 777 /etc/sysctl.conf
echo "net.ipv4.ip_forward=1" >> /etc/sysctl.conf
sysctl -p
chmod 644 /etc/sysctl.conf
apt install -y docker.io
systemctl start docker
mkdir /etc/docker
cat <<EOF | tee /etc/docker/daemon.json
{
"exec-opts": ["native.cgroupdriver=systemd"],
"log-driver": "json-file",
"log-opts": {
"max-size": "100m"
},
"storage-driver": "overlay2"
}
EOF
systemctl enable docker
sudo systemctl daemon-reload
sudo systemctl restart docker
usermod -aG docker ubuntu
newgrp docker
apt install -y apt-transport-https
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
apt-add-repository "deb http://apt.kubernetes.io/ kubernetes-xenial main"
apt update
apt install -y kubelet=1.25.0-00 kubeadm=1.25.0-00 kubectl=1.25.0-00
systemctl start kubelet
systemctl enable kubelet
kubeadm init --pod-network-cidr=172.16.0.0/16 --ignore-preflight-errors=All
mkdir -p /home/ubuntu/.kube
cp -i /etc/kubernetes/admin.conf /home/ubuntu/.kube/config
chown ubuntu:ubuntu /home/ubuntu/.kube/config
su - ubuntu -c 'kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml'

KubeWorker1:
Type: AWS::EC2::Instance
DependsOn:
- KubeMaster1
Properties:
ImageId: ami-08d4ac5b634553e16
InstanceType: t3a.medium
KeyName: !Ref KeyPairName
IamInstanceProfile: !Ref EC2ConnectProfile
SecurityGroupIds:
- !GetAtt WorkersSecurityGroup.GroupId
Tags:
-
Key: Name
Value: !Sub Kube Worker 1st on Ubuntu 20.04 of ${AWS::StackName}
UserData:
Fn::Base64:
!Sub |
#! /bin/bash
apt-get update -y
apt-get upgrade -y
hostnamectl set-hostname kube-worker-1
chmod 777 /etc/sysctl.conf
echo "net.ipv4.ip_forward=1" >> /etc/sysctl.conf
sysctl -p
chmod 644 /etc/sysctl.conf
apt install -y docker.io
systemctl start docker
cat <<EOF | tee /etc/docker/daemon.json
{
"exec-opts": ["native.cgroupdriver=systemd"],
"log-driver": "json-file",
"log-opts": {
"max-size": "100m"
},
"storage-driver": "overlay2"
}
EOF
systemctl enable docker
sudo systemctl daemon-reload
sudo systemctl restart docker
usermod -aG docker ubuntu
newgrp docker
apt install -y apt-transport-https
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
apt-add-repository "deb http://apt.kubernetes.io/ kubernetes-xenial main"
apt update
apt install -y kubelet=1.25.0-00 kubeadm=1.25.0-00 kubectl=1.25.0-00
systemctl start kubelet
systemctl enable kubelet
apt install -y python3-pip
pip3 install ec2instanceconnectcli
apt install -y mssh
until [[ $(mssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -r ${AWS::Region} ubuntu@${KubeMaster1} kubectl get no | awk 'NR == 2 {print $2}') == Ready ]]; do echo "master node is not ready"; sleep 3; done;
kubeadm join ${KubeMaster1.PrivateIp}:6443 --token $(mssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -r ${AWS::Region} ubuntu@${KubeMaster1} kubeadm token list | awk 'NR == 2 {print $1}') --discovery-token-ca-cert-hash sha256:$(mssh -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no -r ${AWS::Region} ubuntu@${KubeMaster1} openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | openssl dgst -sha256 -hex | sed 's/^.* //') --ignore-preflight-errors=All

Outputs:
1stKubeMasterPublicDNSName:
Description: Kube Master 1st Public DNS Name
Value: !Sub
- ${PublicAddress}
- PublicAddress: !GetAtt KubeMaster1.PublicDnsName
1stKubeMasterPrivateDNSName:
Description: Kube Master 1st Private DNS Name
Value: !Sub
- ${PrivateAddress}
- PrivateAddress: !GetAtt KubeMaster1.PrivateDnsName
1stKubeWorkerPublicDNSName:
Description: Kube Worker 1st Public DNS Name
Value: !Sub
- ${PublicAddress}
- PublicAddress: !GetAtt KubeWorker1.PublicDnsName
1stKubeWorkerPrivateDNSName:
Description: Kube Worker 1st Private DNS Name
Value: !Sub
- ${PrivateAddress}
- PrivateAddress: !GetAtt KubeWorker1.PrivateDnsName
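# Once the stack is up, the outputs above can be read from the AWS CLI, e.g.
# (stack name is a placeholder):
#   aws cloudformation describe-stacks --stack-name <stack-name> \
#     --query 'Stacks[0].Outputs' --output table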