This project provides a Terraform solution for setting up a Minikube cluster and deploying web applications on it. The solution has been tested with the Docker driver, which must be installed on your system. Additionally, Minikube must be installed according to the official Minikube Start Guide.
Please note that the configuration and deployment details may vary based on your specific requirements and system setup.
- Docker: The Docker driver is used for creating the Minikube cluster. Ensure Docker is installed and running on your system.
- Minikube: This project uses Minikube for creating a local Kubernetes cluster. Follow the official instructions to install Minikube.
- Clone this repository to your local system.
- Navigate to the project directory.
- Run `terraform init` to initialize your Terraform workspace.
- Run `terraform apply` to create your infrastructure.
For more detailed information about the modules, resources, inputs, and outputs of this Terraform project, please refer to the auto-generated documentation below.
- I chose Minikube and found a Terraform provider for it.
Deploy N basic web application servers, each serving a single static page and responding with the pod name/IP address.
- I created a module for deploying web applications. The module creates a deployment and a service for each web application. The deployment uses a simple Nginx image that serves a static page with the pod name.
- I used an init container to rewrite the index.html file to include the pod name during pod startup.
- I could have used a postStart lifecycle hook, but I wanted to try the init container approach, which also demonstrates the use of volumes.
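As a sketch, the init-container approach might look like the following Terraform fragment. Names such as `webapp`, the volume name, and the images are illustrative assumptions, not the project's actual identifiers:

```hcl
# Hypothetical excerpt from the webapp module: an init container writes the
# pod name into the page that the nginx container later serves, sharing it
# through an emptyDir volume.
resource "kubernetes_deployment" "webapp" {
  metadata {
    name = "webapp"
  }
  spec {
    replicas = 2
    selector {
      match_labels = { app = "webapp" }
    }
    template {
      metadata {
        labels = { app = "webapp" }
      }
      spec {
        # Shared volume lets the init container hand the page to nginx.
        volume {
          name = "html"
          empty_dir {}
        }
        init_container {
          name    = "render-page"
          image   = "busybox:1.36"
          command = ["sh", "-c", "echo \"Served by pod $POD_NAME\" > /work/index.html"]
          env {
            name = "POD_NAME"
            value_from {
              field_ref {
                field_path = "metadata.name"
              }
            }
          }
          volume_mount {
            name       = "html"
            mount_path = "/work"
          }
        }
        container {
          name  = "nginx"
          image = "nginx:stable"
          volume_mount {
            name       = "html"
            mount_path = "/usr/share/nginx/html"
          }
        }
      }
    }
  }
}
```

The downward API (`field_ref` on `metadata.name`) is what makes each pod render its own name without any external coordination.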
Establish an endpoint accessible locally that dynamically directs traffic to different web application servers upon access. The endpoint should route traffic exclusively to pods capable of responding.
- I used a service of type NodePort to expose the web applications in the cluster.
- I used the `minikube tunnel` command to expose the services on localhost; this requires Minikube to be installed on the local machine.
- I added a readiness probe to the web application pods to ensure that only the pods capable of responding receive traffic.
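With the Terraform Kubernetes provider, such a readiness probe might look roughly like this (the path, port, and timings are illustrative assumptions):

```hcl
# Hypothetical fragment placed inside the container block of the deployment:
# the Service only routes traffic to a pod once this probe succeeds.
readiness_probe {
  http_get {
    path = "/"
    port = 80
  }
  initial_delay_seconds = 2
  period_seconds        = 5
  failure_threshold     = 3
}
```

Because NodePort services route only to ready endpoints, a pod that fails the probe is automatically taken out of rotation until it recovers.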
- I created the stack resources as a reusable module, which can be used to deploy additional web applications with minimal additional lines of code.
- In my main configuration, I defined a default web application stack and allowed the user to define additional web application stacks in the input variables.
- I could have used a simple `count` to deploy multiple web applications, but I wanted to demonstrate the use of modules and input variables for reusability and flexibility.
- The current solution could be extended with Terraform workspaces to deploy multiple environments with different web applications, although the Kubernetes provider credentials would need to be managed accordingly if deploying to the same cluster.
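The module-plus-map pattern described above could be sketched like this. The variable shape, attribute names, and module path are assumptions for illustration, not the project's exact definitions:

```hcl
# Hypothetical: a map variable drives one webapp module instance per stack,
# so adding a stack is a one-entry change in the input variables.
variable "web_applications_stacks" {
  type = map(object({
    replicas = number
    image    = string
  }))
  default = {}
}

module "webapp" {
  source   = "./modules/webapp"
  for_each = var.web_applications_stacks

  name     = each.key
  replicas = each.value.replicas
  image    = each.value.image
}
```

Using `for_each` keyed on the map means each stack gets a stable resource address (`module.webapp["my-app"]`), so adding or removing one stack never forces recreation of the others.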
## Requirements

| Name | Version |
|------|---------|
| terraform | >= 1.5.0 |
| kubernetes | >= 2.30.0 |
| minikube | >= 0.3.10 |

## Providers

No providers.
## Modules

| Name | Source | Version |
|------|--------|---------|
| minikube_cluster | ./modules/minikube | n/a |
| webapp | ./modules/webapp | n/a |

## Resources

No resources.
## Inputs

| Name | Description | Type | Default | Required |
|------|-------------|------|---------|----------|
| create_localhost_service_endpoint | Whether to create a localhost endpoint | `bool` | n/a | yes |
| minikube_cluster_name | The name of the minikube cluster | `string` | n/a | yes |
| minikube_cluster_nodes | The number of nodes for the minikube cluster | `number` | n/a | yes |
| minikube_driver | The driver to use for the minikube cluster | `string` | n/a | yes |
| web_applications_stacks | The web applications to deploy | `map(object({…}))` | `{…}` | no |
## Outputs

| Name | Description |
|------|-------------|
| application_services_local_endpoints | The local endpoints for each application service |