This repository provides the Terraform configuration and the necessary Docker setup to deploy the MonoMap API on a cloud-based virtual machine. The setup includes an automated SSL certificate via Let's Encrypt, a reverse proxy using Nginx, the MonoMap API, and MongoDB as the database.
- Features
- Technologies Used
- Infrastructure Overview
- Prerequisites
- Setup and Deployment
- Docker Compose Services
- CI/CD Pipeline
- Terraform State Management
- Using NO-IP as a DNS Provider
- API Endpoints
- Appendix
## Features

- Cloud-Based Deployment: Provision a cloud-based virtual machine with Docker pre-installed.
- Automated Docker Setup: Run multiple containers including a reverse proxy, SSL certificate management, MonoMap API, and MongoDB.
- SSL Certification: Automatically secure the API using Let's Encrypt for SSL certificates.
- GitHub Actions Integration: CI/CD pipeline automates infrastructure deployment and destruction.
- Remote State Management: Terraform state is securely stored in an Azure Storage container.
## Technologies Used

- Terraform: Infrastructure as Code (IaC) for managing cloud resources.
- Docker: Containerization to run the MonoMap API, Nginx, Let's Encrypt, and MongoDB.
- Nginx: Reverse proxy for handling traffic and SSL termination.
- Let's Encrypt: Automatic SSL certificate generation and renewal.
- MongoDB: Database for storing API data.
- GitHub Actions: Automate deployments and destructions via CI/CD workflows.
- Azure Storage: Remote storage for Terraform state.
- NO-IP: Free dynamic DNS provider to map the virtual machine’s IP to a domain.
## Infrastructure Overview

This infrastructure deploys a virtual machine in the cloud with Docker installed. Upon creation, it automatically runs a `docker-compose.yml` file, which contains the following services:
- Nginx: A reverse proxy that routes traffic to the API.
- Let's Encrypt: Automatically manages SSL certificates.
- MonoMap API: The API for managing Monkeypox cases.
- MongoDB: A database for storing case data.
The deployment is automated using GitHub Actions, and the state of Terraform is stored remotely in Azure Storage.
### Project Structure

Below is an explanation of the file structure within this project, detailing the purpose of each folder and file.
```
.github/
└── workflows/
    ├── deploy-dev.yml
    └── destroy-dev.yml
env/
└── dev/
    ├── containers/
    │   └── docker-compose.yml
    ├── scripts/
    │   └── docker-install.tpl
    ├── main.tf
    ├── providers.tf
    └── variables.tf
modules/
└── vm/
    ├── scripts/
    │   └── docker-install.tpl
    ├── main.tf
    ├── outputs.tf
    ├── providers.tf
    └── variables.tf
LICENSE
README.md
.gitignore
```
#### `.github/workflows/`

- `deploy-dev.yml`: Contains the GitHub Actions workflow for deploying the development environment. It automates provisioning the infrastructure, configuring the environment, and applying the Terraform scripts.
- `destroy-dev.yml`: Tears down the development infrastructure when needed. It is triggered manually and helps ensure clean resource management.
#### `env/dev/`

- `containers/`:
  - `docker-compose.yml`: Defines the services that run inside Docker containers, including the MonoMap API, MongoDB, Nginx, and Let's Encrypt for SSL certificates. This file is critical for orchestrating these containers.
- `scripts/`:
  - `docker-install.tpl`: A template containing the script that installs Docker on the virtual machine (VM) during deployment, ensuring Docker is available and ready to run containers once the VM is provisioned (a hypothetical sketch appears after this list).
- `main.tf`: The main Terraform configuration file for the development environment. It outlines the resources to be provisioned, such as virtual machines, networking, and storage.
- `providers.tf`: Specifies the cloud provider (Azure in this case) and the necessary configuration for interacting with the cloud provider's APIs.
- `variables.tf`: Contains variable definitions used in the Terraform files, allowing for reusable and customizable infrastructure configurations.
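The actual install script lives in the `docker-install.tpl` templates; as a rough, hypothetical sketch of what such a template typically contains (the package list and the `admin_username` variable are assumptions, not the repository's actual contents):

```bash
#!/bin/bash
# Hypothetical sketch of docker-install.tpl — the real template may differ.
# Installs Docker Engine and the Compose plugin on Ubuntu.
set -euo pipefail

apt-get update
apt-get install -y ca-certificates curl
install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc
echo "deb [signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" \
  > /etc/apt/sources.list.d/docker.list
apt-get update
apt-get install -y docker-ce docker-ce-cli containerd.io docker-compose-plugin

# ${admin_username} is a hypothetical variable interpolated by Terraform's templatefile().
usermod -aG docker "${admin_username}"
```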
#### `modules/vm/`

- `scripts/`:
  - `docker-install.tpl`: Similar to the `env/dev/scripts/` template, this is the Docker installation script executed on the virtual machine.
- `main.tf`: The Terraform file that defines the virtual machine (VM) itself. This includes the configuration for provisioning the VM, setting up network interfaces, and installing necessary packages (e.g., Docker).
- `outputs.tf`: Specifies the output values from the VM provisioning process, such as IP addresses or connection details, which can be referenced by other Terraform configurations or scripts (see the sketch after this list).
- `providers.tf`: Defines the cloud provider for the VM, typically mirroring the configuration found in the `env/dev` folder.
- `variables.tf`: Contains variables specific to the VM module, making it easier to reuse and customize the VM provisioning across different environments.
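As a rough illustration of what such outputs might look like (the resource and variable names here are hypothetical and must match the ones declared in the module's `main.tf` and `variables.tf`):

```hcl
# Hypothetical outputs.tf sketch — resource and variable names are assumptions.
output "public_ip" {
  description = "Public IP address of the provisioned VM"
  value       = azurerm_public_ip.vm_ip.ip_address
}

output "admin_username" {
  description = "Admin user for SSH access to the VM"
  value       = var.ADMIN_USERNAME
}
```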
- `LICENSE`: Contains the legal information about the software's usage rights and permissions.
- `README.md`: The documentation file explaining the project setup, usage, deployment, and other critical information for developers and users.
- `.gitignore`: Lists files and directories that Git should ignore during version control. This typically includes sensitive files (such as secrets) or files generated during the build or deployment process.
## Prerequisites

- Terraform: Install Terraform to manage infrastructure.
- Azure Subscription: Required for provisioning cloud resources.
- Docker: Install Docker if developing locally.
- GitHub Account: To manage the repository and GitHub Actions.
- Postman: For testing API endpoints.
- NO-IP Account: To manage DNS mapping between the VM's public IP and a domain name.
## Setup and Deployment

### 1. Clone the Repository

```bash
git clone https://github.com/CeciliaCode/infrastructure-monomap.git
cd infrastructure-monomap
```
### 2. Configure Variables and Secrets

The project uses Terraform variables and GitHub Secrets to securely manage the deployment configuration. These variables include secrets such as MongoDB credentials, email settings, and Azure credentials.

In the Terraform files (`terraform.tfvars`), you will find predefined variables that you need to configure. These variables are passed to Terraform either via the command line or through your GitHub Secrets.
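For local runs, a `terraform.tfvars` might look like the following (a hypothetical excerpt; the variable names must match those declared in `variables.tf`, and secret values should come from GitHub Secrets rather than being committed):

```hcl
# Hypothetical terraform.tfvars excerpt — names and values are examples only.
LOCATION       = "eastus2"
ENVIRONMENT    = "dev"
RESOURCE_GROUP = "monomap-dev-rg"
ADMIN_USERNAME = "azureuser"
DOMAIN         = "yourcustomdomain.ddns.net"
PORT           = 3000
```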
In GitHub Actions, you'll need to set up secrets for sensitive information such as API keys and passwords. To create GitHub Secrets:

1. Go to your GitHub repository.
2. Navigate to Settings > Secrets and variables > Actions.
3. Click New repository secret and add the following secrets with their respective values (keep the `TF_VAR_` prefix so Terraform can map them to its input variables):
| Secret Name | Description |
| --- | --- |
| `ARM_CLIENT_ID` | Azure Client ID used for authentication. |
| `ARM_CLIENT_SECRET` | Azure Client Secret for authentication. |
| `ARM_SUBSCRIPTION_ID` | Azure Subscription ID to deploy resources. |
| `ARM_TENANT_ID` | Azure Tenant ID for the subscription. |
| `SSH_PRIVATE_KEY` | Private SSH key to access the virtual machine. |
| `SSH_PUBLIC_KEY` | Public SSH key for VM configuration. |
| `TF_VAR_ADMIN_USERNAME` | The username for the VM's admin user. |
| `TF_VAR_DOMAIN` | The domain to be used with NO-IP for API access. |
| `TF_VAR_ENVIRONMENT` | The environment for deployment (e.g., dev, prod). |
| `TF_VAR_IP_NAME` | The name of the IP resource in Azure. |
| `TF_VAR_LOCATION` | The location for Azure resource deployment. |
| `TF_VAR_MAIL_SECRET_KEY` | Secret key for the email service (e.g., Gmail). |
| `TF_VAR_MAIL_SERVICE` | The email service provider (e.g., Gmail). |
| `TF_VAR_MAIL_USER` | The email address used for notifications. |
| `TF_VAR_MAPBOX_ACCESS_TOKEN` | Access token for the Mapbox API (for geolocation). |
| `TF_VAR_MONGO_DB` | The MongoDB database name. |
| `TF_VAR_MONGO_INITDB_ROOT_PASSWORD` | Root password for MongoDB initialization. |
| `TF_VAR_MONGO_INITDB_ROOT_USERNAME` | Root username for MongoDB initialization. |
| `TF_VAR_MONGO_URL` | MongoDB connection string for local development. |
| `TF_VAR_MONGO_URL_DOCKER` | MongoDB connection string when running in Docker. |
| `TF_VAR_NIC_NAME` | The name of the network interface for the VM. |
| `TF_VAR_PORT` | The port on which the MonoMap API will run. |
| `TF_VAR_RESOURCE_GROUP` | The name of the resource group in Azure. |
| `TF_VAR_SECURITY_GROUP_NAME` | The name of the security group in Azure. |
| `TF_VAR_SERVER_NAME` | The name of the virtual machine or server. |
| `TF_VAR_SSH_KEY_PATH` | The file path to the SSH key. |
| `TF_VAR_SUBNET_NAME` | The subnet name for networking. |
| `TF_VAR_VNET_NAME` | The virtual network name for the infrastructure. |
Once these secrets are set up, the GitHub Actions pipeline will automatically reference them during the deployment process.
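If you prefer the command line, the same secrets can also be added with the GitHub CLI (assuming `gh` is installed and authenticated against this repository):

```bash
# Example: set one of the secrets from the table above (the value is a placeholder).
gh secret set TF_VAR_DOMAIN --body "yourcustomdomain.ddns.net"
```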
### 3. Deploy the Infrastructure

To deploy the infrastructure, push changes to the `main` branch or run the GitHub Action manually. The GitHub Action will provision a virtual machine, configure Docker, and run the `docker-compose.yml` file.

- Push to `main`:

  ```bash
  git push origin main
  ```

- Alternatively, trigger the workflow manually from the GitHub Actions tab.
### 4. Monitor the Deployment

After the VM is deployed and running, you can connect to it via SSH to monitor containers and view logs.

- Connect to the VM via SSH:

  ```bash
  ssh -i path/to/private_key username@your_vm_ip
  ```

- Monitor Docker containers using `ctop`: once inside the VM, you can use `ctop` (an interactive container monitoring tool) to view logs and monitor the status of the running containers. Install and run it with:

  ```bash
  sudo apt-get install ctop
  sudo ctop
  ```

  This gives you a real-time view of all containers running on the VM, including resource usage and logs for each container.

- Access error logs: to view logs directly from a specific container, use:

  ```bash
  docker logs <container_name>
  ```
### 5. Destroy the Infrastructure

To destroy the infrastructure manually, trigger the `destroy-dev.yml` workflow via `workflow_dispatch` in the GitHub Actions tab. This terminates the cloud resources and cleans up the deployment.
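As a hypothetical skeleton (the actual workflow may differ), `destroy-dev.yml` typically mirrors the deploy workflow shown in the CI/CD Pipeline section, swapping the final step for a destroy:

```yaml
# Hypothetical destroy-dev.yml skeleton — the actual workflow may differ.
name: Destroy Dev Infrastructure
on:
  workflow_dispatch:

jobs:
  terraform-destroy:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v4
      - name: Setup Terraform
        uses: hashicorp/setup-terraform@v2
        with:
          terraform_version: 1.9.2
      - name: Terraform Init
        run: terraform -chdir=env/dev init
      - name: Terraform Destroy
        run: terraform -chdir=env/dev destroy --auto-approve
```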
## Docker Compose Services

The `docker-compose.yml` file defines the following services:

- Nginx: Routes traffic to the MonoMap API and handles SSL termination.
- Let's Encrypt: Manages SSL certificates for secure API communication.
- MonoMap API: The core API service for managing Monkeypox cases.
- MongoDB: Stores data related to the API.

A simplified `docker-compose.yml` illustrating these services:
```yaml
version: '3'
services:
  nginx:
    image: nginx
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf
    depends_on:
      - app
      - db
  app:
    image: yourdockerhubusername/monomap:latest
    environment:
      - MONGO_URL=${MONGO_URL_DOCKER}
    ports:
      - "3000:3000"
  db:
    image: mongo
    environment:
      MONGO_INITDB_ROOT_USERNAME: ${MONGO_INITDB_ROOT_USERNAME}
      MONGO_INITDB_ROOT_PASSWORD: ${MONGO_INITDB_ROOT_PASSWORD}
    ports:
      - "27017:27017"
```
## CI/CD Pipeline

The project uses GitHub Actions to automate infrastructure deployment and destruction. The pipeline is configured as follows:

- `deploy-dev.yml`: Automatically deploys the infrastructure when changes are pushed to the `main` branch.
- `destroy-dev.yml`: Manually triggered to destroy the deployed infrastructure.

A simplified `deploy-dev.yml`:
```yaml
name: Deploy Dev Infrastructure
on:
  push:
    branches:
      - main
jobs:
  terraform-plan-apply:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v4
      - name: Setup Terraform
        uses: hashicorp/setup-terraform@v2
        with:
          terraform_version: 1.9.2
      - name: Terraform Init
        run: terraform -chdir=env/dev init
      - name: Terraform Apply
        run: terraform -chdir=env/dev apply --auto-approve
```
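For the workflow to reference the secrets, they are exposed as environment variables: Terraform automatically reads any variable prefixed with `TF_VAR_`, and the `ARM_*` variables authenticate the AzureRM provider. A partial, hypothetical sketch of such a mapping at the job level (extend it with the remaining secrets from the table above):

```yaml
    # Hypothetical env block — not the repository's verbatim workflow.
    env:
      ARM_CLIENT_ID: ${{ secrets.ARM_CLIENT_ID }}
      ARM_CLIENT_SECRET: ${{ secrets.ARM_CLIENT_SECRET }}
      ARM_SUBSCRIPTION_ID: ${{ secrets.ARM_SUBSCRIPTION_ID }}
      ARM_TENANT_ID: ${{ secrets.ARM_TENANT_ID }}
      TF_VAR_ADMIN_USERNAME: ${{ secrets.TF_VAR_ADMIN_USERNAME }}
      TF_VAR_DOMAIN: ${{ secrets.TF_VAR_DOMAIN }}
```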
## Terraform State Management

To store the Terraform state remotely and securely, you'll need to set up an Azure Storage account and container.
Follow these steps to create the necessary storage account and container in Azure:

1. Create a resource group:

   ```bash
   az group create --name tfstateRG --location eastus2
   ```

2. Create a storage account (storage account names must be globally unique):

   ```bash
   az storage account create --resource-group tfstateRG --name tfstateaccount --sku Standard_LRS --location eastus2 --encryption-services blob
   ```

3. Retrieve the storage account key:

   ```bash
   az storage account keys list --resource-group tfstateRG --account-name tfstateaccount --query "[0].value" --output tsv
   ```

4. Create a storage container:

   ```bash
   az storage container create --name tfstatecontainer --account-name tfstateaccount --account-key <your-account-key>
   ```

   Replace `<your-account-key>` with the value obtained in step 3.
Once the storage account and container are created, configure the `backend` block in your `providers.tf` file under `env/dev` to use Azure Storage for state management:
```hcl
terraform {
  backend "azurerm" {
    resource_group_name  = "tfstateRG"
    storage_account_name = "tfstateaccount"
    container_name       = "tfstatecontainer"
    key                  = "terraform.tfstate"
  }
}
```
This configuration ensures that your Terraform state is stored remotely and securely in Azure.
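After adding the backend block, re-initialize the working directory so Terraform starts using the remote backend (append `-migrate-state` if you already have local state to move):

```bash
terraform -chdir=env/dev init
```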
## Using NO-IP as a DNS Provider

Since the VM will have a dynamic public IP address, it is recommended to use a DNS service like NO-IP to map the VM's IP to a domain name. This will allow you to access the API easily through a custom domain, even if the IP changes.
- Create an account on NO-IP.
- Create a free dynamic DNS hostname in NO-IP.
- Map the VM's public IP to the created hostname.
- Update the DNS records whenever the VM’s IP changes.
- Use the domain in both browser and Postman for API requests.
For example:

- Access the API through your browser: `https://yourcustomdomain.com/api/cases`
- Test the API in Postman with your domain: `https://yourcustomdomain.com/api/cases`
## API Endpoints

### Create a case

- Method: `POST`
- URL: `https://yourcustomdomain.com/api/cases`
- Body (raw JSON):

  ```json
  {
    "lat": 19.432608,
    "lng": -99.133209,
    "genre": "Male",
    "age": 25
  }
  ```

### Get all cases

- Method: `GET`
- URL: `https://yourcustomdomain.com/api/cases`

### Get a case by ID

- Method: `GET`
- URL: `https://yourcustomdomain.com/api/cases/:id`
- Replace `:id` with the actual case ID.

### Update a case

- Method: `PUT`
- URL: `https://yourcustomdomain.com/api/cases/:id`
- Body (raw JSON):

  ```json
  {
    "lat": 19.432608,
    "lng": -99.133209,
    "genre": "Female",
    "age": 30
  }
  ```

### Delete a case

- Method: `DELETE`
- URL: `https://yourcustomdomain.com/api/cases/:id`
- Replace `:id` with the actual case ID.

### Get the most recent cases

- Method: `GET`
- URL: `https://yourcustomdomain.com/api/cases/last`
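To exercise the endpoints quickly from a terminal rather than Postman, `curl` works as well (the domain below is a placeholder for your NO-IP hostname):

```bash
# Create a case.
curl -X POST https://yourcustomdomain.com/api/cases \
  -H "Content-Type: application/json" \
  -d '{"lat": 19.432608, "lng": -99.133209, "genre": "Male", "age": 25}'

# List all cases.
curl https://yourcustomdomain.com/api/cases
```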
## Appendix

### SSL Certificates with Let's Encrypt

Let's Encrypt is used to automatically generate SSL certificates for the MonoMap API. The certificates are renewed automatically and stored securely. The configuration is included in the `docker-compose.yml`.