diff --git a/Topics/Development_Process.md b/Topics/Development_Process.md
index f0efa913a..ec10af914 100644
--- a/Topics/Development_Process.md
+++ b/Topics/Development_Process.md
@@ -70,3 +70,6 @@ This is only a simplification of what "Clean Architecture" is; the topic is so v
- A very detailed explanation of Clean Architecture by Robert C. Martin (also known as Uncle Bob) and his book
- https://www.youtube.com/watch?v=2dKZ-dWaCiU
- https://github.com/ropalma/ICMC-USP/blob/master/Book%20-%20Clean%20Architecture%20-%20Robert%20Cecil%20Martin.pdf
+
+## Code Smells
+### [Code Smells](./Development_Process/Code_Smells.md)
diff --git a/Topics/Development_Process/Code_Smells.md b/Topics/Development_Process/Code_Smells.md
new file mode 100644
index 000000000..a4d032c96
--- /dev/null
+++ b/Topics/Development_Process/Code_Smells.md
@@ -0,0 +1,37 @@
+## Code Smells
+Code smells are patterns of code that, while functional, require increasing accommodation as more code comes to rely on the "smelly" code. While it is possible to ignore these patterns, the longer they remain, the harder it becomes to fix the issues they directly or indirectly cause. It is therefore in a developer's best interest to be familiar with a broad spectrum of code smells so that they can identify and eliminate them as soon as possible.
+
+
+## A Motivating Example
+Consider the following snippet of code:
+
+```python
+class Something:
+    temp_field: int | None
+
+    def do_something(self) -> int | None:
+        if self.temp_field is None:
+            return None
+        else:
+            return self.temp_field + 1
+```
+On its own, this code is functional, but it raises a number of questions. For example: when exactly is `temp_field` equal to `None`? When does it actually store relevant data? The answer is largely dependent on how the class `Something` is used, but the specifics may not be clear to someone reading this code.
+
+This code smell is known as a "Temporary Field", which is when classes are given attributes to be used in some of their methods, but in some cases the attribute stores null values. While adding null checks easily allows code using these attributes to function, it decreases code readability, and if the technique is abused it can easily lead to unnecessarily long code. To fix this, refactoring is required.
+
+## Refactoring
+Refactoring is a software development practice in which code is rewritten such that no new functionality is actually provided, but the code becomes cleaner and better accommodates future extensions of features. While it is generally recommended in software development that code should not be rewritten, but extended (see the [Open/Closed Principle of SOLID](../Development_Process.md#solid-principles)), refactoring typically prevents more significant amounts of code rewriting that may be required in the future.
+
+Many refactoring solutions to code smells are well established and should be drawn upon once the relevant smells are identified. One such solution for the previous example is known as "Introduce Null Object": attributes that may be null are instead defined over a new "Null" class, which provides default values where the attribute would previously have been null. This confines any null checks to the new class, allowing the removal of if-statements in other code that cause confusion or excessive code length. Furthermore, future code that deals with the previously temporary field no longer needs any null checks, as the new class handles them. Thus, refactoring improves both the readability and extensibility of the code.
+
+## Categories
+While there are many different types of code smells, all of them fall into one of five broader categories, which makes them easier to identify when writing code. The categories are as follows:
+- Bloaters
+- Object-Oriented Abusers
+- Change Preventers
+- Dispensables
+- Couplers
+
+## More Info
+For further insight into all the different types of code smells, including explanations, examples, and solutions, the following resource is highly recommended:
+https://refactoring.guru/refactoring/smells
diff --git a/Topics/Development_Process/Docker.md b/Topics/Development_Process/Docker.md
index 5eaa1487a..8ec4e26b8 100644
--- a/Topics/Development_Process/Docker.md
+++ b/Topics/Development_Process/Docker.md
@@ -3,7 +3,7 @@
## Table of Contents
### [Introduction](#introduction-1)
### [Installation](#installation-1)
-### [Getting Started](#getting-started-1)
+### [Creating Dockerfiles](#creating-dockerfiles-1)
### [Next Steps](#next-steps-1)
### [Docker Terminology](#docker-terminology-1)
@@ -11,47 +11,210 @@
## Introduction
-This article will help readers understand what Docker is, why it is used and provide resources on how to start using it. Docker is used by developers for many reasons, however, the most common reasons are for building, deploying and sharing an application quickly. Docker packages your application into something that's called a [container](#docker-terminology-1). This [container](#docker-terminology-1) is OS-agnostic meaning that developers on Mac, Windows and Linux can share their code without any worry of conflicts. Here's [Amazon's Intro to Docker](https://aws.amazon.com/docker/#:~:text=Docker%20is%20a%20software%20platform,tools%2C%20code%2C%20and%20runtime.) if you want to learn more.
+This article will help readers understand what Docker is, why it is used, and provide resources on how to start using it. Docker is used by developers for many reasons, most commonly for building, deploying, and sharing applications quickly. Docker packages your application into a [container](#docker-terminology-1), which is OS-agnostic, allowing developers on Mac, Windows, and Linux to share their code without conflicts. For more information, check out [Amazon's Intro to Docker](https://aws.amazon.com/docker/).
-----
+### Docker Terminology
-## Installation
+* **Container**: A package of code bundled by Docker that runs as a process isolated from the rest of your machine. The package of code can be pretty much anything: a single Python file, an API, a full-stack web application, etc. A container is also referred to as a containerized application.
+* **Image**: A template with a set of instructions for creating a container. Think of it as a blueprint from which multiple containers can be instantiated. Images are built from Dockerfiles and are essential for running your applications in Docker.
+* **Dockerfile**: A text document that contains all the commands a user could call on the command line to assemble an image. It's a recipe for creating Docker images.
-To start using Docker you will have to download Docker Engine. This is automatically downloaded alongside Docker Desktop (A Graphical User Interface for Docker) which I **strongly** recommend for beginners.
-[Download Docker Desktop here](https://www.docker.com/get-started/)
+For more detailed explanations, you can refer to Docker's own resources [here](https://docs.docker.com/get-started/).
+## Installation
-For detailed installation instructions based on specific operating systems click here: [Mac](https://docs.docker.com/desktop/install/mac-install/), [Windows](https://docs.docker.com/desktop/install/windows-install/), [Linux](https://docs.docker.com/desktop/install/linux-install/)
+To start using Docker you will have to download Docker Engine. This is automatically downloaded alongside Docker Desktop (A Graphical User Interface for Docker) which I **strongly** recommend for Windows and macOS users.
+[Download Docker here](https://www.docker.com/get-started/)
-----
+For detailed installation instructions based on specific operating systems click here: [Mac](https://docs.docker.com/desktop/install/mac-install/), [Windows](https://docs.docker.com/desktop/install/windows-install/), [Linux](https://docs.docker.com/desktop/install/linux-install/)
-## Getting Started
+## Creating Dockerfiles
-Once you've installed Docker, to see it in action you can follow any one of these quick tutorials:
+Once you've installed Docker, to see it in action you can follow any one of these quick tutorials on creating a Dockerfile that builds a Docker image:
-- [Dockerizing a React App](https://mherman.org/blog/dockerizing-a-react-app/) (Simple and quick tutorial for containerizing a React App, contains explanations when needed. I recommend this if you want to quickly get something running plus see what the next steps look like)
+- [Dockerizing a React App](https://mherman.org/blog/dockerizing-a-react-app/) (This simple and quick tutorial for containerizing a React app contains explanations where needed. I recommend it if you want to quickly get something running and see what the next steps look like)
- [Dockerize a Flask App](https://www.freecodecamp.org/news/how-to-dockerize-a-flask-app/) (Super detailed step-by-step tutorial for containerizing a Flask app. I recommend this if you want to understand the process in detail)
- [Docker's official tutorial for containerizing an application](https://docs.docker.com/get-started/02_our_app/) (Can't go wrong with the official tutorial.)
-----
+### Automatic Dockerfile Generation
+
+Since Docker is widely used, there is a lot of Dockerfile-related knowledge in ChatGPT's training data, and the AI is capable of generating Dockerfiles for most software architectures. If you want to easily containerize your app, you can ask OpenAI's ChatGPT (e.g. the GPT-3.5-turbo model) to generate the Dockerfile for you. To do this, first gather a tree of your project directory so that ChatGPT can better understand your project architecture (on Linux/macOS, run `tree -I node_modules` in your project directory). Then, ask ChatGPT with something similar to the following prompt:
+
+```
+Please write a Dockerfile for my project. I use the command `python3 -m projectname` to start my app. My project file structure is specified by the tree below. Please make sure that the Dockerfile is optimized with best practices from the industry and that the image size is as small as possible.
+
+.
+├── README.md
+├── backend
+│ ├── __init__.py
+│ ├── database
+│ │ ├── __init__.py
+│ ├── flow.py
+│ ├── rapidpro.py
+│ └── user.py
+├── poetry.lock
+├── pyproject.toml
+└── tests
+ ├── RapidProAPI_test.py
+ ├── __init__.py
+ └── flow_test.py
+
+I have the following runtime dependencies that might require APT packages: psycopg2
+```
+
+This method will often generate something more optimized than most beginners could write by hand. For example, it may clear the APT cache after dependency installation, and use separate builder and runtime images to reduce image size, which involves understanding Docker's image layering mechanism. You can learn a lot from reading and understanding the generated Dockerfile.
+
+### Clarifying Dockerfiles and Docker Images
+
+When you start using Docker, you'll come across two key terms: Dockerfile and Docker image. A Dockerfile is a text document that contains all the commands a user could call on the command line to assemble an image. Essentially, it's a set of instructions for Docker to build the image.
+
+A Docker image, on the other hand, is an executable package that includes everything needed to run an application - the code, a runtime, libraries, environment variables, and config files. You can think of an image as a blueprint for a container. Docker builds an image based on the instructions provided in a Dockerfile. Once the image is built, Docker can create a container from this image.
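As an illustration, a minimal Dockerfile for a Python project like the one in the earlier prompt might look like the sketch below (the `requirements.txt` file and the `projectname` module are placeholders for your own project):

```dockerfile
# A small official Python base image keeps the final image lean
FROM python:3.11-slim

WORKDIR /app

# Copy and install dependencies first, so this layer is cached
# between builds when only source code changes
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy the rest of the source and define the start command
COPY . .
CMD ["python3", "-m", "projectname"]
```

Running `docker build -t myapp .` in the project directory builds an image from this Dockerfile, and `docker run myapp` creates and starts a container from that image.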
## Next Steps
-Congratulations! you have successfully learnt how to Dockerize an app. In the process, you have learnt what is a `Dockerfile`, how to create a `Dockerfile`, how to build a Docker Container Image and how to start a Docker Container. Now what's next?
+Congratulations! You have successfully learned how to Dockerize an app. In the process, you have learned what a `Dockerfile` is, how to create one, how to build a Docker image, and how to start a Docker container. Now what's next?
Now you might want a React container to communicate with a containerized Flask API. How do we do this? This is where [Docker Compose](https://docs.docker.com/compose/) comes in. It allows you to define and control multiple containers at once. Your next goal should be defining a `docker-compose.yml` for your project and seeing if you can get multiple services/containers to successfully communicate.
-----
+## Free Automatic Builds
-## Other Resources
+You can use various CI (Continuous Integration) tools to automatically build, push, and deploy your Docker Images. While the official [Docker Hub](https://hub.docker.com/) is a great place to store your images for free, the automated builds are only available for paid accounts. Here, we will guide you on how to use [GitHub Actions](https://github.com/features/actions) to automatically build and push your Docker Images to Docker Hub.
-Here's a [cheat sheet](https://docs.docker.com/get-started/docker_cheatsheet.pdf) of all useful Docker CLI commands and here's a [cheat sheet](https://devhints.io/docker-compose) for docker compose which should help you in your future endeavors. All the best!
+### Prerequisites
-----
+- A [Docker Hub](https://hub.docker.com/) account
+- A [GitHub](https://github.com) account and a repository
+- A Dockerfile
+
+### Steps
+
+#### Creating Access Token
+
+1. First, you need to create a Docker Hub token to allow GitHub Actions to push to your Docker Hub repository. To do this, go to your [Docker Hub Account Settings](https://hub.docker.com/settings/security) and click on "New Access Token". Give it a name and click on "Create". (Don't close this page yet, you will need it later)
+2. Open your GitHub repository and go to the "Settings" tab. Click on "Secrets" and then "New repository secret". Give it a name (e.g. `DOCKERHUB_TOKEN`) and paste the token you just created in the previous step. Click on "Add secret".
+3. Do the same for `DOCKERHUB_USERNAME`, using your Docker Hub username as the value.
+
+#### Creating Workflow
+
+1. In your GitHub repository, create a new folder called `.github/workflows`
+2. Paste the following file under `.github/workflows/docker.yml` (Make sure to replace `YOUR_IMAGE_NAME_HERE` with your image name)
+3. Commit and push your changes
+4. Profit!
+
+```yml
+name: Build and Push Docker Image
+
+on:
+ push:
+ branches:
+ - main # Change this to your default branch
+
+jobs:
+ build:
+ runs-on: ubuntu-latest
-## Docker Terminology
-- **Container**: A package of code bundled by Docker that runs as an isolated process from your machine. The package of code can be pretty much anything, a single python file, an API, a full stack web application etc. A container is also referred to as a **containerized application**.
+ steps:
+ - uses: actions/checkout@v4
+ - uses: docker/setup-buildx-action@v3
-- **Image**: template with a set of instructions for creating a container. *(most of the times these are pre-built so don't worry too much about making one)*
+ - name: Login to DockerHub
+ uses: docker/login-action@v3
+ with:
+ username: ${{ secrets.DOCKERHUB_USERNAME }}
+ password: ${{ secrets.DOCKERHUB_TOKEN }}
+
+ - name: Build and push Docker image
+ uses: docker/build-push-action@v5
+ with:
+ push: true
+ tags: ${{ secrets.DOCKERHUB_USERNAME }}/YOUR_IMAGE_NAME_HERE:latest
+```
+
+Note: This workflow will automatically build and push after each commit to the `main` branch. This is ideal for development, assuming that your main branch is the staging branch. However, you might want to change it or create a separate workflow with a separate image name to only build on tags (releases) for production so that the deployment is more controlled.
+
+## Deploying and Automatic Updates
+
+Now that you have a Docker image on Docker Hub, you can deploy it to a server. There are many ways and platforms that allow you to do this. You can rent a minimal Linux VPS or a Docker server for $4-6 per month on various platforms. One platform I recommend is DigitalOcean, as they have a very intuitive web interface and very good [documentation](https://www.digitalocean.com/docs/) for beginners. You can click the referral link (icon) below to get a free $200 credit for 60 days, what a deal!
+
+[](https://www.digitalocean.com/?refcode=86dbfd5d0266&utm_campaign=Referral_Invite&utm_medium=Referral_Program&utm_source=badge)
+
+
+
+Once you have a Linux server, the easiest way to deploy a Docker image is to use [Docker Compose](https://docs.docker.com/compose/). You can define your services in a `docker-compose.yml` file and then run `docker compose up -d` to start the containers in the background.
+
+### Deploying with Docker Compose on DigitalOcean
+
+1. Install Docker Core on your server [(official guide for Ubuntu)](https://docs.docker.com/engine/install/ubuntu)
+2. Install the Docker Compose plugin [(official guide)](https://docs.docker.com/compose/install/linux/#install-using-the-repository).
+3. Create a `docker-compose.yml` file on your server and paste the following (make sure to replace `YOUR_IMAGE_NAME_HERE` with your image name)
+4. Run `docker compose up -d` to start the containers in the background
+
+```yml
+version: "3.9"
+
+services:
+ app:
+ image: YOUR_IMAGE_NAME_HERE:latest
+ restart: always
+ ports:
+ - 80:80 # Expose any ports you need
+```
+
+If you need a database, you can add it to the composition as well! For example, if you want to use PostgreSQL, you can add the following to your `docker-compose.yml` file:
+
+```yml
+version: "3.9"
+
+services:
+ app:
+ image: YOUR_IMAGE_NAME_HERE:latest
+ restart: always
+ ports:
+ - 80:80 # Expose any ports you need
+ depends_on:
+ - db
+ environment: # In your program, use these environment variables to connect to the database
+ DB_HOST: db
+ DB_PORT: 5432
+ DB_USER: postgres
+ DB_PASSWORD: postgres
+ DB_NAME: postgres
+
+ db:
+ image: postgres:13
+ restart: always
+ environment:
+ POSTGRES_USER: postgres
+ POSTGRES_PASSWORD: postgres
+ POSTGRES_DB: postgres
+```
+
+Since the database is only reachable within the Docker Compose network, using the default `postgres` user and password is reasonably safe, as the database cannot be reached from the wider internet. However, if you do want to expose your database (which is not recommended), you can add the port mapping `5432:5432` to the `db` service and make sure to use a strong password.
+
+If you are using any other database, you can find the docker image on [Docker Hub](https://hub.docker.com/search?q=&type=image&category=Database) and follow the instructions there. Please be sure to read the docker container's documentation carefully! Most questions regarding database images can be answered by reading the documentation.
+
+### Automatic Updates
+
+To automatically update your deployment when you push a new image to Docker Hub, you can use [Watchtower](https://github.com/containrrr/watchtower). It is a simple container that monitors your other containers and updates them when a new image is available. You can add it to your `docker-compose.yml` file like this:
+
+```yml
+services:
+ # ...
+
+ watchtower:
+ image: containrrr/watchtower
+ restart: always
+ volumes:
+ - /var/run/docker.sock:/var/run/docker.sock
+ command: --interval 30
+```
+
+This will check for updates every 30 seconds. You can change this to whatever you want. You can also add a `--cleanup` flag to remove old images after updating.
+
+### Advanced Deployment
+
+If you have multiple services and want to deploy them on the same server with different domain names or set up SSL to make your services secure from MITM (man-in-the-middle) attacks, you can use [Traefik](https://doc.traefik.io/) or [Nginx](https://www.nginx.com/) as a reverse proxy. This is a more advanced topic and is out of the scope of this article. However, you can find many tutorials online on how to do this, such as this DigitalOcean article: [How To Use Traefik v2 as a Reverse Proxy for Docker Containers on Ubuntu 20.04](https://www.digitalocean.com/community/tutorials/how-to-use-traefik-v2-as-a-reverse-proxy-for-docker-containers-on-ubuntu-20-04)
+
+## Other Resources
-Explained in Docker's own words [here](https://docs.docker.com/get-started/)
\ No newline at end of file
+Here's a [cheat sheet](https://docs.docker.com/get-started/docker_cheatsheet.pdf) of all useful Docker CLI commands and here's a [cheat sheet](https://devhints.io/docker-compose) for docker-compose which should help you in your future endeavours. All the best!
diff --git a/Topics/Development_Process/kubernetes.md b/Topics/Development_Process/kubernetes.md
new file mode 100644
index 000000000..f5e643490
--- /dev/null
+++ b/Topics/Development_Process/kubernetes.md
@@ -0,0 +1,68 @@
+# Introduction to Kubernetes
+
+## What is Kubernetes?
+
+Kubernetes, also known as K8s, is an open-source platform that automates the management process for application containers. It was developed by Google and is now maintained by the Cloud Native Computing Foundation.
+
+## Key Features
+
+- **Auto Scaling:** Kubernetes automatically scales resources dynamically to meet an application's demand.
+- **Container Orchestration:** Kubernetes efficiently manages containers across multiple hosts.
+- **Self-healing:** It automatically restarts containers that fail, replaces them, and kills containers that don't respond to user-defined health checks.
+- **Load Balancing:** Kubernetes can distribute network traffic to ensure stability.
+- **Automated Rollouts and Rollbacks:** Allows changes to the application or its configuration while monitoring application health.
+
+## Components
+
+1. **Pods:** The smallest deployable units that can be created, scheduled, and managed.
+2. **Services:** An abstraction to expose an application running on a set of Pods.
+3. **Volumes:** Provide a way to persist data for stateful applications.
+4. **Namespaces:** Enable multiple virtual clusters on the same physical cluster.
+
+## Why Use Kubernetes?
+
+- **Portability:** Works across various cloud and on-premises environments.
+- **Scalability:** Easily scales applications up or down based on demand.
+- **Community Support:** Strong community and ecosystem with numerous resources for learning and troubleshooting.
+
+## Kubernetes Use Cases
+
+### Real-life Example - Reddit's Infrastructure Modernization
+- **Challenge**: Overcoming limitations of traditional provisioning and configuration.
+- **Solution**: Adopted Kubernetes as the core of their internal infrastructure.
+- **Outcome**: Addressed drawbacks and failures of the old system, enhancing site reliability.
+
+### Large Scale App Deployment
+- **Automation and Scaling:** Ideal for large applications, offering features like horizontal pod scaling and load balancing.
+- **Handling Traffic Surges:** Efficient during high-traffic periods and hardware issues.
+
+### Managing Microservices
+- **Efficient Communication:** Facilitates communication between smaller, independent services in a microservice architecture.
+- **Complex Scenario Management:** Aids in managing complex communications and resource distribution.
+
+### CI/CD Software Development
+- **Automation in Pipelines:** Enhances CI/CD processes with automation, improving resource management.
+- **Integration with DevOps Tools:** Often used alongside Jenkins, Docker, and other DevOps tools.
+
+
+## Set-up Kubernetes
+The Kubernetes command-line tool, [kubectl](https://kubernetes.io/docs/reference/kubectl/kubectl/), allows you to run commands against Kubernetes clusters.
+
+You can use kubectl to deploy applications, inspect and manage cluster resources, and view logs. For more information including a complete list of kubectl operations, see the kubectl reference documentation.
+
+- [Install kubectl on Linux](https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/)
+
+- [Install kubectl on macOS](https://kubernetes.io/docs/tasks/tools/install-kubectl-macos/)
+
+- [Install kubectl on Windows](https://kubernetes.io/docs/tasks/tools/install-kubectl-windows/)
+
+
+Visit [here](https://kubernetes.io/docs/setup/) for a more general setup guide.
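To make the components above concrete, here is a minimal, illustrative Deployment manifest (the names and image are placeholders) that runs three replicas of an nginx container. You would apply it with `kubectl apply -f deployment.yaml` and inspect the resulting Pods with `kubectl get pods`:

```yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3            # Kubernetes keeps three Pods running
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.25
          ports:
            - containerPort: 80
```

If one of the three Pods crashes, the self-healing behaviour described earlier replaces it automatically to restore the declared replica count.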
+
+## Conclusion
+
+Kubernetes is a powerful tool for managing containerized applications, providing efficiency and flexibility in application deployment and management.
+
+---
+
+For more detailed information, visit the official Kubernetes website: [kubernetes.io](https://kubernetes.io).
diff --git a/Topics/Software_Engineering.md b/Topics/Software_Engineering.md
index 85f0aaed1..5888c371a 100644
--- a/Topics/Software_Engineering.md
+++ b/Topics/Software_Engineering.md
@@ -4,8 +4,10 @@ Potential topics--
1. Methodologies & Frameworks
1. Agile
- 1. Scrum
+ 1. [Scrum](./Software_Engineering/Scrum.md)
1. [User Stories](./Software_Engineering/User_Stories.md)
2. Kanban
3. XP
- 2. [Waterfall](./Software_Engineering/Waterfall.md)
\ No newline at end of file
+ 2. [Waterfall](./Software_Engineering/Waterfall.md)
+
+#### [Deploying Your Personal Website](./Software_Engineering/Deploying_Personal_Website.md)
diff --git a/Topics/Software_Engineering/Deploying_Personal_Website.md b/Topics/Software_Engineering/Deploying_Personal_Website.md
new file mode 100644
index 000000000..f96584a65
--- /dev/null
+++ b/Topics/Software_Engineering/Deploying_Personal_Website.md
@@ -0,0 +1,94 @@
+# Deploying your personal website using GitHub
+
+### Why should you create a personal website?
+
+A personal website can be a great way to build a more personalized portfolio to showcase your experience, projects, and achievements to potential employers. They give you more flexibility to present your authentic self and personality, and help you establish a personal brand, as opposed to traditional resumes and CVs. As a student, you might want to build a personal website to not only display your achievements, but also to demonstrate your web development skills and gain some practical learning experience.
+
+If you want to display the same content to every visitor of your site, and you’re looking for a cost-effective solution with lower hosting costs, you might want to opt for a static personal website. However, if you would like to add interactive features like allowing visitors to leave comments on your blog posts that require backend processing, or if you want to experiment with external APIs and databases, a dynamic personal website would be a better choice.
+
+For your reference, here is an example of a [static](https://maruan.alshedivat.com/) personal website, and here is an example of a [dynamic](https://bruno-simon.com/) personal website.
+
+## Deploying Static Websites
+
+**Prerequisite:**
+Create a [GitHub](https://github.com/) account
+
+**Step 1: Develop your website**
+
+Locally develop your website on your computer. You can use static site generators like [Hugo](https://gohugo.io/), [Jekyll](https://jekyllrb.com/), [Next.js](https://nextjs.org/), or build a simple website using HTML, CSS, and JavaScript. Locally test your website and view it using a development server to visualize your website before deployment.
+
+**Step 2: Create a GitHub repository**
+
+Login to GitHub and [create a new repository](https://docs.github.com/en/repositories/creating-and-managing-repositories/creating-a-new-repository) for your website. [Push](https://docs.github.com/en/migrations/importing-source-code/using-the-command-line-to-import-source-code/adding-locally-hosted-code-to-github) your files from your local machine to your new GitHub repository.
+
+**Step 3: Deploy to Netlify**
+
+- Login to [Netlify](https://app.netlify.com/login) with your GitHub account. Authorize Netlify to access your GitHub account.
+
+
+
+- Click `Add new site` from the Netlify dashboard. Click `Import an existing project` and select `Deploy with GitHub.`
+
+
+
+- Select the repository that you pushed your website code to. Click `Deploy Site` and Netlify will deploy your site for you.
+
+After you’ve deployed, Netlify provides a URL, but you can buy a custom domain and use that instead. You may also benefit from setting up continuous deployment so every time you push changes to your GitHub repository, they will be automatically deployed.
+
+## Deploying Dynamic Websites
+
+**Prerequisites:**
+Create a [GitHub](https://github.com/) account
+
+**Step 1: Develop your website**
+
+Build your dynamic website using a backend framework (e.g. Flask, Node.js) and a frontend framework (e.g. React) and thoroughly test your website locally using a development server.
+
+**Step 2: Create a GitHub repository**
+
+Login to GitHub and [create a new repository](https://docs.github.com/en/repositories/creating-and-managing-repositories/creating-a-new-repository) for your website. [Push](https://docs.github.com/en/migrations/importing-source-code/using-the-command-line-to-import-source-code/adding-locally-hosted-code-to-github) your files from your local machine to your new GitHub repository.
+
+**Step 3: Choose appropriate hosting services**
+
+For dynamic websites, there are hosting services that can deploy both frontend and backend components of your website. Some examples include [Heroku](https://www.heroku.com/), [Amazon Web Services](https://aws.amazon.com/), [Google Cloud Platform](https://cloud.google.com/), and others. However, choosing a separate hosting service for the backend and frontend can offer flexibility and provide you with options for your specific tech stack.
+
+For this tutorial, consider the following tech stack: Python to handle backend logic, Flask to serve JSON responses to be used by the frontend as well as to interact with MySQL, React for creating the frontend, and MySQL to store data.
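As a minimal sketch of the backend half of this stack (the route name and payload below are placeholders, not part of any required setup), a Flask endpoint that serves a JSON response to the React frontend could look like:

```python
from flask import Flask, jsonify

app = Flask(__name__)


@app.route("/api/hello")
def hello():
    # The React frontend fetches this endpoint and consumes the JSON
    return jsonify(message="Hello from Flask")
```

You can exercise such an endpoint locally (e.g. with `flask run` or Flask's built-in test client) before deploying it to a hosting service.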
+
+**Hosting the React frontend:**
+- Run `npm run build` or `yarn build` to compile your React app for production
+- You can use services like [Netlify](https://www.netlify.com/) or [Vercel](https://vercel.com/) (and many others) to deploy the frontend, but for the purposes of this tutorial, we will use Vercel
+
+1. Login to Vercel with your GitHub account.
+
+
+
+2. Click `Import Project` on the Vercel dashboard and authorize Vercel to access your GitHub repositories. Choose the repository that you pushed your project to.
+
+
+
+
+3. Vercel will automatically detect the build settings — all you have to do is click Deploy. Vercel will also automatically deploy your changes pushed to your GitHub repository. Note that Vercel provides a URL to access your website, but you can configure it with a custom domain if you have one.
+
+**Hosting the Python and Flask backend and MySQL server:**
+
+For hosting the backend, we will use [Railway](https://railway.app/). Note that Railway provides a $5 credit to use their services but you may need to sign up for a usage-based subscription to host your Flask backend and MySQL server.
+
+**Python and Flask backend:**
+- Login with GitHub.
+- In your dashboard, create a new project and choose the option to ‘Deploy from GitHub repo.’ Connect your GitHub repo that has your project.
+
+- Specify the root directory to be the directory that has your Flask app (usually a backend folder).
+
+- Specify the language to build the service as Python.
+
+
+**MySQL server:**
+- Go to the dashboard and create a new project. Choose the option to `Provision MySQL.`
+
+- This automatically sets up a MySQL database for you.
+
+- Railway will provide you with all the necessary credentials to use your server (e.g. host, port, root username and password). You can then configure your backend project settings to set the corresponding environment variables, which the Flask app will use to connect to the database (it is generally bad practice to hardcode credentials in your backend).
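As an illustrative sketch (the variable names below are hypothetical; match them to whatever your Railway project actually exposes), the Flask backend can read these credentials from the environment rather than hardcoding them:

```python
import os

# Hypothetical variable names -- replace them with the names
# shown in your hosting provider's dashboard.
DB_CONFIG = {
    "host": os.environ.get("DB_HOST", "localhost"),
    "port": int(os.environ.get("DB_PORT", "3306")),
    "user": os.environ.get("DB_USER", "root"),
    "password": os.environ.get("DB_PASSWORD", ""),
    "database": os.environ.get("DB_NAME", "mydb"),
}

# A MySQL client library (e.g. mysql-connector-python) could then use:
# conn = mysql.connector.connect(**DB_CONFIG)
```

Keeping credentials in environment variables also means the same code runs unchanged locally and on Railway; only the environment differs.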
+
+
+**Troubleshooting on Vercel and Railway**
+A convenient benefit of using services like Vercel and Railway for deployment is that you can review the deployment logs in their respective dashboards to view error messages. In Railway, you can view both build logs and deployment logs for your backend to trace where your issue may be.
diff --git a/Topics/Software_Engineering/Retries.md b/Topics/Software_Engineering/Retries.md
new file mode 100644
index 000000000..69e51a006
--- /dev/null
+++ b/Topics/Software_Engineering/Retries.md
@@ -0,0 +1,105 @@
+# Retries in web services
+
+Especially relevant for webserver applications, but useful elsewhere too, retries are tricky to get right.
+Retries are the practice of _retrying_ a network request, usually over HTTP or HTTPS, when it fails. The practice relies
+on the assumption that most failures are intermittent, meaning they happen only occasionally.
+
+Retries and throttling are both terms used to talk about the _flow_ of traffic into a service. Often the
+operators/developers of that service want to make guarantees about the rate of that flow, or otherwise direct
+traffic.
+
+## Why retry at all?
+
+Intermittent failure can happen at any level. This can be within a single host if its on-host disk, memory, or
+CPU fails, or often in the communication between two hosts over a network. Networks, especially over the public
+internet using TCP/IP, are known to have periodic failures due to high load/congestion or network infrastructure
+hardware failure.
+
+Retries are a really simple, easy answer to these intermittent failures. If the error only happens rarely, then
+trying a task again is a really effective way to ensure that the message goes through. This often manifests
+itself as retrying API calls.
+
+Some services have built up a vocabulary around retries to control them. For example, calling AWS APIs returns
+metadata about the request itself: most APIs in the AWS SDKv2, the most common client used to make AWS API calls,
+include a `$retryable` field in their responses. If this field is set to `true`, the server is hinting that the
+failure was intermittent and that the client should retry. If the field is set to `false`, the server is hinting
+to the client that the failure is likely to happen again.
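A rough sketch of how a client might honor such a hint; the `(succeeded, retryable)` pair below is a made-up stand-in for real SDK response metadata:

```python
def call_with_hint(make_request, max_attempts=3):
    """Retry only while the server hints the failure is intermittent.

    `make_request` is assumed to return a (succeeded, retryable) pair,
    loosely modeled on the retryable hint described above.
    """
    for _ in range(max_attempts):
        succeeded, retryable = make_request()
        if succeeded:
            return True
        if not retryable:
            return False  # server hints the failure will likely recur
    return False  # out of attempts
```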
+
+## What are the problems with retries?
+
+Since retries are so simple to implement and elegant, they are usually the first tool that developers reach for
+when a dependency of theirs has intermittent failures, but how can this go wrong?
+
+Consider a case where 4 distinct software teams each build products that depend on one another, in a chain like:
+```
+A -> B -> C -> D
+```
+
+That is, service A is calling service B's APIs, and so on. Since B's APIs are known to fail occasionally, A has
+configured an automatic retry count of 3. Underneath the hood, B depends on C. Service A may or may not know this
+about B. But since C has a flaky API too, B also has a retry count of 3. And the same for C.
+
+This setup usually works. If all services are sufficiently scaled up to handle the load they are
+given, there are no problems.
+
+However, imagine a case where service D is down. Though it is at the end of the chain of dependencies, in theory,
+the services should be able to stay up despite their dependencies being down. This type of engineering is called
+fault-tolerance.
+
+The next time that service A takes a request, it forwards it to B, which forwards it to C, which tries to call D's
+API, which fails. C then tries again 3 times before reporting a failure back to B, which also triggers a retry.
+That means C tries _another 3 times_.
+
+Retries deep in a chain of services grow multiplicatively, and a single API call to A has caused
+```
+A -> B: 3 calls
+B -> C: 9 calls
+C -> D: 27 calls
+```
+to fail. D is receiving 27x more requests than usual, and C and B are doing 9x and 3x their normal work, so they might
+start failing themselves, further exacerbating the problem.
+
+That is, D has become a single point of failure for all other services, and even if they don't outright fail, the
+load on B, C, and D is needlessly increased.
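The multiplicative growth is easy to check with a one-line sketch (Python just for illustration):

```python
def amplification(retries_per_hop, depth):
    # Each hop multiplies the number of downstream calls by its retry count.
    return retries_per_hop ** depth
```

With 3 retries at each of A, B, and C, a single failing request turns into `amplification(3, 3)` calls against D.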
+
+## What can we do about retries?
+
+Clients calling services will nearly always have retries configured. However, internal services should rarely
+implement retries while calling other internal services, for precisely this reason.
+
+Another technique to get around excessive retries is to utilize more caching. If service C had cached the responses
+from service D, it's possible that service D going down wouldn't have affected the top-level services at all, and
+everything would have worked as normal. The downside to this approach is that caches are often tricky to get right,
+and sometimes introduce modal behavior in services [1], which is usually a bad thing.
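A toy sketch of this fall-back-to-cache idea (names are illustrative; a real cache also needs sizing, expiry, and invalidation policies):

```python
import time

class FallbackCache:
    """Serve fresh values while the dependency is up; fall back to the
    last known value when it is down."""

    def __init__(self, fetch, ttl=60.0):
        self.fetch = fetch  # callable that hits the dependency
        self.ttl = ttl      # seconds a cached value counts as fresh
        self.value = None
        self.fetched_at = None

    def get(self):
        now = time.monotonic()
        fresh = self.fetched_at is not None and now - self.fetched_at < self.ttl
        if not fresh:
            try:
                self.value = self.fetch()
                self.fetched_at = now
            except Exception:
                if self.fetched_at is None:
                    raise  # nothing cached yet, so the failure must surface
                # Dependency is down: quietly serve the stale value.
        return self.value
```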
+
+## So should I retry?
+
+As always in software engineering, it depends. A good rule of thumb is the external/internal distinction, where external
+dependencies are wrapped in retries, but internal dependencies aren't. It's much easier to control the behavior of
+internal dependencies, either by directly contributing to their product, or speaking to the owners of that product
+itself. Retries are a rough band-aid, and more precise solutions are often better. For example, it might be more work,
+but fixing the root-cause of intermittent failures avoids the problems with retries in the first place, and also
+produces a more stable product.
+
+Retries are also more acceptable when they aren't in the _critical path_ of a service. For an `AddTwoNumbers`
+service, having retries on dependencies within the main `AddTwoNumbers` API call might not be a good idea. However,
+for backup jobs, batch processing, or other non-performance-critical work, retries are often a simple,
+engineering-efficient way to ensure reliability.
+
+## How should I retry?
+
+For most popular programming languages, retries are built into common dependencies. For example,
+1. Rust has `tower`, a generic HTTP service abstraction that offers automatic retries: https://github.com/tower-rs/tower [2],
+2. JavaScript and Typescript have `retry`: https://www.npmjs.com/package/retry [3], and
+3. Go has `retry-go`: https://github.com/avast/retry-go [4]
+
+Each library works slightly differently, but can be used in simple or complex ways. For example, it could be as simple
+as immediately retrying the network request upon failure, or more complicated, involving concepts like jitter (making sure
+many concurrent clients don't all retry at the same time), exponential backoff (clients waiting longer and longer between
+successive retries), or other refinements [1].
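As a concrete sketch of these ideas, exponential backoff with full jitter might look like this in Python (a simplified illustration, not a substitute for the libraries above):

```python
import random
import time

def retry_with_backoff(operation, max_attempts=3, base_delay=0.1, max_delay=2.0):
    """Call `operation` until it succeeds or attempts run out."""
    for attempt in range(max_attempts):
        try:
            return operation()
        except Exception:
            if attempt == max_attempts - 1:
                raise  # out of attempts: surface the failure to the caller
            # Exponential backoff: the delay cap doubles each attempt.
            delay = min(max_delay, base_delay * (2 ** attempt))
            # Full jitter: sleep a random fraction of the cap so that many
            # concurrent clients don't all retry at the same moment.
            time.sleep(random.uniform(0, delay))
```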
+
+## References
+1. https://brooker.co.za/blog/2021/05/24/metastable.html
+2. https://github.com/tower-rs/tower
+3. https://www.npmjs.com/package/retry
+4. https://github.com/avast/retry-go
\ No newline at end of file
diff --git a/Topics/Software_Engineering/Scrum.md b/Topics/Software_Engineering/Scrum.md
new file mode 100644
index 000000000..088a487e6
--- /dev/null
+++ b/Topics/Software_Engineering/Scrum.md
@@ -0,0 +1,78 @@
+## Scrum Framework
+
+### Table of Contents
+- [What is scrum?](#what-is-scrum)
+- [Scrum values](#scrum-values)
+- [Members of a scrum team](#members-of-a-scrum-team)
+- [What are sprints?](#what-are-sprints)
+- [Scrum artifacts](#scrum-artifacts)
+- [Scrum ceremonies](#scrum-ceremonies)
+- [Why is scrum important?](#why-is-scrum-important)
+- [Resources](#resources)
+
+### What is scrum?
+Scrum is an agile project management framework that helps teams organize and manage their work. While most often used by software development teams, the framework also applies to other sectors such as HR, accounting, and finance. The term was coined in a 1986 Harvard Business Review article in which the authors compared high-performing teams to the scrum formation used in rugby. Scrum specifies the artifacts, ceremonies/events, and roles associated with each sprint to get work done.
+
+### Scrum values
+- Commitment
+ - Team members should make sure not to overcommit to the amount of work they can complete and should be committed to their time-based tasks.
+- Courage
+ - Team members should have the courage to question processes and ask open, challenging questions about anything that hinders the ability to progress.
+- Focus
+ - The team should be focused on their selected tasks to complete the specified work within a sprint.
+- Openness
+ - There should be regular meetings, such as daily standups, to openly talk about progress and blockers.
+ - The team should be open to new ideas.
+- Respect
+ - Everyone should recognize a team member's contributions and accomplishments.
+ - Respect for one another is essential to ensure mutual collaboration and cooperation.
+
+### Members of a scrum team
+A scrum team consists of three specific roles:
+- Product owner:
+ - The product owner is the expert in understanding the business, customer, and marketing needs.
+ - They focus on ensuring the development team delivers the most value to the business.
+- Scrum master:
+ - The scrum master coaches the team and organizes/schedules resources for scrum meetings.
+ - Their goal is to optimize the flow for the scrum team to ensure maximal productivity and minimal blockers.
+- Development team:
+  - The development team consists of the people who build the product and work on sprint items, according to the specifications from the product owner.
+ - The team includes developers, UX specialists, Ops engineers, testers, and designers.
+ - With these differing skill sets, the team can cross-train each other to prevent bottlenecks.
+
+### What are sprints?
+A sprint is a short, fixed-length period during which the scrum team works to complete a specified amount of work. Sprints usually correspond to some set of features a team wants to add. The goal of a sprint varies from team to team, some goals being a finished product accessible to customers and others being the completion of a subsection of a larger product. The usual timeline for a sprint is two weeks, but the timeline varies between teams.
+
+### Scrum artifacts
+Scrum artifacts refer to the information a scrum team uses detailing the product in development, the tasks involved in a sprint cycle, and the end goal.
+- Product backlog:
+ - The product backlog is the primary list of work that needs to be done and is maintained and updated by the product owner or manager.
+- Sprint backlog:
+  - The sprint backlog is the list of user stories or bug fixes, chosen from the product backlog, that should be completed by the end of the current sprint cycle.
+- Increment (sprint goal):
+ - The increment is the end product of a sprint.
+ - The increment can mean a finished product, features usable to customers by the end of the sprint, or a completed section of a larger project.
+
+### Scrum ceremonies
+The scrum framework incorporates meetings and events that teams hold regularly:
+- Backlog organization:
+ - This is the responsibility of the product owner, who makes sure to continually update and maintain the product backlog, according to feedback from users and the development team.
+- Sprint planning:
+ - This meeting is led by the scrum master and includes the development team, where the items to be completed during the sprint are added from the product backlog per the sprint goal.
+- Sprint:
+ - This is the time period where the scrum team works to complete items in the scope of the sprint.
+- Daily standup:
+  - The standup is a regularly scheduled meeting in which team members update one another on their progress and mention blockers they are facing with their work.
+- Sprint review:
+ - This occurs at the end of the sprint, where the team meets to demo the end product and showcase the completed sprint backlog items.
+- Sprint retrospective:
+ - Also occurring at the end of the sprint, the retro is where the team discusses the aspects of the sprint that worked and parts that could use improvement.
+ - This builds in feedback and continual improvement of processes in the scrum framework.
+
+### Why is scrum important?
+Teams use the scrum framework since it provides an efficient and adaptable way to organize and manage teams and products. It is team-centric and self-managed and encourages creativity with the flexibility to assign work based on work styles. The framework has concrete roles, events, artifacts, and values. These aspects of scrum are incorporated into professional workplaces and can be used in CSC301 to finish the project in the short amount of time given.
+
+### Resources
+- [Atlassian - scrum](https://www.atlassian.com/agile/scrum)
+- [AWS - scrum](https://aws.amazon.com/what-is/scrum/)
+- [Techtarget - scrum](https://www.techtarget.com/searchsoftwarequality/definition/Scrum)
diff --git a/Topics/Teamwork.md b/Topics/Teamwork.md
index ee483dcc4..631474d04 100644
--- a/Topics/Teamwork.md
+++ b/Topics/Teamwork.md
@@ -11,6 +11,20 @@ Working as a team effectively requires a lot of planning and organization. Proje
## Team Communication
During the early stages of forming a team, the group should also establish general expectations for how the team plans to communicate with one another. This may include setting up a team communication channel on a specific platform (ex. discord or slack) and establishing regular check-up meetings (including when these meetings are and where they will be held - for example, in person or online). These general meetings can be used to outline general expectations and project requirements, update one another on individual progress, delegate new reponsibilities, and set project deadlines.
+### Time Management
+An important part of team communication is communicating tasks, deadlines, and progress. Each team member should have good knowledge of the general progress of the project. This gives everyone a deeper understanding of the current state of the project so that they can allocate their time more adequately. Perhaps even more importantly, the project manager will gain a deeper understanding of the project's progress and the team members' working habits, allowing them to manage the team more effectively.
+
+With a deeper understanding of the project's progress, project managers will be able to assign more realistic dates and deadlines, and keep team members on task and on the right tasks. With a deeper understanding of their team members' working habits, they can also more adequately update stakeholders and allocate budget. This will help avoid unforeseen issues while approaching deadlines. For instance, if a project lead notices that their team is lagging behind on a task for a specific deadline, they will be able to reassess the deadline, and the deeper insight will allow them to set future deadlines more accurately.
+
+In general, when setting goals for your team as a project manager, there are many things to consider, such as the clarity of the goal, whether it is realistic, and whether it is relevant and important to your project. Proper communication with your team members will aid in achieving each of these goals, and following these tips will help your team reach their goals and provide a better product to your client. In this process, organization is key, and a project manager must try to leverage all the tools available to them to improve their ability to manage the team. When it comes to deadline-setting and task prioritizing, one of the most powerful tools a manager can use is a todo list or project board/roadmap.
+
+There are many useful options for this, such as [Notion](https://notion.notion.site/Product-roadmap-c5e8829bb9644dd08c576452ee200404) and [Trello](https://trello.com/b/1x4Uql2u/project-management). Both of these tools allow a project manager and their team to create a shared page where everyone can check important tasks and notes about those tasks.
+
+Here is an example progress board given by Trello:
+
+You may notice that there are several states for each task, from 'To Do' to 'Done,' or even 'Blocked.' This allows the team to understand the progress of all their tasks, including when a task has been stopped. You will also notice that team members are assigned to some tasks, which clearly communicates to the whole team who is working on what. As you can see, a project manager can gain a lot of information through this kind of tool, all of which helps them guide their team as described above.
+
+
### Getting Clarification
You always want to ensure that all team members are on the same page when working together. If you are ever unsure about something that was said during a meeting or have any confusions in general, ask your team members with specific and explicit questions. It is helpful to reiterate your understanding back to the team to confirm your understanding is correct. Some example phrases you may use (taken from [Week 8 Lecture Slide 12](https://q.utoronto.ca/courses/293515/files/25224801/download?download_frd=1)) includes:
- "Can you please clarify..."
diff --git a/Topics/Tech_Stacks.md b/Topics/Tech_Stacks.md
index 29159444a..b52785347 100644
--- a/Topics/Tech_Stacks.md
+++ b/Topics/Tech_Stacks.md
@@ -17,3 +17,4 @@
### [Learning Nodemailer](./Tech_Stacks/Nodemailer.md)
### [React Components Guide](./Tech_Stacks/React_Components.md)
### [Temporal For Workflow Orchestration](./Tech_Stacks/Temporal.md)
+### [Learning Cypress](./Tech_Stacks/Cypress.md)
diff --git a/Topics/Tech_Stacks/Apache_Superset.md b/Topics/Tech_Stacks/Apache_Superset.md
index 7ed40c503..e495065ac 100644
--- a/Topics/Tech_Stacks/Apache_Superset.md
+++ b/Topics/Tech_Stacks/Apache_Superset.md
@@ -1,19 +1,23 @@
# Apache Superset
+
## Prerequisites:
* [React](https://react.dev/) for Frontend-related knowledge.
* [Node.js](https://nodejs.org/en/about) for Backend-related knowledge.
* [PostgreSQL](https://www.postgresql.org/) for SQL and Databases knowledge as well as writing queries.
* [Python](https://www.python.org/) for writing scripts as well as changing parts of the configuration files.
+
## Introduction:
Apache Superset is a modern, enterprise-ready business intelligence web application. It is fast, lightweight, intuitive, and loaded with options that make it easy for users of all skill sets to explore and visualize their data, from simple pie charts to highly detailed deck.gl geospatial charts.
+
## Set-up:
**The easiest way to set up Apache Superset is by using Docker Desktop. To install Docker Desktop, follow the instruction [HERE](https://www.docker.com/products/docker-desktop/). You can find more details about installing Superset by Docker Desktop [HERE](https://superset.apache.org/docs/installation/installing-superset-using-docker-compose).**
+
### Potential Issues with Docker Desktop:
1. The container named `superset_init` may exit with a code one due to a thread deadlock. To fix this issue, simply kill the process using `CTRL + C`, and re-compose it again.
@@ -38,6 +42,56 @@ This message shows up due to an update in version 2.1.0 to force secure configur
1. The command `pip install apache-superset` doesn't work. This is because Apache Superset currently supports python version 3.8 and 3.9. Any python versions that's lower or higher will result in a failure.
+## Creating a Custom Plugin:
+
+**To get started on creating a custom plugin, you can follow the instruction [HERE](https://superset.apache.org/docs/contributing/creating-viz-plugins/)**
+
+Note that while MacOS or Linux systems are more suitable, Windows is also a viable option if you have Docker installed.
+
+There are example plugins [example](https://github.com/preset-io/superset-plugin-chart-liquid) which you can reference. Furthermore, this [youtube tutorial](https://www.youtube.com/watch?v=LDHFY9xTzls) can help you as well.
+
+**To set up for the plug-in, you would need to have the following in your system:**
+1) apache-superset 3.0.0
+2) python 3.9.7 or above
+3) node version 16
+4) npm version 7 or 8
+
+### Potential Issues Creating a Custom Plugin:
+
+Note that you may see errors from `npm run build`, but these do not necessarily affect the actual building of the plugin. In case of version conflict errors between dependencies managed by `npm`, it is recommended to use the `--force` flag.
+
+**IMPORTANT**: You can put the `your-plugin` folder anywhere on your machine EXCEPT in the `superset/superset-frontend/plugins` folder. The custom plugin will fail to run and cause errors if it is placed there, because that folder holds the default plugins shipped with Apache Superset, and Superset runs processes on every folder inside it that may break your custom plugin, since it does not share the default plugins' configuration.
+
+To add the package to Superset, go to the `superset-frontend` subdirectory in your Superset source folder (assuming both the `your-plugin` plugin and `superset` repos are in the same root directory) and run
+```
+npm i -S ../../your-plugin
+```
+
+If your Superset plugin exists in the `superset-frontend` directory and you wish to resolve TypeScript errors about `@superset-ui/core` not being resolved correctly, add the following to your `tsconfig.json` file:
+
+```
+"references": [
+ {
+ "path": "../../packages/superset-ui-chart-controls"
+ },
+ {
+ "path": "../../packages/superset-ui-core"
+ }
+]
+```
+
+You may also wish to add the following to the `include` array in `tsconfig.json` to make Superset types available to your plugin:
+
+```
+"../../types/**/*"
+```
+
+Finally, if you wish to ensure your plugin `tsconfig.json` is aligned with the root Superset project, you may add the following to your `tsconfig.json` file:
+
+```
+"extends": "../../tsconfig.json",
+```
+
## Extra Resources:
* [Installing Apache Superset on Kubernetes](https://superset.apache.org/docs/installation/running-on-kubernetes)
diff --git a/Topics/Tech_Stacks/CSS.md b/Topics/Tech_Stacks/CSS.md
index 827d57dd6..d162c775e 100644
--- a/Topics/Tech_Stacks/CSS.md
+++ b/Topics/Tech_Stacks/CSS.md
@@ -39,6 +39,8 @@ CSS grid is also a positioning alternative that provides a grid layout module, i
Native CSS can be difficult to use, so CSS frameworks have been created so developers can use pre-made styles in order to create good looking website components, navigation bars, buttons, etc. in an easier and faster way without needing to know the semantics of CSS. Two popular CSS frameworks include [Tailwind CSS](https://tailwindcss.com/) and [Bootstrap CSS](https://getbootstrap.com/docs/3.4/css/).
+For an introduction to the Tailwind framework, refer to [Getting Stylish and Responsive with Tailwind CSS](./Tailwind.md).
+
[React-Bootstrap](https://react-bootstrap.github.io/) is a Bootstrap CSS framework specifically for use on React apps. There is also the guide on our wiki [here](./Bootstrap.md) that can get you started on Bootstrap's basics.
Generally, Bootstrap is easier to use and will produce a good looking website in a shorter amount of time, while Tailwind CSS is more customizable and can create more unique looking elements, but requires more of a time investment and is a bit harder to learn and work with compared to Bootstrap.
diff --git a/Topics/Tech_Stacks/Cypress.md b/Topics/Tech_Stacks/Cypress.md
new file mode 100644
index 000000000..a620b9b1b
--- /dev/null
+++ b/Topics/Tech_Stacks/Cypress.md
@@ -0,0 +1,108 @@
+# E2E Testing with Cypress
+
+- [Cypress Introduction](#cypress-introduction)
+- [Why do end to end testing?](#why-do-end-to-end-testing)
+- [Why Cypress?](#why-cypress)
+- [Installation and setup:](#installation-and-setup)
+- [The basics](#the-basics)
+- [Best Practices](#best-practices)
+
+## Cypress Introduction
+
+Cypress is mainly used for testing web applications, especially those built in JavaScript. It provides an interface to programmatically test your application and to visually inspect what went wrong (or right) in tests. This page will primarily focus on E2E (end-to-end) testing with Cypress rather than component testing.
+
+## Why do end to end testing?
+
+[https://circleci.com/blog/what-is-end-to-end-testing/](https://circleci.com/blog/what-is-end-to-end-testing/)
+
+The above link has a good explanation of what end-to-end testing is and why it should be used. While other types of tests like unit tests or functional tests make sure a single component/module works as expected, an end-to-end test starts from the perspective of the end user and tries to mimic what an end user would do when accessing your application.
+
+Cypress very closely mimics a real user. Think of it as a robot accessing your website from a browser like a human would, except you can program the robot to interact with your website however you like and programmatically check the output on the screen.
+
+## Why Cypress?
+
+There exist many different testing frameworks online, such as [Selenium](https://www.selenium.dev/), [Jest](https://jestjs.io/), [Mocha](https://mochajs.org/), and more.
+
+Cypress is most useful for UI, integration and end-to-end testing, so it can be used in tandem with unit testing frameworks like Jest.
+
+Cypress is built on top of Mocha and uses its framework for tests as well. The main difference is that Cypress focuses more on improving client-side and UI tests.
+
+Selenium is often compared to Cypress, due to it being one of the most popular UI testing frameworks before Cypress was created. One of the biggest differences is that Cypress automatically retries commands while waiting for DOM elements to load properly, helping to prevent [flaky tests](https://www.jetbrains.com/teamcity/ci-cd-guide/concepts/flaky-tests/) and eliminating the need to write the wait or sleep helpers that were needed in Selenium. Cypress is also faster and easier to set up and start creating tests with than Selenium. However, Selenium is more flexible, allowing for testing in multiple browsers at a time, and for writing tests in languages other than JavaScript.
+
+## Installation and setup:
+
+Cypress can be installed with [npm](https://www.npmjs.com/): `npm install cypress`
+
+See [https://docs.cypress.io/guides/getting-started/installing-cypress](https://docs.cypress.io/guides/getting-started/installing-cypress) for more details.
+
+To run cypress, we can use the command `npx cypress open` and follow the instructions provided on the UI.
+
+See [https://docs.cypress.io/guides/getting-started/opening-the-app](https://docs.cypress.io/guides/getting-started/opening-the-app) for more details.
+
+## The basics
+
+Cypress has an extremely detailed getting-started guide that explains how to create and run tests, and links to a lot of further information as well.
+
+[https://docs.cypress.io/guides/end-to-end-testing/writing-your-first-end-to-end-test](https://docs.cypress.io/guides/end-to-end-testing/writing-your-first-end-to-end-test)
+
+[https://docs.cypress.io/guides/core-concepts/introduction-to-cypress](https://docs.cypress.io/guides/core-concepts/introduction-to-cypress)
+
+I highly recommend reading through the above two links, and the entirety of the core concepts section in the documentation. It gives a thorough introduction on how cypress works and how to use it to test your application.
+
+The first link provides a detailed guide on how cypress commands work and how to read the testing UI.
+
+The second link provides a guide to most of the commonly used functions in cypress, like how to query for elements, check if they have or not have a specific property, actions such as clicking on buttons or filling out forms, and more.
+
+## Best Practices
+
+One common use case for Cypress (and UI testing in general) is testing responsiveness: does the UI look like it should in different viewports?
+
+While it is possible to duplicate tests for each viewport, this may force you to repeat large parts of the code that selects elements and fills out forms, none of which has anything to do with the responsiveness you are actually testing.
+
+It is much easier to use the `beforeEach()` hook and a `context()` block to bundle viewports together. As an example:
+
+```javascript
+// The "large" dimensions here are illustrative; the original listed the
+// same size for both entries, which was likely a typo.
+const viewports = [{ name: "small", dim: [300, 800] },
+                   { name: "large", dim: [1280, 800] }]
+
+viewports.forEach(viewport => {
+  context("Viewport " + viewport.name, () => {
+    beforeEach(() => {
+      // Set the browser window size before each test in this context
+      cy.viewport(viewport.dim[0], viewport.dim[1])
+    })
+    // tests go here
+  })
+})
+```
+In tests, you can include snippets of code like
+```javascript
+if (viewport.name == 'small') {
+  cy.get("@somedivmobileonly").should('exist')
+} else if (viewport.name == 'large') {
+  cy.get("@somedivmobileonly").should('not.exist')
+}
+```
+
+Another common test for responsiveness is checking the alignment of items, for example testing that one element should be above another in a small viewport and beside another in a larger viewport.
+
+In this case, you should use a closure (described in the [variables and aliases](https://docs.cypress.io/guides/core-concepts/variables-and-aliases) section) to store the first element's position:
+
+```javascript
+cy.get('elem1').then($elem => {
+ cy.get('elem2').then($elem2 => {
+ let p1 = $elem.position()
+ let p2 = $elem2.position()
+ if (viewport.name == 'small') {
+ expect(p1.top).to.be.greaterThan(p2.top)
+ expect(p1.left).to.be.equal(p2.left)
+ } else {
+      expect(p1.top).to.be.equal(p2.top)
+      expect(p1.left).to.be.greaterThan(p2.left)
+ }
+ })
+})
+```
+
+Note the use of `expect` instead of `should`, since we are not chaining off of a cypress command we use an assertion instead. See [here](https://docs.cypress.io/guides/references/assertions) for other assertions.
+
+For more, Cypress provides their own list of best practices here: [https://docs.cypress.io/guides/references/best-practices](https://docs.cypress.io/guides/references/best-practices). I highly recommend reading their guide; had I known about it earlier, I would have saved a lot of effort learning the hard way what not to do.
diff --git a/Topics/Tech_Stacks/Flask.md b/Topics/Tech_Stacks/Flask.md
new file mode 100644
index 000000000..0005bf487
--- /dev/null
+++ b/Topics/Tech_Stacks/Flask.md
@@ -0,0 +1,125 @@
+# Intro to Flask with Flask-SQLAlchemy
+
+
+## 1. Introduction
+### What is Flask and why is it useful?
+Flask is a lightweight and flexible web framework for Python. Developed by Armin Ronacher, it is designed to be simple and easy to use, providing the essentials for building web applications without imposing rigid structures. Flask is often referred to as a "micro" framework because it focuses on keeping the core simple and extensible, allowing developers to choose the tools and libraries they need.
+
+The inclusion of the Jinja2 templating engine, built-in development server and Flask's native support for RESTful request handling are desirable features. Its versatility in deployment and suitability for prototyping and small to medium-sized projects make Flask an ideal framework for projects where customization and control over the stack are key considerations.
+
+Here is an overview of Flask: [Flask Overview](https://flask.palletsprojects.com/en/3.0.x/#)
+
+### What is Flask-SQLAlchemy and why is it useful?
+
+Flask-SQLAlchemy is an extension for Flask that integrates SQLAlchemy, a powerful SQL toolkit and Object-Relational Mapping (ORM) library, into Flask applications. This extension simplifies database interactions by providing a convenient interface for defining models, executing queries, and managing database connections seamlessly within the Flask framework.
+
+Some of the advantageous features include seamless integration with Flask, session handling, support for Flask Script and Flask Restful, compatibility with Flask extensions and database migrations.
+
+
+
+## 2. Getting set up
+### Setting up Flask:
+First install Flask following the instructions here: [Flask installation](https://flask.palletsprojects.com/en/3.0.x/installation/)
+
+This will make sure that all dependencies are obtained, the virtual environment is created and Flask is installed.
+Here is a summary of the steps:
+
+Create an environment:
+```bash
+> mkdir myproject
+> cd myproject
+> py -3 -m venv .venv
+```
+Activate the environment:
+```bash
+> .venv\Scripts\activate
+```
+Install Flask:
+```bash
+$ pip install Flask
+```
+
+Next, the Flask application can be set up.
+This shows you how the project layout works: [Project Layout](https://flask.palletsprojects.com/en/3.0.x/tutorial/layout/)
+
+And this is how to set the application up: [Application Setup](https://flask.palletsprojects.com/en/3.0.x/tutorial/factory/)
+
+Alternatively, there is also a useful quickstart guide for getting started quickly: [Quickstart Guide](https://flask.palletsprojects.com/en/3.0.x/quickstart/)
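+
+Putting the pieces together, a minimal Flask application (essentially the quickstart's hello-world) looks like this:
+
+```python
+from flask import Flask
+
+app = Flask(__name__)
+
+@app.route("/")
+def hello_world():
+    # Flask routes the root URL to this view function
+    return "<p>Hello, World!</p>"
+```
+
+Run it with `flask run` from the project directory, then visit `http://127.0.0.1:5000/`.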
+
+### Setting up Flask-SQLAlchemy:
+Note that Flask-SQLAlchemy is a wrapper around SQLAlchemy, so it will be useful to check out the documentation and tutorial for using SQLAlchemy linked here:
+[SQLAlchemy Documentation](https://docs.sqlalchemy.org/en/20/tutorial/index.html)
+
+Then follow [these steps](https://flask-sqlalchemy.palletsprojects.com/en/3.1.x/quickstart/#installation) to install Flask-SQLAlchemy and to initialize and configure the extension. The guide also shows how to define models and create tables.
+
+Here is a summary of the steps:
+Install Flask-SQLAlchemy with:
+```bash
+$ pip install -U Flask-SQLAlchemy
+```
+Initialize the extensions:
+```python
+from flask import Flask
+from flask_sqlalchemy import SQLAlchemy
+from sqlalchemy.orm import DeclarativeBase
+
+class Base(DeclarativeBase):
+ pass
+
+db = SQLAlchemy(model_class=Base)
+```
+
+Configure the extensions:
+```python
+# create the app
+app = Flask(__name__)
+# configure the SQLite database, relative to the app instance folder
+app.config["SQLALCHEMY_DATABASE_URI"] = "sqlite:///project.db"
+# initialize the app with the extension
+db.init_app(app)
+```
+
+Define models:
+```python
+from sqlalchemy import Integer, String
+from sqlalchemy.orm import Mapped, mapped_column
+
+class User(db.Model):
+ id: Mapped[int] = mapped_column(Integer, primary_key=True)
+ username: Mapped[str] = mapped_column(String, unique=True, nullable=False)
+ email: Mapped[str] = mapped_column(String)
+```
+
+Create tables:
+```python
+with app.app_context():
+ db.create_all()
+```
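+
+Putting the snippets above together, records can then be inserted and queried inside an application context. This sketch assumes the `User` model from this section and uses an in-memory SQLite database so it is self-contained:
+
+```python
+from flask import Flask
+from flask_sqlalchemy import SQLAlchemy
+from sqlalchemy import Integer, String
+from sqlalchemy.orm import DeclarativeBase, Mapped, mapped_column
+
+class Base(DeclarativeBase):
+    pass
+
+db = SQLAlchemy(model_class=Base)
+
+class User(db.Model):
+    id: Mapped[int] = mapped_column(Integer, primary_key=True)
+    username: Mapped[str] = mapped_column(String, unique=True, nullable=False)
+    email: Mapped[str] = mapped_column(String)
+
+app = Flask(__name__)
+app.config["SQLALCHEMY_DATABASE_URI"] = "sqlite://"  # in-memory DB for the example
+db.init_app(app)
+
+with app.app_context():
+    db.create_all()
+    # insert a row, then query it back
+    db.session.add(User(username="ada", email="ada@example.com"))
+    db.session.commit()
+    user = db.session.execute(db.select(User).filter_by(username="ada")).scalar_one()
+    print(user.email)
+```
+
+The username `ada` and the in-memory URI are made up for illustration; in a real app you would keep the `sqlite:///project.db` URI from the configuration step above.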
+
+## 3. Basic useful features
+### Flask
+Here is the documentation with the basics needed to start developing using Flask. It assumes working knowledge of Python.
+[Flask Basics](https://flask.palletsprojects.com/en/3.0.x/tutorial/)
+
+
+### Flask-SQLAlchemy:
+Here are the basic useful features for using queries with Flask-SQLAlchemy. It covers the basics of inserting, deleting, and updating records in the database, selecting data, and querying for views.
+[Flask-SQLAlchemy Basics](https://flask-sqlalchemy.palletsprojects.com/en/3.1.x/queries/)
+
+
+## 4. Conclusion
+In the dynamic landscape of web development, the tandem use of Flask and Flask-SQLAlchemy emerges as a compelling solution, seamlessly blending simplicity with robust database capabilities. Setting up a Flask application becomes a swift endeavor, marked by the ease of installation and quick configuration. Flask's minimalistic design empowers developers with the freedom to choose and integrate components, facilitating rapid prototyping and efficient development. With the added integration of Flask-SQLAlchemy, the database layer becomes an integral part of the Flask ecosystem, offering a unified and expressive interface for model definition, database querying, and session management. Ultimately, the Flask and Flask-SQLAlchemy duo empowers developers to create scalable, maintainable, and feature-rich web applications.
+
+
+## 5. Additional Resources
+[Here](https://flask.palletsprojects.com/en/3.0.x/) is the overview for the Flask documentation.
+
+
+[Here](https://flask-sqlalchemy.palletsprojects.com/en/3.1.x/) is an overview for the Flask-SQLAlchemy documentation.
+
+
+[Here](https://www.youtube.com/watch?v=uZnp21fu8TQ&t=1s&ab_channel=TechWithTim) is a useful video for learning about Flask-SQLAlchemy.
+
+[Here](https://flask.palletsprojects.com/en/2.3.x/errorhandling/) is a link to some common errors that users run into with Flask.
+
+
+
diff --git a/Topics/Tech_Stacks/JsonParsing.md b/Topics/Tech_Stacks/JsonParsing.md
new file mode 100644
index 000000000..89b4e0ab7
--- /dev/null
+++ b/Topics/Tech_Stacks/JsonParsing.md
@@ -0,0 +1,287 @@
+# Introduction to JSON Parsing
+
+
+
+## Introduction to JSON
+
+JSON (JavaScript Object Notation) is a lightweight and human-readable data interchange format. It serves as a standard data format for transmitting and exchanging data between a server and a web application, as well as between different parts of an application. JSON is language-agnostic, meaning it can be easily understood and used by various programming languages.
+
+## JSON Applications
+
+JSON plays a crucial role in APIs, simplifying data transmission between different programming languages. Its readability and simplicity make it an ideal choice for storing configuration settings and handling complex data structures, especially in NoSQL databases like MongoDB. In web development, JSON facilitates seamless communication between servers and clients, and it is a natural fit for JavaScript applications. Serialization and deserialization processes leverage JSON to convert data into string formats and back. Beyond web development, JSON is employed in logging systems, real-time communication protocols, and IoT applications, showcasing its adaptability across diverse domains. The format is also integral to security measures, as evident in its use within JSON Web Tokens (JWT).
+
+
+## JSON Data Types
+
+### Primitive Data Types
+
+JSON supports several primitive data types. These primitive data types are the basic building blocks used to represent values within a JSON structure. The primary primitive data types in JSON are:
+
+ * String: Represents a sequence of characters enclosed in double quotation marks (")
+
+ Example: "Hello, World!"
+
+
+ * Number: Represents numeric values, including integers and floating-point numbers.
+
+ Examples: 42, 3.14, -17
+
+ * Boolean: Represents a logical value, either true or false.
+
+ Examples: true, false
+
+ * Null: Represents an empty value or the absence of a value.
+
+ Example: null
+
+These primitive data types can be used alone or combined to create more complex JSON structures such as objects and arrays. For example, an object may contain key-value pairs where the values can be strings, numbers, booleans, null, or even nested objects and arrays.
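+
+For instance, when parsed in Python with the standard `json` module, each primitive maps directly onto a native type:
+
+```python
+import json
+
+# each JSON primitive maps to a native Python type
+print(type(json.loads('"Hello, World!"')))  # <class 'str'>
+print(type(json.loads('42')))               # <class 'int'>
+print(type(json.loads('3.14')))             # <class 'float'>
+print(type(json.loads('true')))             # <class 'bool'>
+print(json.loads('null'))                   # None
+```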
+
+### Complex Data Types
+
+JSON allows for the construction of more complex data structures beyond primitive data types by using objects and arrays.
+
+ * Objects: An object in JSON is an unordered collection of key-value pairs. Key-value pairs are separated by commas and enclosed in curly braces {}. Keys must be strings, and values can be strings, numbers, booleans, null, objects, or arrays.
+ Example:
+
+```json
+{
+ "name": "John Doe",
+ "age": 30,
+ "isStudent": false,
+ "address": {
+ "city": "Exampleville",
+ "zipcode": "12345"
+ }
+}
+```
+In this example, "name", "age", "isStudent", and "address" are keys, and their corresponding values are strings, numbers, boolean, and another object, respectively.
+
+
+ * Array: An array in JSON is an ordered list of values. Values are separated by commas and enclosed in square brackets []. Values can be strings, numbers, booleans, null, objects, or other arrays.
+ Example:
+
+```json
+ [
+ "apple",
+ "banana",
+ "orange",
+ {
+ "color": "red",
+ "quantity": 5
+ }
+]
+```
+In this example, the array contains strings ("apple", "banana", "orange") and an object with keys "color" and "quantity".
+
+JSON structures often combine objects and arrays to represent more complex data hierarchies. For instance, an array of objects can represent a collection of similar entities, where each object has multiple key-value pairs.
+
+```json
+[
+ {
+ "name": "Alice",
+ "age": 25,
+ "isStudent": true
+ },
+ {
+ "name": "Bob",
+ "age": 30,
+ "isStudent": false
+ },
+ {
+ "name": "Charlie",
+ "age": 22,
+ "isStudent": true
+ }
+]
+```
+
+In this example, the array contains three objects, each representing a person with attributes such as name, age, and student status.
+
+
+## JSON Parsing in Different Programming Languages
+
+### Python Parse JSON
+
+#### Parse JSON String in Python
+
+Python has a built in module that allows you to work with JSON data. At the top of your file, you will need to import the json module.
+
+```python
+import json
+```
+
+
+To parse a JSON string into a Python dictionary, use the `json.loads()` method.
+```python
+import json
+
+# assigns a JSON string to a variable called jess
+jess = '{"name": "Jessica Wilkins", "hobbies": ["music", "watching TV", "hanging out with friends"]}'
+
+# parses the data and assigns it to a variable called jess_dict
+jess_dict = json.loads(jess)
+
+# Printed output: {'name': 'Jessica Wilkins', 'hobbies': ['music', 'watching TV', 'hanging out with friends']}
+print(jess_dict)
+
+```
+
+#### Parse and Read JSON File in Python
+
+Suppose we have a JSON file called fcc.json. If we want to read that file, we first need to use Python's built-in `open()` function with the mode of read. We are using the `with` keyword to make sure that the file is properly closed.
+
+```python
+with open('fcc.json', 'r') as fcc_file:
+```
+We can then parse the file using the `json.load()` method and assign it to a variable called fcc_data.
+
+```python
+fcc_data = json.load(fcc_file)
+```
+The final step would be to print the results.
+
+```python
+print(fcc_data)
+```
+
+This is what the entire code would look like:
+```python
+import json
+
+with open('fcc.json', 'r') as fcc_file:
+ fcc_data = json.load(fcc_file)
+ print(fcc_data)
+```
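+
+Going the other way, serialization, uses `json.dumps()` (to a string) and `json.dump()` (to a file); a quick sketch:
+
+```python
+import json
+
+data = {"name": "Jessica Wilkins", "hobbies": ["music", "watching TV"]}
+
+# serialize the dictionary to a JSON string
+text = json.dumps(data, indent=2)
+print(text)
+
+# round-trip: parsing the string gives back an equal object
+print(json.loads(text) == data)
+```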
+
+
+
+### JavaScript Parse JSON
+
+#### Parse JSON String in JavaScript
+
+The `JSON.parse()` static method parses a JSON string, constructing the JavaScript value or object described by the string.
+
+```javascript
+const json = '{"result":true, "count":42}';
+const obj = JSON.parse(json);
+
+console.log(obj.count);
+// Expected output: 42
+
+console.log(obj.result);
+// Expected output: true
+
+```
+#### Parse JSON File in JavaScript
+
+Suppose we have a JSON file called sample.json in the current directory.
+
+We can use the `fetch()` method: pass the path of the file to `fetch()`, call the response's `.json()` method to parse the document, and display the content on the console.
+
+```javascript
+function Func() {
+ fetch("./sample.json")
+ .then((res) => {
+ return res.json();
+ })
+ .then((data) => console.log(data));
+}
+```
+
+We can also use the `require` method in Node.js: create a script.js and import the JSON file with `require`.
+
+```javascript
+const sample = require('./sample.json');
+console.log(sample);
+```
+
+To run the application, open the current folder in the terminal and type the following command:
+
+```bash
+node script.js
+```
+
+### Parse JSON in Java
+
+#### Read JSON File in Java
+
+To read a JSON file in Java, the `FileReader` class is used to read the given JSON file.
+
+Example:
+
+```json
+{
+
+ "name" : "Kotte",
+ "college" : "BVRIT"
+
+}
+```
+
+The above is the JSON file to be read. We use the `json.simple` library to parse it.
+
+```java
+// Program for reading a JSON file
+
+import java.io.FileReader;
+
+import org.json.simple.JSONObject;
+import org.json.simple.parser.JSONParser;
+
+public class JSON
+{
+    public static void main(String[] args) throws Exception
+    {
+        // file name is File.json
+        Object o = new JSONParser().parse(new FileReader("File.json"));
+
+        JSONObject j = (JSONObject) o;
+
+        // keys match the fields in the JSON file above
+        String name = (String) j.get("name");
+        String college = (String) j.get("college");
+
+        System.out.println("Name: " + name);
+        System.out.println("College: " + college);
+    }
+}
+```
+
+Output:
+
+```text
+Name: Kotte
+College: BVRIT
+```
+
+In the above program, `JSONParser().parse()`, which is provided by `org.json.simple.parser`, is used to parse the File.json file.
+
+## Parsing Optimization
+
+JSON parse optimization is crucial for achieving optimal performance, resource efficiency, and a seamless user experience in applications that handle JSON data. It becomes particularly relevant in scenarios involving large datasets, real-time updates, and applications with high concurrency and scalability requirements. Performance optimization in the context of JSON involves strategic measures to enhance the efficiency of handling and transmitting JSON data. This includes focusing on two key aspects:
+
+Streaming and Incremental Processing: Implementing streaming and incremental processing techniques can be beneficial for large JSON datasets. This approach allows for parsing or serializing data incrementally, reducing memory overhead and improving overall processing speed. For example, `msgspec` is a useful library for schema-based decoding and encoding of JSON: it lets you define schemas for the records you're parsing, and it offers significantly lower memory usage and very fast parsing.
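+
+As a standard-library illustration of the incremental idea (independent of `msgspec`), newline-delimited JSON keeps only one record in memory at a time:
+
+```python
+import io
+import json
+
+# newline-delimited JSON: one record per line (a made-up three-record stream)
+ndjson = io.StringIO('{"id": 1}\n{"id": 2}\n{"id": 3}\n')
+
+total = 0
+for line in ndjson:  # only one record is parsed and held at a time
+    record = json.loads(line)
+    total += record["id"]
+
+print(total)  # 6
+```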
+
+
+Minimizing JSON Payload Size: Implementing data compression techniques, such as gzip or deflate, before transmitting JSON data over the network can significantly reduce payload size. This conserves bandwidth and expedites data transfer: the sender compresses the JSON before transmission, and the receiver decompresses it on arrival. [Here is an external website which demonstrates how to use GZip for JSON](https://www.baeldung.com/json-reduce-data-size#:~:text=Compressing%20with%20gzip&text=That's%20why%20gzip%20is%20our,and%20compress%20it%20with%20gzip).
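+
+A sketch of that round trip using only Python's standard `gzip` and `json` modules (the repetitive payload here is made up for illustration; repetitive JSON compresses especially well):
+
+```python
+import gzip
+import json
+
+# a deliberately repetitive JSON payload
+payload = json.dumps([{"name": "Alice", "isStudent": True}] * 100).encode("utf-8")
+
+compressed = gzip.compress(payload)        # sender side
+restored = json.loads(gzip.decompress(compressed))  # receiver side
+
+# the compressed form is much smaller than the original
+print(len(payload), len(compressed))
+```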
+
+
+
+## Reference & External Resources
+
+* [performance comparison](https://www.adaltas.com/en/2021/03/22/performance-comparison-of-file-formats/#:~:text=For%20row%20based%20format%20bzip,snappy%2095%25%20and%2091%25)
+* [json reduce data size](https://www.baeldung.com/json-reduce-data-size#:~:text=Compressing%20with%20gzip&text=That's%20why%20gzip%20is%20our,and%20compress%20it%20with%20gzip)
+* [JavaScript Json parse](https://www.w3schools.com/js/js_json_parse.asp)
+* [openAI](https://www.openai.com/research/chatgpt)
+* [Java Json parse](https://www.geeksforgeeks.org/parse-json-java/)
+* [Python Json parse](https://www.freecodecamp.org/news/python-parse-json-how-to-read-a-json-file/)
+* [msgspec documentation](https://jcristharif.com/msgspec/)
+
+
+
+
+
+
diff --git a/Topics/Tech_Stacks/NoSQL_databases_JSON_interactions.md b/Topics/Tech_Stacks/NoSQL_databases_JSON_interactions.md
new file mode 100644
index 000000000..252f8aa46
--- /dev/null
+++ b/Topics/Tech_Stacks/NoSQL_databases_JSON_interactions.md
@@ -0,0 +1,212 @@
+# Introduction to interactions between NoSQL database and JSON
+## NoSQL database overview
+NoSQL databases represent a broad category of database management systems that differ from traditional relational databases in their data model, query languages, and consistency models. The term "NoSQL" stands for "Not Only SQL" or “non-relational”, reflecting the fact that these databases may not use relational tables with predefined schemas to store data. NoSQL databases use flexible data models that can adapt to changes in data structures and are capable of scaling easily with large amounts of data and high user loads.
+
+_Note: We assume you have the basic knowledge about JSON, please refer to this [link](https://www.w3schools.com/js/js_json_intro.asp) if you want to learn more about JSON._
+
+## Types of NoSQL databases
+Behind the big category of NoSQL databases, there are four major types of databases that are broadly used nowadays.
++ **Key-value databases**
+The simplest form of NoSQL databases, they store data as a collection of key-value pairs. The key is a unique identifier, and each of them corresponds to a value, which is the data associated with it.
+
++ **Wide-Column Stores**
+These databases store data in tables, rows, and dynamic columns. Unlike relational databases, the schema can vary from row to row in the same table.
+
++ **Graph Databases**
+Graph databases use graph structures with nodes, edges, and properties to represent and store data. The relationships are first-class entities in these databases.
+
++ **Document databases**
+These databases store data in documents, which are typically JSON-like structures. Each document contains pairs of fields and values. The values can typically be a variety of types including things like strings, numbers, booleans, arrays, or objects.
+
+We will focus on the Document-Oriented Databases here as the documents are very similar to JSON objects. Further, we will discuss the interaction between document databases and JSON including importing, storing, querying and indexing JSON data with the advantages of JSON document databases.
+
+## Document Database - JSON interaction
+
+In this guide, we'll use MongoDB as an example, but the process is similar for other document databases. MongoDB stores data records as documents, which are gathered together in collections. A database stores one or more collections of documents. See [the MongoDB documentation](https://www.mongodb.com/docs/manual/core/databases-and-collections/) for details on how to create databases and collections.
+
+
+### Importing JSON Data
+After installing the MongoDB Community Edition and MongoDB Compass, you can use the mongoimport command-line tool, which is part of the MongoDB server installation.
+
+Navigate to the directory containing your JSON file, then open your command line or terminal, run the `mongoimport` command with the necessary parameters. For example:
+
+```
+mongoimport --db mydatabase --collection mycollection --file mydata.json --jsonArray
+```
+
++ `--db mydatabase`: Specifies the database name.
++ `--collection mycollection`: Specifies the collection.
++ `--file mydata.json`: Specifies the path to your JSON file.
++ `--jsonArray`: Indicates that the JSON file contains an array of documents.
+
+To verify the import, use MongoDB Compass or the MongoDB shell to connect to your database and check that the data is there.
+
+Moreover, you can always use programming languages like Java and Python to import JSON files, or use MongoDB Compass directly; see [here](https://www.mongodb.com/compatibility/json-to-mongodb) for instructions.
+
+### Storing JSON Data
+Storing JSON data in MongoDB is a seamless process due to its native support for JSON-like structures. There are mainly two ways to store JSON data in a Document-Oriented database:
+
+Store the whole object in a single document.
+Example:
+
+```
+book {
+ title: 'Moby Dick',
+ author: {
+ name: 'Herman Melville',
+ born: 1819
+ }
+}
+```
+
+Here, the author details are inside the book document itself. This technique is also known as embedding because the author subdocument is embedded in the book document.
+
+Store parts of objects separately and link them using unique identifiers (referencing).
+Example:
+```
+author {
+ _id: ObjectId(1),
+ name: 'Herman Melville',
+ born: 1819
+}
+
+book {
+ _id: ObjectId(55),
+ title: 'Moby Dick',
+ author: ObjectId(1)
+}
+```
+One author may write multiple books. So, to avoid duplicating data inside all the books, we can create separate author documents and refer to it by its `_id` field.
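+
+The trade-off is easy to see in plain Python dictionaries (a hypothetical in-memory illustration, not MongoDB code): resolving a reference requires a second lookup, just as it requires a second query in the database:
+
+```python
+# embedding: the author lives inside the book document
+embedded_book = {
+    "title": "Moby Dick",
+    "author": {"name": "Herman Melville", "born": 1819},
+}
+
+# referencing: the book stores only the author's id
+authors = {1: {"name": "Herman Melville", "born": 1819}}
+referenced_book = {"title": "Moby Dick", "author_id": 1}
+
+# resolving the reference is a second lookup
+author = authors[referenced_book["author_id"]]
+print(author["name"])  # Herman Melville
+```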
+
+After deciding how you want to store your data, use the `insertOne` or `insertMany` methods to insert your document into the desired collection:
+
+```
+db.mycollection.insertOne({/* JSON-structure object */});
+```
+or
+```
+db.mycollection.insertMany([/* array of JSON documents */]);
+```
+
+### Querying JSON Data
+MongoDB queries are expressed as JSON-like structures, allowing you to easily query fields within documents using the built-in `find` method with the following parameters.
+```
+db.collection.find(<query>, <projection>, <options>);
+```
+`<query>`: specifies the search criteria for the query. It's essentially a filter that selects which documents to include based on their fields' values.
+If you want to look for documents with specific values, just put `{ "field1": <value1>, "field2": <value2>, ... }`.
+For example, to find documents where name is "John", you can simply use:
+```
+db.collection.find({ "name": "John" })
+```
+
+You can also use comparison operators like `$gt` (greater than), `$lt` (less than), `$eq` (equal), etc., and logical operators like `$and`, `$or`, and `$not` to build complex queries.
+For example, to find documents where the age field is greater than 25, use `$gt`.
+```
+db.collection.find({ "age": { "$gt": 25 } });
+```
+To find documents where age is greater than 25 and name is "John":
+```
+db.collection.find({ "$and": [{ "age": { "$gt": 25 } }, { "name": "John" }] })
+```
+You can query nested fields by using dot notation (`.`).
+For example, to find documents where the city in the address is "Anytown":
+```
+db.collection.find({ "address.city": "Anytown" })
+```
+
+To query for array elements:
+For example, if a document has a field `tags` that is an array, to find documents containing the tag "mongodb":
+```
+db.collection.find({ "tags": "mongodb" })
+```
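+
+To make the matching rules above concrete, here is a small Python sketch that mimics a subset of the semantics (`$gt`, exact equality, and array containment) over a list of dictionaries; this illustrates the behavior only, not how MongoDB is implemented:
+
+```python
+def matches(doc, query):
+    """Mimic a small subset of MongoDB's query matching rules."""
+    for field, cond in query.items():
+        if isinstance(cond, dict):  # operator form, e.g. {"$gt": 25}
+            if "$gt" in cond and not (field in doc and doc[field] > cond["$gt"]):
+                return False
+        elif isinstance(doc.get(field), list):  # arrays match if they contain the value
+            if cond not in doc[field]:
+                return False
+        elif doc.get(field) != cond:  # plain value: exact equality
+            return False
+    return True
+
+docs = [
+    {"name": "John", "age": 30, "tags": ["mongodb"]},
+    {"name": "Jane", "age": 22, "tags": []},
+]
+
+print([d["name"] for d in docs if matches(d, {"age": {"$gt": 25}, "name": "John"})])  # ['John']
+print([d["name"] for d in docs if matches(d, {"tags": "mongodb"})])  # ['John']
+```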
+
+
+`<projection>`: Specifies the fields to return in the documents that match the query filter. The parameter contains either include or exclude specifications, not both, unless the exclusion is for the `_id` field.
+
+For example, to return only the name and age fields of documents
+```
+db.collection.find({}, { "name": 1, "age": 1, "_id": 0 })
+```
+Here, `1` indicates inclusion of the field, and `_id` is explicitly excluded with `0`. Note that unless the `_id` field is explicitly excluded in the projection document (`_id: 0`), the `_id` field is returned.
+
+`<options>`: Specifies additional options for the query. These options modify query behavior and how results are returned.
+This parameter is not passed directly into the `find()` method like the `<query>` and `<projection>` parameters. Instead, options are specified through method chaining, where you append methods to `find()` that correspond to the various options you want to apply, such as sorting, limiting, and skipping documents.
+
+For example, you can use `sort()` to sort the results based on one or more fields.
+```
+db.collection.find().sort({ "age": 1 })
+```
+Here, we sort documents by age in ascending order.
+
+To restrict the number of documents returned, use `limit()`.
+```
+db.collection.find().limit(5)
+```
+We limit the query to return only 5 documents here.
+
+To get a more specific result, you can chain option methods together, for example combining `limit()` and `skip()` for pagination.
+```
+db.collection.find().skip(5).limit(5)
+```
+We will get the second set of 5 documents in this case.
+
+There are many more available options; see the [FindOptions documentation](https://mongodb.github.io/node-mongodb-native/4.0//interfaces/findoptions.html) for more details.
+
+Note that every parameter above is optional. To return all documents in a collection, omit all parameters or pass an empty document (`{}`).
+
+### Indexing JSON Data
+Indexing JSON data in a NoSQL database like MongoDB is crucial for optimizing query performance. An index in a MongoDB database is a special data structure that stores a small portion of the collection's data in an easy-to-traverse form. The index stores the value of a specific field or set of fields, ordered by the value of the field as specified in the index.
+
+The purpose of indexing is that it supports the efficient execution of queries. Without indexes, MongoDB must perform a collection scan, i.e., scan every document in a collection, to select those documents that match the query statement.
+
+There are many types of indexes in NoSQL databases:
++ Single Field: Apart from the _id field, which is automatically indexed, you can create indexes on any field in a document.
++ Compound Index: Indexes multiple fields in a single index.
++ Multikey Index: Automatically created for fields that hold arrays; used to index array elements.
++ Text Index: Created to enable text search on string content.
++ Hashed Index: Indexes the hash of the value of a field.
+Indexes can be unique, which ensures that two documents cannot have the same value for the indexed field, and they can be created with a specified order (ascending or descending).
+We will give examples of some common types of indexes using the `createIndex` method:
+
+Single Field Index: To create an index on a single field
+```
+db.collection.createIndex({ "fieldname": 1 }) // 1 for ascending order
+```
+Compound Index: To index multiple fields, specify each field and its sort order
+```
+db.collection.createIndex({ "field1": 1, "field2": -1 }) // 1 for ascending, -1 for descending
+```
+Text Index: To enable text search
+```
+db.collection.createIndex({ "fieldname": "text" })
+```
+
+There are a few considerations for indexing JSON data. While indexes can significantly speed up read queries, they also consume memory: the larger the index, the more memory it requires. Further, more selective indexes (where index entries correspond to fewer documents) are more efficient. Thus, only by carefully selecting and creating indexes based on the specific needs and query patterns of your application can query response times and overall application efficiency be significantly improved.
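+
+The speed-versus-memory trade-off can be sketched in plain Python: an index is essentially a precomputed map from field values to documents, so lookups avoid scanning the whole collection at the cost of storing an extra structure (an analogy, not MongoDB internals):
+
+```python
+# a toy collection of ten documents
+collection = [{"_id": i, "city": "Toronto" if i % 2 else "Ottawa"} for i in range(10)]
+
+# collection scan: examine every document
+scan = [doc for doc in collection if doc["city"] == "Toronto"]
+
+# "index": a one-time map from field value to matching documents
+index = {}
+for doc in collection:
+    index.setdefault(doc["city"], []).append(doc)
+
+# indexed lookup: a single dictionary access, but the index itself uses memory
+indexed = index["Toronto"]
+
+print(len(scan), len(indexed))  # 5 5
+```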
+
+
+## Advantages of JSON Structure Database
+### Better schema flexibility
+The best part of a JSON document database is its schema flexibility. Unlike relational databases, JSON databases allow for a flexible and dynamic schema, meaning the structure of data can be changed without impacting existing data. They easily store and manage complex data types, including nested documents and arrays.
+
+### Faster and have more storage flexibility
+NoSQL databases, in general, have more storage flexibility and offer better indexing methods. In a document database, each document is handled as an individual object, and there is no fixed schema, so you can store each document in the way it can be most easily retrieved and viewed. Additionally, you can evolve your data model to adapt it to your changing application requirements.
+
+### Better suited for big data analytics
+JSON structure databases have a flexible schema and are often designed to scale out horizontally, distributing data across multiple servers, which is beneficial for handling large volumes of data. Further, these databases can easily pass data to popular data analysis programming languages like Python and R, without additional coding steps.
+
+## Reference
++ https://www.mongodb.com/nosql-explained
++ https://www.w3schools.com/js/js_json_intro.asp
++ https://www.mongodb.com/docs/manual/core/document/
++ https://www.mongodb.com/json-and-bson
++ https://www.mongodb.com/docs/manual/core/databases-and-collections/
++ https://www.mongodb.com/docs/manual/reference/method/db.collection.find/
++ https://mongodb.github.io/node-mongodb-native/4.0//interfaces/findoptions.html
++ https://www.mongodb.com/docs/manual/reference/method/db.collection.find/#std-label-method-find-projection
++ OpenAI. (2023, Nov 22). ChatGPT: A Language Model by OpenAI. Retrieved Nov 22, 2023 from https://www.openai.com/research/chatgpt
+
+
diff --git a/Topics/Tech_Stacks/StripeAPI.md b/Topics/Tech_Stacks/StripeAPI.md
new file mode 100644
index 000000000..bbea1f7a4
--- /dev/null
+++ b/Topics/Tech_Stacks/StripeAPI.md
@@ -0,0 +1,120 @@
+# Setting Up Stripe API in a JS Environment
+
+
+## 1. Introduction to Stripe
+Stripe is a powerful payment processing platform that allows developers to seamlessly integrate payment functionality into their applications. With Stripe, you can handle online transactions securely and efficiently. Here are some key aspects to consider when working with Stripe:
+
+**Pros:**
+- **Ease of Use:** Stripe provides a developer-friendly interface, making it easy to implement payment solutions.
+- **Versatility:** It supports various payment methods, including credit cards, digital wallets, and more.
+- **Security:** Stripe takes care of PCI compliance, reducing the burden on developers to handle sensitive payment information securely.
+- **Developer Resources:** Extensive documentation, community support, and a range of client libraries make integration smooth.
+
+**Cons:**
+- **Transaction Fees:** While convenient, using Stripe comes with transaction fees, which may impact the cost-effectiveness of your solution.
+- **Learning Curve:** For beginners, there might be a learning curve in understanding advanced features and customization options.
+- **Dependency on Internet Connection:** As an online service, Stripe's functionality is dependent on a stable internet connection.
+
+- Watch the [Introduction Video](https://www.youtube.com/watch?v=7edR32QVp_A).
+- Explore the [Stripe API documentation](https://stripe.com/docs/development/get-started).
+
+## 2. Create a Stripe Account
+- [Sign up](https://stripe.com/docs/development/get-started) for a Stripe account.
+
+## 3. Obtain API Keys
+API keys are your credentials for interacting with Stripe's services, so keeping them secure is critical. Treat them like the crown jewels: you wouldn't want them falling into the wrong hands!
+- In your [Stripe Dashboard](https://dashboard.stripe.com/), go to "Developers" > "API keys" to find your keys.
+
+## 4. Install Stripe Library
+- In your Node.js project, install the Stripe npm package:
+ ```bash
+ npm install stripe
+ ```
+
+## 5. Implement Payment Integration (React.js)
+- Install the Stripe React library:
+ ```bash
+ npm install @stripe/react-stripe-js @stripe/stripe-js
+ ```
+- Follow the guide on [Accept a payment](https://stripe.com/docs/development/quickstart) for React.
+
+## 6. Handle Webhook Events (Node.js)
+### What's a Webhook?
+
+A webhook is like a messenger that lets one application send real-time information to another. In the context of Stripe, it's how Stripe tells your application about events related to payments, subscriptions, and more.
+
+### Why Handle Webhook Events?
+
+Imagine you're running an online store. You don't want to sit there refreshing your order page to see if a payment went through. That's where webhooks come in. They notify your server immediately when something important happens in your Stripe account.
+
+### Example Scenario:
+
+Let's say a customer successfully completes a payment on your website. Without webhooks, your app might not know about this until it checks Stripe for updates. With webhooks, Stripe can instantly notify your server about the successful payment.
+
+
+
+
+- Create a server-side route using Express and the Stripe package to handle webhook events. This ensures that your application responds to events triggered by Stripe.
+
+## 7. Implement Subscription Logic (If Needed)
+- Follow the [Stripe Subscriptions guide](https://stripe.com/docs/billing/subscriptions/overview).
+
+## 8. Secure Your Integration
+- Ensure your React.js app uses HTTPS.
+- Keep API keys secure; never expose them on the client side.
+
+## 9. Test Transactions
+- Simulate transactions using [Stripe test card numbers](https://stripe.com/docs/testing).
+
+## 10. Documentation
+- Document your integration, including setup instructions, API usage, and error handling.
+
+## 11. Set Up Stripe CLI
+- Install the [Stripe CLI](https://stripe.com/docs/development/quickstart#set-up-stripe-cli).
+
+## 12. Authenticate Stripe CLI
+- Run `stripe login` in the command line and follow the authentication process.
+
+## 13. Confirm Setup
+- Use the Stripe CLI to create a sample product and price to confirm setup.
+
+## 14. Install Node.js SDK
+- Initialize Node.js in your project and install the Stripe Node.js server-side SDK:
+ ```bash
+ npm init
+ npm install stripe --save
+ ```
+
+## 15. Run First SDK Request
+- Create a subscription product and attach a price using the Node.js SDK. Save the following code in a file, e.g., `create_price.js`:
+ ```javascript
+  // Replace 'sk_test_...' with your own test secret key from the Stripe
+  // dashboard; in real projects, load it from an environment variable and
+  // never commit it to version control.
+  const stripe = require('stripe')('sk_test_Hrs6SAopgFPF0bZXSN3f6ELN');
+
+  stripe.products.create({
+    name: 'Starter Subscription',
+    description: '$12/Month subscription',
+  }).then(product => {
+    // Return the inner promise so errors reach the final catch.
+    return stripe.prices.create({
+      unit_amount: 1200, // amount in cents: $12.00 per month
+      currency: 'usd',
+      recurring: {
+        interval: 'month',
+      },
+      product: product.id,
+    }).then(price => {
+      console.log('Success! Product ID: ' + product.id);
+      console.log('Success! Price ID: ' + price.id);
+    });
+  }).catch(err => console.error(err));
+ ```
+- Run the following command:
+ ```bash
+ node create_price.js
+ ```
+- Save the product and price identifiers for future use.
+
+## 16. Save Identifiers
+- Save identifiers generated during setup for future use.
+
+## 17. Explore Further
+- Refer to the official [Stripe documentation](https://stripe.com/docs) for in-depth information.
diff --git a/Topics/Tech_Stacks/Tailwind.md b/Topics/Tech_Stacks/Tailwind.md
new file mode 100644
index 000000000..d307ebb92
--- /dev/null
+++ b/Topics/Tech_Stacks/Tailwind.md
@@ -0,0 +1,130 @@
+# Getting Stylish and Responsive with Tailwind CSS
+
+## Table of Contents
+### [Introduction](#introduction)
+### [Installation](#installation)
+### [Usage](#usage)
+### [Advantages and Limitations](#advantages-and-limitations)
+### [Resources](#resources)
+
+
+## Introduction
+Tired of the repetitive nuances of CSS? Tailwind is a utility-first CSS framework, which allows developers to use premade styling classes without the need to write CSS classes from scratch. Tailwind provides small, single-purpose, reusable classes for spacing, lettering, and colours. These classes can then be used in HTML elements, or components from frontend libraries and frameworks.
+
+
+## Installation
+1. When using node, install and execute Tailwind:
+```bash
+npm install -D tailwindcss
+npx tailwindcss init
+```
+
+2. The generated `tailwind.config.js` configuration file contains the default setup. Give Tailwind access to your files by listing the template paths in the `content` section. For a React/TypeScript web app, it can look like this:
+```js
+content: [
+ './pages/**/*.{ts,tsx}',
+ './components/**/*.{ts,tsx}',
+ './app/**/*.{ts,tsx}',
+ './src/**/*.{ts,tsx}',
+ ]
+```
+
+3. Add the following to the global CSS file:
+```css
+@tailwind base;
+@tailwind components;
+@tailwind utilities;
+```
+
+Now we're set!
+
+> **TIP:** For VSCode users, consider installing the [Tailwind CSS IntelliSense](https://marketplace.visualstudio.com/items?itemName=bradlc.vscode-tailwindcss) extension, which offers autocomplete suggestions, linting, and previews when hovering over classes. It's especially helpful for those starting out with the framework.
+
+
+## Usage
+### The Basics
+Each Tailwind class is focused on a specific responsibility. To use a class, add it to the `class` or `className` (when using React) attribute of the element/component. For example, use the following Tailwind classes to create a div that
+- occupies the full width of the screen: `w-screen`
+- allows scrolling within: `overflow-scroll`
+- contains a sky blue background: `bg-sky-400`
+- has large text: `text-lg`
+
+Putting it all together:
+```jsx
+<div className="w-screen overflow-scroll bg-sky-400 text-lg">
+  {/* Some content */}
+</div>
+```
+
+Start [here](https://tailwindcss.com/docs/aspect-ratio) to learn about all Tailwind classes.
+
+When needed, add custom styles by modifying the `theme` section of `tailwind.config.js`. Here, we change the `sm` breakpoint from 480px to 500px.
+```js
+module.exports = {
+ theme: {
+ screens: {
+ // sm: '480px',
+ sm: '500px',
+ md: '768px',
+ lg: '976px',
+ xl: '1440px',
+ }
+ }
+}
+```
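+Note that defining `theme.screens` outright replaces Tailwind's defaults, so only the breakpoints listed there will exist. To add or adjust individual values while keeping the rest of the defaults, use `theme.extend` instead (the `3xl` value below is just an example):
+```js
+module.exports = {
+  theme: {
+    extend: {
+      screens: {
+        // Adds a new breakpoint while keeping sm/md/lg/xl/2xl intact.
+        '3xl': '1600px',
+      },
+    },
+  },
+}
+```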
+
+To add custom styles inline, use square bracket `[]` notation. For example, if we want exactly 9px of padding, we can use:
+```jsx
+<div className="p-[9px]">
+  {/* Some content */}
+</div>
+```
+
+Refer to [Custom Styles](https://tailwindcss.com/docs/adding-custom-styles) for more information.
+
+### Responsive Design
+Tailwind makes it straightforward to adapt layouts and styles to different devices and screen sizes.
+
+#### Targeted Screen Sizes
+To style specific screen sizes, Tailwind offers breakpoints. A prefixed class applies only when the screen width is at or above the breakpoint's minimum width. Breakpoints are an alternative to traditional CSS `@media` queries.
+
+Breakpoints:
+- `sm` - 640px
+- `md` - 768px
+- `lg` - 1024px
+- `xl` - 1280px
+- `2xl` - 1536px
+
+To apply a class only at or above a breakpoint, add the breakpoint prefix followed by a colon before the class name: `breakpoint:class` (e.g., `lg:h-full`).
+
+For example, to make a div's height 100% on screens at least 1024 pixels wide:
+```jsx
+<div className="lg:h-full">
+  {/* Some content */}
+</div>
+```
+
+> **NOTE:** A common mistake is to use 'sm' to target mobile devices. Mobile sizes have the smallest widths, so their styles should **not** have a breakpoint prefix.
+
+#### Flexbox
+For dynamically-sized layouts, Tailwind also provides classes for the standard [flexbox](https://tailwindcss.com/docs/flex) options, and controls for [flex-basis](https://tailwindcss.com/docs/flex-basis), on par with CSS.
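+For instance, here is a sketch of a wrapping two-column layout (the class choices are illustrative):
+```jsx
+{/* A fixed-basis sidebar next to a main area that grows to fill
+    the remaining space; items wrap on narrow screens. */}
+<div className="flex flex-wrap gap-4">
+  <aside className="basis-1/4">Sidebar</aside>
+  <main className="flex-1">Main content</main>
+</div>
+```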
+
+## Advantages and Limitations
+### Advantages
+- When using Tailwind CSS classes, developers are not locked into a component structure or inherited styles that don't match their project, a common issue with component libraries such as [Bootstrap](https://getbootstrap.com). This flexibility makes Tailwind adaptable to many types of projects.
+- Since each Tailwind class has atomic responsibilities, developers avoid issues where a component inherits unknown CSS styles.
+- Overall, Tailwind minimizes redundant CSS code and shortens CSS files.
+
+### Limitations
+- If accustomed to writing CSS, it can take some time to get used to using the equivalent Tailwind classes.
+- Some projects may not need Tailwind. Using the framework requires additional overhead, which may not be worth it for small projects.
+- It may take more time to complete projects, since Tailwind prioritizes styling control over pre-made components.
+
+
+## Resources
+- [Learning Software Engineering: CSS](./CSS.md)
+- [Official Tailwind Site](https://tailwindcss.com)
+- [Extended Installation](https://tailwindcss.com/docs/installation)
+- [A Friendly Video Introduction](https://www.youtube.com/watch?v=pfaSUYaSgRo)
+- [Tailwind UI: A Component Library Styled with Tailwind CSS](https://tailwindui.com)
+- [Defining States (Hover, Focus, and more)](https://tailwindcss.com/docs/hover-focus-and-other-states)
diff --git a/Topics/Tech_Stacks/VueJS.md b/Topics/Tech_Stacks/VueJS.md
new file mode 100644
index 000000000..36f0801f8
--- /dev/null
+++ b/Topics/Tech_Stacks/VueJS.md
@@ -0,0 +1,103 @@
+# Vue.js
+
+## Introduction
+
+Vue.js is an open-source JavaScript framework for building user interfaces and single-page applications. It is known for its ease of integration into projects with other libraries or existing projects, and its capability to power sophisticated Single-Page Applications (SPA) when used in combination with modern tooling and supporting libraries.
+
+---
+
+## Core Features
+
+1. **Reactive Data Binding:** Vue.js offers a reactive and composable data binding system. It provides a straightforward template syntax to declare the state-driven DOM (Document Object Model) rendering.
+
+2. **Components:** Vue's component system allows you to build encapsulated reusable custom elements, which can be composed into complex applications.
+
+3. **Transition Effects:** Vue provides various ways to apply transition effects when items are inserted, updated, or removed from the DOM.
+
+4. **Virtual DOM:** It utilizes a virtual DOM to render UI, ensuring optimal performance by minimizing direct DOM manipulation.
+
+5. **Easy to Learn:** Vue.js is considered one of the easiest frameworks to learn, especially for those who are already familiar with HTML, CSS, and JavaScript.
+
+6. **Ecosystem:** Vue.js has a rich ecosystem supporting routing (Vue Router), state management (Vuex), and build tooling (Vue CLI).
+
+7. **Single File Components (SFC):** A Vue SFC is a file with a `.vue` extension that encapsulates HTML, JavaScript, and CSS in a single file. This approach makes components more modular and easier to maintain. Below is the structure of a `.vue` file:
+   ```html
+   <template>
+     <!-- HTML markup for the component -->
+   </template>
+
+   <script>
+   // JavaScript logic: component options, state, and methods
+   </script>
+
+   <style>
+   /* CSS for the component */
+   </style>
+   ```
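+   A concrete sketch of a minimal counter component (illustrative, not taken from the official docs) shows how these pieces fit together: the `count` property is reactive, so the template re-renders whenever the button updates it.
+   ```html
+   <template>
+     <button @click="count++">Clicked {{ count }} times</button>
+   </template>
+
+   <script>
+   export default {
+     data() {
+       // Properties returned here become reactive state.
+       return { count: 0 }
+     }
+   }
+   </script>
+
+   <style scoped>
+   button { font-weight: bold; }
+   </style>
+   ```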
+
+
+
+## Setting Up
+
+### Prerequisites
+- Basic knowledge of HTML, CSS, and JavaScript
+- Node.js and npm installed on your system
+
+### Installation (one of the following methods)
+- **Using Vue CLI (Recommended):**
+ ```
+ npm install -g @vue/cli
+ vue create my-vue-app
+ cd my-vue-app
+ npm run serve
+ ```
+
+- **Direct `<script>` Include (via CDN):**
+  ```html
+  <script src="https://unpkg.com/vue@3/dist/vue.global.js"></script>
+  ```
+
+### Create a Basic Vue instance
+- In a `.js` file or a `<script>` tag, create a basic Vue instance: