Merge branch 'main' into samuel-liu-jsonrpc
liu-samuel authored Nov 27, 2023
2 parents bd50923 + c252995 commit d7c2900
Showing 57 changed files with 4,653 additions and 118 deletions.
2 changes: 2 additions & 0 deletions .gitignore
@@ -0,0 +1,2 @@
.idea/
.vscode/
4 changes: 3 additions & 1 deletion README.md
@@ -25,6 +25,8 @@ Potential Topics--

7. React Native
1. Set up
8. Unity
1. Introduction to Unity Basics

- Software Tools
1. Git
@@ -92,6 +94,6 @@ Potential Topics--
3. Helpful Courses
4. User Experience Orientated Games
- Product Management
1. Beginner's guide to product management and becoming a successful product manager
1. Beginner's guide to product management and becoming a successful product manager with case studies.
- Other useful resources
- Teamwork
8 changes: 6 additions & 2 deletions Topics/Development_Process.md
@@ -1,12 +1,13 @@
## Resources for Development Process
## Git
### [Learning Git](./Development_Process/Git/Git.md)

### [Trunk-Based Development](./Development_Process/Trunk_Development.md)
### [Django Project Deployment: AWS, Vercel, and Railway](./Development_Process/Django_Deployment_AWS_Railway_Vercel.md)
### [Automated Frontend Deployment with Vercel](./Development_Process/Frontend_Automated_Deployment_Vercel.md)
### [Flask Application Deployment on Heroku](./Development_Process/Flask_App_Deployment_Heroku.md)
### [Quality Assurance Testing](./Development_Process/QA_testing.md)
- [Automated Testing](./Development_Process/Automated_Testing.md)

- [Large Language Model (LLM) for Testing and Debugging](./Development_Process/LLM_Testing_Debugging.md)
### [Getting Started With Docker](./Development_Process/Docker.md)
### [Getting Started With WSL 2](./Development_Process/WSL.md)

@@ -69,3 +70,6 @@ This is only a simplification of what "Clean Architecture" is; the topic is so v
- A very detailed explanation of Clean Architecture by Robert C. Martin (“Uncle Bob”) and his book:
- https://www.youtube.com/watch?v=2dKZ-dWaCiU
- https://github.com/ropalma/ICMC-USP/blob/master/Book%20-%20Clean%20Architecture%20-%20Robert%20Cecil%20Martin.pdf

## Code Smells
### [Code Smells](./Development_Process/Code_Smells.md)
22 changes: 21 additions & 1 deletion Topics/Development_Process/Automated_Testing.md
@@ -64,4 +64,24 @@ Integration testing, in contrast to unit testing, is meant to verify that your c
- **Incremental Approach**:
- Educative: [What is incremental testing?](https://www.educative.io/edpresso/what-is-incremental-testing)
- **Sandwich/Hybrid Approach**:
- Educative: [What is the hybrid testing approach?](https://www.educative.io/answers/what-is-hybrid-integration-testing)

#### API Testing

Another important area of automated testing is API (Application Programming Interface) testing. Integrating API tests into your automated test suite ensures that the requests your software sends are correctly received and that you get the right output in return. API testing is incredibly powerful: it can check not only that the right status code is received but also that the returned data has a particular form or certain attributes, amongst other things. Failures in automated API tests can indicate an error in the requests your code sends, or a change in the API you are accessing. Either way, these tests are a great way to catch such issues early and resolve them before your code is deployed. In larger software systems, there are a number of applications of API testing beyond the two mentioned above, including penetration testing and security testing; a more extensive overview of these kinds of tests can be found at [https://blog.hubspot.com/website/api-testing](https://blog.hubspot.com/website/api-testing).
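As a small illustration of the kind of shape-checking described above, here is a hedged Python sketch that validates a decoded JSON response body; the field names and expected types are hypothetical stand-ins for whatever your API actually returns:

```python
def validate_user_payload(body: dict) -> list[str]:
    """Return a list of problems found in an API response body.

    An empty list means the payload has the expected shape.
    The expected fields ("id", "name") are examples only.
    """
    problems = []
    for field, expected_type in [("id", int), ("name", str)]:
        if field not in body:
            problems.append(f"missing field: {field}")
        elif not isinstance(body[field], expected_type):
            problems.append(f"wrong type for {field}")
    return problems
```

In a real test this would run against `response.json()` after first asserting on the status code, for example when using a client library such as `requests`.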

##### Testing in Postman

A common tool for testing APIs during development is Postman, which allows you to send a variety of requests, such as GET and POST, and examine status codes and outputs. Postman can also be used to create automated API tests that integrate with your CI/CD pipeline; these tests can be configured to run on certain actions, such as a push to main. To understand how to set up this automated testing, we first need to understand how testing works in Postman. To automate your API tests and integrate them into your CI/CD pipeline, Postman requires that your tests be runnable through the command line interface (CLI). This, however, is not the only way that Postman supports testing; there are 3 methods:

1. Manually: After setting up your Postman tests, you can run them manually through the application.
2. Scheduled: You can schedule your tests to be run at regular intervals from the Postman cloud, as determined by you (e.g. once a day, once a month, etc.).
3. Through the CLI: After setting up your tests in Postman, you can run them through a command line interface such as Terminal by generating an API key that lets you log in to Postman from your Terminal and then execute a command to run your collection. This command is provided to you by Postman.

##### Setting up Automated Testing in Postman

Having understood the 3 methods of testing in Postman, I will now delve into the last one in more detail to allow you to set up automated API testing:

1. In the Postman app, navigate to "APIs" in the left sidebar, create a new API by pressing the + sign, and give it an informative name.
2. Inside this API, create a new request for every endpoint or feature that you want to test by right-clicking and selecting "Add request". Inside each request, add any necessary parameters or headers as you normally would in Postman.
3. Click on the "Tests" tab of each request and write your Postman tests there. To learn more about the syntax, use the quick help feature, which describes how to write these tests in more detail.
4. To set up automated testing, right-click your API and select "Run collection". In the sidebar that opens on the right, you will see the 3 options discussed above.
5. Click on "Automate runs via CLI" (although it can be a good idea to run your tests manually once as a sanity check), then press "Configure command" under the "Run on CI/CD" header.
6. On the page that opens, select your CI/CD provider and the operating system environment for the pipeline, then copy and paste the generated commands into your configuration file.
7. Make sure you generate an API key and add it to your repository's secrets. For reference on how to do this in GitHub, see [https://docs.github.com/en/actions/security-guides/using-secrets-in-github-actions](https://docs.github.com/en/actions/security-guides/using-secrets-in-github-actions).

You now have automated API testing set up through the Postman CLI!
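As a rough illustration of where those generated commands end up, here is a hypothetical GitHub Actions workflow. The install script URL, CLI commands, and collection ID below are assumptions based on what Postman typically generates, so always prefer the exact commands Postman gives you:

```yaml
# Hypothetical CI job running a Postman collection on every push
# (commands are placeholders; use the ones Postman generates for you).
name: API tests
on: [push]
jobs:
  postman-tests:
    runs-on: ubuntu-latest
    steps:
      - name: Install Postman CLI
        run: curl -o- "https://dl-cli.pstmn.io/install/linux64.sh" | sh
      - name: Log in with the API key stored in repository secrets
        run: postman login --with-api-key ${{ secrets.POSTMAN_API_KEY }}
      - name: Run the collection
        run: postman collection run "<collection-id>"
```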

##### Postman Automated Testing Involving Authorization

Note that there are some extra steps you must take if your API requests require authorization in the form of a bearer token: if you hard-code the token into a request, it will eventually expire and cause the automated tests to fail. This is the case for a lot of APIs, but it does not mean that you can't incorporate automated API testing. Instead, add a request inside your API collection that gets these authorization details. Then create an environment by clicking on "Environments" in the left sidebar underneath "APIs", and define environment variables that will store all the necessary authorization details from your request. Navigate back to your API and select the environment you just created from the dropdown on the top right of your screen. Go to the "Tests" tab of the authorization request you added and add a line of code that scrapes the authorization token and other details from the request's output and stores them in the environment variables you created; once again, the Postman quick help explains how to write this line of code under "Set an environment variable". Then, for all the other requests in the API, add the relevant authorization details by clicking on the "Authorization" tab and filling out the fields using your environment variables. Lastly, when you create your test collection, re-order the tests by dragging and dropping so that the request that collects authorization details runs first, enabling the rest of your tests to work correctly with renewed authorization details. Using refresh tokens or the scraping approach described above is incredibly beneficial for automated API testing: you, as the developer, never have to manually change the tokens of your requests every time these automated tests run. You can rest assured that the tests are truly automated and require no manual changes from you.
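The same refresh-before-test idea can be sketched outside Postman as well. A minimal Python sketch, assuming a hypothetical `fetch_token` callable that requests fresh credentials from your auth endpoint:

```python
def build_auth_header(fetch_token) -> dict:
    """Build a Bearer authorization header from a freshly fetched token.

    `fetch_token` is any callable returning a current token string
    (e.g. a function that POSTs to your API's auth endpoint), so no
    token is ever hard-coded into the tests themselves.
    """
    token = fetch_token()
    return {"Authorization": f"Bearer {token}"}
```

Each test run would call this first and reuse the resulting header for every subsequent request, mirroring how the Postman environment variable is populated by the first request in the collection.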
37 changes: 37 additions & 0 deletions Topics/Development_Process/Code_Smells.md
@@ -0,0 +1,37 @@
## Code Smells
Code smells refer to certain types of code that, while functional, will require increasing accommodation as more code begins to rely on the "smelly" code. While it is possible to ignore these types of code, the longer that they remain, the harder it becomes to fix issues they directly or indirectly cause in the future. Therefore, it's in a developer's best interest to be familiar with a broad spectrum of code smells so that they may identify and eliminate them as soon as possible.


## A Motivating Example
Consider the following snippet of code:

```python
class Something:
    temp_field: int | None

    def do_something(self) -> int | None:
        if self.temp_field is None:
            return None
        else:
            return self.temp_field + 1
```
On its own, this code is functional, but it raises a number of questions. For example: when exactly is `temp_field` equal to `None`? When does it actually store relevant data? The answer is largely dependent on how the class `Something` is used, but the specifics may not be clear to someone reading this code.

This code smell is known as a "Temporary Field", which is when classes are given attributes to be used in some of their methods, but in some cases the attribute stores null values. While adding null checks easily allows code using these attributes to function, it decreases code readability, and if the technique is abused it can easily lead to unnecessarily long code. To fix this, refactoring is required.

## Refactoring
Refactoring is a software development practice in which code is rewritten such that no new functionality is added, but the code becomes cleaner and better accommodates future extensions of features. While it is generally recommended in software development that code should not be rewritten but extended (see the [Open/Closed Principle of SOLID](../Development_Process.md#solid-principles)), refactoring typically prevents more significant amounts of code rewriting that may be required in the future.

Many refactoring solutions to code smells are well-established and should be drawn upon once the relevant code smells are identified. One such solution for the previous example is known as "Introduce Null Object", in which an attribute that may be null is replaced by an instance of a new "Null" class that provides default values wherever the attribute would previously have been null. This confines any null checks to the new class, allowing for the removal of if-statements in other code that may cause confusion or excessive code length. Furthermore, future code that deals with the previously temporary field will no longer need any null checks, as the new class handles them. Thus, refactoring improves both the readability and extensibility of the former code.
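A minimal Python sketch of how "Introduce Null Object" might be applied to the earlier `Something` example (the class and method names here are illustrative, not prescribed by the pattern):

```python
class NullField:
    """Null Object: stands in for a missing value and centralizes
    the behaviour that used to require an explicit null check."""
    def plus_one(self):
        return None


class IntField:
    """Wraps a real integer value."""
    def __init__(self, value: int):
        self.value = value

    def plus_one(self):
        return self.value + 1


class Something:
    def __init__(self, field=None):
        # Absence is represented by a NullField, never by a bare None.
        self.temp_field = IntField(field) if field is not None else NullField()

    def do_something(self):
        # No if-statement needed: both field classes answer plus_one().
        return self.temp_field.plus_one()
```

`Something(4).do_something()` returns `5`, while `Something().do_something()` quietly returns the Null Object's default, with no null check in `do_something` itself.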

## Categories
While there may be many different types of code smells, all of them fall into one of five categories that can more easily be identified when writing code. The categories are as follows:
- Bloaters
- Object-Oriented Abusers
- Change Preventers
- Dispensables
- Couplers

## More Info
For further insight into all the different types of code smells including explanations, examples, and solutions, the following resource is highly recommended:
https://refactoring.guru/refactoring/smells
138 changes: 138 additions & 0 deletions Topics/Development_Process/Deploy_Node.js_Docker_AWS.md
@@ -0,0 +1,138 @@
# Node.js Deployment through Docker and AWS

## Table of Contents
### [Overview](#overview-1)
### [Tech Stack](#tech-stack-1)
### [Deployment Process](#deployment-process-1)
### [External Links](#external-links-1)

## Overview
In the realm of modern software development, containerization has become a standard practice for deploying applications. Docker simplifies this process by packaging applications and their dependencies into containers, ensuring consistency across various environments. Node.js, a popular JavaScript runtime, is often used to build scalable and efficient server-side applications.

AWS (Amazon Web Services) provides a suite of cloud services that enable developers to deploy and manage applications easily. ECS (Elastic Container Service) and ECR (Elastic Container Registry) are two fundamental services offered by AWS to manage containers and container images, respectively.

Running your Node.js application on an EC2 instance allows it to be accessed on a public domain hosted by AWS. Containerizing your Node.js application with Docker allows for easy deployment by running the application in an isolated environment. Combining these allows your application to run inside a container on a virtual machine hosted in the cloud.

Here is a rough visualization of what the process of deploying the application will be like:
<p align="center">
<img src="https://i.postimg.cc/1XJ1dtJS/node-docker-ecr.png" style="width: 50%; height: auto;">
</p>


This diagram shows a Node.js application being containerized by building an image of it and pushing it to an Amazon ECR repository.

<p align="center">
<img src="https://i.postimg.cc/JnbQMYD7/ecr-ecs-ec2.png" style="width: 50%; height: auto;">
</p>
This follow-up diagram shows how an ECR repository connects to Amazon ECS and is then deployed to an EC2 instance.

## Tech Stack

Docker: Docker is a platform that allows you to package an application and its dependencies into a standardized unit called a container. It provides isolation, portability, and scalability for applications. Note that a container is not a full virtual machine: containers share the host operating system's kernel, which is what makes them lightweight while still running the application in an isolated, reproducible environment.

Node.js: Node.js is a JavaScript runtime environment which is built on Chrome's V8 JavaScript engine. It enables developers to run JavaScript code on the server-side, making it ideal for building scalable network applications.

Amazon ECS (Elastic Container Service): ECS is a fully-managed container orchestration service provided by AWS. It allows you to run, stop, and manage Docker containers on a cluster of EC2 instances easily.

Amazon ECR (Elastic Container Registry): ECR is a managed Docker container registry provided by AWS. It allows you to store, manage, and deploy Docker container images, making it easy to integrate with ECS for container deployments.

Amazon EC2 (Elastic Compute Cloud): EC2 is AWS's resizable cloud computing service offering virtual machines (instances) for running applications. It provides flexibility to configure and scale computing resources based on demand.

## Deployment Process
This guide assumes you have already created your Node.js application and are using the Bash Unix shell.

### Containerize your Node.js application:
A Dockerfile describes how to build a container image for the app.
1) Create a Dockerfile in the root directory of your Node.js application.
2) Write instructions to build your Node.js app within the Dockerfile.
3) Build the Docker image locally using `docker build -t <image-name> .`
4) Test the image locally to ensure it works as expected: `docker run -p 8080:80 <image-name>`
* You can use any port numbers, but we will use 8080:80 as the example.
* The first number, 8080, is the host port; 80 is the container port.
5) If it is running correctly, you can stop the container and then remove it with the following command (assuming there are no other stopped containers you want to keep).
```bash
$ docker container prune
```
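As a reference for step 2, a minimal Dockerfile might look like the following; the Node version, port, and entry point are assumptions, so adjust them to match your application:

```dockerfile
# Use an official Node.js base image (the version here is an example)
FROM node:18-alpine
WORKDIR /app
# Copy and install dependencies first so this layer is cached between builds
COPY package*.json ./
RUN npm install
# Copy the rest of the application source
COPY . .
# The port your app listens on inside the container (80 in our example)
EXPOSE 80
CMD ["node", "index.js"]
```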

### Create an ECR repository:

1) Log in to the AWS Management Console.
2) Go to the Amazon ECR service.
3) Create a new repository to store your Docker image.
4) Copy the Image URI
5) Push your Docker image to ECR:

Log in to ECR using the AWS CLI:
```bash
$ aws ecr get-login-password --region <region> | docker login --username AWS --password-stdin <aws-account-id>.dkr.ecr.<region>.amazonaws.com
```
Tag your Docker image with the ECR repository URL:
```bash
$ docker tag <image-name> <aws-account-id>.dkr.ecr.<region>.amazonaws.com/<repository-name>:<tag>
```
Push your Docker image to the ECR repository:
```bash
$ docker push <aws-account-id>.dkr.ecr.<region>.amazonaws.com/<repository-name>:<tag>
```
- Replace \<image-name> with your desired name for the image.
- Replace \<aws-account-id>, \<region>, \<repository-name>, and \<tag> with your AWS account ID, region, ECR repository name, and chosen image tag (these appear in your repository's URI).

You can also instead press the “View push commands” button and follow those instructions.


### Create an ECS Task Definition:
1) Go to the Amazon ECS service in the AWS Management Console.
2) Click on "Task Definitions" in the left-hand navigation pane.
3) Click “Create new task definition”.
4) Specify your container image details:
- Copy the Image URI from the ECR dashboard.
- Enter the container port mapping you established in the previous step.
- Specify how much CPU and memory the task requires.
- Click the “Create” button at the bottom.

### Create an ECS Cluster:
Inside the Amazon ECS console:
1) Click “Create cluster”.
2) Configure the ECS cluster settings and select the EC2 launch type.
3) Select the EC2 instance type.
- It is up to you how much compute and memory your virtual machine should have. The most common choice is the t2.micro type, which is eligible for the free tier.
4) Click the “Create” button.

### Create an ECS Service:
Inside the same dashboard, click on the ECS cluster you created.
1) Click on the “Create service” button.
2) Ensure the launch type is EC2 instead of Fargate and that you are creating a **service**, not a task.
3) Under “Select a task family”, select the task definition you created in the previous step.
4) Define the desired number of tasks, network configuration, load balancer settings, etc.
5) After finalizing the settings, create and run the service.

### Expose the EC2 IP Address to External Connections
Go to the AWS Management Console for EC2.
1) Find the EC2 instance linked to your ECS cluster and click on its security group.
2) Press “Edit inbound rules” and add two new rules.
3) Set one rule's type to all traffic on all ports from any IPv4 address.
4) Set the other rule to accept all traffic on all ports from any IPv6 address.
5) Click “Save rules”.

Note that allowing all traffic from anywhere is fine for a quick test, but for anything long-lived you should restrict the rules to only the ports and sources your application actually needs.

### Access your Node.js application
Go to the EC2 Management Console and find the same EC2 instance.
1) Find its public IPv4 address or DNS name and append a colon followed by the port number.
2) Access the address with your browser or any other client (Postman, Insomnia, etc.).
3) You should see either a "Cannot GET" message or your expected endpoint result.

Note: set up a test endpoint to confirm that the Node.js application is running.

## External Links
A more detailed article with more in-depth steps, posted by Raphael Mansuy, is available here if you need more help:
* https://dev.to/raphaelmansuy/deploy-a-docker-app-to-aws-using-ecs-3i1g

Here are some extra links that will help you incorporate other AWS services with Node.js, such as an RDS database or an S3 bucket:

Amazon S3:
* https://medium.com/codebase/using-aws-s3-buckets-in-a-nodejs-app-74da2fc547a6
* https://www.jsowl.com/how-to-download-a-file-from-aws-s3-in-javascript-node-js/

Amazon RDS:
* https://medium.com/@Anas.shahwan/how-to-connect-aws-rds-mysql-nodejs-application-in-5-minutes-40d6fbf09b66
* https://stackabuse.com/using-aws-rds-with-node-js-and-express-js/
