Commit

Slight adjustments to the wording
kustikov112 committed Aug 12, 2024
1 parent baa2e94 commit 4a9ebeb
Showing 9 changed files with 125 additions and 8 deletions.
6 changes: 4 additions & 2 deletions aws-developer/01_cloud_introduction/aws_account_overview.md
@@ -3,8 +3,10 @@
To successfully complete this course, you'll need an AWS account.

While most of the labs in this course can be completed using the AWS free tier without any charges,
there are a few tasks that require the creation of paid resources such as NAT Gateway, Route53 zone, etc.
These resources will incur some costs, but they are generally minimal.
there is one task that requires the creation of a paid resource: RDS in task 8.
Even though RDS is Free Tier compatible, making it publicly accessible (which is not a good practice)
would cost you roughly $3/month for the public IP address. You may consider deploying this RDS in
a private subnet and deploying the Lambda function in the VPC as well.
Before creating any type of resources, make sure you read and understand the appropriate pricing policy.
Each AWS resource type has its own pricing page (e.g., [Route53 pricing](https://aws.amazon.com/route53/pricing/)).

1 change: 1 addition & 0 deletions aws-developer/01_cloud_introduction/task.md
@@ -71,3 +71,4 @@ By the end of the program your application must be able to do:
- Command example that needs to work from your terminal: `aws iam get-user --user-name=MyUser`

4. Wait for the next task to be announced and help others in the chat if they have any issues
5. Even though you'll probably use a Free Tier account, it's usually a good idea to keep approximate resource prices in mind ([Link](https://aws.amazon.com/pricing/))
8 changes: 8 additions & 0 deletions aws-developer/03_serverless_api/task.md
@@ -7,6 +7,14 @@
- **Install** the latest version of AWS CDK (https://docs.aws.amazon.com/cdk/v2/guide/getting_started.html).
- **Configure** credentials for AWS to make them accessible by AWS CLI & CDK.
- **Create** your **own public GitHub repository** for all future backend work (you may name it whatever you like). Until the end of the course you will have two repos: one for frontend and one for backend.
The desired working tree should look like this:
```
├── backend
│   ├── authorization_service  <- auth service repo
│   ├── import_service         <- import service repo
│   └── product_service        <- product service repo
├── cart_service               <- cart service repo
│   └── nodejs-aws-cart-api
└── frontend                   <- frontend repo
```

_NOTE: You should create a branch from master and work in that branch (e.g. branch name `task-3`) in both the BE (backend) and FE (frontend) repositories._

4 changes: 2 additions & 2 deletions aws-developer/05_integration_with_s3/task.md
@@ -41,10 +41,10 @@ Find the entire program architecture: [here](../Architecture.pdf).

1. Create a lambda function called `importProductsFile` under the Import Service which will be triggered by the HTTP GET method.
2. The requested URL should be `/import`.
3. Implement its logic so it will be expecting a request with a name of CSV file with products and creating a new **Signed URL** with the following key: `uploaded/${fileName}`.
3. Implement its logic so that it expects a request containing the name of a CSV file with products and creates a new [**Signed URL**](https://docs.aws.amazon.com/AmazonS3/latest/userguide/example_s3_Scenario_PresignedUrl_section.html) with the following key: `uploaded/${fileName}`.
4. The name will be passed in a _query string_ as a `name` parameter and should be described in the AWS CDK Stack as a _request parameter_.
5. Update AWS CDK Stack with policies to allow lambda functions to interact with S3.
6. The response from the lambda should be the created **Signed URL**.
6. The response from the lambda should be the plain **Signed URL**, as a string.
7. The lambda endpoint should be integrated with the frontend by updating `import` property of the API paths configuration.
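The steps above can be sketched as follows. This is a minimal illustration, not the required implementation: the `PresignFn` injection and handler shape are assumptions made so the request/response logic stands alone; in a real lambda the presigned URL would come from `getSignedUrl` in `@aws-sdk/s3-request-presigner` with a `PutObjectCommand`.

```typescript
// Sketch of the importProductsFile handler logic (hypothetical shape).
// The presign function is injected instead of calling the AWS SDK directly,
// so the surrounding logic is self-contained here.

type PresignFn = (key: string) => Promise<string>;

interface ApiGatewayEvent {
  queryStringParameters?: { [key: string]: string | undefined };
}

export async function importProductsFile(
  event: ApiGatewayEvent,
  presign: PresignFn
): Promise<{ statusCode: number; body: string }> {
  // Step 4: the file name arrives as the `name` query string parameter.
  const fileName = event.queryStringParameters?.name;
  if (!fileName) {
    return { statusCode: 400, body: "Query parameter 'name' is required" };
  }
  // Step 3: the object key must follow the uploaded/${fileName} convention.
  const signedUrl = await presign(`uploaded/${fileName}`);
  // Step 6: respond with the plain Signed URL as a string.
  return { statusCode: 200, body: signedUrl };
}
```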

### Task 5.3
1 change: 1 addition & 0 deletions aws-developer/07_authorization/task.md
@@ -47,6 +47,7 @@ Find the entire program architecture: [here](../Architecture.pdf).

3. This `basicAuthorizer` lambda should take _Basic Authorization_ token, decode it and check that credentials provided by token exist in the lambda environment variable.
4. This lambda should return 403 HTTP status if access is denied for this user (invalid `authorization_token`) and 401 HTTP status if Authorization header is not provided.
5. In case of successful authorization, the lambda should return an IAM policy that allows invocation of the desired method ([Documentation](https://docs.aws.amazon.com/apigateway/latest/developerguide/apigateway-use-lambda-authorizer.html))
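The two pieces of the authorizer described above can be sketched as pure helpers; the function names and the environment-variable convention are assumptions for illustration, not part of the task specification.

```typescript
// Sketch of the basicAuthorizer building blocks (hypothetical helper names).
// The Authorization header carries "Basic <base64(login:password)>".

export function decodeBasicToken(
  token: string
): { login: string; password: string } | null {
  const [scheme, encoded] = token.split(" ");
  if (scheme !== "Basic" || !encoded) return null;
  const [login, password] = Buffer.from(encoded, "base64")
    .toString("utf-8")
    .split(":");
  return login && password ? { login, password } : null;
}

// Step 5: on success the authorizer returns an IAM policy document that
// allows (or denies) invocation of the requested method ARN.
export function generatePolicy(
  principalId: string,
  effect: "Allow" | "Deny",
  methodArn: string
) {
  return {
    principalId,
    policyDocument: {
      Version: "2012-10-17",
      Statement: [
        { Action: "execute-api:Invoke", Effect: effect, Resource: methodArn },
      ],
    },
  };
}
```

The decoded credentials would then be compared against the lambda environment variables (see the NOTE below about `.env` and `dotenv`) before choosing the `Allow` or `Deny` effect.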

_NOTE: Do not send your credentials to the GitHub. Use `.env` file and `dotenv` package to add environment variables to the lambda. Add `.env` file to `.gitignore` file._

8 changes: 8 additions & 0 deletions aws-developer/08_integration_with_sql_database/task.md
@@ -13,6 +13,14 @@ Find the entire program architecture: [here](../Architecture.pdf).

</details>

## NOTE

According to the task description, RDS should be deployed with access from the public network, which is not a good practice.
Furthermore, you'll be charged roughly $3/month for the public IP address attached to that RDS instance.
A better option is to deploy RDS in a private subnet of your VPC, with the lambda deployed in that VPC as well.
However, this means you won't have access to RDS from your local PC. So if you need to reach
the database for debugging purposes, you'll have to create a temporary EC2 instance with public access.

## Tasks

---
5 changes: 3 additions & 2 deletions aws-developer/09_containerization/task.md
@@ -6,6 +6,7 @@

- [AWS EB CLI](https://docs.aws.amazon.com/elasticbeanstalk/latest/dg/eb-cli3-install.html) must be installed
- [Docker](https://docs.docker.com/get-docker/) must be installed
- The Lambda deployment from the previous step should be removed. The Elastic Beanstalk deployment will replace that Lambda and serve the same purpose

## Architecture

@@ -41,7 +42,7 @@ Find the entire program architecture: [here](../Architecture.pdf).
- Minimize docker image size to be `less than 500 MB`.
- Optimize image build times. Dockerfile commands that run npm install should not depend on typescript files.
- _OPTIONAL: add more folders to `.dockerignore` with explanations_
- _OPTIONAL: Minimize docker image size to `about 100 MB`._
- _OPTIONAL: Minimize docker image size to `about 140 MB`._
- _OPTIONAL: Optimize build times by utilizing multistage builds._
- _OPTIONAL: Lint Dockerfile._

@@ -51,7 +52,7 @@ Find the entire program architecture: [here](../Architecture.pdf).

- **Use** a `Dockerfile` from previous subtask to deploy your Cart Service using AWS Beanstalk CLI.
- **Initiate** an Elastic Beanstalk application using the `eb init` command. Application name must follow the following convention `{yours_github_account_login}-cart-api`.
- **Create** a new environment using the `eb create` command. An environment name must be short _but not less then four signs_ (e.g _develop_, _test_, _prod_, etc). Use the `--cname` option `{yours_github_account_login}-cart-api-{environment_name}` so that Elastic Beanstalk will use it to create a proper domain name. Use the `--single` option to not use any Load Balancer for this environment.
- **Create** a new environment using the `eb create` command. An environment name must be short _but not less than four characters_ (e.g. _develop_, _test_, _prod_, etc.). Use the `--cname` option `{yours_github_account_login}-cart-api-{environment_name}` so that Elastic Beanstalk will use it to create a proper domain name. Use the `--single` option to not use any Load Balancer for this environment. Use the `--envvars` flag to pass environment variables to your deployment; these are necessary to establish the database connection

2. Deploy Cart Service with Elastic Beanstalk

5 changes: 3 additions & 2 deletions aws-developer/10_backend_for_frontend/task.md
@@ -34,6 +34,7 @@ Find the entire program architecture: [here](../Architecture.pdf).
product-service
import-service
bff-service
authorization-service
```

2. Create an application in this folder that listens for all requests and redirects them to the appropriate services based on variables provided by the `.env` file.
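The redirect rule described above can be sketched as a pure routing helper; the convention of naming environment variables after the first path segment (e.g. `cart=http://localhost:3001`) is an assumption for illustration.

```typescript
// Sketch of the BFF routing rule: the first path segment names the recipient
// service, and its base URL is looked up in the environment map
// (variable naming here is a hypothetical convention).

export function resolveTarget(
  url: string,
  env: Record<string, string | undefined>
): string | null {
  const [, service, ...rest] = url.split("/"); // "/cart/items" -> "cart"
  const base = env[service];
  if (!base) return null; // unknown recipient -> the BFF would answer 502
  return rest.length ? `${base}/${rest.join("/")}` : base;
}
```

The actual application would then forward the incoming request (method, headers, body) to the resolved target URL and stream the response back to the client.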
@@ -97,7 +98,7 @@ By this point your application must be able to do:

1. Products representation on Home page should be based on Product Service API.
2. Products are coming from Product DB.
3. Product images are not randomly generated on client side. Product image, same as another product model information should be stored on BE side in Product DB.
3. Product images are not randomly generated on the client side. Product image links, like the rest of the product model information, should be stored on the BE side in the Product DB.
4. Products might be created through CSV product file import from client side.
5. Cart might be created with appropriate product set.
6. Auth logic should be in place
@@ -118,7 +119,7 @@ By this point your application must be able to do:

---

- **-50** - JS/TS Only. Express is used
- **-50** - Express package is used in JS/TS.

## Description Template for PRs

95 changes: 95 additions & 0 deletions aws-developer/CLEANUP.md
@@ -0,0 +1,95 @@
# Cleaning Up Resources After Passing the Course

After completing the course, it's a good idea to clean up all the resources you have created to avoid unnecessary spending.

## Table of Contents
1. [Removing CDK Deployments from All Repos](#removing-cdk-deployments-from-all-repos)
2. [Remove RDS via Console](#remove-rds-via-console)
3. [Remove DynamoDB via Console](#remove-dynamodb-via-console)
4. [Clean and Delete S3 Buckets](#clean-and-delete-s3-buckets)
5. [Ensure CloudFormation is Clean](#ensure-cloudformation-is-clean)
6. [Remove All LogGroups from CloudWatch](#remove-all-loggroups-from-cloudwatch)
7. [Check if Lambda, EC2, CloudFront, SNS, SQS, API Gateway, and EBS Pages are Empty](#check-if-lambda-ec2-cloudfront-sns-sqs-api-gateway-and-ebs-pages-are-empty)

## Removing CDK Deployments from All Repos

1. Navigate to the directories of your CDK stacks.
2. Run the following command to destroy the stack:
```sh
cdk destroy --all
```
3. Confirm the destruction when prompted.

Repeat these steps for each CDK project repository.

## Remove RDS via Console

1. Open the [Amazon RDS Console](https://console.aws.amazon.com/rds/).
2. In the navigation pane, choose **Databases**.
3. Select the database instance you want to delete.
4. Choose **Actions** and then **Delete**.
5. Uncheck the final backup creation checkbox.
6. Follow the prompts to delete the database instance.

## Remove DynamoDB via Console

1. Open the [Amazon DynamoDB Console](https://console.aws.amazon.com/dynamodb/).
2. In the navigation pane, choose **Tables**.
3. Select the table you want to delete.
4. Choose **Actions** and then **Delete Table**.
5. Confirm the deletion.

## Clean and Delete S3 Buckets

1. Open the [Amazon S3 Console](https://console.aws.amazon.com/s3/).
2. Select the bucket you want to delete.
3. Empty the bucket by deleting all objects and folders within it.
4. Once the bucket is empty, select the bucket again.
5. Choose **Delete bucket** and confirm the deletion.

## Ensure CloudFormation is Clean

1. Open the [AWS CloudFormation Console](https://console.aws.amazon.com/cloudformation/).
2. In the navigation pane, choose **Stacks**.
3. Review the list of stacks. If there are any stacks you no longer need, select them.
4. Choose **Delete** and confirm the deletion.

## Remove All LogGroups from CloudWatch

1. Open the [Amazon CloudWatch Console](https://console.aws.amazon.com/cloudwatch/).
2. In the navigation pane, choose **Log groups**.
3. Select the log groups you want to delete.
4. Choose **Actions** and then **Delete log group(s)**.
5. Confirm the deletion.

## Check if Lambda, EC2, CloudFront, SNS, SQS, API Gateway, and EBS Pages are Empty

1. **Lambda**:
- Open the [AWS Lambda Console](https://console.aws.amazon.com/lambda/).
- Ensure there are no functions listed. If there are, delete them.

2. **EC2**:
- Open the [Amazon EC2 Console](https://console.aws.amazon.com/ec2/).
- Ensure there are no running instances, volumes, or other resources. Terminate any that are still active.

3. **CloudFront**:
- Open the [Amazon CloudFront Console](https://console.aws.amazon.com/cloudfront/).
- Ensure there are no distributions. If there are, disable and delete them.

4. **SNS**:
- Open the [Amazon SNS Console](https://console.aws.amazon.com/sns/).
- Ensure there are no topics or subscriptions. Delete any that exist.

5. **SQS**:
- Open the [Amazon SQS Console](https://console.aws.amazon.com/sqs/).
- Ensure there are no queues. Delete any that exist.

6. **API Gateway**:
- Open the [Amazon API Gateway Console](https://console.aws.amazon.com/apigateway/).
- Ensure there are no APIs. Delete any that exist.

7. **EBS**:
- Open the [Amazon EC2 Console](https://console.aws.amazon.com/ec2/).
- In the navigation pane, choose **Elastic Block Store** > **Volumes**.
- Ensure there are no volumes. Delete any that exist.
