CREATE DOCS - add in basic docs to help devs setup and test
JohnVonNeumann committed Nov 13, 2024
1 parent 63b56c5 commit caf046a
Showing 6 changed files with 314 additions and 0 deletions.
80 changes: 80 additions & 0 deletions docs/MIGRATIONS.md
@@ -0,0 +1,80 @@
# Integrating Database Migrations with Railway's CLI Tool

Here's how to integrate database migrations with Railway's CLI tool. The key steps and concepts are below; the exact commands may vary slightly depending on your project setup.

## Assumptions

- You have your Alembic project structure set up (`alembic.ini`, migrations directory, etc.).
- You have a `migrate.py` script as outlined previously (a minimal sketch of what it might look like is shown below).
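
For reference, here is a minimal sketch of what `migrate.py` could look like. This is an assumption, not necessarily the script in this repository: it assumes the Alembic configuration lives in `alembic.ini` and that `DATABASE_URL` is supplied by the environment (set manually for local runs, injected by `railway run` otherwise).

```
"""Hypothetical sketch of migrate.py -- the real script may differ."""
import os

from alembic import command
from alembic.config import Config


def run_migrations() -> None:
    alembic_cfg = Config("alembic.ini")
    # Prefer the runtime connection string over whatever alembic.ini contains.
    database_url = os.environ.get("DATABASE_URL")
    if database_url:
        alembic_cfg.set_main_option("sqlalchemy.url", database_url)
    # Apply all pending migrations up to the latest revision.
    command.upgrade(alembic_cfg, "head")


if __name__ == "__main__":
    run_migrations()
```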

## Steps

1. **Install the Railway CLI:** Follow Railway's documentation on installing their CLI tool.

2. **Authenticate the CLI:**
- Run `railway login` and follow the prompts to authenticate with your Railway account.

3. **Link Project:**
- From your project's directory, run `railway link` to link the current project to the CLI.

4. **Custom Migration Commands:**
- **Local Migrations:**
`DATABASE_URL=postgresql://postgres:postgres@localhost:5432/postgres python migrate.py`
This targets the locally running, Docker-hosted Postgres instance.
- **Dev Migrations:**
`railway run --environment development python migrate.py`
This targets your development environment and executes the migration script.
- **Production Migrations:**
`railway run --environment production python migrate.py`
This targets your production environment.

## Additional Notes

- **Environment Variables:** Railway manages environment variables per environment; `railway run --environment <name>` injects that environment's variables into the command it runs. Ensure your `migrate.py` script reads its database connection string from those variables (see the quick check after this list).
- **Deployment Integration:** Explore if Railway allows setting up these `railway run ...` commands as part of your automated deployment process after code updates.
- **Railway Specifics:** The Railway CLI might have more specialized features for database migrations. Consult their in-depth documentation.
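
As a quick sanity check that `railway run` is injecting the connection string you expect, something like the following should print it (a sketch -- it assumes the linked Postgres service exposes a `DATABASE_URL` variable and that `printenv` is available on your machine):

```
railway run --environment development printenv DATABASE_URL
```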

## Example Usage

1. Make changes to your SQLAlchemy models.
2. Generate a new migration:
`alembic revision --autogenerate -m "description"`
3. Apply to development:
`railway run --environment development python migrate.py`
4. Test changes thoroughly in the development environment.
5. Apply to production:
`railway run --environment production python migrate.py`

**Important:** Always test migrations in a staging environment that mirrors production before applying them to your production database.


## Resetting the database entirely in the early stages

Pre-beta, while the database models are still being solidified/normalised, it's probably not a terrible idea to simply wipe the database and start fresh, rather than accumulating migrations that are all over the place. In that case:

1. **Purge the database**: This will involve dropping whatever data is in there.
2. **Downgrade the database to base**: `railway run alembic downgrade base`
3. **Make sure the database is where you need it**: Just go and eyeball it via Railway, or use the local database to test.
4. **Delete all of the files in versions**: Go into `alembic/versions` and delete the migration files.
5. **Remove the Enums from the database manually**: The downgrade to base doesn't remove the Postgres enum types, so you have to drop them by hand using the script below.
6. **Regenerate the migration**: `railway run alembic revision --autogenerate -m "initialise base database structure"`
7. **Apply the migration**: `railway run --environment development python migrate.py`

**Script for removing enums**:
```sql
DROP TYPE productfootprintstatus;
DROP TYPE declaredunit;
DROP TYPE characterizationfactors;
DROP TYPE regionorsubregion;
DROP TYPE biogenicaccountingmethodology;
DROP TYPE productorsectorspecificruleoperator;
DROP TYPE crosssectoralstandard;
```
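
To actually execute this script, one option is the following (a sketch, assuming the local Docker container from `POSTGRES_LOCAL_DOCKER_SETUP.md` and that the statements above are saved as `drop_enums.sql`; against a Railway-hosted database you would run the same statements through whatever database console or `psql` access you normally use):

```
docker exec -i my-postgres-container psql -U postgres -d postgres < drop_enums.sql
```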

**For local development environment:**
```
DATABASE_URL=postgresql://postgres:postgres@localhost:5432/postgres alembic downgrade base
DATABASE_URL=postgresql://postgres:postgres@localhost:5432/postgres alembic revision --autogenerate -m "initialise base database structure"
DATABASE_URL=postgresql://postgres:postgres@localhost:5432/postgres python migrate.py
```

53 changes: 53 additions & 0 deletions docs/POSTGRES_LOCAL_DOCKER_SETUP.md
@@ -0,0 +1,53 @@
Using Docker for local testing with PostgreSQL is a great idea, especially as your application grows: it helps you maintain consistency between your local and production environments.

The Docker image `ghcr.io/railwayapp-templates/timescale-postgis-ssl:pg13-ts2.12` is a good option: it includes PostgreSQL 13, TimescaleDB 2.12, and PostGIS.

Here's a basic guide on how to set it up:

1. Install Docker on your local machine if you haven't already.

2. Create a new directory for your Docker setup and navigate into it.

3. Create a new file called `Dockerfile` in this directory. This file will contain the instructions for building your Docker image.

4. Open the `Dockerfile` and add the following content:

```
FROM ghcr.io/railwayapp-templates/timescale-postgis-ssl:pg13-ts2.12
# Set environment variables
ENV POSTGRES_USER=postgres
ENV POSTGRES_PASSWORD=postgres
ENV POSTGRES_DB=postgres
# Expose the PostgreSQL port
EXPOSE 5432
```

5. Save and close the `Dockerfile`.

6. Now, you can build your Docker image by running the following command in the terminal:

```
docker build -t my-postgres .
```

7. Once the image is built, you can run a container from it with the following command:

```
docker run -d -p 5432:5432 --name my-postgres-container my-postgres
```

8. This will start a new container based on your Docker image and map the container's port 5432 to your local machine's port 5432.

9. You can now connect to your PostgreSQL database using the connection string `postgresql://postgres:postgres@localhost:5432/postgres` (a quick connection check is shown after this list).

10. To stop the container, use the following command:

```
docker stop my-postgres-container
```
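
Before stopping it, you can quickly check that the container is accepting connections (a sketch that uses the `psql` client bundled in the image and the container name from the commands above):

```
docker exec my-postgres-container psql -U postgres -d postgres -c "SELECT version();"
```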

Remember to replace `localhost` in the connection string with the appropriate IP address if you're running Docker in a virtual machine or on a remote server.

This is a basic setup and you may need to adjust it to your specific requirements. You can also use Docker Compose to manage your containers if you have multiple services that need to run together; a minimal sketch is below.
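
A Compose file equivalent to the setup above might look like this (an illustrative sketch -- the service and volume names are hypothetical, and the data directory path assumes the image follows the standard Postgres layout):

```
services:
  db:
    image: ghcr.io/railwayapp-templates/timescale-postgis-ssl:pg13-ts2.12
    environment:
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: postgres
      POSTGRES_DB: postgres
    ports:
      - "5432:5432"
    volumes:
      # Persist data between container restarts.
      - pgdata:/var/lib/postgresql/data

volumes:
  pgdata:
```

Start it with `docker compose up -d` and tear it down with `docker compose down`.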
1 change: 1 addition & 0 deletions docs/TESTING_FROM_CLI.md
@@ -0,0 +1 @@
curl --location 'http://localhost:8000/2/auth/token' --header 'Content-Type: application/x-www-form-urlencoded' -d "grant_type=client_credentials&client_id=${CLIENT_ID}&client_secret=${CLIENT_SECRET}"
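
To capture the token for follow-up requests, something like the following works (a sketch -- it assumes the client credentials are exported in your shell, that the response body contains an `access_token` field as read by `generate_product_footprints.py`, and that `jq` is installed):

```
export CLIENT_ID="your-client-id"
export CLIENT_SECRET="your-client-secret"

TOKEN=$(curl -s --location 'http://localhost:8000/2/auth/token' \
  --header 'Content-Type: application/x-www-form-urlencoded' \
  -d "grant_type=client_credentials&client_id=${CLIENT_ID}&client_secret=${CLIENT_SECRET}" \
  | jq -r .access_token)

echo "${TOKEN}"
```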
36 changes: 36 additions & 0 deletions docs/create_super_user.py
@@ -0,0 +1,36 @@
import getpass

from sqlalchemy.orm import Session

from core.hashing import Hasher
from db.models.user import User


def create_superuser(db: Session):
    username = input("Enter username: ")
    email = input("Enter email: ")
    password = getpass.getpass("Enter password: ")
    confirm_password = getpass.getpass("Confirm password: ")

    if password != confirm_password:
        print("Passwords do not match")
        return

    # Only the password hash is persisted; the raw password is never stored.
    user = User(
        username=username,
        email=email,
        hashed_password=Hasher.get_password_hash(password=password),
        is_active=True,
        is_superuser=True,
    )

    db.add(user)
    db.commit()
    db.refresh(user)

    print("Superuser created successfully")


if __name__ == "__main__":
    from db.session import engine

    with Session(engine) as db:
        create_superuser(db)
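
A hypothetical way to run this script locally, assuming it is executed from the application root (so that `core`, `db`, and `schemas` are importable) and that `db.session.engine` builds its connection from `DATABASE_URL`:

```
DATABASE_URL=postgresql://postgres:postgres@localhost:5432/postgres python create_super_user.py
```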
59 changes: 59 additions & 0 deletions docs/generate_product_footprints.py
@@ -0,0 +1,59 @@
import argparse
import json

import requests

environments = {
    "local": "http://localhost:8000",
    #"dev": "https://apppoc-dev.up.railway.app",
    #"prod": "https://simpleapp-production.up.railway.app"
}

client_creds = {  # Update with your credentials
    "client_id": "",
    "client_secret": ""
}

def populate_database(base_json_file, base_uuid, num_items=20, base_host="http://localhost:8000", endpoint_url="/2/footprints/create-product-footprint/"):
    with open(base_json_file, 'r') as f:
        base_item = json.load(f)

    auth_response = requests.post(
        f"{base_host}/auth/token",
        data={
            "grant_type": "",
            "scope": "",
            "client_id": client_creds["client_id"],
            "client_secret": client_creds["client_secret"]
        },
    )
    auth_response.raise_for_status()

    access_token = auth_response.json()["access_token"]
    headers = {"Authorization": f"Bearer {access_token}"}

    # Create num_items copies of the base payload, varying only the last two
    # characters of the id so each footprint gets a unique UUID.
    for i in range(1, num_items + 1):
        item = base_item.copy()  # Shallow copy might suffice
        item['id'] = f"{base_uuid[:-2]}{i:02d}"

        response = requests.post(f"{base_host}{endpoint_url}", json=item, headers=headers)
        response.raise_for_status()


if __name__ == "__main__":
    parser = argparse.ArgumentParser(description="Populate a database with test data")
    parser.add_argument("--base_json", default="valid_test_product_footprint.json", help="Path to the base JSON file")
    parser.add_argument("--base_uuid", default="3fa85f64-5717-4562-b3fc-2c963f66afa6", help="Base UUID for generating items")
    parser.add_argument("--num_items", type=int, default=20, help="Number of items to create")
    parser.add_argument("--endpoint", default="/2/footprints/create-product-footprint/", help="API endpoint for item creation")
    parser.add_argument("--base_host", default=environments["local"], help="Base host URL for the API (e.g., http://localhost:8000)")

    args = parser.parse_args()

    populate_database(
        base_json_file=args.base_json,
        base_uuid=args.base_uuid,
        num_items=args.num_items,
        endpoint_url=args.endpoint,
        base_host=args.base_host
    )
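
A typical invocation against the local environment might look like the following (an assumption: the script is run from the directory containing these docs files, and `client_creds` at the top of the script has been filled in first):

```
python generate_product_footprints.py \
  --base_json valid_test_product_footprint.json \
  --num_items 5 \
  --base_host http://localhost:8000
```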
85 changes: 85 additions & 0 deletions docs/valid_test_product_footprint.json
@@ -0,0 +1,85 @@
{
"id": "3fa85f64-5717-4562-b3fc-2c963f66afa6",
"specVersion": "string",
"precedingPfids": [
"3fa85f64-5717-4562-b3fc-2c963f66af10"
],
"version": 1,
"created": "2023-06-18T22:38:02.331Z",
"updated": "2023-06-18T22:38:02.331Z",
"status": "Active",
"statusComment": "string",
"validityPeriodStart": "2023-06-18T22:38:02.331Z",
"validityPeriodEnd": "2023-06-18T22:38:02.331Z",
"companyName": "Clean Product Company",
"companyIds": [
"urn:epc:id:sgln:0614141.00002.0"
],
"productDescription": "string",
"productIds": [
"urn:epc:id:gtin:0614141.011111.0"
],
"productCategoryCpc": "22222",
"productNameCompany": "string",
"comment": "string",
"pcf": {
"declaredUnit": "kilogram",
"unitaryProductAmount": 100,
"pCfExcludingBiogenic": 10,
"pCfIncludingBiogenic": 12,
"fossilGhgEmissions": 8,
"fossilCarbonContent": 5,
"biogenicCarbonContent": 4,
"dLucGhgEmissions": 2,
"landManagementGhgEmissions": 3,
"otherBiogenicGhgEmissions": 1,
"iLucGhgEmissions": 2,
"biogenicCarbonWithdrawal": -1,
"aircraftGhgEmissions": 0.5,
"characterizationFactors": "AR6",
"crossSectoralStandardsUsed": [
"GHG Protocol Product standard"
],
"productOrSectorSpecificRules": [
{
"operator": "PEF",
"ruleNames": [
"EN15804+A2"
]
},
{
"operator": "Other",
"ruleNames": [
"CFS Guidance for XYZ Sector"
],
"otherOperatorName": "CFS"
}
],
"biogenicAccountingMethodology": "PEF",
"boundaryProcessesDescription": "Description of boundary processes",
"referencePeriodStart": "2023-06-18T22:38:02.331Z",
"referencePeriodEnd": "2023-06-18T22:38:02.331Z",
"geographyRegionOrSubregion": "Australia and New Zealand",
"secondaryEmissionFactorSources": [
{
"name": "ecoinvent",
"version": "3.9.1"
}
],
"exemptedEmissionsPercent": 2.5,
"exemptedEmissionsDescription": "Description of exempted emissions",
"packagingEmissionsIncluded": "true",
"packagingGhgEmissions": 0.5,
"allocationRulesDescription": "Description of allocation rules",
"uncertaintyAssessmentDescription": "Description of uncertainty assessment",
"primaryDataShare": 50,
"dqi": {
"key1": "value1",
"key2": "value2"
},
"assurance": {
"key1": "value1",
"key2": "value2"
}
}
}
