SPARK API


Routes

POST /retrievals

Start a new retrieval.

Body:

{
  sparkVersion: String,
  zinniaVersion: String
}

Response:

{
  id: String,
  cid: String,
  providerAddress: String,
  protocol: 'graphsync'|'bitswap'|'http'
}
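
As a sketch, a retrieval could be started with curl. The hostname and the version strings below are placeholders, not values from this document; substitute your deployment's URL and real module versions:

```shell
curl -X POST https://api.filspark.com/retrievals \
  -H 'Content-Type: application/json' \
  -d '{"sparkVersion": "1.0.0", "zinniaVersion": "0.20.0"}'
```

The response body contains the `id` to use in the follow-up PATCH request, plus the CID, provider address, and protocol to attempt the retrieval with.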

PATCH /retrievals/:id

Parameters:

  • id: Request ID (from POST /retrievals)

Body:

{
  participantAddress: String,
  timeout: Boolean,
  startAt: String,       // ISO 8601
  statusCode: Number,
  firstByteAt: String,   // ISO 8601
  endAt: String,         // ISO 8601
  byteLength: Number,
  attestation: String,
  stationId: String
}

Dates should be formatted as ISO 8601 strings.

Response:

OK
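
A finished measurement could then be reported back along these lines. This is a sketch only: `$RETRIEVAL_ID` stands for the `id` returned by `POST /retrievals`, and every field value below is a made-up placeholder:

```shell
curl -X PATCH "https://api.filspark.com/retrievals/$RETRIEVAL_ID" \
  -H 'Content-Type: application/json' \
  -d '{
    "participantAddress": "0x0000000000000000000000000000000000000000",
    "timeout": false,
    "startAt": "2025-02-28T10:00:00.000Z",
    "statusCode": 200,
    "firstByteAt": "2025-02-28T10:00:01.000Z",
    "endAt": "2025-02-28T10:00:02.000Z",
    "byteLength": 1048576,
    "attestation": "placeholder-attestation",
    "stationId": "0000000000000000000000000000000000000000"
  }'
```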

GET /miner/:minerId/deals/eligible/summary

Parameters:

  • minerId - a miner id like f0814049

Response:

Number of deals grouped by client IDs.

{
  "minerId": "f0814049",
  "dealCount": 13878,
  "clients": [
    { "clientId": "f02516933", "dealCount": 6880 },
    { "clientId": "f02833886", "dealCount": 3126 }
  ]
}
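
For example, the summary above could be fetched like this (the hostname is an assumption; use your deployment's URL). The same pattern applies to the client and allocator summary routes below:

```shell
curl https://api.filspark.com/miner/f0814049/deals/eligible/summary
```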

GET /client/:clientId/deals/eligible/summary

Parameters:

  • clientId - a client id like f0215074

Response:

Number of deals grouped by miner IDs.

{
  "clientId": "f0215074",
  "dealCount": 38977,
  "providers": [
    { "minerId": "f01975316", "dealCount": 6810 },
    { "minerId": "f01975326", "dealCount": 6810 }
  ]
}

GET /allocator/:allocatorId/deals/eligible/summary

Parameters:

  • allocatorId - an allocator id like f03015751

Response:

Number of deals grouped by client IDs.

{
  "allocatorId": "f03015751",
  "dealCount": 4088,
  "clients": [
    { "clientId": "f03144229", "dealCount": 2488 },
    { "clientId": "f03150656", "dealCount": 1600 }
  ]
}

Development

Database

Set up PostgreSQL with default settings:

  • Port: 5432
  • User: your system user name
  • Password: blank
  • Database: same as user name

Alternatively, set the environment variable $DATABASE_URL to postgres://${USER}:${PASS}@${HOST}:${PORT}/${DATABASE}.

The Postgres user and database need to already exist, and the user needs full management permissions for the database.
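
For example, a connection string for a local instance might look like this (the user name, password, and database name are placeholders; use your own values):

```shell
# Point spark-api at a local Postgres instance (placeholder credentials).
export DATABASE_URL="postgres://spark:secret@localhost:5432/spark"
echo "$DATABASE_URL"
```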

You can also use the following command to set up the PostgreSQL server via Docker:

docker run -d --name spark-db \
  -e POSTGRES_HOST_AUTH_METHOD=trust \
  -e POSTGRES_USER=$USER \
  -e POSTGRES_DB=$USER \
  -p 5432:5432 \
  postgres

When working on multiple Spark-related services, we recommend using the following commands to create or reset the Postgres instance:

docker rm -f meridian-db && \
docker run --name meridian-db \
  -e POSTGRES_HOST_AUTH_METHOD=trust \
  -e POSTGRES_USER=$USER \
  -e POSTGRES_DB=$USER \
  -p 5432:5432 \
  -d postgres && \
sleep 1
psql postgres://localhost:5432/ -c "CREATE DATABASE spark_evaluate" && \
psql postgres://localhost:5432/ -c "CREATE DATABASE spark_stats" && \
psql postgres://localhost:5432/ -c "CREATE DATABASE spark"

api

Start the API service:

npm start --workspace api

Run tests and linters:

npm test --workspace api
npm run lint --workspace api

Deployment

Pushes to main will be deployed automatically.

To deploy manually with Fly.io:

$ fly deploy api