
Phoenix


Phoenix is a system for serving recommendation data. The objective of Phoenix is to keep things simple and fetch the information as fast as possible.

It uses Redis as its main database. In our experience, recommendation data maps naturally onto a key/value format, so a fast key/value store was a natural choice.
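As a rough sketch of why this fits a key/value store (the key layout and the go-redis client below are illustrative assumptions, not the actual Phoenix schema), serving a recommendation boils down to a single Redis GET:

package main

import (
    "context"
    "fmt"

    "github.com/go-redis/redis/v8"
)

func main() {
    ctx := context.Background()
    rdb := redis.NewClient(&redis.Options{Addr: "localhost:6379"})

    // Hypothetical key layout: one key per (model, signal) pair.
    key := "model:articles:user42"

    // A batch upload would store a serialized list of recommended items.
    if err := rdb.Set(ctx, key, `["item1","item7","item9"]`, 0).Err(); err != nil {
        panic(err)
    }

    // Serving that recommendation is then a single lookup.
    val, err := rdb.Get(ctx, key).Result()
    if err != nil {
        panic(err)
    }
    fmt.Println(val)
}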

The project is divided into three main parts:

  • Public APIs
  • Internal APIs
  • Worker

The Public APIs service is responsible for delivering the recommendations. It doesn't apply any logic or modify the data in any way.

The Internal APIs service is responsible for creating containers and models; both concepts are explained in detail later. Models contain the actual data (the recommendations), while containers associate models with the product where they are used. Containers also make it possible to blend multiple models together for better results.
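As an illustration of how the two concepts relate (the field names here are assumptions for the sake of the example, not the real Phoenix schema):

package main

import "fmt"

// Model holds the actual recommendation data for one use case.
type Model struct {
    Name        string   // e.g. "articles-collaborative"
    SignalOrder []string // signals composing the lookup key, e.g. ["userId"]
}

// Container ties one or more models to the product surface where they
// are served, which is also what enables blending multiple models.
type Container struct {
    Name   string   // e.g. "homepage"
    Models []string // names of the models served in this container
}

func main() {
    m := Model{Name: "articles-collaborative", SignalOrder: []string{"userId"}}
    c := Container{Name: "homepage", Models: []string{m.Name}}
    fmt.Println(c.Name, "serves", c.Models)
}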

The Worker is responsible for accepting batch requests from the API and queuing them. The system performs one batch upload to Redis at a time: in our tests we noticed that concurrent uploads overloaded the database, so serializing them guarantees only one upload at a time.
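A minimal sketch of that pattern, with illustrative names rather than the actual Worker code: a single goroutine drains a queue of accepted batch requests, so Redis never sees more than one upload at a time.

package main

import (
    "fmt"
    "time"
)

// batchJob stands in for one batch upload request accepted by the API.
type batchJob struct {
    ID string
}

func main() {
    queue := make(chan batchJob, 100) // buffered queue of accepted requests

    // A single worker goroutine drains the queue, so only one
    // batch upload hits Redis at any given moment.
    go func() {
        for job := range queue {
            uploadToRedis(job)
        }
    }()

    // The API side just enqueues and returns immediately.
    queue <- batchJob{ID: "batch-001"}
    queue <- batchJob{ID: "batch-002"}
    time.Sleep(time.Second) // give the worker time to drain (demo only)
}

func uploadToRedis(job batchJob) {
    // Placeholder for the real bulk upload to Redis.
    fmt.Println("uploading", job.ID)
}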

How to start

Assuming that you have go, docker and docker-compose installed on your machine, run docker-compose up -d to spin up Redis and localstack (for a local S3).

Once the services are up and running, and assuming your Go environment is in your PATH, you should be able to start directly with go run main.go --help. This command prints the help message.

Proceed by running go run main.go internal for the internal APIs (or go run main.go public for the public APIs).

If you need to upload some files to the local S3, use the following commands after localstack has been created:

  • aws --endpoint-url=http://localhost:4572 s3 mb s3://test to create a bucket in local S3
  • aws --endpoint-url=http://localhost:4572 s3api put-bucket-acl --bucket test --acl public-read to set a public-read ACL for testing with local S3
  • aws --endpoint-url=http://localhost:4572 s3 cp ~/Desktop/data.csv s3://test/content/20190713/ to copy a file to local S3
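To check that the file actually landed in the bucket, you can list its contents:

aws --endpoint-url=http://localhost:4572 s3 ls s3://test/content/20190713/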

How to run tests

To run all the tests, use the following command:

$: go clean -testcache && go test -race ./...

The first part prevents Go from caching test results; stale cached results can be misleading when you change tests, so it's better to run without the cache.

Architecture

The picture below shows the architecture of the project.

[architecture diagram]

One thing worth mentioning is that the Public service has a caching mechanism to avoid overloading Redis. We decided to use allegro/bigcache, but we provide a convenient interface in case you want to swap in another cache type.
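As a sketch of the shape such an interface might take (an assumption for illustration, not the actual Phoenix definition):

package cache

// Cache is the abstraction the Public service could depend on, so that
// allegro/bigcache can be swapped for another backend without touching
// the serving code. This interface is illustrative, not the real one.
type Cache interface {
    Get(key string) ([]byte, bool)
    Set(key string, value []byte) error
    Delete(key string) error
}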
