Local Dev ‐ hub frontend and backend
We use Node.js and various packages on NPM for building napari hub. For package management, we use yarn.
It's recommended to use NVM so you don't have to manage multiple Node.js versions yourself. When you have NVM set up, run the following commands:
# Installs Node.js version defined in `.nvmrc`
nvm install
# Uses project defined Node.js version
nvm use
# Install yarn globally
npm -g install yarn
# Install project dependencies
yarn install
The frontend authenticates as a GitHub OAuth app to increase the API rate limit. If you hit API rate limits locally, you may need to create an OAuth app on your personal account:
- Application name: can be whatever you want (example: `napari hub dev`)
- Homepage URL: http://localhost:8080
- Application description: leave empty
- Authorization callback URL: http://localhost:8080

Once the OAuth app is created:
- Copy `.env.example` to `.env`.
- Change `GITHUB_CLIENT_ID` to the actual GitHub client ID.
- Change `GITHUB_CLIENT_SECRET` to the actual GitHub client secret.
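Putting it together, a minimal `.env` might look like the following (the values shown are placeholders; substitute the client ID and secret from the OAuth app you created):

```
GITHUB_CLIENT_ID=<your-client-id>
GITHUB_CLIENT_SECRET=<your-client-secret>
```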
To use Split.io locally, set the `ENV` and `SPLIT_IO_SERVER_KEY` environment variables in the `.env` file. The `ENV` variable can be set to `dev`; the value for `SPLIT_IO_SERVER_KEY` can be found in 1Password.
ENV=dev
SPLIT_IO_SERVER_KEY=<token>
If the `SPLIT_IO_SERVER_KEY` environment variable is not defined, all feature flags will be enabled by default.
Make sure that you are in the napari-hub/frontend
directory, and run the following command:
yarn dev
This will load the frontend with fixture data. To run the frontend connected to backend data, follow the steps to set up the backend (https://github.com/chanzuckerberg/napari-hub/wiki/Local-development-guide#backend) in a separate tab, and run the frontend as instructed in https://github.com/chanzuckerberg/napari-hub/wiki/Local-development-guide#running-frontend--backend-e2e.
Any changes to the code will fast refresh the browser UI.
The frontend currently has 4 types of testing:
- Unit tests
- Integration tests
- Snapshot tests
- E2E tests
Unit and integration tests are treated the same, the only difference is the semantics of the test. Unit tests test individual units of code in isolation, while integration tests test multiple units working together. To run unit / integration tests, use the command:
yarn test
Watch mode re-runs tests for files that have changed, so that you only run the tests affected by your edits:
yarn test:watch
It’s also possible to target specific tests by specifying a pattern to match the filename:
yarn test Accordion # Test any components that start with Accordion
yarn test src/components # Test all components
yarn test:watch utils/ # Test all utils in watch mode
Snapshot tests compare the HTML structure of a component to a prior snapshot. They run automatically as part of unit / integration tests, so running `yarn test` should be sufficient.
If you create a new component or update the UI, you’ll need to update the old snapshots. Otherwise, the tests will never pass because the snapshots are always different:
yarn test:update
E2E tests are used for testing the UI against a mock server with fixture data. This allows us to test the UI in a browser environment similar to what we would see in production. To run the E2E tests, use the `yarn e2e` command. This is similar to `yarn test`, so all the same variations of the command above also work for E2E tests:
yarn e2e # Run E2E tests
E2E tests will run for multiple screens since features may behave differently depending on the screen size. To check the results of a test for a specific screen, target it using the `SCREEN` environment variable:
SCREEN=300 yarn e2e
Right now we only test on a subset of screens to speed up E2E testing. Specifically, we focus on screen sizes that represent mobile, tablet, and desktop devices:
- 300px
- 600px
- 875px
- 1150px
- 1425px
Sometimes you may want to debug the code running an E2E test. To do this, you can use the `PWDEBUG` environment variable to open the Playwright inspector so you can place breakpoints and inspect variables:
PWDEBUG=1 yarn e2e
- Install `brew` if not already installed: https://docs.brew.sh/Installation
- Set up and configure `awscli` using these instructions.
If the above steps have been completed successfully, you should have a `~/.aws/config` file with a profile for `sci-imaging`. That entry should start with `[profile sci-imaging]`. We will reference this profile later to load particular AWS credentials into our local environment.
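As a rough sketch, the profile entry in `~/.aws/config` might look like this (the region and output settings here are placeholders; your actual values come from the awscli setup instructions):

```
[profile sci-imaging]
region = <aws-region>
output = json
```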
Set up a Python virtual environment with the required dependencies:
mkdir [directory for virtual environment]
cd [directory for virtual environment]
python3 -m venv napari-hub-env
source napari-hub-env/bin/activate
When your virtual environment is activated, your shell prompt should be updated with the environment's name on the left e.g.:
(napari-hub-env) user@computer napari-hub %
Now that your environment is activated, navigate to your napari-hub
folder and install the Python dependencies.
pip install --upgrade pip
cd backend
pip install -r requirements.txt
If you use an IDE, you should be able to configure the Python interpreter for your napari-hub
project to use this virtual environment now.
Note: `DEV_ENV.md` and `REMOTE_DEV.md` are deprecated and no longer maintained.
To run the application completely locally, you will have to set up your local dynamo.
Please reference the detailed guide here for instructions to set up your local dynamo.
Make sure that you are in the napari-hub/backend
directory and in a virtual environment (see Set up Python environment for more information on how to set up your virtual environment in the IDE).
To connect the backend server to local dynamo, set the following environment variables:
export STACK_NAME=local
export LOCAL_DYNAMO_HOST=http://localhost:8000
Update the `STACK_NAME` and `LOCAL_DYNAMO_HOST` values if you are not using the defaults.
To connect the backend server to dev-shared dynamo, instead, set the following environment variables:
export STACK_NAME=dev-shared
export AWS_PROFILE=sci-imaging
To start the server, run the command:
python -m api.app
You should see the following output:
* Serving Flask app 'app'
* Debug mode: off
WARNING: This is a development server. Do not use it in a production deployment.
Use a production WSGI server instead.
* Running on http://127.0.0.1:12345 (http://127.0.0.1:12345/)
Open a web browser and go to http://127.0.0.1:12345, and you should see the napari hub API interface. To run the frontend connected to the backend, follow one more step outlined at https://github.com/chanzuckerberg/napari-hub/wiki/Local-Dev-%E2%80%90-hub-frontend-and-backend
For any feature that you develop locally, you can now leverage this local development process to validate that the feature is functioning correctly. This process is especially useful because GitHub Actions runs can take a long time (more than 10 minutes) to complete. As a result, engineers can iterate on features more quickly by following the local development process, and troubleshoot errors more easily without the issues of external dependencies. Once you feel the PR is in a good state, you can proceed to the next stages of the development process.
Make sure that you are in the napari-hub/data-workflows
directory and in a virtual environment (see Set up Python environment for more information on how to set up your virtual environment in the IDE).
To connect the data-workflows server to local dynamo, set the following environment variables:
export STACK_NAME=local
export LOCAL_DYNAMO_HOST=http://localhost:8000
Update the `STACK_NAME` and `LOCAL_DYNAMO_HOST` values if you are not using the defaults.
To run the activity workflows that involve querying Snowflake, you also need the Snowflake credentials in your environment variables. You can reach out to the team to get the credentials.
export SNOWFLAKE_USER=<snowflake_user>
export SNOWFLAKE_PASSWORD=<snowflake_password>
You can trigger a workflow by executing the `handle` method of the handler:
python -c 'from handler import handle; handle({"Records":[]}, None)'
To trigger the plugin metadata fetch, pass the following record: `{"body": {"type":"plugin"}}`. The call to the `handle` method looks like this, since the JSON body has to be an escaped string:
python -c 'from handler import handle; handle({"Records":[{"body": "{\"type\":\"plugin\"}"}]}, None)'
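If you'd rather not escape the JSON body by hand, a small Python sketch like the following builds the same event programmatically (it only constructs the event; it does not import or invoke the real handler):

```python
import json

# json.dumps produces the escaped JSON string form that the handler
# expects in each record's "body" field.
body = json.dumps({"type": "plugin"})
event = {"Records": [{"body": body}]}

print(json.dumps(event))
```

The printed event can then be pasted into a `python -c` invocation like the one above.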
To trigger the plugin aggregation, you can pass the following record:
{"dynamodb": {"Keys": {"name": {"S": "napari-demo"}, "version_type": {"S": "0.0.1"}}}}
Replace "napari-demo" with the relevant plugin name, and "0.0.1" with the corresponding version. This represents the essential parts of a dynamo stream record.
python -c 'from handler import handle; handle({"Records":[{"dynamodb": {"Keys": {"name": {"S": "napari-demo"}, "version_type": {"S": "0.0.1"}}}}]}, None)'
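To avoid typos when swapping in a different plugin or version, you could build the stream record with a small helper function (this helper is illustrative, not part of the repo):

```python
import json


def make_stream_record(name: str, version: str) -> dict:
    """Build the minimal DynamoDB stream record that the handler expects."""
    return {
        "dynamodb": {
            "Keys": {
                "name": {"S": name},
                "version_type": {"S": version},
            }
        }
    }


# Matches the hand-written example above.
event = {"Records": [make_stream_record("napari-demo", "0.0.1")]}
print(json.dumps(event))
```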
Prior to running unit tests, install `pytest` by running the command below in your napari hub environment. As a general note, all of the packages you install should go into the virtual environment you created for the napari hub:
pip install -r [<folder_name>/test-requirements.txt]
The `backend`, `data-workflows`, `napari-hub-commons`, and `plugins` directories contain unit tests that can be run with the commands below. In general, when tests rely on API responses or external data (AWS, Snowflake, etc.), we rely on mocked objects to avoid external dependencies.
python -m pytest [backend/file_path]
python -m pytest [plugins/file_path]
python -m pytest [data-workflows/file_path]
python -m pytest [napari-hub-commons/file_path]
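To illustrate the mocking approach (the function and URL below are hypothetical examples, not actual napari hub code), a test can patch the external call so the assertion runs without touching the network:

```python
import json
import urllib.request
from unittest.mock import patch


def fetch_plugin_names(url: str) -> list:
    """Hypothetical helper that queries an external API for plugin names."""
    with urllib.request.urlopen(url) as response:
        return json.loads(response.read())


# In a real pytest module this would live in a test_* function; patching
# urlopen replaces the network call with a canned response.
with patch("urllib.request.urlopen") as mock_urlopen:
    mock_urlopen.return_value.__enter__.return_value.read.return_value = (
        b'["napari-demo"]'
    )
    names = fetch_plugin_names("https://example.com/plugins")

print(names)  # ['napari-demo']
```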
To run behavioural tests, please follow the instructions specified here.
Run the backend according to the instructions above.
In a separate tab, make sure you are in the `napari-hub/frontend` directory and have followed the instructions for setting up the frontend above. To connect the frontend to the local backend, run the frontend with an `API_URL` pointing to the localhost URL of the backend, like so:
MOCK_SERVER=false API_URL=http://localhost:12345 yarn dev
You can then navigate to http://localhost:8080/ and should see real plugin data loaded.