Guarin lig 2601 add black and isort (#1093)
* Add black and isort dev dependencies.
* Add `make format` and `make format-check` commands.
* Add CI action running `make format-check`; failing checks do not block merging.
* Format files with black and isort
* Update contribution guidelines
guarin authored Mar 8, 2023
1 parent 24a815a commit 17ece93
Showing 288 changed files with 6,514 additions and 6,398 deletions.
34 changes: 34 additions & 0 deletions .github/workflows/test_code_format.yml
@@ -0,0 +1,34 @@
+name: Code Format Check
+
+on:
+  push:
+  pull_request:
+  workflow_dispatch:
+
+jobs:
+  test:
+    name: Check
+    runs-on: ubuntu-latest
+
+    steps:
+      - name: Checkout Code
+        uses: actions/checkout@v3
+      - name: Hack to get setup-python to work on nektos/act
+        run: |
+          if [ ! -f "/etc/lsb-release" ] ; then
+            echo "DISTRIB_RELEASE=18.04" > /etc/lsb-release
+          fi
+      - name: Set up Python
+        uses: actions/setup-python@v4
+        with:
+          python-version: "3.10"
+      - uses: actions/cache@v2
+        with:
+          path: ${{ env.pythonLocation }}
+          key: cache_v2_${{ env.pythonLocation }}-${{ hashFiles('requirements/**') }}
+      - name: Install Dependencies and lightly
+        run: pip install -e '.[all]'
+      - name: Run Format Check
+        run: |
+          export LIGHTLY_SERVER_LOCATION="localhost:-1"
+          make format-check
10 changes: 7 additions & 3 deletions CONTRIBUTING.md
@@ -89,13 +89,17 @@ Follow these steps to start contributing:

 5. Develop the features on your branch.
 
-   As you work on the features, you should make sure that the test suite
-   passes:
+   As you work on the features, you should make sure that the code is formatted and the
+   test suite passes:
 
    ```bash
-   $ make test
+   $ make format
+   $ make all-checks
    ```
 
+   If you get an error from isort or black, please run `make format` again before
+   running `make all-checks`.
+
 If you're modifying documents under `docs/source`, make sure to validate that
 they can still be built. This check also runs in CI.

14 changes: 14 additions & 0 deletions Makefile
@@ -34,6 +34,17 @@ clean-out:
 clean-tox:
 	rm -fr .tox
 
+# format code with isort and black
+format:
+	isort .
+	black .
+
+# check if code is formatted with isort and black
+format-check:
+	@echo "⚫ Checking code format..."
+	isort --check-only --diff .
+	black --check .
+
 # check style with flake8
 lint: lint-lightly lint-tests
 
@@ -49,6 +60,9 @@ lint-tests:
 test:
 	pytest tests --runslow
 
+# run format checks and tests
+all-checks: format-check test
+
 ## build source and wheel package
 dist: clean
 	python setup.py sdist bdist_wheel
50 changes: 27 additions & 23 deletions docs/source/conf.py
@@ -12,23 +12,24 @@
 #
 import os
 import sys
-sys.path.insert(0, os.path.abspath('../..'))
+
+sys.path.insert(0, os.path.abspath("../.."))
 
 import sphinx_rtd_theme
-import lightly
 
+import lightly
+
 # -- Project information -----------------------------------------------------
 
-project = 'lightly'
-copyright_year = '2020'
+project = "lightly"
+copyright_year = "2020"
 copyright = "Lightly AG"
-website_url = 'https://www.lightly.ai/'
-author = 'Philipp Wirth, Igor Susmelj'
+website_url = "https://www.lightly.ai/"
+author = "Philipp Wirth, Igor Susmelj"
 
 # The full version, including alpha/beta/rc tags
 release = lightly.__version__
-master_doc = 'index'
+master_doc = "index"
 
 
 # -- General configuration ---------------------------------------------------
@@ -44,13 +45,16 @@
     "sphinx_tabs.tabs",
     "sphinx_copybutton",
     "sphinx_design",
-    'sphinx_reredirects'
+    "sphinx_reredirects",
 ]
 
 sphinx_gallery_conf = {
-    'examples_dirs': ['tutorials_source/package', 'tutorials_source/platform'],
-    'gallery_dirs': ['tutorials/package', 'tutorials/platform'], # path to where to save gallery generated output
-    'filename_pattern': '/tutorial_',
+    "examples_dirs": ["tutorials_source/package", "tutorials_source/platform"],
+    "gallery_dirs": [
+        "tutorials/package",
+        "tutorials/platform",
+    ],  # path to where to save gallery generated output
+    "filename_pattern": "/tutorial_",
 }
 
 napoleon_google_docstring = True
@@ -67,7 +71,7 @@
 napoleon_type_aliases = None
 
 # Add any paths that contain templates here, relative to this directory.
-templates_path = ['_templates']
+templates_path = ["_templates"]
 
 # List of patterns, relative to source directory, that match files and
 # directories to ignore when looking for source files.
@@ -80,28 +84,28 @@
 # The theme to use for HTML and HTML Help pages. See the documentation for
 # a list of builtin themes.
 #
-html_theme = 'sphinx_rtd_theme'
+html_theme = "sphinx_rtd_theme"
 
 html_theme_options = {
-    'collapse_navigation': False, # set to false to prevent menu item collapse
-    'logo_only': True
+    "collapse_navigation": False,  # set to false to prevent menu item collapse
+    "logo_only": True,
 }
 
 # Add any paths that contain custom static files (such as style sheets) here,
 # relative to this directory. They are copied after the builtin static files,
 # so a file named "default.css" will overwrite the builtin "default.css".
-html_static_path = ['_static']
+html_static_path = ["_static"]
 
-html_favicon = 'favicon.png'
+html_favicon = "favicon.png"
 
-html_logo = '../logos/lightly_logo_crop_white_text.png'
+html_logo = "../logos/lightly_logo_crop_white_text.png"
 
-# Exposes variables so that they can be used by django
+# Exposes variables so that they can be used by django
 html_context = {
-    'copyright_year': copyright_year,
-    'website_url': website_url,
+    "copyright_year": copyright_year,
+    "website_url": website_url,
 }
 
 redirects = {
-    "docker/advanced/active_learning": "../../docker/getting_started/selection.html"
-}
+    "docker/advanced/active_learning": "../../docker/getting_started/selection.html"
+}
@@ -1,17 +1,20 @@
 from collections import OrderedDict
+
 import torch
+
 import lightly
 
-def load_ckpt(ckpt_path, model_name='resnet-18', model_width=1, map_location='cpu'):
+
+def load_ckpt(ckpt_path, model_name="resnet-18", model_width=1, map_location="cpu"):
     ckpt = torch.load(ckpt_path, map_location=map_location)
 
     state_dict = OrderedDict()
-    for key, value in ckpt['state_dict'].items():
-        if ('projection_head' in key) or ('backbone.7' in key):
-        # drop layers used for projection head
+    for key, value in ckpt["state_dict"].items():
+        if ("projection_head" in key) or ("backbone.7" in key):
+            # drop layers used for projection head
             continue
-        state_dict[key.replace('model.backbone.', '')] = value
+        state_dict[key.replace("model.backbone.", "")] = value
 
     resnet = lightly.models.ResNetGenerator(name=model_name, width=model_width)
     model = torch.nn.Sequential(
         lightly.models.batchnorm.get_norm_layer(3, 0),
@@ -23,28 +26,28 @@ def load_ckpt(ckpt_path, model_name='resnet-18', model_width=1, map_location='cp
         model.load_state_dict(state_dict)
     except RuntimeError:
         raise RuntimeError(
-            f'It looks like you tried loading a checkpoint from a model that is not a {model_name} with width={model_width}! '
-            f'Please set model_name and model_width to the lightly.model.name and lightly.model.width parameters from the '
-            f'configuration you used to run Lightly. The configuration from a Lightly worker run can be found in output_dir/config/config.yaml'
+            f"It looks like you tried loading a checkpoint from a model that is not a {model_name} with width={model_width}! "
+            f"Please set model_name and model_width to the lightly.model.name and lightly.model.width parameters from the "
+            f"configuration you used to run Lightly. The configuration from a Lightly worker run can be found in output_dir/config/config.yaml"
         )
     return model
 
 
 # loading the model
-model = load_ckpt('output_dir/lightly_epoch_X.ckpt')
+model = load_ckpt("output_dir/lightly_epoch_X.ckpt")
 
 
 # example usage
 image_batch = torch.rand(16, 3, 224, 224)
 out = model(image_batch)
-print(out.shape) # prints: torch.Size([16, 512])
+print(out.shape)  # prints: torch.Size([16, 512])
 
 
 # creating a classifier from the pre-trained model
 num_classes = 10
 classifier = torch.nn.Sequential(
-    model,
-    torch.nn.Linear(512, num_classes) # use 2048 instead of 512 for resnet-50
+    model, torch.nn.Linear(512, num_classes)  # use 2048 instead of 512 for resnet-50
 )
 
 out = classifier(image_batch)
-print(out.shape) # prints: torch.Size(16, 10)
+print(out.shape)  # prints: torch.Size(16, 10)
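As an aside, the checkpoint-cleaning loop in `load_ckpt` can be exercised on its own with stand-in keys. Only the filtering and renaming logic below comes from the snippet above; the checkpoint key names are made up for illustration:

```python
from collections import OrderedDict


def strip_projection_head(state_dict):
    """Drop projection-head layers and strip the 'model.backbone.' prefix,
    mirroring the filtering loop in load_ckpt."""
    cleaned = OrderedDict()
    for key, value in state_dict.items():
        if ("projection_head" in key) or ("backbone.7" in key):
            # layers only used during pre-training are skipped
            continue
        cleaned[key.replace("model.backbone.", "")] = value
    return cleaned


# Hypothetical checkpoint keys, not taken from a real Lightly checkpoint.
ckpt_keys = OrderedDict(
    [
        ("model.backbone.0.weight", "w0"),
        ("model.backbone.7.weight", "w7"),  # dropped: backbone.7
        ("projection_head.0.weight", "ph"),  # dropped: projection head
    ]
)
print(list(strip_projection_head(ckpt_keys)))  # prints: ['0.weight']
```

The resulting keys match a bare backbone, which is why the cleaned dict loads directly into the `ResNetGenerator`-based `Sequential` model.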
@@ -1,29 +1,29 @@
 import json
 
 import lightly
 from lightly.openapi_generated.swagger_client.models.dataset_type import DatasetType
-from lightly.openapi_generated.swagger_client.models.datasource_purpose import DatasourcePurpose
+from lightly.openapi_generated.swagger_client.models.datasource_purpose import (
+    DatasourcePurpose,
+)
 
 # Create the Lightly client to connect to the API.
 client = lightly.api.ApiWorkflowClient(token="YOUR_TOKEN")
 
 # Create a new dataset on the Lightly Platform.
-client.create_dataset('pedestrian-videos-datapool',
-                      dataset_type=DatasetType.VIDEOS)
+client.create_dataset("pedestrian-videos-datapool", dataset_type=DatasetType.VIDEOS)
 
 # Azure Blob Storage
 # Input bucket
 client.set_azure_config(
-    container_name='my-container/input/',
-    account_name='ACCOUNT-NAME',
-    sas_token='SAS-TOKEN',
-    purpose=DatasourcePurpose.INPUT
+    container_name="my-container/input/",
+    account_name="ACCOUNT-NAME",
+    sas_token="SAS-TOKEN",
+    purpose=DatasourcePurpose.INPUT,
 )
 # Output bucket
 client.set_azure_config(
-    container_name='my-container/output/',
-    account_name='ACCOUNT-NAME',
-    sas_token='SAS-TOKEN',
-    purpose=DatasourcePurpose.LIGHTLY
+    container_name="my-container/output/",
+    account_name="ACCOUNT-NAME",
+    sas_token="SAS-TOKEN",
+    purpose=DatasourcePurpose.LIGHTLY,
 )

@@ -1,29 +1,29 @@
 import json
 
 import lightly
 from lightly.openapi_generated.swagger_client.models.dataset_type import DatasetType
-from lightly.openapi_generated.swagger_client.models.datasource_purpose import DatasourcePurpose
+from lightly.openapi_generated.swagger_client.models.datasource_purpose import (
+    DatasourcePurpose,
+)
 
 # Create the Lightly client to connect to the API.
 client = lightly.api.ApiWorkflowClient(token="YOUR_TOKEN")
 
 # Create a new dataset on the Lightly Platform.
-client.create_dataset('pedestrian-videos-datapool',
-                      dataset_type=DatasetType.VIDEOS)
+client.create_dataset("pedestrian-videos-datapool", dataset_type=DatasetType.VIDEOS)
 
 # Google Cloud Storage
 # Input bucket
 client.set_gcs_config(
     resource_path="gs://bucket/input/",
     project_id="PROJECT-ID",
-    credentials=json.dumps(json.load(open('credentials_read.json'))),
-    purpose=DatasourcePurpose.INPUT
+    credentials=json.dumps(json.load(open("credentials_read.json"))),
+    purpose=DatasourcePurpose.INPUT,
 )
 # Output bucket
 client.set_gcs_config(
     resource_path="gs://bucket/output/",
     project_id="PROJECT-ID",
-    credentials=json.dumps(json.load(open('credentials_write.json'))),
-    purpose=DatasourcePurpose.LIGHTLY
+    credentials=json.dumps(json.load(open("credentials_write.json"))),
+    purpose=DatasourcePurpose.LIGHTLY,
 )
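The `credentials=json.dumps(json.load(open(...)))` pattern in the snippet above simply reads a service-account file and re-serializes it as a JSON string. A self-contained sketch of that round-trip, with a throwaway temp file standing in for `credentials_read.json` and made-up contents:

```python
import json
import os
import tempfile

# Write a fake service-account file to stand in for the real credentials.
fake_creds = {"type": "service_account", "project_id": "PROJECT-ID"}
with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as f:
    json.dump(fake_creds, f)
    path = f.name

# Same round-trip as in the snippet: load the file, pass it on as a string.
credentials = json.dumps(json.load(open(path)))
print(type(credentials).__name__)  # prints: str

os.unlink(path)
```

The API client expects the credentials as a string rather than a dict, which is why the file is parsed and immediately dumped again.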

@@ -1,31 +1,31 @@
 import json
 
 import lightly
 from lightly.openapi_generated.swagger_client.models.dataset_type import DatasetType
-from lightly.openapi_generated.swagger_client.models.datasource_purpose import DatasourcePurpose
+from lightly.openapi_generated.swagger_client.models.datasource_purpose import (
+    DatasourcePurpose,
+)
 
 # Create the Lightly client to connect to the API.
 client = lightly.api.ApiWorkflowClient(token="YOUR_TOKEN")
 
 # Create a new dataset on the Lightly Platform.
-client.create_dataset('pedestrian-videos-datapool',
-                      dataset_type=DatasetType.VIDEOS)
+client.create_dataset("pedestrian-videos-datapool", dataset_type=DatasetType.VIDEOS)
 
-# AWS S3
+# AWS S3
 # Input bucket
 client.set_s3_config(
     resource_path="s3://bucket/input/",
-    region='eu-central-1',
-    access_key='S3-ACCESS-KEY',
-    secret_access_key='S3-SECRET-ACCESS-KEY',
-    purpose=DatasourcePurpose.INPUT
+    region="eu-central-1",
+    access_key="S3-ACCESS-KEY",
+    secret_access_key="S3-SECRET-ACCESS-KEY",
+    purpose=DatasourcePurpose.INPUT,
 )
 # Output bucket
 client.set_s3_config(
     resource_path="s3://bucket/output/",
-    region='eu-central-1',
-    access_key='S3-ACCESS-KEY',
-    secret_access_key='S3-SECRET-ACCESS-KEY',
-    purpose=DatasourcePurpose.LIGHTLY
+    region="eu-central-1",
+    access_key="S3-ACCESS-KEY",
+    secret_access_key="S3-SECRET-ACCESS-KEY",
+    purpose=DatasourcePurpose.LIGHTLY,
 )
