Implements SSL-EY #378

Open

wants to merge 9 commits into main

Changes from 2 commits
110 changes: 57 additions & 53 deletions README.md

Large diffs are not rendered by default.

5 changes: 5 additions & 0 deletions docs/source/solo/losses/ssley.rst
@@ -0,0 +1,5 @@
SSL-EY
-------

.. autofunction:: solo.losses.ssley.ssley_loss_func
   :noindex:
26 changes: 26 additions & 0 deletions docs/source/solo/methods/ssley.rst
@@ -0,0 +1,26 @@
SSL-EY
=======


.. automethod:: solo.methods.ssley.SSLEY.__init__
   :noindex:

add_model_specific_args
~~~~~~~~~~~~~~~~~~~~~~~
.. automethod:: solo.methods.ssley.SSLEY.add_model_specific_args
   :noindex:

learnable_params
~~~~~~~~~~~~~~~~
.. autoattribute:: solo.methods.ssley.SSLEY.learnable_params
   :noindex:

forward
~~~~~~~
.. automethod:: solo.methods.ssley.SSLEY.forward
   :noindex:

training_step
~~~~~~~~~~~~~
.. automethod:: solo.methods.ssley.SSLEY.training_step
   :noindex:
1 change: 1 addition & 0 deletions docs/source/start/available.rst
@@ -11,6 +11,7 @@ Methods available
* `SwAV <https://arxiv.org/abs/2006.09882>`_
* `VICReg <https://arxiv.org/abs/2105.04906>`_
* `W-MSE <https://arxiv.org/abs/2007.06346>`_
* `SSL-EY <https://arxiv.org/abs/2310.01012>`_

************
Extra flavor
45 changes: 45 additions & 0 deletions scripts/linear/imagenet-100/ssley.yaml
@@ -0,0 +1,45 @@
defaults:
  - _self_
  - wandb: private.yaml
  - override hydra/hydra_logging: disabled
  - override hydra/job_logging: disabled

# disable hydra outputs
hydra:
  output_subdir: null
  run:
    dir: .

name: "ssley-imagenet100-linear"
pretrained_feature_extractor: null # set to the pretrained SSL-EY checkpoint path
backbone:
  name: "resnet18"
pretrain_method: "ssley"
data:
  dataset: imagenet100
  train_path: "./datasets/imagenet-100/train"
  val_path: "./datasets/imagenet-100/val"
  format: "dali"
  num_workers: 4
optimizer:
  name: "sgd"
  batch_size: 256
  lr: 0.3
  weight_decay: 0
scheduler:
  name: "step"
  lr_decay_steps: [60, 80]
checkpoint:
  enabled: True
  dir: "trained_models"
  frequency: 1
auto_resume:
  enabled: True

# overwrite PL stuff
max_epochs: 100
devices: [0]
sync_batchnorm: True
accelerator: "gpu"
strategy: "ddp"
precision: 16
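
Assuming solo-learn's standard Hydra entry points, this config would be consumed by main_linear.py (e.g. python main_linear.py --config-path scripts/linear/imagenet-100 --config-name ssley), with pretrained_feature_extractor overridden to the checkpoint written during pretraining.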
80 changes: 80 additions & 0 deletions scripts/pretrain/cifar/ssley.yaml
@@ -0,0 +1,80 @@
defaults:
  - _self_
  - wandb: private.yaml
  - override hydra/hydra_logging: disabled
  - override hydra/job_logging: disabled

# disable hydra outputs
hydra:
  output_subdir: null
  run:
    dir: .

name: "ssley-cifar10" # change here for cifar100
method: "ssley"
backbone:
  name: "resnet18"
method_kwargs:
  proj_hidden_dim: 2048
  proj_output_dim: 2048
data:
  dataset: cifar10 # change here for cifar100
  train_path: "./datasets"
  val_path: "./datasets" # was pointing at imagenet100; CIFAR uses the same root as train_path
  format: "image_folder"
  num_workers: 4
augmentations:
  - rrc:
      enabled: True
      crop_min_scale: 0.2
      crop_max_scale: 1.0
    color_jitter:
      enabled: True
      brightness: 0.4
      contrast: 0.4
      saturation: 0.2
      hue: 0.1
      prob: 0.8
    grayscale:
      enabled: True
      prob: 0.2
    gaussian_blur:
      enabled: False
      prob: 0.0
    solarization:
      enabled: True
      prob: 0.1
    equalization:
      enabled: False
      prob: 0.0
    horizontal_flip:
      enabled: True
      prob: 0.5
    crop_size: 32
    num_crops: 2
optimizer:
  name: "lars"
  batch_size: 256
  lr: 0.3
  classifier_lr: 0.1
  weight_decay: 1e-4
  kwargs:
    clip_lr: True
    eta: 0.02
    exclude_bias_n_norm: True
scheduler:
  name: "warmup_cosine"
checkpoint:
  enabled: True
  dir: "trained_models"
  frequency: 1
auto_resume:
  enabled: True

# overwrite PL stuff
max_epochs: 1000
devices: [0]
sync_batchnorm: True
accelerator: "gpu"
strategy: "ddp"
precision: 16-mixed
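
Assuming the repo's usual Hydra launcher, pretraining with this config would look like python main_pretrain.py --config-path scripts/pretrain/cifar --config-name ssley (entry-point name taken from solo-learn's existing scripts).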
81 changes: 81 additions & 0 deletions scripts/pretrain/imagenet-100/ssley.yaml
@@ -0,0 +1,81 @@
defaults:
  - _self_
  - augmentations: ssley.yaml
Owner: You already have the augmentations below; either remove this line or move the augmentations to a separate file (both are fine).

Author: Good spot. A heads up that I think this is also the case for scripts/pretrain/imagenet-100/vicreg.yaml. This method should plug in wherever VICReg is used; in many ways the benefit of the loss function is that it has a lot of VICReg's nice properties without needing to tune the three loss-term weights.

Owner: No idea why VICReg is like that, haha. I'm fine with having a separate file for the augmentations or having it merged with the main file. Should probably also fix VICReg after that.

  - wandb: private.yaml
  - override hydra/hydra_logging: disabled
  - override hydra/job_logging: disabled

# disable hydra outputs
hydra:
  output_subdir: null
  run:
    dir: .

name: "ssley-imagenet100"
method: "ssley"
backbone:
  name: "resnet18"
method_kwargs:
  proj_hidden_dim: 2048
  proj_output_dim: 2048
data:
  dataset: imagenet100
  train_path: "datasets/imagenet100/train"
  val_path: "datasets/imagenet100/val"
  format: "dali"
  num_workers: 4
augmentations:
  - rrc:
      enabled: True
      crop_min_scale: 0.2
      crop_max_scale: 1.0
    color_jitter:
      enabled: True
      brightness: 0.4
      contrast: 0.4
      saturation: 0.2
      hue: 0.1
      prob: 0.8
    grayscale:
      enabled: True
      prob: 0.2
    gaussian_blur:
      enabled: True
      prob: 0.5
    solarization:
      enabled: True
      prob: 0.1
    equalization:
      enabled: False
      prob: 0.0
    horizontal_flip:
      enabled: True
      prob: 0.5
    crop_size: 224
    num_crops: 2
optimizer:
  name: "lars"
  batch_size: 128
  lr: 0.3
  classifier_lr: 0.1
  weight_decay: 1e-4
  kwargs:
    clip_lr: True
    eta: 0.02
    exclude_bias_n_norm: True
scheduler:
  name: "warmup_cosine"
checkpoint:
  enabled: True
  dir: "trained_models"
  frequency: 1
auto_resume:
  enabled: True

# overwrite PL stuff
max_epochs: 400
devices: [0, 1]
sync_batchnorm: True
accelerator: "gpu"
strategy: "ddp"
precision: 16-mixed
2 changes: 2 additions & 0 deletions solo/losses/__init__.py
@@ -32,6 +32,7 @@
from solo.losses.vibcreg import vibcreg_loss_func
from solo.losses.vicreg import vicreg_loss_func
from solo.losses.wmse import wmse_loss_func
from solo.losses.ssley import ssley_loss_func

__all__ = [
"barlow_loss_func",
@@ -49,4 +50,5 @@
"vibcreg_loss_func",
"vicreg_loss_func",
"wmse_loss_func",
"ssley_loss_func"
]
53 changes: 53 additions & 0 deletions solo/losses/ssley.py
@@ -0,0 +1,53 @@
# Copyright 2023 solo-learn development team.

# Permission is hereby granted, free of charge, to any person obtaining a copy of
# this software and associated documentation files (the "Software"), to deal in
# the Software without restriction, including without limitation the rights to use,
# copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the
# Software, and to permit persons to whom the Software is furnished to do so,
# subject to the following conditions:

# The above copyright notice and this permission notice shall be included in all copies
# or substantial portions of the Software.

# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED,
# INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR
# PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE
# FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR
# OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
# DEALINGS IN THE SOFTWARE.

import torch
import torch.distributed as dist


def ssley_loss_func(
    z1: torch.Tensor,
    z2: torch.Tensor,
) -> torch.Tensor:
    """Computes SSL-EY's loss given batch of projected features z1 from view 1 and
    projected features z2 from view 2.

    Args:
        z1 (torch.Tensor): NxD Tensor containing projected features from view 1.
        z2 (torch.Tensor): NxD Tensor containing projected features from view 2.

    Returns:
        torch.Tensor: SSL-EY loss.
    """

    D = z1.size(1)

    # covariance of the concatenated views, shape (2D, 2D)
    B = torch.cov(torch.hstack((z1, z2)).T)

    # average the covariance across workers when training is distributed
    if dist.is_available() and dist.is_initialized():
        dist.all_reduce(B)
        world_size = dist.get_world_size()
        B /= world_size

    # A: symmetrized cross-covariance between the views;
    # B: sum of the per-view auto-covariances
    A = B[:D, D:] + B[D:, :D]
    B = B[:D, :D] + B[D:, D:]

    # negated Eckart-Young objective: maximize tr(2A - B @ B)
    return -torch.trace(2 * A - B @ B)
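
As implemented, A holds the symmetrized cross-covariance between the two views and B the summed per-view auto-covariances, so the function returns the negated Eckart-Young-style objective -tr(2A - B^2). A minimal single-process usage sketch (shapes are illustrative only):

import torch

from solo.losses.ssley import ssley_loss_func

# two batches of projected features, one per augmented view
z1 = torch.randn(256, 2048, requires_grad=True)  # N x D
z2 = torch.randn(256, 2048, requires_grad=True)

loss = ssley_loss_func(z1, z2)  # scalar tensor; lower is better
loss.backward()  # gradients flow back through both views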
3 changes: 3 additions & 0 deletions solo/methods/__init__.py
@@ -37,6 +37,7 @@
from solo.methods.vibcreg import VIbCReg
from solo.methods.vicreg import VICReg
from solo.methods.wmse import WMSE
from solo.methods.ssley import SSLEY

METHODS = {
# base classes
@@ -61,6 +62,7 @@
"vibcreg": VIbCReg,
"vicreg": VICReg,
"wmse": WMSE,
"ssley": SSLEY,
}
__all__ = [
"BarlowTwins",
@@ -83,4 +85,5 @@
"VIbCReg",
"VICReg",
"WMSE",
"SSLEY",
]
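
With the registration above, "ssley" resolves through METHODS like any other method. A hedged sketch of the lookup a pretraining entry point would perform (cfg stands for a hypothetical Hydra/OmegaConf config with method: "ssley"; the constructor signature is assumed from solo-learn's other methods):

from solo.methods import METHODS, SSLEY

assert METHODS["ssley"] is SSLEY

model_cls = METHODS[cfg.method]  # cfg.method == "ssley"
model = model_cls(cfg)           # a LightningModule ready for Trainer.fit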