Usage examples can be found in the `examples` folder.
Pretraining models:
- Colorization
- CPC
- MoCo
- SimCLR (a minimal loss sketch follows the reference list below)
- SwAV
- DINO
- BYOL
- SimSiam
- Barlow Twins
References:
- An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale (ViT)
- Momentum Contrast for Unsupervised Visual Representation Learning (MoCo)
- A Simple Framework for Contrastive Learning of Visual Representations (SimCLR)
- Unsupervised Learning of Visual Features by Contrasting Cluster Assignments (SwAV)
- Emerging Properties in Self-Supervised Vision Transformers (DINO)
- Bootstrap your own latent: A new approach to self-supervised Learning (BYOL)
- Barlow Twins: Self-Supervised Learning via Redundancy Reduction (Barlow Twins)
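
As an illustration of the contrastive objective behind SimCLR, below is a minimal sketch of the NT-Xent (normalized temperature-scaled cross-entropy) loss in PyTorch. This is a self-contained sketch, not this repository's API; the function name `nt_xent_loss` and the `temperature` default are illustrative assumptions.

```python
import torch
import torch.nn.functional as F


def nt_xent_loss(z1: torch.Tensor, z2: torch.Tensor,
                 temperature: float = 0.5) -> torch.Tensor:
    """NT-Xent loss over embeddings of two augmented views of one batch."""
    n = z1.size(0)
    # L2-normalize so the dot product below is cosine similarity.
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)   # (2N, D)
    sim = z @ z.t() / temperature                        # (2N, 2N) logits
    # A sample must not count as its own positive or negative.
    mask = torch.eye(2 * n, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(mask, float("-inf"))
    # Row i's positive sits N positions away (i <-> i + N).
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(n)])
    return F.cross_entropy(sim, targets)


# Example with random embeddings; in practice z1 and z2 would be
# encoder+projector outputs for two augmentations of the same images.
z1, z2 = torch.randn(8, 128), torch.randn(8, 128)
loss = nt_xent_loss(z1, z2)
```

In a typical SimCLR setup, each sample's positive is the embedding of its other augmented view, and the remaining 2N - 2 embeddings in the batch serve as negatives.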