Question about training ViT models #512
Replies: 2 comments
-
Hi @moonlitt, you can follow the timm training scripts to train a ViT from scratch: https://rwightman.github.io/pytorch-image-models/scripts/. There is also documentation on ViT here: https://rwightman.github.io/pytorch-image-models/models/vision-transformer/. A simple command to train a ViT model is `python train.py <path_to_imagenet_folder> --model vit_base_patch16_224`. You can refer to the training scripts documentation for more options; for example, to train with AutoAugment: `python train.py <path_to_imagenet_folder> --model vit_base_patch16_224 --aa v0`.
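If you prefer to drive things from Python rather than the `train.py` CLI, a minimal sketch along these lines should work. This is an illustration, not the repo's actual training script: the dataset path, batch size, and optimizer settings below are placeholder assumptions.

```python
import torch
import timm
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

# ViT-B/16 with randomly initialized weights (no pretraining), 1000 ImageNet classes.
model = timm.create_model('vit_base_patch16_224', pretrained=False, num_classes=1000).cuda()

# Basic ImageNet-style augmentation; '<path_to_imagenet_folder>/train' is a placeholder path.
train_tf = transforms.Compose([
    transforms.RandomResizedCrop(224),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])
train_set = datasets.ImageFolder('<path_to_imagenet_folder>/train', transform=train_tf)
train_loader = DataLoader(train_set, batch_size=64, shuffle=True, num_workers=8, pin_memory=True)

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3, weight_decay=0.05)
criterion = torch.nn.CrossEntropyLoss().cuda()

model.train()
for images, targets in train_loader:
    images = images.cuda(non_blocking=True)
    targets = targets.cuda(non_blocking=True)
    loss = criterion(model(images), targets)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

In practice the `train.py` script adds a lot on top of this (mixed precision, EMA, LR scheduling, stronger augmentation), which is why the documented commands above are the recommended route.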
-
@moonlitt, in addition to @amaarora's comment, you can search past discussions/issues in this repo to find some hparams. However, training from scratch on ImageNet-1k isn't trivial. There have been better results than my original hparams; see the FB DeiT repo, which is based on the models and some of the code here and has a better setup for training from scratch (even without the distillation part): https://github.com/facebookresearch/deit
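For concreteness, here is a rough sketch of the optimizer/schedule side of a DeiT-style setup. The numbers follow the DeiT paper's headline recipe (AdamW, lr scaled as 5e-4 × batch_size / 512, weight decay 0.05, 300 epochs with 5 warmup epochs, cosine decay) and are assumptions here rather than the exact hparams from this repo; the heavy augmentation the recipe relies on (RandAugment, mixup/cutmix, random erasing, stochastic depth, EMA) is omitted. Check the DeiT repo for the authoritative settings.

```python
import torch
import timm
from timm.scheduler import CosineLRScheduler

batch_size = 1024
epochs = 300
base_lr = 5e-4 * batch_size / 512  # DeiT-style linear lr scaling

model = timm.create_model('vit_base_patch16_224', pretrained=False, num_classes=1000).cuda()
optimizer = torch.optim.AdamW(model.parameters(), lr=base_lr, weight_decay=0.05)
scheduler = CosineLRScheduler(
    optimizer,
    t_initial=epochs,    # cosine decay over the full schedule
    warmup_t=5,          # 5 warmup epochs
    warmup_lr_init=1e-6,
    lr_min=1e-5,
)

for epoch in range(epochs):
    scheduler.step(epoch)  # timm schedulers are stepped per epoch
    # ... run one training epoch as in the earlier sketch ...
```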
-
Thanks for your time!
I want to train a ViT model on ImageNet (or ImageNet-21k) without pretraining. Could you please release the script for training a ViT model (or just the hyperparameters)?