Hi,
I encountered and resolved several issues while working with the EfficientAD implementation. I'd like to share these fixes to help improve the codebase.
Issues Found:
ImageNetDataset return format mismatch causing tensor dimension errors
Inconsistent variable naming (train_iter vs iteration) in distillation training
Data loading format inconsistency between teacher and student training
Changes Made to Resolve:
Modified ImageNetDataset to return consistent dictionary format:
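A minimal sketch of that change, assuming a standard PyTorch `Dataset` over a list of image paths (the constructor arguments and transform handling here are illustrative, not the repo's exact code):

```python
from PIL import Image
from torch.utils.data import Dataset

class ImageNetDataset(Dataset):
    """Returns each sample as a dict so all loaders share one format."""

    def __init__(self, image_paths, transform=None):
        self.image_paths = image_paths
        self.transform = transform

    def __len__(self):
        return len(self.image_paths)

    def __getitem__(self, idx):
        image = Image.open(self.image_paths[idx]).convert('RGB')
        if self.transform is not None:
            image = self.transform(image)
        # Dict instead of a bare tensor/tuple, matching the
        # {'image': ...} convention used elsewhere in the pipeline.
        return {'image': image}
```

With this in place, every consumer can access `sample['image']` regardless of which dataset produced the batch.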
Fixed variable naming in distillation_training.py:
```python
# Changed self.train_iter to self.iteration for consistency
scheduler = torch.optim.lr_scheduler.StepLR(
    optimizer, step_size=int(0.95 * self.iteration), gamma=0.1)
```
Updated data loading to handle dictionary format:

```python
batch_sample = next(dataloader)['image'].cuda()
```
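Calling `next()` directly assumes the dataloader is already an iterator, e.g. wrapped in an infinite-loader generator as is common in distillation training loops. A minimal sketch of such a wrapper (the function name is illustrative):

```python
from torch.utils.data import DataLoader

def infinite_dataloader(loader):
    """Yield batches forever, restarting the DataLoader at each epoch end.

    Restarting (rather than itertools.cycle) preserves per-epoch
    reshuffling when the DataLoader was built with shuffle=True.
    """
    while True:
        for batch in loader:
            yield batch
```

A loop can then pull dict-format batches indefinitely with `next(dataloader)['image']`.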
In the last step, replace the following:

```python
s_imagenet_out = student(image_p[0].cuda())
```

with:

```python
s_imagenet_out = student(image_p['image'].cuda())
```
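Put together, the corrected forward pass looks roughly like this (a sketch: the helper name and `device` parameter are mine, added so the pattern runs on CPU too, and are not part of the repo's code):

```python
import torch

def student_forward(student, batch, device='cpu'):
    """Forward a dict-format batch through the student network.

    Dict access batch['image'] replaces the old positional
    indexing batch[0], which broke once the dataset started
    returning dictionaries.
    """
    images = batch['image'].to(device)
    return student(images)
```

On a GPU machine, `device='cuda'` reproduces the `.cuda()` call from the issue.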
These changes allow the training pipeline to run successfully while keeping the code consistent. I've tested the modifications with both the distillation training and student training phases.