diff --git a/Topics/Tech_Stacks/fine-tuning-ml/fine-tuning.md b/Topics/Tech_Stacks/fine-tuning-ml/fine-tuning.md
index b8dcd1000..4494a9f93 100644
--- a/Topics/Tech_Stacks/fine-tuning-ml/fine-tuning.md
+++ b/Topics/Tech_Stacks/fine-tuning-ml/fine-tuning.md
@@ -71,7 +71,7 @@ There is a sidebar containing code that can be useful for learning how others ha
 ## Setting up Training
 
 Before training, it is important to note that if a GPU is available, using it would speed up training time significantly.
-![GPU testing](./Screenshot%202024-03-17%20225750.png)
+![GPU testing](./DetectingGPU.png)
 
 To send a PyTorch tensor or PyTorch Module to the GPU, use the method ".to(device)".
 ![Freezing and Overwriting](./OverWritingLastWeights.png)
@@ -92,4 +92,4 @@ In this document, we have explored the process of fine-tuning a pre-trained mach
 
 We delved into the specifics of setting up the training process, including freezing the weights of the pre-trained model to preserve its feature extraction capabilities, and replacing the final layer of the model to adapt it to the new task.
 
-Having learned what model fine-tuning is and its applications, I hope any reader of this can add this useful skill to their arsenal of software engineering/machine learning skills.
\ No newline at end of file
+Having learned what model fine-tuning is and its applications, I hope any reader of this can add this useful skill to their arsenal of software engineering/machine learning skills.
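
For readers following the diff's context without the referenced screenshots, a minimal sketch of the steps it describes — detecting a GPU, freezing the pre-trained weights, and overwriting the last layer — might look like the following. The tiny `nn.Sequential` here is a hypothetical stand-in for a real pre-trained model, chosen only so the example is self-contained:

```python
import torch
import torch.nn as nn

# Prefer the GPU when one is available; fall back to the CPU otherwise.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Toy stand-in for a pre-trained backbone with a 10-class output head.
model = nn.Sequential(
    nn.Linear(32, 16),
    nn.ReLU(),
    nn.Linear(16, 10),
)

# Freeze every pre-trained weight so only the new head will train.
for param in model.parameters():
    param.requires_grad = False

# Overwrite the final layer with a fresh one sized for the new task
# (say, 3 classes); newly created layers default to requires_grad=True.
model[2] = nn.Linear(16, 3)

# Send the whole module to the chosen device with .to(device),
# as the tutorial's context lines describe.
model = model.to(device)

# Only the replaced head's parameters remain trainable.
trainable = [name for name, p in model.named_parameters() if p.requires_grad]
```

After this, an optimizer built from only the trainable parameters (e.g. `torch.optim.Adam(p for p in model.parameters() if p.requires_grad)`) will update just the new head, which is the fine-tuning setup the tutorial walks through.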