Unusual behavior of performance metrics #988
👋 Hello @Josephts1, thank you for raising this issue about Ultralytics HUB 🚀! We appreciate the detailed explanation and the included code and images; they are very helpful! Please visit our HUB Docs to explore additional resources that might provide some insights.

Regarding the unusual behavior you've noticed in your performance metrics 🧐, see the reply below.

Finally, to facilitate debugging, you might want to consult the Projects section for tips on managing and analyzing results effectively. This is an automated response 🤖, but rest assured that an Ultralytics engineer will look into your issue shortly. Thank you for your patience as we work to assist you! 🙏
@Josephts1 thank you for providing a detailed explanation and code! Let's break this down to better understand the behavior you're observing with your performance metrics (mAP50 and mAP50-95).

Why the Metrics Behave This Way
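With Frez_layers=24, your freeze list covers model.0. through model.23., and in YOLO11 that range includes the detection head, so essentially the whole network ends up frozen. With (almost) no trainable parameters, the mAP curves mostly reflect shifting BatchNorm statistics and the small validation set rather than genuine learning, which can produce exactly the kind of erratic behavior you describe. As a quick sanity check, here is a minimal sketch (the counting logic is mine, not part of your script) that reuses your model object to report how many parameters are still trainable after the freezing loop:

```python
# Sketch: count parameters that remain trainable after the manual
# freezing loop. With Frez_layers=24 the freeze list matches every
# module from model.0. through model.23. in YOLO11.
trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
total = sum(p.numel() for p in model.parameters())
print(f"trainable parameters: {trainable:,} of {total:,}")
```

Note also that recent ultralytics releases manage freezing inside train() and may reset requires_grad for layers that were not frozen via the freeze argument, so manual freezing before training is not guaranteed to persist.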
Recommendations to Improve the Results

Here are some steps to refine your training process and potentially stabilize the metrics (a code sketch follows this list):

1. Adjust Freezing Strategy
2. Increase Dataset Size and Diversity
3. Adjust Hyperparameters
4. Use Validation Metrics
5. Enable Advanced Augmentations
6. Monitor Training Logs
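A concrete way to act on points 1 and 3, as a minimal sketch (assuming a recent ultralytics release; the freeze and lr0 values are illustrative, not tuned): let train() handle freezing through its freeze argument instead of toggling requires_grad manually, and keep the detection head trainable so it can adapt to your 'fruta' class.

```python
from ultralytics import YOLO

model = YOLO("yolo11s.pt")

# Freeze only the first 10 modules (most of the backbone) instead of
# all 24, leaving the neck and detection head free to learn.
results = model.train(
    data="/content/drive/MyDrive/Proyecto_de_grado/data/data.yaml",
    epochs=100,
    batch=8,
    freeze=10,  # int N freezes modules 0..N-1; a list of indices also works
    lr0=1e-3,   # illustrative starting learning rate; tune for your dataset
    seed=42,
)
```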
Additional Resources

For more insights on training and validation with YOLO, refer to the Train and Val mode pages in the Ultralytics docs.
Summary

Your metrics' unusual behavior is likely due to a combination of dataset size/quality, freezing strategy, and hyperparameter choices. By fine-tuning these aspects, you should see improved stability and performance in your metrics. Let us know how it goes or if you encounter further issues; happy to help! 😊
Search before asking
Question
I recently created a discussion describing the behavior of the performance graphs produced by model.train (see image 1; note the large decrease in the first epochs). I was advised to increase my dataset and review the quality of my labels. I did both: I increased the number of images to 256 and labeled more carefully (see image 2); image 3 shows what the predictions from model.train look like. Even with all this, I do not understand the behavior of the mAP50 and mAP50-95 performance metrics. Could you explain why this happens and, if possible, provide a solution?

I appreciate your help.

I attach the code I used:
```python
from google.colab import drive
drive.mount('/content/drive')

import yaml

# Dataset config for a single class ('fruta')
data = {
    'path': '/content/drive/MyDrive/Proyecto_de_grado/data',
    'train': 'images/train',
    'val': 'images/val',
    'names': {0: 'fruta'},
}
with open('/content/drive/MyDrive/Proyecto_de_grado/data/data.yaml', 'w') as file:
    yaml.dump(data, file, default_flow_style=False, sort_keys=False)
```
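For reference, the yaml.dump call above writes a data.yaml equivalent to:

```yaml
path: /content/drive/MyDrive/Proyecto_de_grado/data
train: images/train
val: images/val
names:
  0: fruta
```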
```python
# In a Colab notebook this is a shell command, hence the leading "!"
!pip install -U ultralytics

from ultralytics import YOLO

model = YOLO('yolo11s.pt')

Frez_layers = 24
freeze = [f"model.{x}." for x in range(Frez_layers)]
print(freeze)

frozen_params = {}
for k, v in model.named_parameters():
    v.requires_grad = True  # start with every layer trainable
    frozen_params[k] = v.data.clone()
    if any(x in k for x in freeze):
        print(f"freezing {k}")
        v.requires_grad = False  # freeze layers whose name matches the list

result = model.train(
    data="/content/drive/MyDrive/Proyecto_de_grado/data/data.yaml",
    epochs=100,
    patience=50,
    batch=8,
    plots=True,
    optimizer="auto",
    lr0=1e-4,
    seed=42,
    project="/content/drive/MyDrive/Proyecto_de_grado/runs/freeze_layers/todo_congelado_11s",
)
```
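If it helps to inspect the numbers behind the plots, here is a short sketch (the checkpoint path assumes the default run name "train" inside the project directory above; adjust it to your actual run folder): load the best weights, run a standalone validation pass, and print the same mAP values the training curves report.

```python
from ultralytics import YOLO

# Assumed path: default run name "train" under the project directory used above.
best = YOLO("/content/drive/MyDrive/Proyecto_de_grado/runs/freeze_layers/todo_congelado_11s/train/weights/best.pt")
metrics = best.val(data="/content/drive/MyDrive/Proyecto_de_grado/data/data.yaml")
print("mAP50:   ", metrics.box.map50)  # mean AP at IoU threshold 0.50
print("mAP50-95:", metrics.box.map)    # mean AP averaged over IoU 0.50:0.95
```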
Additional
No response