
Question about Training Results Discrepancy #13

Open

Halleyawoo opened this issue Jan 29, 2025 · 0 comments

Dear Author,

I followed your setup exactly and preprocessed the LA dataset as instructed. Then, I ran the following command:

bash train.sh -c 0 -e diffusion -t la_0.05 -i 'experiment1' -l 0.01 -w 10 -n 300 -d true

However, the final results I obtained are:

Final Dice of each class: [78.8]  
Final Jaccard of each class: [70.4]  
Final HD95 of each class: [12.5]  
Final ASD of each class: [2.3]  

Final Avg Dice: 78.77±0.0  
Final Avg Jaccard: 70.41±0.0  
Final Avg HD95: 12.54±0.0  
Final Avg ASD: 2.27±0.0  
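
In case it is relevant, this is how I understand the four metrics above are computed. A minimal sketch, assuming the evaluation is based on medpy.metric.binary (an assumption on my part; the repo's actual evaluation code may differ):

```python
# Minimal sketch of the four reported metrics, assuming evaluation is based
# on medpy.metric.binary (an assumption; this repo's eval code may differ).
import numpy as np
from medpy.metric.binary import dc, jc, hd95, asd

def evaluate_case(pred, gt, spacing=(1.0, 1.0, 1.0)):
    """Dice/Jaccard in percent, HD95/ASD in mm, for one binary 3D volume."""
    pred = np.asarray(pred, dtype=bool)
    gt = np.asarray(gt, dtype=bool)
    return {
        "dice": dc(pred, gt) * 100,
        "jaccard": jc(pred, gt) * 100,
        "hd95": hd95(pred, gt, voxelspacing=spacing),
        "asd": asd(pred, gt, voxelspacing=spacing),
    }
```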

There is a significant gap between my results and those reported in the paper. According to the training log, early stopping was triggered at epoch 168, and the best validation Dice (0.9532 at epoch 118) was already quite high; nevertheless, the final evaluation metrics above fall well short of the paper's numbers.

Could you please help me understand what might be causing this discrepancy? Did I make any mistakes in the setup?

For reference, I used an A100 GPU and got the following training log:

Evaluation epoch 168, dice: 0.9429011344909668, [0.94290113]
Best eval dice is 0.9532027840614319 in epoch 118
Early stop.
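
If I read the log correctly, the stopping rule looks patience-based: the best Dice was at epoch 118 and training stopped at epoch 168, i.e. about 50 evaluation rounds without improvement. This is only my guess at the logic; the actual criterion in the training script may differ:

```python
# My guess at the early-stopping rule, inferred from the gap between the
# best epoch (118) and the stop epoch (168); patience=50 is an assumption.
def train_with_early_stopping(validate, max_epochs=300, patience=50):
    """`validate(epoch)` returns the eval Dice; all names here are hypothetical."""
    best_dice, best_epoch = 0.0, 0
    for epoch in range(max_epochs):
        dice = validate(epoch)
        if dice > best_dice:
            best_dice, best_epoch = dice, epoch  # would also save best_model.pth here
        elif epoch - best_epoch >= patience:
            print(f"Best eval dice is {best_dice} in epoch {best_epoch}")
            print("Early stop.")
            break
```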

The checkpoint was loaded from:

./logs/Exp_SSL_LA_0.05/diffusionexperiment1/fold1/
load checkpoint from ./logs/Exp_SSL_LA_0.05/diffusionexperiment1/fold1/ckpts/best_model.pth
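
(For completeness, this is how I assume the best checkpoint is restored for evaluation: a standard PyTorch pattern, not necessarily this repo's exact code.)

```python
# How I assume best_model.pth is restored for evaluation; the checkpoint
# key layout ("state_dict" vs. a raw dict) is a guess and may differ here.
import torch

def load_best_checkpoint(model, ckpt_path):
    checkpoint = torch.load(ckpt_path, map_location="cpu")
    # Some scripts save the raw state_dict, others wrap it; handle both.
    state_dict = checkpoint.get("state_dict", checkpoint)
    model.load_state_dict(state_dict)
    model.eval()
    return model
```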

I would really appreciate any guidance you could provide!

Best regards
