
Same values returned from different confusion_matrix #7243

Closed
al3ms opened this issue Nov 16, 2023 Discussed in #7216 · 4 comments

Comments

al3ms commented Nov 16, 2023

Thanks for the great framework, MONAI.

I have a problem when using ConfusionMatrixMetric: I want to calculate F1, precision, and sensitivity, but the function returns the same value for all three.

This is the code:

```python
post_pred = Compose([AsDiscrete(argmax=True, to_onehot=2)])
post_label = Compose([EnsureType(), AsDiscrete(to_onehot=2)])

dice_metric = DiceMetric(include_background=True, reduction="mean")
meanIoU = MeanIoU(reduction="mean")
surfaceDistanceMetric = SurfaceDistanceMetric(include_background=True, symmetric=True, reduction="mean")
hausdorffDistanceMetric = HausdorffDistanceMetric(include_background=True, percentile=95, reduction="mean")

f1 = ConfusionMatrixMetric(metric_name="F1", reduction="mean")
precision = ConfusionMatrixMetric(metric_name="precision", reduction="mean")
sensitivity = ConfusionMatrixMetric(metric_name="sensitivity", reduction="mean")

imgNum = 1

with torch.no_grad():
    for t in test_loader:
        test_volume = t["vol"]
        test_label = t["seg"]
        test_label = test_label != 0
        test_volume, test_label = test_volume.to(device), test_label.to(device)
        test_outputs = model(test_volume)
        test_loss = loss_function(test_outputs, test_label)
        test_outputs2 = torch.sigmoid(test_outputs)
        test_outputs2 = [post_pred(i) for i in decollate_batch(test_outputs)]
        test_label2 = [post_label(i) for i in decollate_batch(test_label)]

        dice_metric(test_outputs2, test_label2)
        surfaceDistanceMetric(test_outputs2, test_label2)
        hausdorffDistanceMetric(test_outputs2, test_label2)
        meanIoU(test_outputs2, test_label2)

        f1(test_outputs2, test_label2)
        precision(test_outputs2, test_label2)
        sensitivity(test_outputs2, test_label2)

        cd = dice_metric.aggregate().item()
        hd = hausdorffDistanceMetric.aggregate().item()
        sd = surfaceDistanceMetric.aggregate().item()
        iou = meanIoU.aggregate().item()

        dice_metric.reset()
        hausdorffDistanceMetric.reset()
        surfaceDistanceMetric.reset()
        meanIoU.reset()

        f1Metric = f1.aggregate()[0]
        f1.reset()
        prec = precision.aggregate()[0]
        precision.reset()
        sen = sensitivity.aggregate()[0]
        sensitivity.reset()

        print(path_test_segmentation[imgNum - 1])
        print("Test loss: ", test_loss.item())
        print("Image ", imgNum)
        print("Test DSC: ", cd)
        print("Test HD: ", hd)
        print("Test SD: ", sd)
        print("Test IoU: ", iou)

        print("fScore:", f1Metric)
        print("precision:", prec)
        print("sensitivity:", sen)

        imgNum = imgNum + 1
```

This is the output for the first two images:

```
/content/drive/MyDrive/data/******.nii.gz
Test loss: 0.2086484134197235
Image 1
Test DSC: 0.7914928197860718
Test HD: 6.062177658081055
Test SD: 1.5217657089233398
Test IoU: 0.7050783038139343
fScore: tensor([0.9975], device='cuda:0')
precision: tensor([0.9975], device='cuda:0')
sensitivity: tensor([0.9975], device='cuda:0')

/content/drive/MyDrive/data/******.nii.gz
Test loss: 0.3882635235786438
Image 2
Test DSC: 0.6110410094261169
Test HD: 12.599105834960938
Test SD: 4.4174981117248535
Test IoU: 0.5602081418037415
fScore: tensor([0.9934], device='cuda:0')
precision: tensor([0.9934], device='cuda:0')
sensitivity: tensor([0.9934], device='cuda:0')
```

KumoLiu (Contributor) commented Nov 16, 2023

Hi @al3ms, it is possible for F1, precision, and sensitivity to be equal if fn == fp in your case.
Also, aggregate() should be called outside the loop; otherwise you are always computing the result for just the last batch.

Thanks!
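The arithmetic behind this point: with precision = tp/(tp+fp), sensitivity = tp/(tp+fn), and F1 = 2·tp/(2·tp+fp+fn), setting fp == fn makes all three expressions reduce to the same value. A minimal numeric check using the standard definitions (not MONAI internals):

```python
# Standard confusion-matrix metric definitions; when fp == fn,
# all three formulas reduce to tp / (tp + fp).
def precision(tp, fp, fn):
    return tp / (tp + fp)

def sensitivity(tp, fp, fn):
    return tp / (tp + fn)

def f1(tp, fp, fn):
    return 2 * tp / (2 * tp + fp + fn)

tp, fp, fn = 100, 7, 7  # fp == fn
print(precision(tp, fp, fn), sensitivity(tp, fp, fn), f1(tp, fp, fn))
# all three print the same value (100/107)
```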

al3ms (Author) commented Nov 16, 2023

Dear KumoLiu,
Thanks for your support.
Actually, I am using batch_size=1; that should not cause any problems, right?

Regarding the results: I tested on five images, and all three metrics are identical for each image, which suggests something is wrong.

```
Image 1
fScore: tensor([0.9975], device='cuda:0')
precision: tensor([0.9975], device='cuda:0')
sensitivity: tensor([0.9975], device='cuda:0')

Image 2
fScore: tensor([0.9934], device='cuda:0')
precision: tensor([0.9934], device='cuda:0')
sensitivity: tensor([0.9934], device='cuda:0')

Image 3
fScore: tensor([0.9956], device='cuda:0')
precision: tensor([0.9956], device='cuda:0')
sensitivity: tensor([0.9956], device='cuda:0')

Image 4
fScore: tensor([0.9931], device='cuda:0')
precision: tensor([0.9931], device='cuda:0')
sensitivity: tensor([0.9931], device='cuda:0')

Image 5
fScore: tensor([0.9972], device='cuda:0')
precision: tensor([0.9972], device='cuda:0')
sensitivity: tensor([0.9972], device='cuda:0')
```

@al3ms al3ms changed the title Same value return different confusion_matrix Same values returned from different confusion_matrix Nov 16, 2023
KumoLiu (Contributor) commented Nov 16, 2023

Hi @al3ms, I think I may have found the problem: for a binary classification task there is no need for one-hot encoding; otherwise, fn and fp will always be equal.
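To see why one-hot forces fn == fp here: with two mutually exclusive channels, every voxel the model gets wrong is simultaneously a false positive in one channel and a false negative in the other, so the counts summed over both channels are always equal. A minimal sketch with toy NumPy arrays (mimicking what `AsDiscrete(to_onehot=2)` produces; not MONAI's actual internals):

```python
import numpy as np

# Toy binary prediction and label, one-hot encoded into 2 channels
# (channel 0 = background, channel 1 = foreground).
pred = np.array([1, 0, 1, 1, 0])
label = np.array([1, 1, 0, 1, 0])

def onehot(x):
    # Stack background (1 - x) and foreground (x) channels.
    return np.stack([1 - x, x])

p, l = onehot(pred), onehot(label)
fp = int(np.sum((p == 1) & (l == 0)))  # summed over both channels
fn = int(np.sum((p == 0) & (l == 1)))
print(fp, fn)  # equal by construction: each error counts once as FP, once as FN
```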

al3ms (Author) commented Nov 16, 2023

I have now removed the one-hot encoding, which makes sense!

This is the result of one image:

```
Test DSC: 0.8751203417778015
Test HD: 12.543912887573242
Test SD: 1.5333940982818604
Test IoU: 0.7779679894447327
fScore: tensor([0.8751], device='cuda:0')
precision: tensor([0.8600], device='cuda:0')
sensitivity: tensor([0.8908], device='cuda:0')
```
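The corrected numbers behave as expected: precision and sensitivity now differ, and the F1 score matches the Dice score (they are the same quantity for binary segmentation). A toy single-channel sketch, with deliberately imbalanced arrays so that fp != fn and the three metrics come apart:

```python
import numpy as np

# Single binary channel, no one-hot: fp and fn are now independent counts.
pred = np.array([1, 1, 1, 0, 0])
label = np.array([1, 0, 0, 0, 0])

tp = int(np.sum((pred == 1) & (label == 1)))  # 1
fp = int(np.sum((pred == 1) & (label == 0)))  # 2
fn = int(np.sum((pred == 0) & (label == 1)))  # 0

precision = tp / (tp + fp)        # 1/3: model over-predicts foreground
sensitivity = tp / (tp + fn)      # 1.0: every true foreground voxel was found
f1 = 2 * tp / (2 * tp + fp + fn)  # 0.5, between the two
print(precision, sensitivity, f1)
```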

Again thanks a lot @KumoLiu for your support!

@KumoLiu KumoLiu closed this as completed Nov 16, 2023