"dataloader.test.batch_size" > 1 when evaluation is not supported! #134
Comments
We will check this problem later~
We will check this problem this weekend, sorry for the long wait.
I was wondering which model you are using in this situation when setting "dataloader.test.batch_size" > 1~ @wanghao9610
You can try this solution if you're in a hurry by modifying the model like:

```python
if self.training:
    # Build a padding mask: 1 = padded pixel, 0 = valid pixel.
    batch_size, _, H, W = images.tensor.shape
    img_masks = images.tensor.new_ones(batch_size, H, W)
    for img_id in range(batch_size):
        # During training, take the true image size from the ground-truth instances.
        img_h, img_w = batched_inputs[img_id]["instances"].image_size
        img_masks[img_id, :img_h, :img_w] = 0
else:
    batch_size, _, H, W = images.tensor.shape
    img_masks = images.tensor.new_ones(batch_size, H, W)
    for img_id in range(batch_size):
        # During evaluation, take the true image size from the ImageList itself.
        img_h, img_w = images.image_sizes[img_id]
        # Note: unlike the training branch, this marks only up to (img_h - 1, img_w - 1)
        # as valid, leaving the last real row/column masked as padding.
        img_masks[img_id, :(img_h - 1), :(img_w - 1)] = 0
```
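For context on why this mask matters: with a test batch size greater than 1, detectron2's ImageList zero-pads every image in the batch to a shared (H, W), and img_masks is what tells the model which pixels are real versus padding. Below is a minimal, self-contained sketch of that padding behavior (an illustration, not code from the thread; it assumes only torch and detectron2 are installed):

```python
# Sketch: how detectron2 batches images of different sizes.
import torch
from detectron2.structures import ImageList

imgs = [torch.rand(3, 480, 640), torch.rand(3, 600, 800)]
batch = ImageList.from_tensors(imgs, size_divisibility=32)

print(batch.tensor.shape)  # torch.Size([2, 3, 608, 800]): both images padded to one size
print(batch.image_sizes)   # [(480, 640), (600, 800)]: the true per-image (h, w) sizes
```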
Thanks for your quick reply~ I will try your solution.
@rentainhe Hello, I have tried the solution you provided, but I still get the same much lower mAP. Have you tried it successfully?
I was wondering which model you're testing in this situation @wanghao9610. I tried this on DINO-R50 with "dataloader.test.batch_size" > 1.
I ran this on DN-DETR-R50 with the weights you provided (dn_detr_r50_50ep.pth), setting "dataloader.test.batch_size=16" on 2 GPUs, and got 0.59 AP. My testing command is as below:
I will try to reproduce this issue later~ |
Same problem. I also got <1% AP with DINO-R50 and dataloader.test.batch_size=8 on a single GPU. The mask patch provided above doesn't make a difference.
Same problem; it is still not fixed.
Isn't this bug fixed yet?
Hi,
I want to set "dataloader.test.batch_size = 16" (the default is 1) to speed up evaluation, but I get much lower mAP results, e.g. 0.59 vs. 41.5 mAP. Is this an upstream (detectron2) issue? Could you fix it?
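A minimal sketch of the override being described, using detectron2's LazyConfig API; the project config path below is hypothetical and only stands in for whichever config is being evaluated:

```python
# Sketch only: override the test-time batch size on a LazyConfig before evaluation.
from detectron2.config import LazyConfig

# Hypothetical config path, used here purely for illustration.
cfg = LazyConfig.load("projects/dn_detr/configs/dn_detr_r50_50ep.py")
cfg.dataloader.test.batch_size = 16  # the default is 1; values > 1 reproduce the reported mAP drop
```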