Hello,
I was training RT-DETRv2 with multiple GPUs. The training got stuck when a batch of data on one GPU consisted entirely of background images (i.e., no ground-truth/target boxes): the volatile GPU utilization went to 100%, but no errors or warnings were reported. Training simply hung and did not proceed. In this case, the VFL loss is very small (0.0528), and the L1 and GIoU box losses are both 0. The training was stuck in scaler.backward or scaler.step (or perhaps scaler.update). When a batch contains at least some target boxes, training runs fine. By the way, when training with the gloo backend, it got stuck in the same situation, and the following error was reported:
Thanks for your attention!
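For reference, here is a minimal sketch of a typical AMP + DDP training step (not RT-DETRv2's exact code; the model/criterion signatures below are assumptions), marking where the hang was observed:

```python
import torch

scaler = torch.cuda.amp.GradScaler()

def train_step(model, criterion, optimizer, samples, targets):
    # Forward pass under autocast; with an all-background batch, the
    # denoising branch may be skipped and some loss terms never produced.
    with torch.cuda.amp.autocast():
        outputs = model(samples, targets)
        loss = sum(criterion(outputs, targets).values())

    optimizer.zero_grad()
    scaler.scale(loss).backward()   # hang observed around here (DDP gradient all-reduce)
    scaler.step(optimizer)          # ...or here
    scaler.update()
```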
Thanks. But the loss_bbox is already 0 in such a case (or maybe I misunderstand?), and I also want to train with these background images to prevent false detections.
I suspect this is a data mismatch between different GPUs, judging from the error reported with the gloo backend. I found that when there are no target boxes, "dn_aux_outputs" is not present in the model output, so the dn-related loss items are never defined. I added these loss items with zero values and no gradient, and additionally set find_unused_parameters=True when wrapping the model in DDP, so that DDP does not wait for those unused parameters. These changes fixed the training hang in my case.
I'm not sure how common this problem is for datasets with many background images, and the above solution may not be the most accurate or elegant one, since it slightly increased training time.
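Not the exact diff, but a minimal sketch of the two changes described above, assuming hypothetical dn loss key names (the actual keys produced by the criterion in the repo may differ):

```python
import torch
from torch.nn.parallel import DistributedDataParallel as DDP

# Hypothetical key names; match them to the dn loss items your criterion produces.
DN_LOSS_KEYS = ["loss_vfl_dn", "loss_bbox_dn", "loss_giou_dn"]

def pad_missing_dn_losses(loss_dict, device):
    """When a batch has no target boxes, "dn_aux_outputs" is absent and the
    dn loss items are never created. Insert zero-valued, gradient-free
    placeholders so every rank ends up with the same set of loss keys."""
    for key in DN_LOSS_KEYS:
        if key not in loss_dict:
            loss_dict[key] = torch.zeros((), device=device)  # constant, requires_grad=False
    return loss_dict

# When wrapping the model, let DDP skip parameters that never receive a
# gradient on ranks whose batch contains only background images:
# model = DDP(model, device_ids=[local_rank], find_unused_parameters=True)
```

Note that find_unused_parameters=True adds an extra graph traversal on every iteration, which is consistent with the small increase in training time mentioned above.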
@NotoCJ any chance you could post the code changes you made?
I believe I have a similar issue: a large dataset with many background images.
I've modified the loss values and also set find_unused_parameters to True, but I haven't had as much luck as you in actually getting multi-GPU training to work for this dataset.