Traceback (most recent call last):
File "eval.py", line 51, in <module>
eval_model(model_name, test_img_path, submit_path)
File "eval.py", line 31, in eval_model
detect_dataset(model, device, test_img_path, submit_path)
File "/home/***/EAST/detect.py", line 174, in detect_dataset
boxes = detect(Image.open(img_file), model, device)
File "/home/***/EAST/detect.py", line 144, in detect
score, geo = model(load_pil(img).to(device))
File "/home/***/anaconda3/envs/py38/lib/python3.8/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/***/EAST/model.py", line 171, in forward
return self.output(self.merge(self.extractor(x)))
File "/home/***/anaconda3/envs/py38/lib/python3.8/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/***/EAST/model.py", line 76, in forward
x = m(x)
File "/home/***/anaconda3/envs/py38/lib/python3.8/site-packages/torch/nn/modules/module.py", line 727, in _call_impl
result = self.forward(*input, **kwargs)
File "/home/***/anaconda3/envs/py38/lib/python3.8/site-packages/torch/nn/modules/conv.py", line 423, in forward
return self._conv_forward(input, self.weight)
File "/home/***/anaconda3/envs/py38/lib/python3.8/site-packages/torch/nn/modules/conv.py", line 419, in _conv_forward
return F.conv2d(input, weight, self.bias, self.stride,
RuntimeError: CUDA out of memory. Tried to allocate 16.26 GiB (GPU 0; 31.75 GiB total capacity; 3.76 GiB already allocated; 8.02 GiB free; 16.86 GiB reserved in total by PyTorch)
My dataset images are over 3200 px wide, so I resized them, but this error still occurs.
The GPUs are two Tesla V100 32 GB cards, and the CUDA version is 11.0.3.
I think the evaluation code needs something like a DataLoader so it can run data-parallel across GPUs, but it only uses a single GPU, iterating over zip files and raw images.
Is there a solution to this problem?
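One workaround, independent of multi-GPU support, is to cap the input resolution before calling `detect()`. A 3200 px-wide image produces very large conv activations, which matches the 16 GiB allocation in the traceback. Below is a minimal sketch of a resize helper; the function name `shrink_for_east` and the `max_side=1280` default are my own choices, not part of this repo. It snaps both dimensions down to multiples of 32, since EAST's feature maps are downsampled by a factor of 32:

```python
from PIL import Image

def shrink_for_east(img, max_side=1280):
    """Downscale so the longer side is at most max_side, snapping
    both dimensions down to multiples of 32 (EAST's total stride).
    Images already small enough are only snapped, not upscaled."""
    w, h = img.size
    scale = min(1.0, max_side / max(w, h))
    new_w = max(32, int(w * scale) // 32 * 32)
    new_h = max(32, int(h * scale) // 32 * 32)
    return img.resize((new_w, new_h), Image.BILINEAR)
```

You could apply this to `Image.open(img_file)` in `detect_dataset` before the image reaches the model (box coordinates would then need rescaling back by the inverse factor). Separately, if `detect()` in your copy of the code does not already wrap inference in `torch.no_grad()`, adding that wrapper can substantially reduce activation memory, since PyTorch otherwise keeps intermediate tensors for backprop even during evaluation.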