First of all, thank you very much for your work; this is one of the best-performing text removal models available right now. However, I ran into some problems while running it:
My PyTorch version is fairly new (1.10+). I also tried the 1.4 version suggested in the comments, but moving data onto the GPU was extremely slow (GPU: RTX 3090).
I suspect the loss computation has not been adapted for multi-GPU training on newer PyTorch versions. How should I modify it? Thanks!
The error log is attached below:

```
Cuda is available!
/root/miniconda3/lib/python3.8/site-packages/torchvision/models/_utils.py:208: UserWarning: The parameter 'pretrained' is deprecated since 0.13 and may be removed in the future, please use 'weights' instead.
  warnings.warn(
/root/miniconda3/lib/python3.8/site-packages/torchvision/models/_utils.py:223: UserWarning: Arguments other than a weight enum or `None` for 'weights' are deprecated since 0.13 and may be removed in the future. The current behavior is equivalent to passing `weights=VGG16_Weights.IMAGENET1K_V1`. You can also use `weights=VGG16_Weights.DEFAULT` to get the most up-to-date weights.
  warnings.warn(msg)
OK!
/root/miniconda3/lib/python3.8/site-packages/torch/nn/modules/module.py:1965: UserWarning: Calling .zero_grad() from a module created with nn.DataParallel() has no effect. The parameters are copied (in a differentiable manner) from the original module. This means they are not leaf nodes in autograd and so don't accumulate gradients. If you need gradients in your forward method, consider using autograd.grad instead.
  warnings.warn(
/root/miniconda3/lib/python3.8/site-packages/torch/nn/parallel/_functions.py:68: UserWarning: Was asked to gather along dimension 0, but all input tensors were scalars; will instead unsqueeze and return a vector.
  warnings.warn('Was asked to gather along dimension 0, but all '
Traceback (most recent call last):
  File "train_STE.py", line 107, in <module>
    G_loss.backward()
  File "/root/miniconda3/lib/python3.8/site-packages/torch/_tensor.py", line 488, in backward
    torch.autograd.backward(
  File "/root/miniconda3/lib/python3.8/site-packages/torch/autograd/__init__.py", line 197, in backward
    Variable._execution_engine.run_backward(  # Calls into the C++ engine to run the backward pass
RuntimeError: Output 38 of BroadcastBackward is a view and its base or another view of its base has been modified inplace. This view is the output of a function that returns multiple views. Such functions do not allow the output views to be modified inplace. You should replace the inplace operation by an out-of-place one.
```
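The `Was asked to gather along dimension 0, but all input tensors were scalars` warning is a strong hint about the multi-GPU loss handling: under `nn.DataParallel`, a loss computed inside `forward()` is gathered into a vector with one entry per GPU, so `G_loss` is no longer a scalar by the time it reaches `backward()`. Below is a minimal sketch of the usual workaround, reducing the gathered loss to a scalar first; `ToyModel` is a hypothetical stand-in, not the repo's actual generator:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical stand-in for the generator: it returns its loss from forward(),
# which is what makes DataParallel gather one scalar loss per GPU.
class ToyModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(8, 1)

    def forward(self, x, target):
        return F.mse_loss(self.fc(x), target)  # scalar on each replica

model = nn.DataParallel(ToyModel()).cuda()
x = torch.randn(16, 8).cuda()
target = torch.randn(16, 1).cuda()

G_loss = model(x, target)  # gathered into a (num_gpus,) vector, hence the warning
G_loss = G_loss.mean()     # reduce to a true scalar before backward()
G_loss.backward()
```

If train_STE.py computes `G_loss` this way, adding the `.mean()` before the `backward()` at line 107 should at least silence the gather warning; whether it also clears the view error depends on where the in-place write happens.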
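The `RuntimeError` itself says what to change: some tensor that is a view produced by DataParallel's broadcast (parameters and buffers copied to each replica are such views) gets modified in place afterwards. Typical culprits in training code are ops like `x += ...`, `tensor.masked_fill_(...)`, `clamp_`, or `nn.ReLU(inplace=True)`. A small illustration of the same failure mode and the out-of-place fix; the tensors here are illustrative and not taken from train_STE.py:

```python
import torch

x = torch.randn(4, 4, requires_grad=True)
a, b = x.chunk(2)  # chunk() returns multiple views, like the broadcast outputs

# An in-place write on such a view raises the same class of error:
# a += 1.0  # RuntimeError: ... is a view ... modified inplace

# Out-of-place replacement allocates a new tensor instead of mutating the view:
a = a + 1.0
(a.sum() + b.sum()).backward()
```

Searching the model definition for `inplace=True` and trailing-underscore ops, and switching them to their out-of-place forms, is usually enough to clear this error.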
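Separately, the two torchvision lines at the top of the log are only deprecation warnings, not errors. On torchvision 0.13+ they can be cleared by switching from `pretrained=True` to the `weights` enum the warning names (assuming the repo loads VGG16 for a perceptual/style loss, which the `VGG16_Weights` mention suggests):

```python
from torchvision.models import vgg16, VGG16_Weights

# Old call, deprecated since torchvision 0.13:
# vgg = vgg16(pretrained=True)

# Equivalent replacement, per the warning text:
vgg = vgg16(weights=VGG16_Weights.IMAGENET1K_V1)

# Or always pull the most up-to-date weights:
# vgg = vgg16(weights=VGG16_Weights.DEFAULT)
```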