Discriminative loss error: IndexError: Dimension out of range (expected to be in range of [-1, 0], but got 1) #13

Open
sharmalakshay93 opened this issue Jan 5, 2021 · 12 comments

@sharmalakshay93

Hi,

When running `python3 lanenet/train.py --dataset ./data/training_data_example`, I'm seeing the following exception:

Traceback (most recent call last):
  File "lanenet/train.py", line 156, in <module>
    main()
  File "lanenet/train.py", line 144, in main
    train_iou = train(train_loader, model, optimizer, epoch)
  File "lanenet/train.py", line 68, in train
    total_loss, binary_loss, instance_loss, out, train_iou = compute_loss(net_output, binary_label, instance_label)
  File "/usr/local/lib/python3.6/dist-packages/lanenet-0.1.0-py3.6.egg/lanenet/model/model.py", line 75, in compute_loss
  File "/home/lashar/.local/lib/python3.6/site-packages/torch/nn/modules/module.py", line 722, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/usr/local/lib/python3.6/dist-packages/lanenet-0.1.0-py3.6.egg/lanenet/model/loss.py", line 33, in forward
  File "/usr/local/lib/python3.6/dist-packages/lanenet-0.1.0-py3.6.egg/lanenet/model/loss.py", line 71, in _discriminative_loss
  File "/home/lashar/.local/lib/python3.6/site-packages/torch/functional.py", line 1100, in norm
    return _VF.frobenius_norm(input, _dim, keepdim=keepdim)
IndexError: Dimension out of range (expected to be in range of [-1, 0], but got 1)

Any clues about how I can fix this? Not sure if I'm doing something incorrectly.
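
For reference, the error itself is easy to reproduce: `torch.norm` raises it whenever the tensor has fewer dimensions than the requested `dim`, e.g. a 1-D tensor with `dim=1` (a minimal sketch, nothing repo-specific):

```python
import torch

x = torch.randn(5)     # a 1-D tensor: the only valid dims are 0 and -1
torch.norm(x, dim=0)   # fine -- norm over the single axis
torch.norm(x, dim=1)   # IndexError: Dimension out of range
                       # (expected to be in range of [-1, 0], but got 1)
```

So it looks like the embedding tensor reaching the norm in `loss.py` has fewer dimensions than expected.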

Thanks!

@sharmalakshay93
Author

PS: I just saw issues/12 and its related commit. Changing it back to `dim=0` makes it work. However, since that issue says the correct value is `dim=1`, I'm not sure this works as intended? Clarification would be appreciated!

Thanks!

@Flying-Osterich

I'm having the same problem. Reverting back to `dim=0` resolves the issue.

@DongwhanLee

DongwhanLee commented Mar 24, 2021

> PS: I just saw issues/12 and its related commit. Changing it back to `dim=0` makes it work. However, since that issue says the correct value is `dim=1`, I'm not sure this works as intended? Clarification would be appreciated!
>
> Thanks!

Hi, I am running your code and I encountered the same problem.
I tried both `dim=0` and `dim=1`, but neither works.

Have there been any updates since your last commit?
Please let me know how to fix it so I can run training! Thank you.

@mengnutonomy

I also tried both `dim=0` and `dim=1`; neither works.
@klintan, any updates on this? Thanks.

@mummy2358

Managed to fix it. It should be `dim=1`: the norm should be calculated along the "embedding" axis. The problem comes from

`embedding_i = embedding_b[seg_mask_i]`

which breaks the dims, leaving a one-dimensional vector.

Changing it to

`embedding_i = embedding_b * seg_mask_i`

works for me.
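
For anyone who wants to see the difference concretely, here is a minimal sketch; the shapes are assumptions for illustration, not taken from the repo:

```python
import torch

# Assumed layout: (num_pixels, embed_dim), so that dim=1 is the embedding
# axis, with the instance mask expanded to the embedding's full shape.
embedding_b = torch.randn(6, 4)
pixel_mask = torch.tensor([1, 0, 1, 1, 0, 0], dtype=torch.bool)
seg_mask_i = pixel_mask.unsqueeze(1).expand(6, 4)

flat = embedding_b[seg_mask_i]        # boolean indexing flattens to 1-D: (12,)
# torch.norm(flat, dim=1)             # -> the IndexError from this issue

kept = embedding_b * seg_mask_i       # elementwise mask keeps the shape (6, 4)
print(torch.norm(kept, dim=1).shape)  # torch.Size([6]) -- dim=1 is valid again
```

One thing to watch with the multiply version: off-instance pixels become zeros instead of being dropped, so any mean over pixels has to divide by the instance's pixel count rather than the tensor size.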

@mengnutonomy

@mummy2358 Thank you. Will give it a try.

@klintan
Owner

klintan commented May 6, 2021

@mummy2358 awesome thanks! Feel free to add a PR for the fix :)

@mengnutonomy

mengnutonomy commented May 6, 2021

@mummy2358 I don't know why, but on my side, after changing to `embedding_i = embedding_b * seg_mask_i`, I still have the same problem. I'm not sure if it's machine-related: I tried on another PC and it works there. On that PC, reverting to `dim=0` without the aforementioned change also works.

@william3863-prog

May I know where exactly this `dim` argument is in the code?

@diazoangga

> @mummy2358 I don't know why, but on my side, after changing to `embedding_i = embedding_b * seg_mask_i`, I still have the same problem. I'm not sure if it's machine-related: I tried on another PC and it works there. On that PC, reverting to `dim=0` without the aforementioned change also works.

Hey @mengnutonomy, it happened to me too. I don't know if this is the root cause, but it seems you have to rerun `python setup.py install` every time you change the code. Hope it works :)
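
For what it's worth, the traceback in the original post points at `/usr/local/lib/python3.6/dist-packages/lanenet-0.1.0-py3.6.egg/...`, i.e. the installed copy of the package rather than the local checkout. A quick way to check which copy Python actually imports (a sketch, assuming the package imports as `lanenet`):

```python
import lanenet

# If this prints a path inside site-packages/...egg, edits to the local
# checkout are not picked up until the package is reinstalled.
print(lanenet.__file__)
```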

@Answergeng

> > @mummy2358 I don't know why, but on my side, after changing to `embedding_i = embedding_b * seg_mask_i`, I still have the same problem. I'm not sure if it's machine-related: I tried on another PC and it works there. On that PC, reverting to `dim=0` without the aforementioned change also works.
>
> Hey @mengnutonomy, it happened to me too. I don't know if this is the root cause, but it seems you have to rerun `python setup.py install` every time you change the code. Hope it works :)

Thanks a lot! Rerunning it works for me.

@xudh1991

xudh1991 commented Aug 3, 2023

```python
# Alternative workaround: if indexing collapsed embedding_i to 1-D,
# restore the lost dimension before torch.norm(..., dim=1) is applied.
if len(embedding_i.shape) == 1:
    embedding_i = torch.unsqueeze(embedding_i, 0)
```
