
Handle PyTorchTensor[ndarray] #5

Open · wants to merge 1 commit into master
Conversation

@mglisse (Contributor) commented Apr 13, 2020

Hello,

My goal is to work around pytorch/pytorch#34452.

import torch
import numpy as np
import eagerpy as ep

t = ep.astensor(torch.tensor([0.]))  # PyTorchTensor of shape (1,)
i = np.array([[0, 0], [0, 0]])       # NumPy index array of shape (2, 2)
t[i]                                 # index with a raw ndarray
t[ep.astensor(i)]                    # index with a NumPyTensor

Currently, the last two lines fail with:

IndexError: too many indices for tensor of dimension 1

The patch lets this test pass. I also ran the test suite, and it behaves the same before and after the patch (one JAX test fails because of a relative error of 1.5295117e-07 > 1e-07 and the first TensorFlow test runs forever, but the PyTorch tests all pass, which seems the most relevant here).

I don't claim this is the one right way to do it (how do you detect "something that torch.as_tensor likes" more generally than ndarray? Just try?). I also realize eagerpy is probably not meant to allow interoperability between types from different backends. However, in some cases I need to generate indices: I can easily do that with any backend, but I don't know how to conveniently build them with "the same backend as the input". Since you already use np.arange to generate an index for a torch.tensor in onehot_like, mixing in NumPy indices seems acceptable.

This does not solve the same issue for a list or any other iterable, but I can just build a NumPy array from those; I only need one way to do this (preferably not one as silly as t[i,]). I think the current official way may be t[t.from_numpy(i)], as in the sketch below.
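For reference, here is how that workaround looks with the snippet above (a sketch; the printed shape assumes standard advanced-indexing semantics):

import torch
import numpy as np
import eagerpy as ep

t = ep.astensor(torch.tensor([0.]))
i = np.array([[0, 0], [0, 0]])

# Convert the NumPy indices to the same backend as t before indexing:
r = t[t.from_numpy(i)]
print(r.shape)  # (2, 2): the result takes the shape of the index array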

@jonasrauber (Owner) commented Apr 23, 2020

The thing that is slightly confusing about EagerPy is that NumPy plays two roles. On the one hand, NumPy is one of the EagerPy backends (like PyTorch, TensorFlow, and JAX). On the other hand, NumPy is the de facto standard with which they all interoperate.

It is correct that, at the moment, we don't support mixing backends.

ep.astensor(i) creates an eagerpy.NumPyTensor, but t is an eagerpy.PyTorchTensor,
so I don't think we (currently) want to support t[ep.astensor(i)].
Instead, the correct way is to create an eagerpy.PyTorchTensor from i and use that for the indexing: t[t.from_numpy(i)] or t[ep.from_numpy(t, i)].

Can you explain why you want t[ep.astensor(i)] instead of t[ep.from_numpy(t, i)] (i.e., why the second option is not good enough)?

If it is because you absolutely want to avoid converting your indices to a PyTorch tensor, then

  1. I am not sure if PyTorch doesn't do that internally anyway
  2. Supporting t[i] is probably what we actually want

So for now I will wait for your answers and explanations.
Until then, the recommended solution is t[t.from_numpy(i)] or t[ep.from_numpy(t, i)].
In principle I am open to supporting t[i], but then we should make sure it works for all frameworks (maybe just by calling t.from_numpy(i) under the hood).
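A minimal sketch of what calling from_numpy under the hood could look like, assuming a wrapper class that keeps its backend tensor in a raw attribute (hypothetical and simplified; the actual eagerpy internals differ):

import numpy as np
import torch

class PyTorchTensor:
    # Hypothetical, simplified stand-in for eagerpy's PyTorchTensor.
    def __init__(self, raw):
        self.raw = raw

    def __getitem__(self, index):
        # If the index is a raw NumPy array, convert it to the tensor's
        # own backend first, i.e. do the from_numpy step implicitly.
        if isinstance(index, np.ndarray):
            index = torch.as_tensor(index)
        return PyTorchTensor(self.raw[index])

t = PyTorchTensor(torch.tensor([0.]))
i = np.array([[0, 0], [0, 0]])
print(t[i].raw.shape)  # torch.Size([2, 2])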

@mglisse (Contributor, Author) commented Apr 23, 2020

> Can you explain why you want t[ep.astensor(i)] instead of t[ep.from_numpy(t, i)] (i.e., why the second option is not good enough)?

The option I would like most is t[i]. I tried t[ep.astensor(i)] because it was the first workaround I could think of when t[i] failed, but I don't particularly want to use it. Indeed, I later found out (see the last paragraph of the report, added in an edit a few days later) that t[i,] and t[t.from_numpy(i)] both seem to work, so I know how to proceed and this isn't blocking my work. Still, t[i] remains a tempting option, and it is something of a trap: it works with some backends, and with PyTorch it works for some matrix sizes and has a different meaning for others.
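To illustrate the trap with the snippet from the original report: the same expression behaves differently depending on the backend (a sketch; the NumPy-backend result assumes plain advanced indexing):

import torch
import numpy as np
import eagerpy as ep

i = np.array([[0, 0], [0, 0]])

t_np = ep.astensor(np.array([0.]))
print(t_np[i].shape)  # works: advanced indexing, shape (2, 2)

t_pt = ep.astensor(torch.tensor([0.]))
try:
    t_pt[i]
except IndexError as e:
    print(e)  # too many indices for tensor of dimension 1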
