
Support for Tensorflow 2 #190

Closed
faysou opened this issue Nov 7, 2019 · 18 comments

faysou commented Nov 7, 2019

Is there any plan to migrate the library to support TensorFlow 2?
What would need to be changed in the library for this?

Bihaqo (Owner) commented Nov 7, 2019 via email

faysou (Author) commented Nov 7, 2019

Ok, thank you, then this should be enough for me.
Great library by the way! I've used it for tensor completion.
Tensor trains are quite magical; I knew about HOSVD, but this is a big step forward.

Bihaqo (Owner) commented Nov 7, 2019

Thanks, glad someone uses it! :)

Note that you'll probably have to change the tf imports in the whole library (I know, annoying). I guess the simplest thing to do would be to fork the library and use "replace in all files" in some editor.
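For anyone doing the same, the per-file change would be roughly this (a sketch; compat.v1 is the TF 1 compatibility layer that ships with TF 2):

# Before, in each module of the library:
#   import tensorflow as tf
# After, pointing the same alias at the TF 1 compatibility layer:
import tensorflow.compat.v1 as tf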

Bihaqo closed this as completed Nov 7, 2019
Bihaqo (Owner) commented Nov 7, 2019

I just realized that I can probably always import compat.v1 even in older versions, so I'll try making it the default: #191

faysou (Author) commented Nov 7, 2019

Yes, that would be nice, so your library would work out of the box with the newer version of TF.

Bihaqo (Owner) commented Nov 7, 2019

I think #191 should work; just don't forget to call tf.disable_v2_behavior() (like in the tutorial: https://colab.research.google.com/github/Bihaqo/t3f/blob/tf2_dummy_support/docs/quick_start.ipynb).
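For reference, a user-side script on TF 2 would then look roughly like this (a minimal sketch; t3f.random_tensor and t3f.full are the functions from the quick start guide):

import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()  # keep TF 1 graph/session semantics on top of TF 2
import t3f

# Build a random TT tensor, convert it back to a dense tensor, and evaluate it.
tt = t3f.random_tensor((3, 4, 5), tt_rank=2)
dense = t3f.full(tt)
with tf.Session() as sess:
  print(sess.run(dense).shape)  # (3, 4, 5)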

Bihaqo (Owner) commented Nov 7, 2019

Will probably merge it tomorrow

faysou (Author) commented Nov 15, 2019

I'm trying to use Session instead of eager mode in order to use gradients in a completion algorithm where the rank of a tensor is increased gradually.

It seems hard in TensorFlow 1 to avoid recomputations as well as to manage variables.

Does TensorFlow 2 avoid this issue, since its recommended mode is eager evaluation?
Maybe this library could become easier to use in TensorFlow 2, as using tf.Session really complicates the code.

Bihaqo (Owner) commented Nov 18, 2019 via email

Bihaqo (Owner) commented Nov 18, 2019

I created a separate issue about it so as not to forget: #193

faysou (Author) commented Nov 20, 2019

Hi Alexander, sorry, I didn't see your message.
You can have a look at the paper below, p. 14.

http://sma.epfl.ch/~anchpcommon/publications/ttcompletion.pdf

Bihaqo (Owner) commented Nov 20, 2019

Fair enough, I agree it would be easier to do in TF 2.

Note though that even in TF 2 it would be super annoying because you have to use tf.function to compile pieces of code to make it run faster, and tf.function doesn't support anything but tf tensors (i.e. it doesn't support t3f objects).

So you would write something like

def step(tens):
  # ... process
  # Do something that increases ranks
  tens = tens + 1
  return tens
for i in range(num_iter):
  tens = step(tens)

But then, to make it run reasonably fast you'll have to use tf.function on step, but to do that you'll have to make it take a list of tf.Tensors as input and output a list of tf.Tensors, i.e. something like

@tf.function
def step(tens_cores):
  # ... process
  # Do something that increases ranks
  tens = t3f.TensorTrain(tens_cores)
  tens = tens + 1
  return tens.tt_cores
for i in range(num_iter):
  tens_cores = step(tens_cores)

And at this point you can do something like this with TF 1 as well, i.e.

def step(tens_cores):
  # ... process
  # Do something that increases ranks
  tens = t3f.TensorTrain(tens_cores)
  tens = tens + 1
  return tens.tt_cores

next_iter_cores = step(tens.tt_cores)
with tf.Session() as sess:
  for i in range(num_iter):
    feed_dict = {tens.tt_cores[j]: tens_cores[j] for j in range(len(tens_cores))}
    tens_cores = sess.run(next_iter_cores, feed_dict=feed_dict)

Anyway, it's a bit ugly in both TF 1 and TF 2, but TF 2 indeed would be nicer :)

I'll take a look at implementing t3f.gradients in eager mode.
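For reference, once the library runs in eager mode, differentiating a loss through the TT cores would look roughly like this (a sketch under that assumption, not the t3f.gradients API; the loss is a stand-in for a completion objective):

import tensorflow as tf  # TF 2, eager mode
import t3f

# Wrap the cores in variables so the gradient tape tracks them.
tt = t3f.random_tensor((3, 4, 5), tt_rank=2)
cores = [tf.Variable(c) for c in tt.tt_cores]

with tf.GradientTape() as tape:
  tens = t3f.TensorTrain(cores)
  # Stand-in loss; a completion algorithm would instead measure the
  # mismatch on the observed entries.
  loss = tf.reduce_sum(t3f.full(tens) ** 2)
grads = tape.gradient(loss, cores)  # one gradient tensor per core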

faysou (Author) commented Nov 20, 2019

Great, thank you for your reply.

Bihaqo (Owner) commented Nov 20, 2019 via email

faysou (Author) commented Nov 20, 2019

I'll wait for your v2 then :)

Your library could save a lot of energy in training; it should be used by more people for neural networks.

Bihaqo (Owner) commented Nov 21, 2019

Ok, done :)

This is not merged yet though, so please check out this branch: #193

faysou (Author) commented Nov 21, 2019

Wow, thank you. So to be clear, your new commit is about the auto-diff with eager mode, not full support for TensorFlow 2.

I'll try your new commit; it should speed up my code.

Bihaqo (Owner) commented Feb 24, 2020

Better support is added in #201
