Hi again,
I thought it might be worth a separate ticket – when running on GPU, all available memory is allocated, but the TensorFlow model of BERT may not actually need it. This should be simple enough to configure – e.g. in the Java API, the following code did the trick for me (replacing this line):
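The exact snippet is not reproduced here, but a configuration of this kind can be sketched as follows, assuming the TF 1.x Java bindings (`org.tensorflow`), where `GPUOptions.allow_growth` tells TensorFlow to allocate GPU memory on demand instead of grabbing it all up front; the model path and `"serve"` tag are placeholders:

```java
import org.tensorflow.SavedModelBundle;
import org.tensorflow.framework.ConfigProto;
import org.tensorflow.framework.GPUOptions;

public class BertGpuConfig {
    public static void main(String[] args) {
        // Build a session config that grows GPU memory as needed
        // rather than reserving all of it at startup.
        ConfigProto config = ConfigProto.newBuilder()
                .setGpuOptions(GPUOptions.newBuilder()
                        .setAllowGrowth(true)
                        .build())
                .build();

        // Pass the serialized config when loading the saved model;
        // the path and tag below are placeholders.
        SavedModelBundle bundle = SavedModelBundle.loader("/path/to/bert/model")
                .withTags("serve")
                .withConfigProto(config.toByteArray())
                .load();
    }
}
```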
Similarly, in the Python API it should be possible to start the TF session with an appropriately configured `ConfigProto`.

Thanks
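On the Python side, a minimal sketch of the equivalent session configuration, assuming the TF 1.x API (`tf.ConfigProto` / `tf.Session`):

```python
import tensorflow as tf

# Build a session config that allocates GPU memory on demand
# instead of reserving all available memory at startup.
config = tf.ConfigProto()
config.gpu_options.allow_growth = True

# Create the session with this config; the BERT graph would
# then be loaded and run inside this session as usual.
sess = tf.Session(config=config)
```

Alternatively, `config.gpu_options.per_process_gpu_memory_fraction` can cap the allocation at a fixed fraction of GPU memory, if a hard limit is preferred over on-demand growth.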