-
In my initial experiments I was able to get MXNet working on a GPU. I have now upgraded the OS to Linux 20.04, so the default CUDA is 11.2 with gcc 9. The driver is working fine and I don't want to mess with the setup; finally I can use the on-demand GPU with the apps. In addition, the compute nodes I use were also upgraded to 11.2 (MXNet with GPU was also fine with 10.2). While installing CUDA I noticed that my old CUDA 10.2-compiled examples can still run on the 11.2 runtime. So my question is: is it possible to "force" DJL to use the GPU version of the native libraries? I tried just including the GPU libraries in the Java path but I still get:
I assume that all I need to do is not use the auto dependency. TIA
-
I think this is more on the MXNet engine side. MXNet has a specific build for each version of CUDA, so it expects that exact version. We package up a few builds of MXNet for a few different CUDA flavors, and all the auto dependency does is choose between our packaged builds based on your system. Right now we haven't done the CUDA 11.x ones, but I am working on releasing some with the new MXNet 1.8. This will include a CUDA 11.0 build, as that is what MXNet seems to support according to their install page. You can also use a custom build of MXNet with DJL that would work on other CUDA flavors, environments, etc. Just specify the path to the MXNet build with the MXNET_LIBRARY_PATH environment variable.
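If it helps, a small check along these lines shows which engine build DJL actually loaded and whether it can use the GPU. This is only a sketch: the class and method names are taken from the DJL API as I remember it and may differ slightly between DJL versions, and a CPU-only native build will typically fail at the GPU allocation step rather than fall back silently.

```java
import ai.djl.Device;
import ai.djl.engine.Engine;
import ai.djl.ndarray.NDArray;
import ai.djl.ndarray.NDManager;
import ai.djl.util.cuda.CudaUtils;

public class GpuCheck {
    public static void main(String[] args) {
        Engine engine = Engine.getInstance();
        // Which engine build did DJL load, and how many GPUs does it see?
        System.out.println("Engine: " + engine.getEngineName() + " " + engine.getVersion());
        System.out.println("GPUs:   " + CudaUtils.getGpuCount());

        // Allocate a tiny array on the first GPU; with a CPU-only native
        // build this step usually throws instead of succeeding.
        try (NDManager manager = NDManager.newBaseManager(Device.gpu())) {
            NDArray array = manager.create(new float[] {1f, 2f, 3f});
            System.out.println("Created " + array.getShape() + " on " + array.getDevice());
        }
    }
}
```

Running this with the auto dependency (or a pinned GPU native artifact) on the classpath should make it obvious whether DJL ended up with a CPU or a GPU flavor of libmxnet.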
-
@hmf
@hmf
Yes, you supply your own version of the libmxnet.so file:
See: http://docs.djl.ai/docs/development/troubleshooting.html#4-how-to-run-djl-using-other-versions-of-mxnet
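In case a concrete example helps, here is a minimal sketch of verifying that the custom build was actually picked up, assuming MXNET_LIBRARY_PATH is the variable described on the linked troubleshooting page; the path shown is just a placeholder.

```java
import ai.djl.engine.Engine;

public class CustomMxnetCheck {
    public static void main(String[] args) {
        // Start the JVM with the variable pointing at your own build, e.g.
        //   MXNET_LIBRARY_PATH=/opt/mxnet-cu112/lib java CustomMxnetCheck
        // (placeholder path; see the troubleshooting link above for details)
        System.out.println("MXNET_LIBRARY_PATH = " + System.getenv("MXNET_LIBRARY_PATH"));

        // If DJL loaded the custom libmxnet.so, the reported engine version
        // should match the MXNet build you supplied.
        Engine engine = Engine.getInstance();
        System.out.println("Loaded " + engine.getEngineName() + " " + engine.getVersion());
    }
}
```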