Matlab Mexcuda support for cufinufft #634
base: master
Conversation
.. code-block:: bash

    cmake -S . -B build -D FINUFFT_USE_CUDA=ON -D FINUFFT_STATIC_LINKING=OFF -D CMAKE_VERBOSE_MAKEFILE:BOOL=ON -D FINUFFT_CUDA_ARCHITECTURES="60;70;80;90"
We don't really need to specify the architectures, as it should use native by default.
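A sketch of what that simplification would look like (assuming, per the comment above, that ``FINUFFT_CUDA_ARCHITECTURES`` defaults to ``native`` when omitted; this is a configure-step fragment, not a tested build recipe):

```shell
# Sketch: configure only for the GPU present on the build machine.
# FINUFFT_CUDA_ARCHITECTURES is assumed to default to "native",
# so the flag can simply be dropped...
cmake -S . -B build -D FINUFFT_USE_CUDA=ON -D FINUFFT_STATIC_LINKING=OFF

# ...or, equivalently, spelled out:
cmake -S . -B build -D FINUFFT_USE_CUDA=ON -D FINUFFT_STATIC_LINKING=OFF \
      -D FINUFFT_CUDA_ARCHITECTURES=native
```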
Yes, we don't have to. But I find it a bit annoying: I compiled on my FI cluster workstation with a P4000 card, then later tested on V100 and A100, recompiling every time. At least for cluster users, compiling for several arches up front saves some time.
Nice progress. Are you making a PR for the mwrap github too? (So we can see the new formats for gpuArrays.) It will be exciting to have that as a general tool too (I could add examples to the mwrapdemo repo...). Let me know if you need any help.
Yes, I should make a PR for mwrap. I tested mwrap with a simple demo (https://github.com/lu1and10/mwrap/blob/gpu/testing/test_gpu.mw) and our cufinufft code wrap. I should do more cleaning and add more tests: C/Fortran, and different combinations of gpu arrays of integer, complex, and floating point types with in/out/inout tokens. Maybe I should make a draft PR, @zgimbutas, and add more tests for mwrap later.
Yes, I think it will be useful to go through the mexcuda install/compile process on different machines and Matlab versions to see what kinds of problems we could meet. For now, I have tested on the FI cluster and @haiszhu has tested on his ubuntu machine.
Then I suggest using all-major, as the supported architectures depend on the cuda version installed. This fails with cuda 11.3, for example.
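A sketch of that suggestion (``all-major`` is a standard CMake ``CUDA_ARCHITECTURES`` special value; which architectures it expands to depends on the CMake and CUDA toolkit versions installed, as the comment notes):

```shell
# Sketch: build real code for every major compute capability the
# installed CUDA toolkit supports, instead of a hand-picked list.
cmake -S . -B build -D FINUFFT_USE_CUDA=ON -D FINUFFT_STATIC_LINKING=OFF \
      -D FINUFFT_CUDA_ARCHITECTURES=all-major
```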
On Tuesday, February 18, 2025, Libin Lu commented on this pull request, in docs/install_gpu.rst:
> @@ -92,3 +92,35 @@ Assuming ``pytest`` is installed (otherwise, just run ``pip install pytest``), y
In contrast to the C interface tests, these check for correctness, so a successful test run signifies that the library is working correctly.
Note that you can specify other framework (``pycuda``, ``torch``, or ``numba``) for testing using the ``--framework`` argument.
+
+
+Matlab interface
+----------------
+
+.. _install-matlab-gpu:
+
+In addition to the Python interface, cuFINUFFT also comes with a Matlab interface. To install it, first build the shared library.
+For example, assuming you are in the root directory of FINUFFT, run
+
+.. code-block:: bash
+
+ cmake -S . -B build -D FINUFFT_USE_CUDA=ON -D FINUFFT_STATIC_LINKING=OFF -D CMAKE_VERBOSE_MAKEFILE:BOOL=ON -D FINUFFT_CUDA_ARCHITECTURES="60;70;80;90"
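(The diff hunk quoted above ends at the configure step. The compile step itself is not shown in this hunk, but presumably follows the usual CMake pattern, sketched here:)

```shell
# Sketch: compile the shared library configured by the cmake command
# above (assumes the configure step succeeded in ./build).
cmake --build build --parallel
```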
I would be happy to help with the pull request to mwrap once the patch is ready. Thanks for taking care of the gpu/cuda bindings.
lu1and10 left a comment (flatironinstitute/finufft#634)
> Then I suggest using all-major as the supported architectures depend on the cuda version installed. This fails with cuda 11.3 for example.

That is why there is a line saying "You may adjust FINUFFT_CUDA_ARCHITECTURES to generate the code for different compute capabilities."

I think it's a use case of FINUFFT_CUDA_ARCHITECTURES, which you also use here: https://github.com/DiamonDinoia/finufft/blob/d10220f23ae472e660006d62fe5d4638e1a7d4e6/.github/workflows/build_cufinufft_wheels.yml#L61
@DiamonDinoia btw, a side finding is that the current https://finufft.readthedocs.io/en/latest/install_gpu.html#cmake-installation does not provide proper install instructions, i.e., line 305 in b8cea9d. (FINUFFT_CUDA_ARCHITECTURES defaults to native, always defined.)
To add gpuArray support to the Matlab interface, with cufinufft as the backend. Tests: check_finufft.m, check_finufft_single.m, math tests and examples for cufinufft_plan. Docs: docsrc, docs/install_gpu.rst, docs/matlab.rst.