Boost #159
Would love to see that, too :-)
I have been asked about this already by Michael, and the main argument was to ease using LLAMA on less well maintained HPC systems with either no or very old versions of Boost.

First of all, Boost is an excellent library collection. It comes with high-quality code which underwent multiple peer reviews. I could not write code this good. LLAMA uses Boost because it needs functionality beyond the C++ standard library, like intensive metaprogramming. Boost::mp11 is an excellent metaprogramming library in this regard, peer reviewed and even brought into C++ standardization. Unfortunately the proposal was rejected, because metaprogramming approaches are still too volatile to be standardized. But this leaves LLAMA in the spot where it can either write its own metaprogramming library (as alpaka does) or pick a popular one. I chose not to implement my own metaprogramming library (albeit being interesting), because I want to move forward with LLAMA as fast as possible and not spend a month reinventing a barely functional wheel when there are high-quality wheels freely available.

A similar case could be made for boost::demangle, although this is a less important feature. I do not know enough platform specifics (I know Windows and Linux to some degree) to implement portable name demangling.

The fmt library is also part of LLAMA for better text formatting. This library has indeed been standardized in C++20, but LLAMA tries to stay at C++17 for CUDA's sake and to support older compilers as well. This dependency will go away with an update to C++20 at some point.

As for the standalone argument: if deploying is an issue, I could ship the parts of Boost I need as part of LLAMA. Boost::mp11 is shipped with Boost but does not have any dependencies on other Boost libraries, so it can be shipped/installed alone. For boost::demangle I don't know. fmt is only used for layout dumping, so you could simply not use that part of LLAMA if you do not want to install fmt.

However, before I make any changes I want to really understand the problem you are having, @ax3l! What is the problem you need to solve? I am really happy for anyone wanting to play with LLAMA and I want to make this experience as easy as possible! But be mindful that dropping dependencies means that I need to write and maintain them myself, which in turn means more bugs and less time for exciting features.

Btw: for such discussions, please use the Discussions tab.
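(For readers unfamiliar with Boost::mp11, here is a small illustrative sketch of the kind of type-list metaprogramming it provides. It uses real mp11 primitives, but it is not taken from LLAMA's code base.)

```cpp
// Illustrative sketch only (not LLAMA code): type-list manipulation with Boost::mp11.
#include <boost/mp11.hpp>
#include <type_traits>

using Types = boost::mp11::mp_list<int, float, double>;

// Apply std::add_pointer_t to every element of the type list.
using Pointers = boost::mp11::mp_transform<std::add_pointer_t, Types>;

// The second element of the transformed list is float*.
static_assert(std::is_same_v<boost::mp11::mp_at_c<Pointers, 1>, float*>);

int main() {}
```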
Yep, that's basically the question. Also, fmt can be vendored easily.
I am just checking in on how standalone and orthogonal LLAMA is at the moment. For any project I am (co-)maintaining or contributing to these days, a direct Boost dependency is not possible until better full CMake support, HPC compiler coverage and modularization land in Boost or the modules we pull. Luckily, some components can be used standalone and provide such aspects, as you mentioned for mp11 (you'll find PRs of ours on this in its history).
Sorry, didn't know - maybe we can hint this in the
This model of shipping dependencies yourself feels incredibly weird. What prevents version conflicts then? Let's say your software uses LLAMA (with its own Boost) and another version of Boost. What if alpaka uses a different Boost than LLAMA and you mix both? That sounds pretty dangerous to me. Getting external dependencies should be the purpose of package managers. What is preventing you from using a package manager? Or, even more fundamentally: why can't you provide dependencies yourself (by any means)?
I want to understand this. Does that mean
Thx for trying to fix the world! We need more such people!
Don't be sorry, I just enabled it after seeing that you started a discussion in an issue ;)
- If your Boost version is newer than your CMake version, the CMake targets will not be available, because each Boost version has to be added to CMake separately. This requires workarounds like the one we use in PIConGPU.
- As @ax3l said, Boost is not tested with the HPC compilers hipcc, nvcc, xl, see this list.
- Boost does not backport fixes, which often requires workarounds, e.g. what we use in PIConGPU.
- On an HPC system, you mostly get the modules provided by the admin/vendor, often compiled with the compiler best supported by the vendor. Because of the compiler issues explained above, it can be hard to compile Boost on your own.

I am a big fan of Boost but absolutely understand why @ax3l is looking into reducing dependencies that are hard to maintain in the HPC world.
When vendoring a lib (either directly as a copy/subtree or via CMake FetchContent), one always adds a build switch to prefer a system/external library if requested. That way you get both developer productivity and package maintainability.
Just to add an example to the discussion, we are probably having a related issue right now with a project mixing two alpaka versions: alpaka-group/cupla#198
Since there is an ongoing discussion in one alpaka issue: @bernhardmgruber, is LLAMA using anything other than mp11 from Boost? Would it be possible to switch to the standalone mp11 version?
Apart from mp11, LLAMA uses a couple of other Boost features that I would like to avoid replacing, like:
Since the dumping code is not included by default, demangling is the only other Boost functionality strongly required. If Boost becomes an issue, we could find a workaround for this.
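(If it helps the discussion: a minimal sketch, assuming gcc/clang on an Itanium-ABI platform, of what such a Boost-free demangling workaround could look like. Boost's demangle essentially wraps this; MSVC would need a different path. The `demangle` helper name is hypothetical, not LLAMA API.)

```cpp
// Hypothetical sketch of a Boost-free demangling fallback for gcc/clang
// (Itanium C++ ABI). MSVC would need a different implementation.
#include <cxxabi.h>
#include <cstdlib>
#include <string>
#include <typeinfo>

std::string demangle(const char* mangledName)
{
    int status = 0;
    // __cxa_demangle returns a malloc'd buffer on success (status == 0).
    char* demangled = abi::__cxa_demangle(mangledName, nullptr, nullptr, &status);
    std::string result = (status == 0 && demangled != nullptr) ? demangled : mangledName;
    std::free(demangled);
    return result;
}

int main()
{
    // Turns the mangled typeid name of std::string into a readable one.
    std::string name = demangle(typeid(std::string).name());
    return name.empty();
}
```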
Thanks for the summary. Dependencies in examples are not as critical - they might complicate smoke tests in new environments, but at least they do not propagate to dependents. (Adding a CMake option to disable example builds is always good to have then.)
With #266 in, LLAMA now only uses:
We recently acquired:
I honestly see no way we can ever get rid of Boost. The library just provides too much value and we absolutely lack the personpower to implement all the features ourselves. In some areas, a C++ upgrade could help us, but then we would need to cut CUDA support. For these reasons, I want to close this ticket as won't fix.
Hi,
are there plans to remove the Boost dependency to make LLAMA more lightweight / standalone? :)