
Boost #159

Closed
ax3l opened this issue Mar 4, 2021 · 13 comments
Labels
question Further information is requested

Comments

@ax3l
Member

ax3l commented Mar 4, 2021

Hi,

are there plans to remove the Boost dependency to make LLAMA more lightweight / standalone? :)

@ax3l ax3l added the question Further information is requested label Mar 4, 2021
@bussmann

bussmann commented Mar 4, 2021

Would love to see that, too :-)

@bernhardmgruber
Member

I have already been asked about this by Michael; the main argument was to ease using LLAMA on less well-maintained HPC systems with either no or very old versions of Boost.

First of all, Boost is an excellent library collection. It comes with high-quality code that underwent multiple peer reviews. I could not write code this good.
Boost has shown its age in recent years, and I have observed more discussion about dropping old parts of Boost that are now part of the C++ standard library. I can also see increasing issues with the many workarounds inside Boost for older compilers, which cause trouble with very new compilers. That said, Boost is currently in a difficult phase and we will see how this develops.

LLAMA uses Boost because it needs functionality beyond the C++ standard library, especially intensive metaprogramming. Boost::mp11 is an excellent metaprogramming library in this regard, peer reviewed and even proposed for C++ standardization. Unfortunately the proposal was rejected, because metaprogramming approaches are still too volatile to be standardized. This leaves LLAMA in a spot where it can either write its own metaprogramming library (as alpaka does) or pick a popular one.

I chose not to implement my own metaprogramming library (albeit an interesting exercise) because I want to move forward with LLAMA as fast as possible and not spend a month reinventing a barely functional wheel when there are high-quality wheels freely available.

A similar case could be made for boost::demangle, although this is a less important feature. I do not know enough platform specifics (I know Windows and Linux to some degree) to implement portable name demangling.

The fmt library is also part of LLAMA for better text formatting. This library has indeed been standardized in C++20, but LLAMA tries to stay at C++17 for CUDA's sake and also to support older compilers. This dependency will go away when LLAMA moves to C++20 at some point.

As for the standalone argument: If deploying is an issue, I could ship the parts of Boost I need as part of LLAMA. Boost::mp11 is shipped with Boost but does not have any dependencies on other Boost libraries. It can be shipped/installed alone. For boost::demangle I don't know. fmt is only used for layout dumping, so you could not use that part of LLAMA if you do not want to install fmt.

However, before I make any changes I want to really understand the problem you are having @ax3l! What is the problem you need to solve? I am really happy for anyone wanting to play with LLAMA and I want to make this experience as easy as possible! But be mindful that dropping dependencies means that I need to write and maintain them myself, which in turn means more bugs and less time for exciting features.

Btw: for such discussions, please use the Discussions feature of GitHub in the future! I just enabled it for exactly such discussions!

@ax3l
Member Author

ax3l commented Mar 4, 2021

As for the standalone argument: If deploying is an issue, I could ship the parts of Boost I need as part of LLAMA. Boost::mp11 is shipped with Boost but does not have any dependencies on other Boost libraries. It can be shipped/installed alone. For boost::demangle I don't know. fmt is only used for layout dumping, so you could not use that part of LLAMA if you do not want to install fmt.

Jup, that's the question basically. Also fmt can be vendored easily.

However, before I make any changes I want to really understand the problem you are having @ax3l! What is the problem you need to solve?

I am just checking how standalone and orthogonal LLAMA is at the moment. For any project I am (co-)maintaining or contributing to these days, a direct Boost dependency is not possible until better full CMake support, HPC compiler coverage and modularization land in Boost or the modules we pull.

Luckily, some components can be used as standalone and provide such aspects, as you mentioned mp11 (you'll find PRs of ours on this in its history).

Btw: for such discussions, please use the Discussions feature of GitHub in the future! I just enabled it for exactly such discussions!

Sorry, didn't know - maybe we can hint this in the CONTRIBUTING.md/README.md or via an issue template?

@bernhardmgruber
Member

Jup, that's the question basically. Also fmt can be vendored easily.

This model of shipping dependencies yourself feels incredibly weird. What is preventing version conflicts then? Let's say your software uses LLAMA (with own Boost) and another version of Boost. What if alpaka uses a different Boost than LLAMA and you mix both? That sounds pretty dangerous to me. Getting external dependencies should be the purpose of package managers. What is preventing you from using a package manager? Or even more fundamental: Why can't you provide dependencies yourself (by any means)?

a direct Boost dependency is not possible until better full CMake support, HPC compiler coverage and modularization lands in boost or the modules we pull.

I want to understand this. Does that mean find_package(Boost 1.70.0 REQUIRED) does not work on your system? Because maybe find_package(Boost 1.70.0 REQUIRED COMPONENTS core mp11) could work then.
Boost is partially modularized. In vcpkg I can already download/install any Boost library individually. Most system package managers split header-only and compiled parts of Boost. What do you need from modularization? Smaller download sizes?
HPC compiler coverage is highly dependent on what parts of Boost LLAMA uses. I think the best we can do is just try it?

Luckily, some components can be used as standalone and provide such aspects, as you mentioned mp11 (you'll find PRs of ours on this in its history).

Thx for trying to fix the world! We need more such people!

Btw: for such discussions, please use the Discussions feature of GitHub in the future! I just enabled it for exactly such discussions!

Sorry, didn't know - maybe we can hint this in the CONTRIBUTING.md/README.md or via an issue template?

Don't be sorry, I just enabled it after seeing that you started a discussion in an issue ;)

@psychocoderHPC
Member

If your Boost version is newer than your CMake version, the imported CMake targets will not be available, because each Boost version is registered separately in CMake's FindBoost module. This requires workarounds like the ones we use in PIConGPU.
It could be that this issue is already solved in the latest CMake versions.

As @ax3l said, Boost is not tested with the HPC compilers hipcc, nvcc, and xl (see this list).
When I first tested PIConGPU with the XL compiler on OpenPOWER, IBM provided a fixed Boost version for me (it was not publicly available) because it was not possible to compile Boost with the XL compiler. For old Boost versions you can find patches some time after the release, but if you require one of the latest Boost versions you mostly have a problem.

Boost does not backport fixes, which often requires workarounds, e.g. the ones we use in PIConGPU.

On an HPC system, you mostly get the modules provided by the admin/vendor, often compiled with the compiler best supported by the vendor. Because of the compiler issues explained above, it can be hard to compile Boost on your own.
spack can sometimes help, but the problem of untested HPC compilers remains challenging.

I am a big fan of boost but absolutely understand why @ax3l is looking into reducing dependencies that are hard to maintain in the HPC world.

@ax3l
Member Author

ax3l commented Mar 6, 2021

This model of shipping dependencies yourself feels incredibly weird. What is preventing version conflicts then? Let's say your software uses LLAMA (with own Boost) and another version of Boost. What if alpaka uses a different Boost than LLAMA and you mix both?

When vendoring a library (either directly as a copy/subtree or via CMake FetchContent), one always adds a build switch to prefer a system/external library if requested. That way you get both developer productivity and package maintainability.
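A sketch of this pattern in CMake, assuming Boost.mp11 as the vendored dependency; the option name and the `llama` target are illustrative only, not LLAMA's actual build system:

```cmake
# Prefer a system-provided Boost.mp11 when requested,
# otherwise fall back to fetching a bundled copy.
option(LLAMA_USE_SYSTEM_MP11 "Use an externally provided Boost.mp11" ON)

if(LLAMA_USE_SYSTEM_MP11)
  find_package(boost_mp11 CONFIG QUIET)
endif()

if(NOT boost_mp11_FOUND)
  include(FetchContent)
  FetchContent_Declare(mp11
    GIT_REPOSITORY https://github.com/boostorg/mp11.git
    GIT_TAG boost-1.78.0)
  FetchContent_MakeAvailable(mp11)
endif()

target_link_libraries(llama INTERFACE Boost::mp11)
```

Package maintainers then set the option ON and supply their own Boost, while developers get a working build out of the box.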

@bernhardmgruber
Member

Just to add an example to the discussion, we are probably having a related issue right now with a project mixing two alpaka versions: alpaka-group/cupla#198

@psychocoderHPC
Member

Since there is an ongoing discussion about this in an alpaka issue:

@bernhardmgruber Is LLAMA using something else than mp11 from boost? Would it be possible to switch to the mp11 stand-alone version?

@bernhardmgruber
Member

Apart from mp11, LLAMA uses a couple of other Boost features that I would like to avoid replacing, like:

  • getting the hostname via boost.asio to display on the plots in some examples
  • hash functions to verify correct copies in the viewcopy example
  • container hash functions for the coloring in the mapping to SVG/HTML dumping code
  • demangling type names

Since the dumping code is not included by default, demangling is the only other Boost functionality that is strongly required. If Boost becomes an issue, we could find a workaround for this.

@ax3l
Member Author

ax3l commented Apr 21, 2021

Thanks for the summary. Dependencies in examples are not as critical: they might complicate smoke tests in new environments, but at least they do not propagate to dependents. (Adding a CMake option to disable example builds is always good to have then.)

@bernhardmgruber
Member

With #266 in, LLAMA now only uses:

  • boost::mp11
  • boost::hash
  • boost::core::demangle

@bernhardmgruber
Member

We recently acquired:

  • boost::atomic (for atomic_ref before C++20)
  • boost::iostreams (for memory mapped files in one example)

@bernhardmgruber
Member

I honestly see no way we can get rid of Boost. The library just provides too much value, and we absolutely lack the personpower to implement all the features ourselves. In some areas a C++ upgrade could help, but then we would need to cut CUDA support. For these reasons, I am closing this ticket as won't fix.

@bernhardmgruber bernhardmgruber closed this as not planned on Nov 15, 2022