Replies: 16 comments 21 replies
-
@shabble, @TommyMurphyTM1234, @AntonKrug, @LnnrtS, @maxgerhardt any thoughts?
-
Ideally each tool should be a separate package, but this is indeed a lot of work, at least initially; hence the proposal to start with all tools bundled in a super-package. The problem with the super-bundle is that it takes a long time to build, and sometimes updating a minor tool becomes quite impractical; at least this was the experience with the huge Docker images.
If you mean that having too many tools available at once, even when not needed, might create problems: in theory it may, but in practice this was not an issue with the huge Docker images, which include almost everything you can think of. The main disadvantage of bundling too many tools is the long build time.
-
I'm not sure of the use case you have in mind, but each application package has its own dependencies, and it is the responsibility of the author to select the proper versions for the dependencies. This does not mean that if you fork one of my projects you cannot experiment with other dependencies.
Sure, xpm guarantees reproducibility for a given set of dependencies; if you change them, you may get different results.
-
Why do you think that the details you are requesting are important for xpm? Things are quite layered, and xpm only cares about which versions of the dependencies must be linked to the project; for the rest, the build scripts should handle the details.
-
When binary tools are installed, xpm checks the SHA checksums of the downloaded archives, so I think this is covered.
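As a hedged illustration of the kind of integrity check involved (the archive here is a local stand-in, not a real xPack download):

```shell
# Create a stand-in "downloaded archive" and record its SHA-256 sum.
printf 'example payload' > archive.tar.gz
sha256sum archive.tar.gz > archive.tar.gz.sha

# Verify the archive against the recorded sum, as a package manager would.
check=$(sha256sum -c archive.tar.gz.sha)
echo "$check"
```

If the archive were tampered with, `sha256sum -c` would exit non-zero and report a mismatch instead of `OK`.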
-
The Nix package manager seems to be a Linux-only solution; xpm targets macOS and Windows too. I don't claim xpm is perfect, so if you have any suggestions to improve it with some Nix features, feel free to make them. However, directly incorporating Nix packages might not be realistic.
-
Right. Apart from isolating the system dependencies from the host, another major advantage is that it should allow running the build for, let's say, macOS 10.13 on a newer macOS. For Linux this is solved with Ubuntu 18 Docker images, and the build can happen on any newer system; but for macOS I have to maintain a dedicated machine with exactly macOS 10.13 for the Intel builds, and another one with macOS 11.x for the Apple Silicon builds, which is expensive and tedious.
-
Yes, if the build is performed on a recent Ubuntu, the resulting binaries will have references to a recent GLIBC (for example GLIBC 2.35 for Ubuntu 22) and will not run on older systems with older GLIBCs. Thus it is necessary to decide how far back in time we need to go. The current xPack Linux binaries are based on GLIBC 2.27, which is available in Ubuntu 18. For a comprehensive list of Linux distribution versions, please see:
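A quick way to see which GLIBC a given build machine would tie the binaries to (a sketch; works only on glibc-based Linux systems):

```shell
# Query the GLIBC version of the running system; binaries built here will
# require at most this version on the target machines.
glibc_version=$(getconf GNU_LIBC_VERSION)
echo "$glibc_version"
```

On an Ubuntu 22 build machine this prints something like `glibc 2.35`, which is exactly the compatibility floor the resulting binaries inherit.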
-
Although in theory cross-building is possible, in practice it is quite a challenge to build the Windows binaries on Linux, even though mingw-w64-gcc is a well established solution. For other cross-builds I would expect the need for lots of patches during configure, making such solutions impractical.
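A hedged sketch of such a Windows-on-Linux cross build (assumes the mingw-w64 packages are installed; the guard avoids failing where they are absent):

```shell
# A trivial program to cross-compile for Windows.
cat > hello.c <<'EOF'
#include <stdio.h>
int main(void) { printf("hello\n"); return 0; }
EOF

if command -v x86_64-w64-mingw32-gcc >/dev/null 2>&1; then
  # Produce a Windows PE executable on a Linux host.
  x86_64-w64-mingw32-gcc -o hello.exe hello.c
  result="hello.exe built"
else
  result="mingw-w64 cross compiler not installed"
fi
echo "$result"
```

The resulting `hello.exe` can then be smoke-tested with `wine hello.exe` on the same Linux host, avoiding a dedicated Windows machine.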
QEMU is a great tool, and I was able to run some Arm builds on x86_64, but the performance is very poor, so I ended up using Raspberry Pis, which are very nice small machines. Note that, due to some weird Arm compatibility issues, I had to use a separate Raspberry Pi OS 32-bit machine, since Docker got confused when asked to run in 32-bit mode on a 64-bit machine. So, for practical reasons, the Arm builds require 2 Arm machines, the macOS builds require 2 machines, and the Intel Linux & Windows builds can be done on Linux: 5 machines altogether. To these I added a separate Arm 64-bit machine used to run the tests. The need for it is a bit difficult to explain, but it has to do with the fact that sometimes the resulting binaries may have absolute references to shared libraries. If the tests run on the build machine, the shared libraries created during the build are available, but if they run on a separate machine, these missing references will break the tests.
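The missing-reference problem can be seen with `ldd`, which resolves a binary's shared libraries to absolute paths; a sketch (the binary inspected here is arbitrary):

```shell
# List the shared libraries /bin/sh resolves. Absolute paths that exist only
# on the build machine are exactly what breaks tests on a separate machine.
libs=$(ldd /bin/sh)
echo "$libs"
```

Any `=> /path/...` entry pointing into a build-only location (rather than into the relocatable bundle) is a latent failure on a clean test machine.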
-
I added some more details to the fictive configuration. The invoked scripts will probably be different, and the common definitions can be parametrised and written only once, but for a preview this should be fine.
-
Garrett @0Grit, any thoughts on this?
-
PS: I am still a daily user of the ARM toolchain xpack.
-
My brief 2 cents about it:
@shabble You can use CMake presets for this. Just define a preset with all the common build properties and a specific one for every platform that sets a dedicated toolchain and inherits from the base preset. Then, for every platform preset:

```sh
cmake --preset p
cmake --build --preset p
ctest --preset p
```

This is basically the same as the xpm "configurations", but using an industry standard format.
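A hedged sketch of such a `CMakePresets.json` (the preset names, generator, and toolchain path are illustrative, not taken from the original post):

```json
{
  "version": 3,
  "configurePresets": [
    {
      "name": "base",
      "hidden": true,
      "generator": "Ninja",
      "binaryDir": "${sourceDir}/build/${presetName}"
    },
    {
      "name": "linux-x64",
      "inherits": "base",
      "toolchainFile": "${sourceDir}/cmake/toolchain-linux-x64.cmake"
    }
  ],
  "buildPresets": [
    { "name": "linux-x64", "configurePreset": "linux-x64" }
  ],
  "testPresets": [
    { "name": "linux-x64", "configurePreset": "linux-x64" }
  ]
}
```

Each additional platform then only needs one more entry per section, inheriting everything else from `base`.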
-
That would be nice, but not only are they out of stock, their machines also have only 2 GB of RAM. Oracle might be more interesting, but, as far as I know, they do not provide free machines for open source projects; $73/month. :-( MacStadium has a special offer for open source projects, so my Apple Silicon build box is in Vegas. All the others are in my office.
-
A quick update: there are 3 new binary packages. Those three, plus the older packages (like gcc, clang, cmake, ninja, etc.), should be able to offer the functionality previously provided by the XBB v3.4 Docker images (and macOS archives).
Generally, all Perl-based projects suffer from this design issue and cannot be used directly in relocatable projects. In some cases it is possible to patch them and replace the absolute path with the program name, thus preserving the functionality as long as the referred program is in the PATH. The next step is to update the existing projects to use these binary xPacks and remove the dependency on XBB v3.4.
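A minimal sketch of that kind of patch, replacing an absolute interpreter path in a shebang with a PATH lookup (the file name and the `/opt/xbb` path are made up for illustration):

```shell
# Create a stand-in Perl script with a hard-coded interpreter path.
printf '#!/opt/xbb/bin/perl\nprint "hello\\n";\n' > tool.pl

# Rewrite the shebang so the interpreter is found via the PATH instead.
sed -i.bak '1s|^#!.*/perl$|#!/usr/bin/env perl|' tool.pl

new_shebang=$(head -n 1 tool.pl)
echo "$new_shebang"
```

After the patch the script runs on any machine where `perl` is in the PATH, regardless of where the bundle was unpacked.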
-
@eraus, if you want to understand the current status of the xPack Build Box, perhaps you can also take a look at this discussion.
-
Building binaries for multiple platforms (Windows/macOS/Linux) proved to be a major challenge, and the xPack Build Box was an attempt to find a solution.
Purpose
As documented at https://xpack.github.io/xbb/, the xPack Build Box is an elaborate build environment focused on obtaining reproducible builds while building cross-platform standalone binaries for GNU/Linux, macOS and Windows.
The first clients of XBB were the binary xPacks.
Requirements
The resulting binaries have to meet two main criteria:
To meet the first criterion, each resulting binary archive should include all required libraries, which usually means compiling them from source and adjusting the run path so they can be found.
The second criterion is more difficult to meet, since it requires the actual build to be performed on a slightly older version of an operating system, like Ubuntu 18 for the Linux binaries, or macOS 10.13 for the Intel macOS binaries.
Older operating systems also mean older compilers and other older tools, which sometimes makes building modern packages impossible.
Challenge
The main challenge of XBB is to provide the new compilers and tools required for compiling modern sources, but in the environment of an older version of the supported operating systems.
The current monolithic solution
XBB v3.x achieves these goals with a monolithic solution, implemented as a large Docker image on Linux and, similarly, a large folder on macOS.
To be accurate, there are 3 such Docker images (x86_64, armhf, aarch64) and 2 archives for macOS (x86_64 and aarch64).
Initially these images were meant to include only a more recent GCC, but they grew to include lots of other tools, to the point of becoming almost unmanageable due to the very long build time.
Moreover, at 5+ GB, using them in CI environments is not very efficient.
The proposed modular solution
The new solution should have lighter prerequisites and be more flexible, allowing individual tools to be updated without rebuilding the entire environment.
Since the xPack project is part of the Node.js ecosystem, the minimum prerequisite should be a recent node. This requirement can easily be met on both Linux & macOS.
For reproducible builds, Docker images with node can be easily created; updating them with new node versions is also relatively easy.
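A hedged sketch of such an image (the base image and node version below are illustrative choices, not the project's actual ones):

```dockerfile
# Illustrative only: pin both the OS and the node version for reproducibility.
FROM ubuntu:18.04

RUN apt-get update && apt-get install -y --no-install-recommends \
    ca-certificates curl xz-utils \
    && rm -rf /var/lib/apt/lists/*

# Pin an exact node version; updating it is a one-line change.
ARG NODE_VERSION=16.20.2
RUN curl -fsSL "https://nodejs.org/dist/v${NODE_VERSION}/node-v${NODE_VERSION}-linux-x64.tar.xz" \
    | tar -xJ -C /usr/local --strip-components=1

# xpm itself is a regular npm module.
RUN npm install -g xpm
```

Bumping node then means rebuilding only this thin image, not a multi-gigabyte toolbox.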
Binary dependencies
With node available, the compilers and all other tools required for a build can be defined as binary dependencies and loaded with xpm.
A fictive configuration might look like:
Note: the definitions can be factorised and parametrised, but for readability reasons are presented in expanded form.
The commands to run a build are simple:
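The configuration and command snippets did not survive extraction here. As a hedged illustration only (the package names, versions, and the exact schema location vary across xpm releases), the binary tool dependencies are declared in `package.json`, roughly like:

```json
{
  "name": "hello-build",
  "version": "0.1.0",
  "xpack": {
    "devDependencies": {
      "@xpack-dev-tools/cmake": "3.23.5-1.1",
      "@xpack-dev-tools/ninja-build": "1.11.1-2.1",
      "@xpack-dev-tools/gcc": "12.2.0-2.1"
    },
    "actions": {
      "build": "cmake --preset default && cmake --build --preset default"
    }
  }
}
```

With such a file in place, `xpm install` downloads the pinned binary tools into the project's `xpacks/` folder, and `xpm run build` invokes the action with those tools on the PATH.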
Other tools
However, in practice things are not that simple, since a build, in addition to obvious dependencies like the compiler, cmake or ninja, also needs several other tools.
In addition, since the builds should pass on any Linux or macOS system, the versions of these other tools must also be controlled.
An incomplete list of such tools, currently included in the monolithic XBB is:
Action points
Migrating from the current monolithic XBB to binary xPacks cannot be done overnight, and requires several steps.
There are two main obstacles:
Creating binary xPacks with all these tools can be done, but this would also take two steps:
Create xbb-bootstrap-xpack
To save a step in creating these new binary xPacks, an intermediate binary xPack with all these tools is proposed.
These bootstrap packages (one for each supported platform) might also be relatively large, but hopefully not as large as the existing Docker images, and building them should not be that tedious.
Update the build scripts
With this bootstrap binary xPack available, it should be possible to migrate the existing binary packages to the new xPack builds, and finally retire the large Docker images.
Create new binary xPacks
Once the structure of the new build scripts is proven functional, new binary xPacks with the required tools can be created, and these tools removed from xbb-bootstrap-xpack.
Once all binary xPacks no longer depend on xbb-bootstrap-xpack, it can be retired.
Details that require attention
Although the above plan seems realistic, there are a few details that need to be considered.
Windows cross builds
The xPack design generally provides platform-neutral solutions, which means user applications should build successfully on all platforms, including Windows.
However, some tools might not be Windows friendly, and building the Windows binaries on Windows might not be convenient, or even possible; thus, at least in the initial step, Windows binaries will be cross-compiled on Linux, with `mingw-gcc`, and tested with `wine`.
macOS builds
macOS builds cannot benefit from Docker images, and will have to run natively, on the required minimum system (like 10.13 for Intel macOS).
An alternate solution, to be considered for a future release, would be to use `chroot` into a folder with a minimalistic system where node is installed, and run the builds inside this sandbox.