November 10 2024

The peculiar world of Gentoo package testing

Michał Górny (mgorny) November 10, 2024, 14:33

While discussing uv tests with Fedora developers, it occurred to me how different your average Gentoo testing environment is — not only from those used upstream, but also from those used by other Linux distributions. This article will be dedicated exactly to that: to pointing out how it’s different, what that implies, and why I think it’s not a bad thing.

Gentoo as a source-first distro

The first important thing about Gentoo is that it is a source-first distribution. The best way to explain this is to compare it with your average “binary” distribution.

In a “binary” distribution, source and binary packages are somewhat isolated from one another. Developers work with source packages (recipes, specs) and use them to build binary packages — either directly, or through automation. Then the binary packages hit the repositories. The end users usually do not interface with the sources at all, and may well not even be aware that such a thing exists.

In Gentoo, on the other hand, source packages are the first-class citizens. All users use source repositories, and can optionally use local or remote binary package repositories. I think the best way of thinking about binary packages is: as a form of “cache”.

If the package manager is configured to use binary packages, it attempts to find a package that matches the build parameters — the package version, USE flags, dependencies. If it finds a match, it can use it. If it doesn’t, it just proceeds with building from source. If configured to do so, it may write a binary package as a side effect of that — almost literally cache it. It can also be set to create a binary package without installing it (pre-fill the “cache”). It should hardly surprise anyone at this point that the default local binary package repository lives under the /var/cache tree.

A side implication of this is that the binary packages provided by Gentoo are a subset of all packages available — and on top of that, only a small number of viable package configurations are covered by the official packages.

The build phases

The source build in Gentoo is split into a few phases. The central phases that are of interest here are largely inspired by how autotools-based packages were built. These are:

  1. src_configure — meant to pass input parameters to the build system, and get it to perform necessary platform checks. Usually involves invoking a configure script, or an equivalent action of a build system such as CMake, Meson or another.
  2. src_compile — meant to execute the bulk of compilation, and leave the artifacts in the build tree. Usually involves invoking a builder such as make or ninja.
  3. src_test — meant to run the test suite, if the user wishes testing to be done. Usually involves invoking the check or test target.
  4. src_install — meant to install the artifacts and other files from the work directory into a staging directory (not the live system). The files can be afterwards transferred to the live system and/or packed into a binary package. Usually involves invoking the install target.

Clearly, it’s very similar to how you’d compile and install software yourself: configure, build, optionally test before installing, and then install.

Of course, this process is not really one-size-fits-all. For example, modern Python packages no longer even try to fit into it. Instead, we build the wheel in the PEP 517 black-box manner, and install it into a temporary directory right in the compile phase. As a result, the test phase is run against a locally-installed package (relying on the logic behind virtual environments), and the install phase merely moves files around for the package manager to pick up.

The implications for testing

The key takeaways of the process are these:

  1. The test phase is run inside the working tree, against a package that was just built but not installed into the live system.
  2. All the package’s build-time dependencies should be installed into the live system.
  3. However, the system may contain any other packages, including packages that could affect the just-built package or its test suite in unpredictable ways.
  4. As a corollary, the live system may or may not contain a copy of the package in question already installed. And if it does, it may be a different version, and/or a different build configuration.

All of these mean trouble. Sometimes random packages will cause the tests to fail as false positives — and sometimes they will also make them wrongly pass or get skipped. Sometimes already-installed packages will prevent developers from noticing that they’ve missed a dependency. Often mismatches between installed packages will make reproducing issues hard. On top of that, sometimes an earlier installed copy of the package will leak into the test environment, causing confusing problems.

If there are so many negatives, why do we do it then? Because there is also a very important positive: the packages are tested as close to the production environment as possible (short of actually installing them — but we want to test before that happens). The presence of a certain package may cause tests to fail as false positives — but it may also uncover an actual runtime issue, one that would not otherwise be caught until it broke production. And I’m not talking theory here. While I don’t have any links handy right now, over and over again we have hit real issues — either ones that hadn’t been caught by upstream CI setups yet, or ones that simply couldn’t have been caught in an idealized test environment.

So yeah, testing stuff this way may be quite a pain, and a source of huge frustration with the constant stream of false positives. But it’s also an important strength that no idealized — not to say “lazy” — test environment can bring. Add to that the fact that a fair number of Gentoo users are actually installing their packages with tests enabled, and you get testing on a huge variety of systems, with different architectures, dependency versions and USE flags, configuration files… and on top of that, a knack for hacking. Yeah, people hate us for finding all these bugs they’d rather not hear about.

November 09 2024

Ready-to-boot, fresh & experimental Gentoo QCOW2 disk images

Andreas K. Hüttel (dilfridge) November 09, 2024, 0:46

Recently I've been experimenting with Catalyst, the tool that generates stages and ISO files for Gentoo's Release Engineering team. The first, still very experimental result is now available for download - a bootable hard disk image in QEMU's qcow2 format that immediately drops you into a fully working Gentoo environment.

Feel free to download it and try it out, either this first upload or any future weekly build from the amd64 release file directories. The files are not linked on the www.gentoo.org webserver since I consider them not really finished yet, but instead experimental and under development. You can use a QEMU command line such as, for example,

qemu-system-x86_64 \
       -m 8G -smbios type=0,uefi=on -bios /usr/share/edk2-ovmf/OVMF_CODE.fd \
       -smp 4 -cpu host -accel kvm -vga virtio -drive file=di.qcow2 &

where the last "file" argument specifies the file that you downloaded, for testing.

The current download initially does not start any network login services such as sshd, but has an empty root password for logging in on the console - this is why I call it a "console" type disk image. Future variants I'm planning include for example a "cloud-init" type, which sets up log-in credentials and further configuration as supplied by a cloud provider.

Cheers and enjoy!

October 23 2024

DTrace 2.0 for Gentoo

Gentoo News (GentooNews) October 23, 2024, 5:00

The real, mythical DTrace comes to Gentoo! Need to dynamically trace your kernel or userspace programs, with rainbows, ponies, and unicorns - and all entirely safely and in production?! Gentoo is now ready for that! Just emerge dev-debug/dtrace and you’re all set. All required kernel options are already enabled in the newest stable Gentoo distribution kernel; if you are compiling manually, the DTrace ebuild will inform you about required configuration changes. Internally, DTrace 2.0 for Linux builds on the BPF engine of the Linux kernel, so don’t be surprised if the awesome cross-compilation features of Gentoo are used to install a gcc that outputs BPF code (which, btw, also comes in very handy for sys-apps/systemd).

Documentation? Sure, there’s lots of it. You can start with our DTrace wiki page, the DTrace for Linux page on GitHub, or the original documentation for Illumos. Enjoy!

The DTrace Ponycorn

October 07 2024

Arm Ltd. provides fast Ampere Altra Max server for Gentoo

Gentoo News (GentooNews) October 07, 2024, 5:00

We’re very happy to announce that Arm Ltd. and specifically its Works on Arm team has sent us a fast Ampere Altra Max server to support Gentoo development. With 96 Armv8.2+ 64bit cores, 256 GByte of RAM, and 4 TByte NVMe storage, it is now hosted together with some of our other hardware at OSU Open Source Lab. The machine will be a clear boost to our future arm64 (aarch64) and arm (32bit) support, via installation stage builds and binary packages, architecture testing of Gentoo packages, as well as our close work with upstream projects such as GCC and glibc. Thank you!

Arm Ltd. logo

October 04 2024

Testing the safe time64 transition path

Michał Górny (mgorny) October 04, 2024, 13:54

Recently I’ve been elaborating on the perils of transition to 64-bit time_t, following the debate within Gentoo. Within these deliberations, I have also envisioned potential solutions to ensure that production systems could be migrated safely.

My initial ideas involved treating time64 as a completely new ABI, with a new libdir and forced incompatibility between binaries. This ambitious plan faced two disadvantages. Firstly, it required major modifications to various toolchains, and secondly, it raised compatibility concerns between Gentoo (and other distributions that followed this plan) and distributions that switched before or were going to switch without making similar changes. Effectively, it would not only require a lot of effort from us, but also a lot of convincing other people, many of whom probably don’t want to spend any more time on doing extra work for 32-bit architectures. This made me consider alternative ideas.

One of them was to limit the changes to the transition period — use a libt32 temporary library directory to prevent existing programs from breaking while rebuilds were performed, then simply remove it, and be left with plain lib like the other distributions that have already switched. In this post, I’d like to elaborate on how I went about testing the feasibility of this solution. Please note that this is not a migration guide — it includes steps that are meant to detect problems with the approach, and are not suitable for production systems.

Preparing to catch time32/time64 mixing

As I’ve explained before, the biggest risk during the transition is accidental mixing of time32 and time64 binaries. In the worst case, it could mean not only breaking programs running on production, but actively creating vulnerabilities via out-of-bounds accesses. Therefore, I believe it is crucial to ensure that no such thing happens throughout the migration.

My first step towards testing the migration process was to create an ABI mixing check that would be injected into executables. I’ve placed the following code into /usr/include/__gentoo_time.h:

#include <stdio.h>
#include <stdlib.h>

__attribute__((weak))
__attribute__((visibility("default")))
struct {
	int time32;
	int time64;
} __gentoo_time_bits;

__attribute__((constructor))
static void __gentoo_time_check() {
#if _TIME_BITS == 64
#error "not now"
	__gentoo_time_bits.time64 = 1;
#else
	__gentoo_time_bits.time32 = 1;
#endif

	if (__gentoo_time_bits.time32 && __gentoo_time_bits.time64) {
		FILE *f;
		fprintf(stderr, "time32 and time64 ABI mixing detected\n");
		/* trigger a sandbox failure for good measure too */
		f = fopen("/time32-time64-mixing", "w");
		if (f)
			fclose(f);
		abort();
	}
}

Then, I have added the following line to /usr/include/time.h, just above __BEGIN_DECLS:

#include <__gentoo_time.h>

Now, this meant that any binary including <time.h>, even indirectly, would get our check. In fact, the check would probably be duplicated a lot, but that’s not really a problem for the test system.

The check itself utilizes a bit of magic. It creates a weak __gentoo_time_bits structure that would be shared between the executable itself and all loaded libraries. Every binary would run the constructor function upon loading; it would store its own _TIME_BITS value within the shared structure, and then ensure that no binary set the other value. If that did happen, it would not only cause the program to immediately abort, but also try to trigger a sandbox failure, so the package build would be considered failed even if the build system ignored that particular failure.

However, note the #error in the snippet. This is a temporary hack to block packages that automatically try to use -D_TIME_BITS=64 (e.g. coreutils, grep, man-db), as they would trigger the check prematurely, and as a false positive.

At this point, I rebuilt the whole system, except for glibc, to inject the check into as many time32 binaries as possible:

emerge -ve --exclude=sys-libs/glibc --keep-going=y --jobs=16 @world

A number of packages fail here because they attempt to force -D_TIME_BITS=64. This is okay: we don’t need perfect coverage, and we definitely don’t want false positives.

Preparing for the transition

The next step is to actually prepare for the transition. The preparation involves two changes, to all packages except for sys-libs/glibc:

  1. Moving all libraries from lib to libt32.
  2. Injecting libt32 directories into the RUNPATH of all binaries, executables and libraries alike.

This is done using a tool called time32-prep. It takes care of finding all potential libdirs from ld.so, setting RUNPATH on binaries (and removing any references to plain lib, while at it), and then moving the libraries.

Rebuilding everything

The next step is to configure the system to compile time64 binaries by default. For a start, I have added the following snippet to make.conf, to easily distinguish packages that were rebuilt:

CHOST="i686-pc-linux-gnut64"
CHOST_x86="i686-pc-linux-gnut64"

I’ve rebuilt the dependencies of GCC using time64 flags explicitly:

CFLAGS="-D_FILE_OFFSET_BITS=64 -D_TIME_BITS=64" emerge -1v sys-apps/sandbox dev-libs/{gmp,mpfr,mpc} sys-libs/zlib app-arch/{xz-utils,zstd}

Rebuilt and switched binutils:

emerge -1v sys-devel/binutils
binutils-config 1

Then, I’ve added a user patch to make GCC default to time64:

--- a/gcc/c-family/c-cppbuiltin.cc
+++ b/gcc/c-family/c-cppbuiltin.cc
@@ -1560,6 +1560,9 @@ c_cpp_builtins (cpp_reader *pfile)
     builtin_define_with_int_value ("_FORTIFY_SOURCE", GENTOO_FORTIFY_SOURCE_LEVEL);
 #endif
 
+  cpp_define (pfile, "_FILE_OFFSET_BITS=64");
+  cpp_define (pfile, "_TIME_BITS=64");
+
   /* Misc.  */
   if (flag_gnu89_inline)
     cpp_define (pfile, "__GNUC_GNU_INLINE__");

And rebuilt GCC itself (without time64 flags):

USE=-sanitize emerge -v sys-devel/gcc
gcc-config 1

Note that I had to disable the sanitizers, as they currently fail to build with _TIME_BITS=64. I also had to comment out the __gentoo_time.h include for the duration of the GCC build.

The final step was to rebuild all packages (except for GCC and glibc) with the new compiler:

emerge -ve --exclude=sys-libs/glibc --exclude=sys-devel/{binutils,gcc} --jobs=16 --keep-going=y @world

The results

Well, I have some bad news — at some point, the rebuilds started failing. However, all the failures I’ve hit during the initial testing can be attributed to something relatively harmless: Perl and Python extensions.

Long story short, since they are installed into a dedicated directory, they can’t be prevented from ABI mixing via the libt32 hack. However, that’s unlikely to be a real problem. They failed for me because I’ve made ABI mixing absolutely fatal — but in reality, only private parts of the Python API use time_t, and these should not be used by any third-party extensions. And in the end, the issues were resolved by rebuilding in a different order.

Next steps

While this could be considered an important success, we’re still a long way from being ready to go full time64. The time32-prep tool itself has a few TODOs, and definitely needs testing on a more “production-like” system. Then, there are actual problems that the packages are facing on time64 setups (like the GCC build failure in sanitizers), and these need to be fixed before we make things official.

Recently I’ve been elaborating on the perils of transition to 64-bit time_t, following the debate within Gentoo. Within these deliberations, I have also envisioned potential solutions to ensure that production systems could be migrated safely.

My initial ideas involved treating time64 as a completely new ABI, with a new libdir and forced incompatibility between binaries. This ambitious plan faced two disadvantages. Firstly, it required major modification to various toolchains, and secondly, it raised compatibility concerns between Gentoo (and other distributions that followed this plan) and distributions that switched before or were going to switch without making similar changes. Effectively, it would not only require a lot of effort from us, but also a lot of convincing other people, many of whom probably don’t want to spend any more time on doing extra work for 32-bit architectures. This made me consider alternative ideas.

One of them was to limit the changes to the transition period — use a libt32 temporary library directory to prevent existing programs from breaking while rebuilds were performed, and then simply remove them, and be left with plain lib like other distributions that switched already. In this post, I’d like to elaborate how I went about testing the feasibility of this solution. Please note that this is not a migration guide — it includes steps that are meant to detect problems with the approach, and are not suitable for production systems.

Preparing to catch time32/time64 mixing

As I’ve explained before, the biggest risk during the transition is accidental mixing of time32 and time64 binaries. In the worst case, it could mean not only breaking programs running on production, but actively creating vulnerabilities via out-of-bounds accesses. Therefore, I believe it is crucial to ensure that no such thing happens throughout the migration.

My first step towards testing the migration process was to create an ABI mixing check that would be injected into executables. I’ve placed the following code into /usr/include/__gentoo_time.h:

#include <stdio.h>
#include <stdlib.h>

__attribute__((weak))
__attribute__((visibility("default")))
struct {
	int time32;
	int time64;
} __gentoo_time_bits;

__attribute__((constructor))
static void __gentoo_time_check() {
#if _TIME_BITS == 64
#error "not now"
	__gentoo_time_bits.time64 = 1;
#else
	__gentoo_time_bits.time32 = 1;
#endif

	if (__gentoo_time_bits.time32 && __gentoo_time_bits.time64) {
		FILE *f;
		fprintf(stderr, "time32 and time64 ABI mixing detected\n");
		/* trigger a sandbox failure for good measure too */
		f = fopen("/time32-time64-mixing", "w");
		if (f)
			fclose(f);
		abort();
	}
}

Then, I have added the following line to /usr/include/time.h, just above __BEGIN_DECLS:

#include <__gentoo_time.h>

Now, this meant that any binary including <time.h>, even indirectly, would get our check. In fact, the check would probably be duplicated a lot, but that’s not really a problem for the test system.

The check itself utilizes a bit of magic. It creates a weak __gentoo_time_bits structure that would be shared between the executable itself and all loaded libraries. Every binary would run the constructor function upon loading, and it would fits store its own _TIME_BITS value within the shared structure, and then ensure that no binary set the other value. If that did happen, it would not only cause the program to immediately abort, but also try to trigger a sandbox failure, so the package build would be considered failed even if the build system ignored that particular failure.

However, note the #error in the snippet. This is a temporary hack to block packages that automatically try to use -D_TIME_BITS=64 (e.g. coreutils, grep, man-db), as they would trigger the check prematurely, and as a false positive.

At this point, I did rebuild the whole system, except for glibc, to inject the check into as many time32 binaries as possible:

emerge -ve --exclude=sys-libs/glibc --keep-going=y --jobs=16 @world

A number of packages fail here, because they attempt to force -D_TIME_BITS=64. This is okay, we don’t need perfect coverage, and we definitely don’t want false positives.

Preparing for the transition

The next step is to actually prepare for the transition. The preparation involves two changes, to all packages except for sys-libs/glibc:

  1. Moving all libraries from lib to libt32.
  2. Injecting libt32 directories into RUNTIME of all binaries, executables and libraries alike.

This is done using a tool called time32-prep. It takes care of finding all potential libdirs from ld.so, setting RUNPATH on binaries (and removing any references to plain lib, while at it), and then moving the libraries.

Rebuilding everything

The next step is to configure the system to compile time64 binaries by default. For a start, I have added the following snippet to make.conf, to easily distinguish packages that were rebuilt:

CHOST="i686-pc-linux-gnut64"
CHOST_x86="i686-pc-linux-gnut64"

I’ve rebuilt the dependencies of GCC using time64 flags explicitly:

CFLAGS="-D_FILE_OFFSET_BITS=64 -D_TIME_BITS=64" emerge -1v sys-apps/sandbox dev-libs/{gmp,mpfr,mpc} sys-libs/zlib app-arch/{xz-utils,zstd}

Rebuilt and switched binutils:

emerge -1v sys-devel/binutils
binutils-config 1

Then, I’ve added a user patch to make GCC default to time64:

--- a/gcc/c-family/c-cppbuiltin.cc
+++ b/gcc/c-family/c-cppbuiltin.cc
@@ -1560,6 +1560,9 @@ c_cpp_builtins (cpp_reader *pfile)
     builtin_define_with_int_value ("_FORTIFY_SOURCE", GENTOO_FORTIFY_SOURCE_LEVEL);
 #endif
 
+  cpp_define (pfile, "_FILE_OFFSET_BITS=64");
+  cpp_define (pfile, "_TIME_BITS=64");
+
   /* Misc.  */
   if (flag_gnu89_inline)
     cpp_define (pfile, "__GNUC_GNU_INLINE__");

And rebuilt GCC itself (without time64 flags):

USE=-sanitize emerge -v sys-devel/gcc
gcc-config 1

Note that I had to disable sanitizers, as they currently fail to build with _TIME_BITS=64. I also had to comment out the __gentoo_time.h include for the time of building GCC.

The final step was to rebuild all packages (except for GCC and glibc) with the new compiler:

emerge -ve --exclude=sys-libs/glibc --exclude=sys-devel/{binutils,gcc} --jobs=16 --keep-going=y @world

The results

Well, I have some bad news — at some point, the rebuilds started failing. However, it seems that all failures I’ve hit during the initial testing can be accounted for as something relatively harmless — Perl and Python extensions.

Long story short, since they are installed into a dedicated directory, they can’t be protected from ABI mixing via the libt32 hack. However, that’s unlikely to be a real problem. They failed for me because I’ve made ABI mixing absolutely fatal — but in reality only private parts of the Python API use time_t, and these should not be used by any third-party extensions. In the end, the issues are resolved by rebuilding in a different order.

Next steps

While this could be considered an important success, we’re still a long way from being ready to go full time64. The time32-prep tool itself has a few TODOs, and definitely needs testing on a more “production-like” system. Then, there are actual problems that packages face on time64 setups (like the GCC build failure in sanitizers), and these need to be fixed before we make things official.

September 28 2024

The perils of transition to 64-bit time_t

Michał Górny (mgorny) September 28, 2024, 15:44

(please note that there’s a correction at the bottom)

In the Overview of cross-architecture portability problems, I dedicated a section to the problems resulting from the use of the 32-bit time_t type. This design decision, still affecting Gentoo systems using glibc, means that 32-bit applications will suddenly start failing in horrible ways in 2038: they will be getting a -1 error instead of the current time, and they won’t be able to stat() files. In a word: complete mayhem will ensue.

There is a general agreement that the way forward is to change time_t to a 64-bit type. Musl has already switched to that, glibc supports it as an option. A number of other distributions such as Debian have taken the leap and switched. Unfortunately, source-based distributions such as Gentoo don’t have it that easy. So we are still debating the issue and experimenting, trying to figure out a maximally safe upgrade path for our users.

Unfortunately, that’s nowhere near trivial. Above all, we are talking about a breaking ABI change. It’s all-or-nothing: if a library uses time_t in its API, everything linking to it needs to use the same type width. In this post, I’d like to explore the issue in detail — why it is so bad, and what we can do to make it safer.

Going back to Large File Support

Before we get into the time64 change, as I’m going to shortly call it, we need to go back in history a bit and consider another similar problem: Large File Support.

Long story short, 32-bit architectures originally specified two important file-related types as 32 bits wide: off_t, used to specify file offsets (signed, to support relative offsets), and ino_t, used to specify inode numbers. This had two implications: you couldn’t open files larger than 2 GiB, and you couldn’t open files whose inode numbers exceeded the 32-bit unsigned integer range.

To resolve this problem, Large File Support was introduced. It involved replacing these two types with 64-bit variants, and on glibc it is still optional today. In its case, we didn’t take the leap and transition globally. Instead, packages generally started enabling LFS support upstream — also taking care to resolve any ABI breakage in the process. While many packages did that, we shouldn’t consider the problem solved.

The important point here is that time64 support in glibc requires LFS to be used. This makes sense — if we are going to break stuff, we may as well solve both problems.

What ABIs are we talking about?

To put it simply, we have three possible sub-ABIs here:

  1. the original ABI with 32-bit types,
  2. LFS: 64-bit off_t and ino_t, 32-bit time_t,
  3. time64: LFS + 64-bit time_t.

What’s important here is that a single glibc build remains compatible with all three variants. However, libraries that use these types in their API are not.

Today, 32-bit systems roughly use a mix of the first and second ABI — the latter including packages that enabled LFS explicitly. For the future, our goal is to focus on the third option. We are not concerned about providing full-LFS systems with 32-bit time_t.

Why is the ABI change so bad?

Now, the big deal is that we are replacing a 32-bit type with a 64-bit type, in place. Unlike with LFS, glibc does not provide any transitional API that could be used to enable new functions while preserving backwards compatibility — it’s all-or-nothing.

Let’s consider structures. If a structure contains time_t with its natural 32-bit alignment, then there’s no padding for the type to extend into. Inevitably, the subsequent fields will have to shift to make room for the new type. Let’s consider a trivial example:

struct {
    int a;
    time_t b;
    int c;
};

With 32-bit time_t, the offset of c is 8. With the 64-bit type, it’s 16. If you mix binaries using different time_t widths, they are inevitably going to read or write the wrong fields! Or perhaps even read or write out of bounds!

Let’s just look at the size of struct stat, as an example of a structure that uses both file- and time-related types. On plain 32-bit x86 glibc, it’s 88 bytes long. With LFS, it’s 96 bytes long (the size and inode number fields are expanded). With LFS + time64, it’s 108 bytes long (the three timestamps are expanded).

However, you don’t even need to use structures. After all, we are talking about x86, where function parameters are passed on the stack. If one of the parameters is time_t, then the stack positions of the parameters following it change, and we find ourselves facing the exact same problem! Consider the following prototype:

extern void foo(int a, time_t b, int c);

Let’s say we’re calling it as foo(1, 2, 3). With 32-bit types, the call looks like the following:

	pushl	$3
	pushl	$2
	pushl	$1
	call	foo@PLT

However, with 64-bit time_t, it changes to:

	pushl	$3
	pushl	$0
	pushl	$2
	pushl	$1
	call	foo@PLT

An additional 32-bit value (zero) is pushed between the “old” b and c. Once again, if we mix both kinds of binaries, they are going to fail to read the parameters correctly!

So yeah, it’s a big deal. And right now, there are no real protections in place to prevent mixing these ABIs. So what you may actually get is runtime breakage, potentially going as far as to create security issues.

You don’t have to take my word for it. You can reproduce it yourself on x86/amd64 easily enough. Let’s take the more likely case of a time32 program linked against a library that has been rebuilt for time64:

$ cat >libfoo.c <<EOF
#include <stdio.h>
#include <time.h>

void foo(int a, time_t b, int *c) {
   printf("a = %d\n", a);
   printf("b = %lld\n", (long long) b);
   printf("%s", ctime(&b));
   printf("c = %d\n", *c);
}
EOF
$ cat >foo.c <<EOF
#include <stddef.h>
#include <time.h>

extern void foo(int a, time_t b, int *c);

int main() {
    int three = 3;
    foo(1, time(NULL), &three);
    return 0;
}
EOF
$ cc -m32 libfoo.c -shared -o libfoo.so
$ cc -m32 foo.c -o foo -Wl,-rpath,. libfoo.so
$ ./foo
a = 1
b = 1727154919
Tue Sep 24 07:15:19 2024
c = 3
$ cc -m32 -D_FILE_OFFSET_BITS=64 -D_TIME_BITS=64 \
  libfoo.c -shared -o libfoo.so
$ ./foo 
a = 1
b = -34556652301432063
Thu Jul 20 06:16:17 -1095054749
c = 771539841

On top of that, the source-first nature of Gentoo amplifies these problems. An average binary distribution rebuilds all binary packages — and then the user upgrades the system in a single, relatively atomic step. Sure, if someone uses third-party repositories or has locally built programs that link to system libraries, problems can emerge, but the process is relatively safe.

On the other hand, in Gentoo we are talking about rebuilding @world while breaking the ABI in place. For a start, we are talking about prolonged periods of time between two packages being rebuilt, during which they would actually be mixing incompatible ABIs. Then, there is a fair risk that some rebuild will fail and leave your system half-transitioned, with no easy way out. Then, there is a real risk that cyclic dependencies will actually make the rebuild impossible — rebuilding a dependency will break build-time tools, preventing stuff from being rebuilt. It’s a true horror.

What can we do to make it safer?

Our deliberations currently revolve around three ideas that are semi-related, though not strictly dependent on one another:

  1. Changing the platform tuple (CHOST) for the new ABIs, to clearly distinguish them from the baseline 32-bit ABI.
  2. Changing the libdir for the new ABIs, effectively permitting the rebuilt libraries to be installed independently of the original versions.
  3. Introducing a binary-level ABI distinction that could prevent binaries using different sub-ABIs from being linked to one another.

The subsequent sections will focus on each of these changes in detail. Note that all the values used there are just examples, and not necessarily the strings used in a final solution.

The platform tuple change

The platform tuple (generally referenced through the CHOST variable) identifies the platform targeted by the toolchain. For example, it is used as a part of GCC/binutils install paths, effectively allowing toolchains for multiple targets to be installed simultaneously. In clang, it can be used to switch between supported cross-compilation targets, and can control the defaults to match the specified ABI. In Gentoo, it is also used to uniquely identify ABIs for the purpose of multilib support. Because of that, we require that no two co-installable ABIs share the same tuple.

A tuple consists of four parts, separated by hyphens: architecture, vendor, operating system and libc. Of these, vendor is generally freeform but the other three are restricted to some degree. A few semi-equivalent examples of tuples used for 32-bit x86 platform include:

i386-pc-linux-gnu
i686-pc-linux-gnu
i686-unknown-linux-gnu

Historically, two approaches were used to introduce new ABIs: either the vendor field was changed, or an additional ABI specification was appended to the libc field. For example, Gentoo historically used two different kinds of tuples for ARM ABIs with a hardware floating-point unit:

armv7a-hardfloat-linux-gnueabi
armv7a-unknown-linux-gnueabihf

The former approach was used earlier, to avoid incompatibility problems resulting from altering other tuple fields. However, as these were fixed and upstreams normalized on the latter solution, Gentoo followed suit.

Similarly, the discussion of time64 ABIs resurfaced the same dilemma: should we just “abuse” the vendor field for this, or instead change the libc field and fix packages? The main difference is that the former is “cleaner” as a downstream solution limited to Gentoo, while the latter generally opens up discussions about interoperability. Therefore, the options look like:

i686-gentoo_t64-linux-gnu
i686-pc-linux-gnut64
armv7a-gentoo_t64-linux-gnueabihf
armv7a-unknown-linux-gnueabihft64

Fortunately, changing the tuple should not require much patching. The GNU toolchain and GNU build system both ignore everything following “gnu” in the libc field. Clang will require patching — but upstream is likely to accept our patches, and we will want to make patches anyway, as they will permit clang to automatically choose the right ABI based on the tuple.

The libdir change

The term “libdir” refers to the base name of the library install directory. Having different libdirs, and therefore separate library install directories, makes it possible to build multilib systems, i.e. installing multiple ABI variations of libraries on a single system, and making it possible to run executables for different ABIs. For example, this is what makes it possible to run 32-bit x86 executables on amd64 systems.

The libdir values are generally specified in the ABI. Naturally, the baseline value is plain lib. As a historical convention (since 32-bit architectures were first), usually 32-bit platforms (arm, ppc, x86) use lib, whereas their more modern 64-bit counterparts (amd64, arm64, ppc64) use lib64 — even if a particular architecture never really supported multilib on Gentoo.

Architectures that support multiple ABIs also define different libdirs. For example, the additional x32 ABI on x86 uses libx32. MIPS n32 ABI uses lib32 (with plain lib defining the o32 ABI).

Now, we are considering changing the libdir value for time64 variants of 32-bit ABIs, for example from lib to libt64. This would make it possible to install the rebuilt libraries separately from the old libraries, effectively bringing three advantages:

  1. reducing the risk of time64 executables accidentally linking to time32 libraries,
  2. enabling Portage’s preserved-libs feature to preserve time32 libraries once the respective packages have been rebuilt for time64, and before their reverse dependencies have been rebuilt,
  3. optionally, making it possible to provide time32 + time64 multilib profiles that could be used to preserve compatibility with prebuilt time32 applications linking to system libraries.

In my opinion, the second point is a killer feature. As I’ve mentioned before, we are talking about the kind of migration that would break executables for a prolonged time on production systems, and possibly break build-time tools, preventing the rebuild from proceeding further. By preserving original libraries, we are minimizing the risk of actual breakage, since the existing executables will keep using the time32 libraries until they are rebuilt and linked to the time64 libraries.

The libdir change is definitely going to require some toolchain patching. We may want to also consider special-casing glibc, as the same set of glibc libraries is valid for all of the sub-ABIs we were considering. However, we will probably want a separate ld.so executable, as it would need to load libraries from the correct libdir, and then we will want to set .interp in time64 executables to reference the time64 ld.so.

Note that due to how multilib is designed in Gentoo, a proper multilib support for this (i.e. the third point) requires a unique platform tuple for the ABI as well — so that specific aspect is dependent on the tuple change.

Ensuring binary incompatibility

In general, you can’t mix binaries using different ABIs. For example, if you try to link a 64-bit program to a 32-bit library, the linker will object:

$ cc foo.c libfoo.so 
/usr/lib/gcc/x86_64-pc-linux-gnu/14/../../../../x86_64-pc-linux-gnu/bin/ld: libfoo.so: error adding symbols: file in wrong format
collect2: error: ld returned 1 exit status

Similarly, the dynamic loader will refuse to use a 32-bit library with 64-bit program:

$ ./foo 
./foo: error while loading shared libraries: libfoo.so: wrong ELF class: ELFCLASS32

There are a few mechanisms that are used for this. As demonstrated above, architectures with 32-bit and 64-bit ABIs use two distinct ELF classes (ELFCLASS32 and ELFCLASS64). Additionally, some architectures use different machine identifiers (EM_386 vs. EM_X86_64, EM_PPC vs. EM_PPC64). The x32 ABI on x86 “abuses” this by declaring its binaries as ELFCLASS32 + EM_X86_64 (and therefore distinct from both ELFCLASS32 + EM_386 and ELFCLASS64 + EM_X86_64).

Both ARM and MIPS use the flags field (it is a bit-field with architecture-specific flags) to distinguish different ABIs (hardfloat vs. softfloat, n32 ABI on MIPS…). Additionally, both feature a dedicated attribute section — and again, the linker refuses to link incompatible object files.

It may be desirable to implement a similar mechanism for time32 and time64 systems. Unfortunately, it’s not a trivial task. It doesn’t seem that there is a reusable generic mechanism that could be used for that. On top of that, we need a solution that would fit a fair number of different architectures. It seems that the most reasonable solution right now would be to add a new ELF note section dedicated to this feature, and implement complete toolchain support for it.

However, whatever we decide to do, we need to take into consideration that the user may want to disable it. In particular, there is a fair amount of prebuilt software that has no sources available, and it may continue working correctly against system libs, provided it does not call into any API using time_t. The cure of unconditionally preventing it from working might be worse than the disease.

On the bright side, it should be possible to create a non-fatal QA check for this without much hacking, provided that we go with separate libdirs. We can distinguish time64 executables by their .interp section, pointing to the dynamic loader in the appropriate libdir, and then verify that time32 programs will not load any libraries from libt64, and that time64 programs will not load any libraries directly from lib.

What about old prebuilt applications?

So far, we were concerned with packages that are built from source. However, there is still a fair number of old applications, usually proprietary, that are available only as prebuilt binaries — particularly for the x86 and PowerPC architectures. These packages are going to face two problems: firstly, compatibility issues with system libraries, and secondly, the y2k38 problem itself.

For the compatibility problem, we have a reasonably good solution already. Since we already had to make them work on amd64, we have a multilib layout in place, along with necessary machinery to build multiple library versions. In fact, given that the primary purpose of multilib is compatibility with old software, it’s not even clear if there is much of a point in switching amd64 multilib to use time64 for 32-bit binaries. Either way, we can easily extend our multilib machinery to distinguish the regular abi_x86_32 target from abi_x86_t64 (and we probably should do that anyway), and then create new multilib x86 profiles that would support both ABIs.

The second part is much harder. Obviously, as soon as we’re past the 2038 cutoff date, all 32-bit programs — using system libraries or not — will simply start failing in horrible ways. One possibility is to work with faketime to control the system clock. Another is to run a whole VM that’s moved back in time.

Summary

As 2038 approaches, 32-bit applications exercising 32-bit time_t are bound to stop working. At this point, it is pretty clear that the only way forward is to rebuild these applications with 64-bit time_t (and, while at it, force LFS as well). Unfortunately, that’s not a trivial task, since it involves an ABI change, and mixing time32 and time64 programs and libraries can lead to horrible runtime bugs.

While the exact details are still in the making, the proposed changes revolve around three ideas that can be implemented independently to some degree: changing the platform tuple (CHOST), changing libdir and preventing accidentally mixing time32 and time64 binaries.

The tuple change is mostly a more formal way of distinguishing builds for the regular time32 ABI (e.g. i686-pc-linux-gnu) from ones specifically targeting time64 (e.g. i686-pc-linux-gnut64). It should be relatively harmless and easy to carry out, with minimal amount of fixing necessary. For example, clang will need to be updated to accept new tuples.

The libdir change is probably the most important of all, as it permits a breakage-free transition, thanks to Portage’s preserved-libs feature. Long story short, time64 libraries get installed to a new libdir (e.g. libt64), and the original time32 libraries remain in lib until the applications using them are rebuilt. Unfortunately, it’s a bit harder to implement — it requires toolchain changes, and ensuring that all software correctly respects libdir. The extra difficulty is that with this change alone, the dynamic loader won’t ignore time32 libraries if e.g. -Wl,-rpath,/usr/lib is injected somewhere.

The incompatibility part is quite important, but also quite difficult. Ideally, we’d like to stop the linker from trying to accidentally link time32 libraries with time64 programs, and likewise the dynamic loader from trying to load them. Unfortunately, so far we weren’t able to come up with a realistic way of doing that, short of actually making some intrusive changes to the toolchain. On the positive side, writing a QA check to detect accidental mixing at build time shouldn’t be that hard.

Doing all three should enable us to provide a clean and relatively safe transition path for 32-bit Gentoo systems using glibc. However, these only solve problems for packages built from source. Prebuilt 32-bit applications, particularly proprietary software like old games, can’t be helped that way. And even if time64 changes won’t break them via breaking the ABI compatibility with system libraries, then year 2038 will. Unfortunately, there does not seem to be a good solution to that, short of actually running them with faked system time, one way or another.

Of course, all of this is still only a rough draft. A lot may still change, following experiments, discussion and patch submission.

Acknowledgements

I would like to thank the following people for proof-reading and suggestions, and for their overall work towards time64 support in Gentoo: Arsen Arsenović, Andreas K. Hüttel, Sam James and Alexander Monakov.

2024-09-30 correction

Unfortunately, my original ideas were too optimistic. I’ve entirely missed the fact that all libdirs are listed in ld.so.conf, and therefore we cannot rely on hardcoding the libdir path inside ld.so itself. In retrospect, I should have seen that coming — after all, we already adjust these paths for custom LLVM prefix, and that one would require special handling too.

This effectively means that the libdir change probably needs to depend on the binary incompatibility part. Overall, we need to meet three basic goals:

  1. The dynamic loader needs to be able to distinguish time32 and time64 binaries. For time32 programs, it needs to load only time32 libraries; for time64 programs, it needs to load only time64 libraries. In both cases, we need to assume that both kinds of libraries will appear in the search path.
  2. For backwards compatibility, we need to assume that all binaries that do not have an explicit time64 marking are time32.
  3. Therefore, all newly built binaries must carry an explicit time64 marking. This includes binaries built by non-C environments, such as Rust, even if they do not interact with time_t ABI at all. Otherwise, these binaries would forever depend on time32 libraries.

Meeting all these goals is a lot of effort. None of the hacks we debated so far seem sufficient to achieve that, so we are probably talking about the level of effort on par with patching multiple toolchains for a variety of programming languages. Naturally, this is not something we can carry locally in Gentoo, so it also requires cooperation from multiple parties. All that for architectures that are largely considered legacy, and sometimes not even really supported anymore.

Of course, another problem is whether these other toolchains are actually going to produce correct time64 executables. After all, unless they are specifically adapted to respect _TIME_BITS the way C programs do, they are probably going to hardcode specific time_t width, and break horribly when it changes. However, that’s really an upstream problem to solve, and tangential to the issues we are discussing here.

On top of that, we are talking about a major incompatibility. All binaries that aren’t explicitly marked as time64 are going to use time32 libraries, even if they use the time64 ABI. Gentoo won’t be able to run third-party executables unless they are patched to carry the correct marking.

Perhaps a better solution is to set our aims lower. Rather than actually distinguishing time32 and time64 binaries, we could instead inject RPATH to all time64 executables, directly forcing the time64 libdir there. This definitely won’t prevent the dynamic loader from using time32 libraries, but it should help transition without causing major incompatibility concerns.

Alternatively, we could consider the problem the other way around. Rather than changing libdir permanently for time64 libraries, we could change it temporarily for time32 libraries. This would imply injecting RPATH into all existing programs and renaming the libdir. Newly built time64 libraries would be installed back into the old libdir, and newly built time64 programs would lack the RPATH forcing time32 libraries. A clear advantage of this solution is that it would remain entirely compatible with other distributions that have taken the leap already.

As you can see, the situation is developing rapidly. Every day is bringing new challenges, and new ideas how to overcome them.

(please note that there’s a correction at the bottom)

In the Overview of cross-architecture portability problems, I have dedicated a section to the problems resulting from use of 32-bit time_t type. This design decision, still affecting Gentoo systems using glibc, means that 32-bit applications will suddenly start failing in horrible ways in 2038: they will be getting -1 error instead of the current time, they won’t be able to stat() files. In one word: complete mayhem will emerge.

There is a general agreement that the way forward is to change time_t to a 64-bit type. Musl has already switched to that, glibc supports it as an option. A number of other distributions such as Debian have taken the leap and switched. Unfortunately, source-based distributions such as Gentoo don’t have it that easy. So we are still debating the issue and experimenting, trying to figure out a maximally safe upgrade path for our users.

Unfortunately, that’s nowhere near trivial. Above all, we are talking about a breaking ABI change. It’s all-or-nothing. If a library uses time_t in its API, everything linking to it needs to use the same type width. In this post, I’d like to explore the issue in detail — why is it so bad, and what we can do to make it safer.

Going back to Large File Support

Before we get into the time64 change, as I’m going to shortly call it, we need to go back in history a bit and consider another similar problem: Large File Support.

Long story short, originally 32-bit architectures specify two important file-related types that were 32 bits wide: off_t used to specify file offsets (signed to support relative offsets) and ino_t used to specify inode numbers. This had two implications: you couldn’t open files larger than 2 GiB, and you couldn’t open files whose inode numbers exceeded 32-bit unsigned integer range.

To resolve this problem, Large File Support was introduced. It involved replacing these two types with 64-bit variants, and on glibc it is still optional today. In its case, we didn’t take the leap and transitioned globally. Instead, packages generally started enabling LFS support upstream — also taking care to resolve any ABI breakage in the process. While many packages did that, we shouldn’t consider the problem solved.

The important point here is that time64 support in glibc requires LFS to be used. This makes sense — if we are going to break stuff, we may as well solve both problems.

What ABIs are we talking about?

To put it simply, we have three possible sub-ABIs here:

  1. the original ABI with 32-bit types,
  2. LFS: 64-bit off_t and ino_t, 32-bit time_t,
  3. time64: LFS + 64-bit time_t.

What’s important here is that a single glibc build remains compatible with all three variants. However, libraries that use these types in their API are not.

Today, 32-bit systems roughly use a mix of the first and second ABI — the latter including packages that enabled LFS explicitly. For the future, our goal is to focus on the third option. We are not concerned about providing full-LFS systems with 32-bit time_t.

Why the ABI change is so bad?

Now, the big deal is that we are replacing a 32-bit type with a 64-bit type, in place. Unlike with LFS, glibc does not provide any transitional API that could be used to enable new functions while preserving backwards compatibility — it’s all-or-nothing.

Let’s consider structures. If a structure contains time_t with its natural 32-bit alignment, then there’s no padding for the type to extend to. Inevitable, all fields will have to shift to make room for the new type. Let’s consider a trivial example:

struct {
    int a;
    time_t b;
    int c;
};

With 32-bit time_t, the offset of c is 8. With the 64-bit type, it’s 16. If you mix binaries using different time_t width, they’re inevitably are going to read or write the wrong fields! Or perhaps even read or write out of bounds!

Let’s just look at the size of struct stat, as an example of structure that uses both file and time-related types. On plain 32-bit x86 glibc it’s 88 byte long. With LFS, it’s 96 byte long (size and inode number fields are expanded). With LFS + time64, it’s 108 byte long (three timestamps are expanded).

However, you don’t even need to use structures. After all, we are talking about x86 where function parameters are passed on stack. If one of the parameters is time_t, then positions of all parameters on stack change, and we find ourselves seeing the exact same problem! Consider the following prototype:

extern void foo(int a, time_t b, int c);

Let’s say we’re calling it as foo(1, 2, 3). With 32-bit types, the call looks like the following:

	pushl	$3
	pushl	$2
	pushl	$1
	call	foo@PLT

However, with 64-bit time_t, it changes to:

	pushl	$3
	pushl	$0
	pushl	$2
	pushl	$1
	call	foo@PLT

An additional 32-bit value (zero) is pushed between the “old” b and c. Once again, if we mix both kinds of binaries, they are going to fail to read the parameters correctly!

So yeah, it’s a big deal. And right now, there are no real protections in place to prevent mixing these ABIs. So what you actually may get is runtime breakage, potentially going as far as to create security issues.

You don’t have to take my word for it. You can reproduce it yourself on x86/amd64 easily enough. Let’s take the more likely case of a time32 program linked against a library that has been rebuilt for time64:

$ cat >libfoo.c <<EOF
#include <stdio.h>
#include <time.h>

void foo(int a, time_t b, int *c) {
   printf("a = %d\n", a);
   printf("b = %lld", (long long) b);
   printf("%s", ctime(&b));
   printf("c = %d\n", *c);
}
EOF
$ cat >foo.c <<EOF
#include <stddef.h>
#include <time.h>

extern void foo(int a, time_t b, int *c);

int main() {
    int three = 3;
    foo(1, time(NULL), &three);
    return 0;
}
EOF
$ cc -m32 libfoo.c -shared -o libfoo.so
$ cc -m32 foo.c -o foo -Wl,-rpath,. libfoo.so
$ ./foo
a = 1
b = 1727154919
Tue Sep 24 07:15:19 2024
c = 3
$ cc -m32 -D_FILE_OFFSET_BITS=64 -D_TIME_BITS=64 \
  libfoo.c -shared -o libfoo.so
$ ./foo 
a = 1
b = -34556652301432063
Thu Jul 20 06:16:17 -1095054749
c = 771539841

On top of that, the source-first nature of Gentoo amplifies these problems. An average binary distribution rebuilds all binary packages — and then the user upgrades the system in a single, relatively atomic step. Sure, if someone uses third-party repositories or has locally built programs that link to system libraries, problems can emerge but the process is relatively safe.

On the other hand, in Gentoo we are talking about rebuilding @world while breaking ABI in place. For a start, we are talking around prolonged periods of time between two packages being rebuilt when they would actually be mixing incompatible ABI. Then, there is a fair risk that some rebuild will fail and leave your system half-transitioned with no easy way out. Then, there is a real risk that cyclic dependencies will actually make rebuild impossible — rebuilding a dependency will break build-time tools, preventing stuff from being rebuilt. It’s a true horror.

What can we do to make it safer?

Our deliberations currently revolve about three ideas, that are semi-related, though not inevitably dependent one upon another:

  1. Changing the platform tuple (CHOST) for the new ABIs, to clearly distinguish them from the baseline 32-bit ABI.
  2. Changing the libdir for the new ABIs, effectively permitting the rebuilt libraries to be installed independently of the original versions.
  3. Introducing a binary-level ABI distinction that could prevent binaries using different sub-ABIs from being linked to one another.

The subsequent sections will focus on each of these changes in detail. Note that all the values used there are just examples, and not necessarily the strings used in a final solution.

The platform tuple change

The platform tuple (generally referenced through the CHOST variable) identifies the platform targeted by the toolchain. For example, it is used as a part of GCC/binutils install paths, effectively allowing toolchains for multiple targets to be installed simultaneously. In clang, it can be used to switch between supported cross-compilation targets, and can control the defaults to match the specified ABI. In Gentoo, it is also used to uniquely identify ABIs for the purpose of multilib support. Because of that, we require that no two co-installable ABIs share the same tuple.

A tuple consists of four parts, separated by hyphens: architecture, vendor, operating system and libc. Of these, vendor is generally freeform but the other three are restricted to some degree. A few semi-equivalent examples of tuples used for 32-bit x86 platform include:

i386-pc-linux-gnu
i686-pc-linux-gnu
i686-unknown-linux-gnu

Historically, two approaches were used to introduce new ABIs. Either the vendor field was changed, or an additional ABI specification was appended to the libc field. For example, Gentoo historically used two different kinds of tuples for ARM ABIs with a hardware floating-point unit:

armv7a-hardfloat-linux-gnueabi
armv7a-unknown-linux-gnueabihf

The former approach was used earlier, to avoid incompatibility problems resulting from altering other tuple fields. However, as these were fixed and upstreams normalized on the latter solution, Gentoo followed suit.

Similarly, the discussion of time64 ABIs resurfaced the same dilemma: should we just “abuse” the vendor field for this, or instead change libc field and fix packages? The main difference is that the former is “cleaner” as a downstream solution limited to Gentoo, while the latter generally opens up discussions about interoperability. Therefore, the options look like:

i686-gentoo_t64-linux-gnu
i686-pc-linux-gnut64
armv7a-gentoo_t64-linux-gnueabihf
armv7a-unknown-linux-gnueabihft64

Fortunately, changing the tuple should not require much patching. The GNU toolchain and GNU build system both ignore everything following “gnu” in the libc field. Clang will require patching — but upstream is likely to accept our patches, and we will want to make patches anyway, as they will permit clang to automatically choose the right ABI based on the tuple.

The libdir change

The term “libdir” refers to the base name of the library install directory. Having different libdirs, and therefore separate library install directories, makes it possible to build multilib systems, i.e. installing multiple ABI variations of libraries on a single system, and making it possible to run executables for different ABIs. For example, this is what makes it possible to run 32-bit x86 executables on amd64 systems.

The libdir values are generally specified in the ABI. Naturally, the baseline value is plain lib. As a historical convention (since 32-bit architectures were first), usually 32-bit platforms (arm, ppc, x86) use lib, whereas their more modern 64-bit counterparts (amd64, arm64, ppc64) use lib64 — even if a particular architecture never really supported multilib on Gentoo.

Architectures that support multiple ABIs also define different libdirs. For example, the additional x32 ABI on x86 uses libx32. MIPS n32 ABI uses lib32 (with plain lib defining the o32 ABI).

Now, we are considering changing the libdir value for time64 variants of 32-bit ABIs, for example from lib to libt64. This would make it possible to install the rebuilt libraries separately from the old libraries, effectively bringing three advantages:

  1. reducing the risk of time64 executables accidentally linking to time32 libraries,
  2. enabling Portage’s preserved-libs feature to preserve time32 libraries once the respective packages have been rebuilt for time64, and before their reverse dependencies have been rebuilt,
  3. optionally, making it possible to create time32 + time64 multilib profiles that could be used to preserve compatibility with prebuilt time32 applications linking to system libraries.

In my opinion, the second point is a killer feature. As I’ve mentioned before, we are talking about the kind of migration that would break executables for a prolonged time on production systems, and possibly break build-time tools, preventing the rebuild from proceeding further. By preserving original libraries, we are minimizing the risk of actual breakage, since the existing executables will keep using the time32 libraries until they are rebuilt and linked to the time64 libraries.

The libdir change is definitely going to require some toolchain patching. We may want to also consider special-casing glibc, as the same set of glibc libraries is valid for all of the sub-ABIs we were considering. However, we will probably want a separate ld.so executable, as it would need to load libraries from the correct libdir, and then we will want to set .interp in time64 executables to reference the time64 ld.so.

Note that due to how multilib is designed in Gentoo, a proper multilib support for this (i.e. the third point) requires a unique platform tuple for the ABI as well — so that specific aspect is dependent on the tuple change.

Ensuring binary incompatibility

In general, you can’t mix binaries using different ABIs. For example, if you try to link a 64-bit program to a 32-bit library, the linker will object:

$ cc foo.c libfoo.so 
/usr/lib/gcc/x86_64-pc-linux-gnu/14/../../../../x86_64-pc-linux-gnu/bin/ld: libfoo.so: error adding symbols: file in wrong format
collect2: error: ld returned 1 exit status

Similarly, the dynamic loader will refuse to use a 32-bit library with 64-bit program:

$ ./foo 
./foo: error while loading shared libraries: libfoo.so: wrong ELF class: ELFCLASS32

There are a few mechanisms that are used for this. As demonstrated above, architectures with 32-bit and 64-bit ABIs use two distinct ELF classes (ELFCLASS32 and ELFCLASS64). Additionally, some architectures use different machine identifiers (EM_386 vs. EM_X86_64, EM_PPC vs. EM_PPC64). The x32 ABI on x86 “abuses” this by declaring its binaries as ELFCLASS32 + EM_X86_64 (and therefore distinct from ELFCLASS32 + EM_386 and from ELFCLASS64 + EM_X86_64).

Both ARM and MIPS use the flags field (it is a bit-field with architecture-specific flags) to distinguish different ABIs (hardfloat vs. softfloat, n32 ABI on MIPS…). Additionally, both feature a dedicated attribute section — and again, the linker refuses to link incompatible object files.

It may be desirable to implement a similar mechanism for time32 and time64 systems. Unfortunately, it’s not a trivial task. It doesn’t seem that there is a reusable generic mechanism that could be used for that. On top of that, we need a solution that would fit a fair number of different architectures. It seems that the most reasonable solution right now would be to add a new ELF note section dedicated to this feature, and implement complete toolchain support for it.

However, whatever we decide to do, we need to take into consideration that the user may want to disable it. In particular, there is a fair amount of prebuilt software that has no sources available, and that may continue working correctly against system libs, provided it does not call into any API using time_t. The cure of unconditionally preventing it from working might be worse than the disease.

On the bright side, it should be possible to create a non-fatal QA check for this without much hacking, provided that we go with separate libdirs. We can distinguish time64 executables by their .interp section, pointing to the dynamic loader in the appropriate libdir, and then verify that time32 programs will not load any libraries from libt64, and that time64 programs will not load any libraries directly from lib.

What about old prebuilt applications?

So far, we have been concerned with packages that are built from source. However, there is still a fair number of old applications, usually proprietary, that are available only as prebuilt binaries — particularly for the x86 and PowerPC architectures. These packages are going to face two problems: firstly, compatibility issues with system libraries, and secondly, the y2k38 problem itself.

For the compatibility problem, we have a reasonably good solution already. Since we already had to make them work on amd64, we have a multilib layout in place, along with necessary machinery to build multiple library versions. In fact, given that the primary purpose of multilib is compatibility with old software, it’s not even clear if there is much of a point in switching amd64 multilib to use time64 for 32-bit binaries. Either way, we can easily extend our multilib machinery to distinguish the regular abi_x86_32 target from abi_x86_t64 (and we probably should do that anyway), and then create new multilib x86 profiles that would support both ABIs.

The second part is much harder. Obviously, as soon as we’re past the 2038 cutoff date, all 32-bit programs — using system libraries or not — will simply start failing in horrible ways. One possibility is to use faketime to present a fake clock to these programs. Another is to run a whole VM that’s moved back in time.

Summary

As 2038 is approaching, 32-bit applications using a 32-bit time_t are bound to stop working. At this point, it is pretty clear that the only way forward is to rebuild these applications with 64-bit time_t (and while at it, force LFS as well). Unfortunately, that’s not a trivial task, since it involves an ABI change, and mixing time32 and time64 programs and libraries can lead to horrible runtime bugs.

While the exact details are still in the making, the proposed changes revolve around three ideas that can be implemented independently to some degree: changing the platform tuple (CHOST), changing libdir and preventing accidentally mixing time32 and time64 binaries.

The tuple change is mostly a more formal way of distinguishing builds for the regular time32 ABI (e.g. i686-pc-linux-gnu) from ones specifically targeting time64 (e.g. i686-pc-linux-gnut64). It should be relatively harmless and easy to carry out, with a minimal amount of fixing necessary. For example, clang will need to be updated to accept the new tuples.

The libdir change is probably the most important of all, as it permits a breakage-free transition, thanks to Portage’s preserved-libs feature. Long story short, time64 libraries get installed to a new libdir (e.g. libt64), and the original time32 libraries remain in lib until the applications using them are rebuilt. Unfortunately, it’s a bit harder to implement — it requires toolchain changes, and ensuring that all software correctly respects libdir. The extra difficulty is that with this change alone, the dynamic loader won’t ignore time32 libraries if e.g. -Wl,-rpath,/usr/lib is injected somewhere.

The incompatibility part is quite important, but also quite difficult. Ideally, we’d like to stop the linker from trying to accidentally link time32 libraries with time64 programs, and likewise the dynamic loader from trying to load them. Unfortunately, so far we weren’t able to come up with a realistic way of doing that, short of actually making some intrusive changes to the toolchain. On the positive side, writing a QA check to detect accidental mixing at build time shouldn’t be that hard.

Doing all three should enable us to provide a clean and relatively safe transition path for 32-bit Gentoo systems using glibc. However, these only solve problems for packages built from source. Prebuilt 32-bit applications, particularly proprietary software like old games, can’t be helped that way. And even if time64 changes won’t break them via breaking the ABI compatibility with system libraries, then year 2038 will. Unfortunately, there does not seem to be a good solution to that, short of actually running them with faked system time, one way or another.

Of course, all of this is still only a rough draft. A lot may still change, following experiments, discussion and patch submission.

Acknowledgements

I would like to thank the following people for proof-reading and suggestions, and for their overall work towards time64 support in Gentoo: Arsen Arsenović, Andreas K. Hüttel, Sam James and Alexander Monakov.

2024-09-30 correction

Unfortunately, my original ideas were too optimistic. I’ve entirely missed the fact that all libdirs are listed in ld.so.conf, and therefore we cannot rely on hardcoding the libdir path inside ld.so itself. In retrospect, I should have seen that coming — after all, we already adjust these paths for custom LLVM prefix, and that one would require special handling too.

This effectively means that the libdir change probably needs to depend on the binary incompatibility part. Overall, we need to meet three basic goals:

  1. The dynamic loader needs to be able to distinguish time32 and time64 binaries. For time32 programs, it needs to load only time32 libraries; for time64 programs, it needs to load only time64 libraries. In both cases, we need to assume that both kinds of libraries will appear in the search path.
  2. For backwards compatibility, we need to assume that all binaries that do not have an explicit time64 marking are time32.
  3. Therefore, all newly built binaries must carry an explicit time64 marking. This includes binaries built by non-C environments, such as Rust, even if they do not interact with time_t ABI at all. Otherwise, these binaries would forever depend on time32 libraries.

Meeting all these goals is a lot of effort. None of the hacks we debated so far seem sufficient to achieve that, so we are probably talking about the level of effort on par with patching multiple toolchains for a variety of programming languages. Naturally, this is not something we can carry locally in Gentoo, so it also requires cooperation from multiple parties. All that for architectures that are largely considered legacy, and sometimes not even really supported anymore.

Of course, another problem is whether these other toolchains are actually going to produce correct time64 executables. After all, unless they are specifically adapted to respect _TIME_BITS the way C programs do, they are probably going to hardcode specific time_t width, and break horribly when it changes. However, that’s really an upstream problem to solve, and tangential to the issues we are discussing here.

On top of that, we are talking of a major incompatibility. All binaries that aren’t explicitly marked as time64 are going to use time32 libraries, even if they use time64 ABI. Gentoo won’t be able to run third-party executables unless they are patched to carry the correct marking.

Perhaps a better solution is to set our aims lower. Rather than actually distinguishing time32 and time64 binaries, we could instead inject RPATH to all time64 executables, directly forcing the time64 libdir there. This definitely won’t prevent the dynamic loader from using time32 libraries, but it should help transition without causing major incompatibility concerns.

Alternatively, we could consider the problem the other way around. Rather than changing libdir permanently for time64 libraries, we could change it temporarily for time32 libraries. This would imply injecting RPATH into all existing programs and renaming the libdir. Newly built time64 libraries would be installed back into the old libdir, and newly built time64 programs would lack the RPATH forcing time32 libraries. A clear advantage of this solution is that it would remain entirely compatible with other distributions that have taken the leap already.

As you can see, the situation is developing rapidly. Every day is bringing new challenges, and new ideas how to overcome them.

September 23 2024

Overview of cross-architecture portability problems

Michał Górny (mgorny) September 23, 2024, 9:34

Ideally, you’d want your program to work everywhere. Unfortunately, that’s not that simple, even if you’re using high-level “portable” languages such as Python. In this blog post, I’d like to focus on some aspects of cross-architecture problems I’ve seen or heard about during my time in Gentoo. Please note that I don’t mean this to be a comprehensive list of problems — instead, I’m aiming for an interesting read.

What breaks programs on 32-bit systems?

Basic integer type sizes

If you asked anyone what the primary difference between 64-bit and 32-bit architectures is, they would probably answer that it’s register sizes. For many people, register sizes imply differences in basic integer types, and these would therefore be the primary source of problems on 32-bit architectures, when programs are tested on 64-bit architectures only (which is commonly the case nowadays). Actually, it’s not that simple.

Contrary to common expectations, the differences in basic integer types are minimal. Most importantly, your plain int is 32-bit everywhere. The only type that’s actually different is long — it’s 32-bit on 32-bit architectures, and 64-bit on 64-bit architectures. However, people don’t use long all that often in modern programs, so that’s not very likely to cause issues.

Perhaps some people worry about integer sizes because they still foggily remember the issues from porting old 32-bit software to 64-bit architectures. As I’ve mentioned before, int remained 32-bit — but pointers became 64-bit. As a result, if you attempted to cast pointers (or related data) to int, you’d be in trouble (hence we have size_t, ssize_t, ptrdiff_t). Of course, the equivalent mistake made for 64-bit architectures (i.e. casting pointers to long) is ugly, but technically won’t cause problems on 32-bit architectures.

Note that I’m talking about System V ABI here. Technically, the POSIX and the C standards don’t specify exact integer sizes, and permit a lot more flexibility (the C standard especially — up to having, say, all the types exactly 32-bit).

Address space size

Now, a more likely problem is the address space limitation. Since pointers are 32-bit on 32-bit architectures, a program can address no more than 4 GiB of memory (in reality, somewhat less than that). What’s really important here is that this limits allocated memory, even if it is never actually used.

This can cause curious issues. For example, let’s say that you have a program that allocates a lot of memory, but doesn’t use most of it. If you run this program on a 64-bit system with 2 GiB of total memory, it works just fine. However, if you run it on 32-bit userland with a lot of memory, it fails. And why is that? It’s because the system permitted the program to allocate more memory than it could ever provide — risking an OOM if the program actually tried to use it all; but on the 32-bit architecture, it simply cannot fit all these allocations into 32-bit addresses.

The following sample can trivially demonstrate this:

$ cat > mem-demo.c <<EOF
#include <stdlib.h>
#include <stdio.h>

int main() {
    void *allocs[100];
    int i, j;
    FILE *urandom = fopen("/dev/urandom", "r");

    for (i = 0; i < 100; ++i) {
        allocs[i] = malloc(1024 * 1024 * 1024);
        if (!allocs[i]) {
            printf("malloc for i = %d failed\n", i);
            return 1;
        }
        fread(allocs[i], 1024, 1, urandom);
    }

    for (i = 0; i < 100; ++i)
        free(allocs[i]);
    fclose(urandom);

    return 0;
}
EOF
$ cc -m64 mem-demo.c -o mem-demo && ./mem-demo
$ cc -m32 mem-demo.c -o mem-demo && ./mem-demo 
malloc for i = 3 failed

The program allocates a grand total of 100 GiB of memory, but uses only the first KiB of each allocation. This works just fine on 64-bit architectures, but fails on 32-bit, since the allocations cannot all fit into the 32-bit address space.

At this point, it’s probably worth noting that we are talking about limitations applicable to a single process. A 32-bit kernel can utilize more than 4 GiB of memory, and therefore multiple processes can use a total of more than 4 GiB. There are also cursed ways of making it possible for a single process to access more than 4 GiB of memory. For example, one could use memfd_create() (or equivalently, files on tmpfs) to create in-memory files that exceed process’ address space, or use IPC to exchange data between multiple processes having separate address spaces (thanks to Arsen Arsenović and David Seifert for their hints on this).

Large File Support

Another problem faced by 32-bit programs is that the file-related types are traditionally 32-bit. This has two implications. The more obvious one is that off_t, the type used to express file sizes and offsets, is a signed 32-bit integer, so you cannot stat() and therefore open files larger than 2 GiB. The less obvious implication is that ino_t, the type used to express inode numbers, is also 32-bit, so you cannot open files with inode numbers 2^32 and higher. In other words, given a large enough filesystem, you may suddenly be unable to open random files, even if they are smaller than 2 GiB.

Now, this is a problem that can be solved. Modern programs usually define _FILE_OFFSET_BITS=64 and get 64-bit types instead. In fact, musl libc unconditionally provides 64-bit types, rendering this problem a relic of the past — and apparently glibc is planning to switch the default in the future as well.

Here’s a trivial demo:

$ cat > lfs-demo.c <<EOF
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main() {
    int fd = open("lfs-test", O_RDONLY);

    if (fd == -1) {
        perror("open() failed");
        return 1;
    }

    close(fd);
    return 0;
}
EOF
$ truncate -s 2G lfs-test
$ cc -m64 lfs-demo.c -o lfs-demo && ./lfs-demo
$ cc -m32 lfs-demo.c -o lfs-demo && ./lfs-demo 
open() failed: Value too large for defined data type
$ cc -m32 -D_FILE_OFFSET_BITS=64 lfs-demo.c \
    -o lfs-demo && ./lfs-demo

Unfortunately, while fixing a single package is trivial, a global switch is not. The sizes of off_t and ino_t change, and so effectively does the ABI of any libraries that use these types in their API — i.e. if you rebuild the library without rebuilding the programs using it, they could break in unexpected ways. What you can do is either switch everything simultaneously, or go slowly and change the types via a new API, preserving the old one for compatibility. The latter is unlikely to happen, given there’s very little interest in 32-bit architecture support these days. The former also isn’t free of issues — technically speaking, you may end up introducing incompatibility with prebuilt software that used the 32-bit types, and effectively lose the ability to run some proprietary software entirely.

time_t and the y2k38 problem

The low-level way of representing timestamps in C is through the number of seconds since the so-called epoch. This number is represented in a time_t type, which, as you can probably guess, was a signed 32-bit integer on 32-bit architectures. This means that it can hold positive values up to 2³¹ − 1 seconds, which roughly corresponds to 68 years. Since the epoch on POSIX systems was defined as 1970, this means that the type can express timestamps up to 2038.

What does this mean in practice? Programs using 32-bit time_t can’t express dates beyond the cutoff 2038 date. If you try to do arithmetic spanning beyond this date (e.g. “20 years from now”), you get an overflow. stat() is going to fail on files with timestamps beyond that point (though, interestingly, open() works on glibc, so it’s not entirely symmetric with the LFS case). Past the overflow date, you get an error even trying to get the current time — and if your program doesn’t account for the possibility of time() failing, it’s going to be forever stuck 1 second before the epoch, or 1969-12-31 23:59:59. Effectively, it may end up hanging randomly (waiting for some wall clock time to pass), not firing events or seeding a PRNG with a constant.

Again, modern glibc versions provide a switch. If you define _TIME_BITS=64 (plus LFS flags, as a prerequisite), your program is going to get a 64-bit time_t. Modern versions of musl libc also default to the 64-bit type (since 1.2.0). Unfortunately, switching to the 64-bit type brings the same risks as switching to LFS globally — or perhaps even worse because time_t seems to be more common in library API than file size-related types were.

These solutions only work for software that is built from source, and uses time_t correctly. Converting timestamps to int will cause overflow bugs. File formats with 32-bit timestamp fields are essentially broken. Most importantly, all proprietary software will remain broken and in need of serious workarounds.

Here are some samples demonstrating the problems. Please note that the first sample assumes the system clock is set beyond 2038.

$ cat > time-test.c <<EOF
#include <stdio.h>
#include <time.h>

int main() {
    time_t t = time(NULL);

    if (t != -1) {
        struct tm *dt = gmtime(&t);
        char out[32];

        strftime(out, sizeof(out), "%F %T", dt);
        printf("%s\n", out);
    } else
        perror("time() failed");

    return 0;
}
EOF
$ cc -m64 time-test.c -o time-test && ./time-test
2060-03-04 11:13:02
$ cc -m32 time-test.c -o time-test && ./time-test
time() failed: Value too large for defined data type
$ cc -m32 -D_FILE_OFFSET_BITS=64 -D_TIME_BITS=64 \
    time-test.c -o time-test && ./time-test
2060-03-04 11:13:32
$ cat > mtime-test.c <<EOF
#include <fcntl.h>
#include <sys/stat.h>
#include <stdio.h>
#include <time.h>
#include <unistd.h>

int main() {
    struct stat st;
    int fd;

    if (stat("mtime-data", &st) == 0) {
        char buf[32];
        struct tm *tm = gmtime(&st.st_mtime);
        strftime(buf, sizeof(buf), "%F %T", tm);
        printf("mtime: %s\n", buf);
    } else
        perror("stat() failed");

    fd = open("mtime-data", O_RDONLY);
    if (fd == -1) {
        perror("open() failed");
        return 1;
    }
    close(fd);

    return 0;
}
EOF
$ touch -t '206001021112' mtime-data
$ cc -m64 mtime-test.c -o mtime-test && ./mtime-test
mtime: 2060-01-02 10:12:00
$ cc -m32 mtime-test.c -o mtime-test && ./mtime-test
stat() failed: Value too large for defined data type
$ cc -m32 -D_FILE_OFFSET_BITS=64 -D_TIME_BITS=64 \
    mtime-test.c -o mtime-test && ./mtime-test
mtime: 2060-01-02 10:12:00

Are these problems specific to C?

It is probably worth noting that while portability issues are generally discussed in terms of C, not all of them are specific to C, or to programs directly interacting with C API.

For example, address space limitations affect all programming languages, unless they take special effort to work around them (I’m not aware of any that do). So a Python program will be limited by the 4 GiB of address space the same way C programs are — except that Python programs don’t allocate memory explicitly, so the limit will be rather on memory used than allocated. On the minus side, Python programs will probably be less memory efficient than C programs.

File and time type sizes also sometimes affect programming languages internally. Modern versions of Python are built with Large File Support enabled, so they aren’t limited to 32-bit file sizes and inode numbers. However, they are limited to 32-bit timestamps:

>>> import datetime
>>> datetime.datetime(2060, 1, 1)
datetime.datetime(2060, 1, 1, 0, 0)
>>> datetime.datetime(2060, 1, 1).timestamp()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
OverflowError: timestamp out of range for platform time_t
Other generic issues

Byte order (endianness)

The predominant byte order nowadays is little endian. x86 has always been little endian. ARM is bi-endian, but defaults to running little endian (and there was never much incentive to run big-endian ARM). PowerPC used to default to big endian, but these days PPC64 systems are mostly running little endian instead.

It’s not that either byte order is superior in some way. It’s just that x86 happened to arbitrarily use that byte order. Given its popularity, a lot of non-portable software has been written that worked correctly on little endian only. Over time, people lost the incentive to run big endian systems and this eventually led to even worse big endian support overall.

The most common issues related to byte order occur when implementing binary data formats, particularly file formats and network protocols. A missing byte order conversion can lead to the program throwing an error or incorrectly reading files written on other platforms, writing incorrect files or failing to communicate with peers on other platforms correctly. In extreme cases, a program that missed some byte order conversions may be unable to read a file it has written before.

Again, byte order problems are not limited to C. For example, the struct module in Python uses explicit byte order, size and alignment modifiers.

Curiously enough, byte order issues are not limited to low-level data formats either. To give another example, the UTF-16 and UTF-32 encodings also have little endian and big endian variations. When the user does not request a specific byte order, Python uses the host’s byte order and adds a BOM to the string, which is used to detect the correct byte order when decoding.

>>> "foo".encode("UTF-16LE")
b'f\x00o\x00o\x00'
>>> "foo".encode("UTF-16BE")
b'\x00f\x00o\x00o'
>>> "foo".encode("UTF-16")
b'\xff\xfef\x00o\x00o\x00'
char signedness

This is probably one of the most confusing portability problems you may see. Roughly, the problem is that the C standard does not specify the signedness of char type (unlike int). Some platforms define it as signed, others as unsigned. In fact, the standard goes a step further and defines char as a distinct type from both signed char and unsigned char, rather than an alias to either of them.

For example, the System V ABI for x86 and SPARC specifies that char is signed, whereas for MIPS and PowerPC it is unsigned. Assuming either and doing arithmetic on top of that could lead to surprising results on the other set of platforms. In fact, one of the most confusing cases I’ve seen was with code that was used only for big endian platforms, and therefore worked on PowerPC but not on SPARC (even though it would also fail on x86, if it was used there).

Here is an example inspired by it. The underlying idea is to read a little endian 32-bit unsigned integer from a char array:

$ cat > char-sign.c <<EOF
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

int main() {
        char buf[] = {0x00, 0x40, 0x80, 0xa0};
        char *p = buf;
        uint32_t val = 0;

        val |= (*p++);
        val |= (*p++) << 8;
        val |= (*p++) << 16;
        val |= (*p++) << 24;

        printf("%08" PRIx32 "\n", val);
}
EOF
$ cc -funsigned-char char-sign.c -o char-sign
$ ./char-sign
a0804000
$ cc -fsigned-char char-sign.c -o char-sign
$ ./char-sign
ff804000

Please note that for the sake of demonstration, the example uses -fsigned-char and -funsigned-char switches to override the default platform signedness. In real code, you’d explicitly use unsigned char instead.

Strict alignment

I feel that alignment is not a well-known problem, so perhaps I should start by explaining it a bit. Long story short, alignment is about ensuring that particular types are placed across appropriate memory boundaries. For example, on most platforms 32-bit types are expected to be aligned at 32-bit (= 4 byte) boundaries. In other words, you expect that the type’s memory address would be a multiple of 4 bytes — irrespective of whether it’s on stack or heap, used directly, in an array, a structure or perhaps an array of structures.

Perhaps the simplest way to explain that is to show how the compiler achieves alignment in structures. Please consider the following type:

struct {
    int16_t a;
    int32_t b;
    int16_t c;
}

As you can see, it contains two 2-byte types and one 4-byte type — that would be a total of 8 bytes, right? Nothing could be further from the truth, at least on platforms requiring 32-bit alignment for int32_t. To guarantee that b is correctly aligned whenever the whole structure is correctly aligned, the compiler needs to move it to an offset that is a multiple of 4. Furthermore, to guarantee that every instance is correctly aligned when the structure is used in an array, it also needs to pad the structure’s size to a multiple of 4.

Effectively, the resulting structure resembles the following:

struct {
    int16_t a;
    int16_t _pad1;
    int32_t b;
    int16_t c;
    int16_t _pad2;
}

In fact, you can find some libraries actually defining structures with explicit padding. So you get a padding of 2 + 2 bytes, b at offset 4, and a total size of 12 bytes.

Now, what would happen if the alignment requirements weren’t met? On the majority of platforms, misaligned types are still going to work, usually at a performance penalty. However, on some platforms like SPARC, they will actually cause the program to terminate with a SIGBUS. Consider the following example:

$ cat > align-test.c <<EOF
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

int main() {
	uint8_t buf[6] = {0, 0, 0, 4, 0, 0};
	int32_t *number = (int32_t *) &buf[2];
	printf("%" PRIi32 "\n", *number);
	return 0;
}
EOF
$ cc align-test.c -o align-test
$ ./align-test
1024

The code is meant to resemble a cheap way of reading data from a file, and then getting a 32-bit integer at offset 2. However, on SPARC this code will not work as expected:

$ ./align-test
Bus error (core dumped)

As you can probably guess, there is a fair number of programs suffering from issues like this simply because they don’t crash on x86, and it’s easy to silence the usual compiler warnings (e.g. by type punning, as done in the example). However, as noted before, this code will not only cause a crash on SPARC — it may also incur a performance penalty everywhere else.

Stack size

As low-level C programmers tend to learn, there are two main kinds of memory available to the program: the heap and the stack. The heap is the main memory area from which explicit allocations are done. The stack is a relatively small area of memory that is given to the program for its immediate use.

The main difference is that the use of heap is controlled — a well-written program allocates as much memory as it needs, and doesn’t access areas outside of that. On the other hand, stack use is “uncontrolled” — programs generally don’t check stack bounds. As you may guess, this means that if a program uses too much of it, it’s going to exceed the available stack — i.e. hit a stack overflow, which generally manifests itself as a “weird” segmentation fault.

And how do you actually use a lot of stack memory? In C, local function variables are kept on stack — so the more variables you use, the more stack you fill. Furthermore, some ABIs use stack to pass function parameters and return values — e.g. x86 (but not the newer amd64 or x32 ABIs). But most importantly, stack frames are used to record the function call history — and this means the deeper you call, the larger the stack use.

This is precisely why programmers are cautioned against recursive algorithms — especially if built without protection against deep recursion, they provide a trivial way to cause a stack overflow. And this last problem is not limited to C — recursive function calls in Python also result in recursive function calls in C. Python comes with a default recursion limit to prevent this from happening. However, as we recently found out the hard way, this limit needs to be adjusted across different architectures and compiler configurations, as their stack frame sizes may differ drastically: from a baseline of 8–16 bytes on common architectures such as x86 or ARM, through 112–128 bytes on PPC64, up to 160–176 bytes on s390x and SPARC64.

On top of that, the default thread stack size varies across the standard C libraries. On glibc, it is usually between 2 MiB and 10 MiB, whereas on musl it is 128 KiB. Therefore, in some cases you may actually need to explicitly request a larger stack.

The wondrous world of floating-point types

x87 math

The x86 platform supports two modes of floating-point arithmetic:

  1. The legacy 387 floating-point arithmetic that utilizes 80-bit precision registers (-mfpmath=387).
  2. The more modern SSE arithmetic that supports all of 32-bit, 64-bit and 80-bit precision types (-mfpmath=sse).

The former is the default on 32-bit x86 platforms using the System V ABI, the latter everywhere else. And why does that matter? Because the former may imply performing some computations using the extended 80-bit precision before converting the result back to the original type, effectively implying a smaller rounding error than performing the same computations on the original type directly.

Consider the following example:

$ cat > float-demo.c <<EOF
#include <stdio.h>

__attribute__((noipa))
double fms(double a, double b, double c) {
	return a * b - c;
}

int main() {
	printf("%+.40f\n", fms(1./3, 1./3, 1./9));
	return 0;
}
EOF
$ cc -mfpmath=sse float-demo.c -o float-demo
$ ./float-demo
+0.0000000000000000000000000000000000000000
$ cc -mfpmath=387 float-demo.c -o float-demo
$ ./float-demo
-0.0000000000000000061663998560113064684174

What’s happening here? The program is computing 1/3 * 1/3 - 1/9, which we know should be zero. Except that it isn’t when using x87 FPU instructions. Why?

Normally, this computation is done in two steps. First, the multiplication 1/3 * 1/3 is done. Afterwards, 1/9 is subtracted from the result. In SSE mode, both steps are done directly on the double type. However, in x87 mode the doubles are converted to 80-bit floats first, both computations are done on these and then the result is converted back to double. We can see that looking at the respective assembly fragments:

$ cc -mfpmath=sse float-demo.c -S -o -
[…]
	movsd	-8(%rbp), %xmm0
	mulsd	-16(%rbp), %xmm0
	subsd	-24(%rbp), %xmm0
[…]
$ cc -mfpmath=387 float-demo.c -S -o -
[…]
	fldl	-8(%rbp)
	fmull	-16(%rbp)
	fsubl	-24(%rbp)
	fstpl	-32(%rbp)
[…]

Now, neither ⅓ nor ⅑ can be precisely expressed in binary. So 1./3 is actually ⅓ plus some error, and 1./9 is ⅑ plus another error. It so happens that 1./3 * 1./3, after rounding, gives the same value as 1./9 — so subtracting one from the other yields zero. However, when the computation is done using an intermediate type of higher precision, the squared error from 1./3 * 1./3 is rounded at that higher precision — and therefore differs from the error in 1./9. So, counter-intuitively, the higher precision here amplifies the rounding error and yields an “incorrect” result!

Of course, this is not that big of a deal — we are talking about 17 decimal places, and user-facing programs will probably round that down to 0. However, this can lead to problems in programs written to expect an exact value — e.g. in test suites.

Gentoo has already switched amd64 multilib profiles to force -mfpmath=sse for 32-bit builds, and it is planning to switch the x86 profiles as well. While this doesn’t solve the underlying issue, it yields more consistent results across different architectures and therefore reduces the risk of our users hitting these bugs. However, this has a surprising downside: some packages actually adapted to expect different results on 32-bit x86, and now fail when SSE arithmetic is used there.

It doesn’t take two architectures to make a rounding problem

Actually, you don’t have to run a program on two different architectures to see rounding problems — different compiler options, particularly the enabled CPU instruction sets, can also result in different rounding errors. Let’s try compiling the previous example with and without FMA instructions:

$ cc -mno-fma -O2 float-demo.c -o float-demo
$ ./float-demo
+0.0000000000000000000000000000000000000000
$ cc -mfma -O2 float-demo.c -o float-demo
$ ./float-demo
-0.0000000000000000061679056923619804377437

The first invocation is roughly the same as before. The second one enables use of the FMA instruction set that performs the multiplication and subtraction in one step:

$ cc -mfma -O2 float-demo.c -S -o -
[…]
	vfmsub132sd	%xmm1, %xmm2, %xmm0
[…]

Again, this means that the intermediate value is not rounded down to double — and therefore doesn’t carry the same error as 1./9.

The bottom line is this: never match floating-point computation results exactly; allow for some error. Even if something works for you, it may fail not only on a different architecture, but even with different optimization flags. And counter-intuitively, more precise intermediate results may amplify errors and yield intuitively “wrong” values.

The long double type

As you can probably guess by now, the C standard doesn’t define precisely what float, double and long double types are. Fortunately, it seems that the first two types are uniformly implemented as, respectively, a single-precision (32-bit) and a double-precision (64-bit) IEEE 754 floating point number. However, as far as the third type is concerned, we might find it to be any of:

  • the same type as double — on architectures such as 32-bit ARM,
  • the 80-bit x87 extended precision type — on amd64 and x86,
  • a type implementing double-double arithmetic — i.e. representing the number as a sum of two double values, giving roughly 106-bit precision, e.g. on PowerPC,
  • the quadruple precision (128-bit) IEEE 754 type — e.g. on SPARC.

Once again, this is primarily a matter of precision, and therefore it only breaks test suites that assume specific precision for the type. To demonstrate the differences in precision, we can use the following sample program:

#include <stdio.h>

int main() {
	printf("%0.40Lf\n", 1.L/3);
	return 0;
}

Running it across different architectures, we’re going to see:

arm64: 0.3333333333333333333333333333333333172839
ppc64: 0.3333333333333333333333333333333292246828
amd64: 0.3333333333333333333423683514373792036167
arm32: 0.3333333333333333148296162562473909929395

Summary

Portability is no trivial matter, that’s clear. What’s perhaps more surprising is that portability problems aren’t limited to C and similar low-level languages — I have shown multiple examples of how they leak into Python.

Perhaps the most common portability issues these days come from 32-bit architectures. Many projects today are tested only on 64-bit systems, and therefore face regressions on 32-bit platforms. Perhaps surprisingly, most of the issues stem not from incorrect type use in C, but rather from platform limitations — available address space, lack of support for large files or large time_t. All of these limitations apply to non-C programs that are built on C runtime as well, and sometimes require non-trivial fixes. Notably, switching to a 64-bit time_t is going to be a major breaking change (and one that I’ll cover in a separate post).

Other issues may be more obscure, and specific to individual architectures. On PPC64 or SPARC, we hit issues related to big endian byte order. On MIPS and PowerPC, we may be surprised by char being unsigned. On SPARC, we’re going to hit crashes if we don’t align types properly. Again, on PPC64 and SPARC we are also more likely to hit stack overflows. And on i386, we may discover problems due to different precision in floating-point computations.

These are just some examples, and they definitely do not exhaust the possible issues. Furthermore, sometimes you may discover a combination of two different problems, furthering your confusion — just like the package that was broken only on big endian systems with signed char.

On the other hand, all these differences provide an interesting opportunity: by testing the package on a bunch of architectures and knowing their characteristics, you can guess what could be wrong with it. Say, if it fails on PPC64 but passes on PPC64LE, you may guess it’s a byte order issue — and then it turns out it was actually a stack overflow, because big endian PPC64 happens to default to ELFv1 ABI that uses slightly larger stack frames. But hey, usually it does help.

Portability is important. The problematic architectures may constitute a tiny portion of your user base — in fact, sometimes I do wonder if some of the programs we’re fixing are actually going to be used by any real user of these architectures, or if we’re merely cargo-culting keywords added a long time ago. You may even argue that it’s better for the environment if people discarded these machines rather than kept them burning energy. However, portability makes for good code. What may seem like a bother over a tiny minority today may turn out to prevent a major security incident for all your users tomorrow.

Ideally, you’d want your program to work everywhere. Unfortunately, that’s not that simple, even if you’re using high-level “portable” languages such as Python. In this blog post, I’d like to focus on some aspects of cross-architecture problems I’ve seen or heard about during my time in Gentoo. Please note that I don’t mean this to be a comprehensive list of problems — instead, I’m aiming for an interesting read.

What breaks programs on 32-bit systems?

Basic integer type sizes

If you asked anyone what’s the primary difference between 64-bit and 32-bit architectures, they will probably answer that it’s register sizes. For many people, register sizes imply differences in basic integer types, and therefore the primary source of problems on 32-bit architectures, when programs are tested on 64-bit architectures only (which is commonly the case nowadays). Actually, it’s not that simple.

Contrary to common expectations, the differences in basic integer types are minimal. Most importantly, your plain int is 32-bit everywhere. The only type that’s actually different is long — it’s 32-bit on 32-bit architectures, and 64-bit on 64-bit architectures. However, people don’t use long all that often in modern programs, so that’s not very likely to cause issues.

Perhaps some people worry about integer sizes because they still foggily remember the issues from porting old 32-bit software to 64-bit architectures. As I’ve mentioned before, int remained 32-bit — but pointers became 64-bit. As a result, if you attempted to cast pointers (or related data) to int, you’d be in trouble (hence we have size_t, ssize_t, ptrdiff_t). Of course, the same thing (i.e. casting pointers to long) made for 64-bit architectures is ugly but won’t technically cause problems on 32-bit architectures.

Note that I’m talking about System V ABI here. Technically, the POSIX and the C standards don’t specify exact integer sizes, and permit a lot more flexibility (the C standard especially — up to having, say, all the types exactly 32-bit).

Address space size

Now, a more likely problem is the address space limitation. Since pointers are 32-bit on 32-bit architectures, a program can address no more than 4 GiB of memory (in reality, somewhat less than that). What’s really important here is that this limits allocated memory, even it is never actually used.

This can cause curious issues. For example, let’s say that you have a program that allocates a lot of memory, but doesn’t use most of it. If you run this program on a 64-bit system with 2 GiB of total memory, it works just fine. However, if you run it on 32-bit userland with a lot of memory, it fails. And why is that? It’s because the system permitted the program to allocate more memory than it could ever provide — risking an OOM if the program actually tried to use it all; but on the 32-bit architecture, it simply cannot fit all these allocations into 32-bit addresses.

The following sample can trivially demonstrate this:

$ cat > mem-demo.c <<EOF
#include <stdlib.h>
#include <stdio.h>

int main() {
    void *allocs[100];
    int i, j;
    FILE *urandom = fopen("/dev/urandom", "r");

    for (i = 0; i < 100; ++i) {
        allocs[i] = malloc(1024 * 1024 * 1024);
        if (!allocs[i]) {
            printf("malloc for i = %d failed\n", i);
            return 1;
        }
        fread(allocs[i], 1024, 1, urandom);
    }

    for (i = 0; i < 100; ++i)
        free(allocs[i]);
    fclose(urandom);

    return 0;
}
EOF
$ cc -m64 mem-demo.c -o mem-demo && ./mem-demo
$ cc -m32 mem-demo.c -o mem-demo && ./mem-demo 
malloc for i = 3 failed

The program allocates a grand total of 100 GiB of memory, but uses only the first KiB of each allocation. This works just fine on 64-bit architectures but fails on 32-bit because of failing allocation.

At this point, it’s probably worth noting that we are talking about limitations applicable to a single process. A 32-bit kernel can utilize more than 4 GiB of memory, and therefore multiple processes can use a total of more than 4 GiB. There are also cursed ways of making it possible for a single process to access more than 4 GiB of memory. For example, one could use memfd_create() (or equivalently, files on tmpfs) to create in-memory files that exceed process’ address space, or use IPC to exchange data between multiple processes having separate address spaces (thanks to Arsen Arsenović and David Seifert for their hints on this).

Large File Support

Another problem faced by 32-bit programs is that the file-related types are traditionally 32-bit. This has two implications. The more obvious one is that off_t, the type used to express file sized and offsets, is a signed 32-bit integer, so you cannot stat() and therefore open files larger than 2 GiB. The less obvious implication is that ino_t, the type used to express inode numbers, is also 32-bit, so you cannot open files with inode numbers 2^32 and higher. In other words, given large enough filesystem, you may suddenly be unable to open random files, even if they are smaller than 2 GiB.

Now, this is a problem that can be solved. Modern programs usually define _FILE_OFFSET_BITS=64 and get 64-bit types instead. In fact, musl libc unconditionally provides 64-bit types, rendering this problem a relic of the past — and apparently glibc is planning to switch the default in the future as well.

Here’s a trivial demo:

$ cat > lfs-demo.c <<EOF
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main() {
    int fd = open("lfs-test", O_RDONLY);

    if (fd == -1) {
        perror("open() failed");
        return 1;
    }

    close(fd);
    return 0;
}
EOF
$ truncate -s 2G lfs-test
$ cc -m64 lfs-demo.c -o lfs-demo && ./lfs-demo
$ cc -m32 lfs-demo.c -o lfs-demo && ./lfs-demo 
open() failed: Value too large for defined data type
$ cc -m32 -D_FILE_OFFSET_BITS=64 lfs-demo.c \
    -o lfs-demo && ./lfs-demo

Unfortunately, while fixing a single package is trivial, a global switch is not. The sizes of off_t and ino_t change, and so effectively does the ABI of any libraries that use these types in the API — i.e. if you rebuild the library without rebuilding the programs using it, they could break in unexpected ways. What you can do is either switch everything simultaneously, or go slowly and add change the types via a new API, preserving the old one for compatibility. The latter is unlikely to happen, given there’s very little interest in 32-bit architecture support these days. The former also isn’t free of issues — technically speaking, you may end up introducing incompatibility with prebuilt software that used the 32-bit types, and effectively lose the ability to run some proprietary software entirely.

time_t and the y2k38 problem

The low-level way of representing timestamps in C is through the number of seconds since the so-called epoch. This number is represented in a time_t type, which, as you can probably guess, was a signed 32 bit integer on 32-bit architectures. This means that it can hold positive values up to 231 – 1 seconds, which roughly corresponds to 68 years. Since the epoch on POSIX systems was defined as 1970, this means that the type can express timestamps up to 2038.

What does this mean in practice? Programs using 32-bit time_t can’t express dates beyond the cutoff 2038 date. If you try to do arithmetic spanning beyond this date (e.g. “20 years from now”), you get an overflow. stat() is going to fail on files with timestamps beyond that point (though, interestingly, open() works on glibc, so it’s not entirely symmetric with the LFS case). Past the overflow date, you get an error even trying to get the current time — and if your program doesn’t account for the possibility of time() failing, it’s going to be forever stuck 1 second before the epoch, or 1969-12-31 23:59:59. Effectively, it may end up hanging randomly (waiting for some wall clock time to pass), not firing events or seeding a PRNG with a constant.

Again, modern glibc versions provide a switch. If you define _TIME_BITS=64 (plus LFS flags, as a prerequisite), your program is going to get a 64-bit time_t. Modern versions of musl libc also default to the 64-bit type (since 1.2.0). Unfortunately, switching to the 64-bit type brings the same risks as switching to LFS globally — or perhaps even worse because time_t seems to be more common in library API than file size-related types were.

These solutions only work for software that is built from source, and uses time_t correctly. Converting timestamps to int will cause overflow bugs. File formats with 32-bit timestamp fields are essentially broken. Most importantly, all proprietary software will remain broken and in need of serious workarounds.

Here are some samples demonstrating the problems. Please note that the first sample assumes the system clock is set beyond 2038.

$ cat > time-test.c <<EOF
#include <stdio.h>
#include <time.h>

int main() {
    time_t t = time(NULL);

    if (t != -1) {
        struct tm *dt = gmtime(&t);
        char out[32];

        strftime(out, sizeof(out), "%F %T", dt);
        printf("%s\n", out);
    } else
        perror("time() failed");

    return 0;
}
EOF
$ cc -m64 time-test.c -o time-test && ./time-test
2060-03-04 11:13:02
$ cc -m32 time-test.c -o time-test && ./time-test
time() failed: Value too large for defined data type
$ cc -m32 -D_FILE_OFFSET_BITS=64 -D_TIME_BITS=64 \
    time-test.c -o time-test && ./time-test
2060-03-04 11:13:32
$ cat > mtime-test.c <<EOF
#include <fcntl.h>
#include <sys/stat.h>
#include <stdio.h>
#include <time.h>
#include <unistd.h>

int main() {
    struct stat st;
    int fd;

    if (stat("time-data", &st) == 0) {
        char buf[32];
        struct tm *tm = gmtime(&st.st_mtime);
        strftime(buf, sizeof(buf), "%F %T", tm);
        printf("mtime: %s\n", buf);
    } else
        perror("stat() failed");

    fd = open("time-data", O_RDONLY);
    if (fd == -1) {
        perror("open() failed");
        return 1;
    }
    close(fd);

    return 0;
}
$ touch -t '206001021112' mtime-data
$ cc -m64 mtime-test.c -o mtime-test && ./mtime-test
mtime: 2060-01-02 10:12:00
$ cc -m32 mtime-test.c -o mtime-test && ./mtime-test
stat() failed: Value too large for defined data type
$ cc -m32 -D_FILE_OFFSET_BITS=64 -D_TIME_BITS=64 \
    mtime-test.c -o mtime-test && ./mtime-test
mtime: 2060-01-02 10:12:00

Are these problems specific to C?

It is probably worth noting that while portability issues are generally discussed in terms of C, not all of them are specific to C, or to programs directly interacting with C API.

For example, address space limitations affect all programming languages, unless they take special effort to work around them (I’m not aware of any that do). So a Python program will be limited by the 4 GiB of address space the same way C programs are — except that Python programs don’t allocate memory explicitly, so the limit will be rather on memory used than allocated. On the minus side, Python programs will probably be less memory efficient than C programs.

File and time type sizes also sometimes affect programming languages internally. Modern versions of Python are built with Large File Support enabled, so they aren’t limited to 32-bit file sizes and inode numbers. However, they are limited to 32-bit timestamps:

>>> import datetime
>>> datetime.datetime(2060, 1, 1)
datetime.datetime(2060, 1, 1, 0, 0)
>>> datetime.datetime(2060, 1, 1).timestamp()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
OverflowError: timestamp out of range for platform time_t

Other generic issues

Byte order (endianness)

The predominant byte order nowadays is little endian. X86 was always little endian. ARM is bi-endian, but defaults to running little endian (and there were never much incentive to run big endian ARM). PowerPC used to default to big endian, but these days PPC64 systems are mostly running little endian instead.

It’s not that either byte order is superior in some way. It’s just that x86 happened to arbitrarily use that byte order. Given its popularity, a lot of non-portable software has been written that worked correctly on little endian only. Over time, people lost the incentive to run big endian systems and this eventually led to even worse big endian support overall.

The most common issues related to byte order occur when implementing binary data formats, particularly file formats and network protocols. A missing byte order conversion can lead to the program throwing an error or incorrectly reading files written on other platforms, writing incorrect files or failing to communicate with peers on other platforms correctly. In extreme cases, a program that missed some byte order conversions may be unable to read a file it has written before.

Again, byte order problems are not limited to C. For example, the struct module in Python uses explicit byte order, size and alignment modifiers.

Curious enough, byte order issues are not limited to low-level data formats either. To give another example, the UTF-16 and UTF-32 encodings also have little endian and big endian variations. When the user does not request a specific byte order, Python uses host’s byte order and adds a BOM to the string, that is used to detect the correct byte order when decoding.

>>> "foo".encode("UTF-16LE")
b'f\x00o\x00o\x00'
>>> "foo".encode("UTF-16BE")
b'\x00f\x00o\x00o'
>>> "foo".encode("UTF-16")
b'\xff\xfef\x00o\x00o\x00'

char signedness

This is probably one of the most confusing portability problems you may see. Roughly, the problem is that the C standard does not specify the signedness of char type (unlike int). Some platforms define it as signed, others as unsigned. In fact, the standard goes a step further and defines char as a distinct type from both signed char and unsigned char, rather than an alias to either of them.

For example, the System V ABI for x86 and SPARC specifies that char is signed, whereas for MIPS and PowerPC it is unsigned. Assuming either and doing arithmetic on top of that could lead to surprising results on the other set of platforms. In fact, one of the most confusing cases I’ve seen was with code that was used only for big endian platforms, and therefore worked on PowerPC but not on SPARC (even though it would also fail on x86, if it was used there).

Here is an example inspired by it. The underlying idea is to read a little endian 32-bit unsigned integer from a char array:

$ cat > char-sign.c <<EOF
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

int main() {
        char buf[] = {0x00, 0x40, 0x80, 0xa0};
        char *p = buf;
        uint32_t val = 0;

        val |= (*p++);
        val |= (*p++) << 8;
        val |= (*p++) << 16;
        val |= (*p++) << 24;

        printf("%08" PRIx32 "\n", val);
}
EOF
$ cc -funsigned-char char-sign.c -o char-sign
$ ./char-sign
a0804000
$ cc -fsigned-char char-sign.c -o char-sign
$ ./char-sign
ff804000

Please note that for the sake of demonstration, the example uses -fsigned-char and -funsigned-char switches to override the default platform signedness. In real code, you’d explicitly use unsigned char instead.

Strict alignment

I feel that alignment is not a well-known problem, so perhaps I should start by explaining it a bit. Long story short, alignment is about ensuring that particular types are placed across appropriate memory boundaries. For example, on most platforms 32-bit types are expected to be aligned at 32-bit (= 4 byte) boundaries. In other words, you expect that the type’s memory address would be a multiple of 4 bytes — irrespective of whether it’s on stack or heap, used directly, in an array, a structure or perhaps an array of structures.

Perhaps the simplest way to explain that is to show how the compiler achieves alignment in structures. Please consider the following type:

struct {
    int16_t a;
    int32_t b;
    int16_t c;
}

As you can see, it contains two 2-byte types and one 4-byte type — that would be a total of 8 bytes, right? Nothing more wrong, at least on platforms requiring 32-bit alignment for int32_t. To guarantee that b would be correctly aligned whenever the whole structure is correctly aligned, the compiler needs to move it to an offset being a multiple of 4. Furthermore, to guarantee that if the structure is used in array, every instance is correctly aligned, it also needs to increase its size to a multiple of 4.

Effectively, the resulting structure resembles the following:

struct {
    int16_t a;
    int16_t _pad1;
    int32_t b;
    int16_t c;
    int16_t _pad2;
}

In fact, you can find some libraries actually defining structures with explicit padding. So you get a padding of 2 + 2 bytes, b at offset 4, and a total size of 12 bytes.

Now, what would happen if the alignment requirements weren’t met? On the majority of platforms, misaligned types are still going to work, usually at a performance penalty. However, on some platforms like SPARC, they will actually cause the program to terminate with a SIGBUS. Consider the following example:

$ cat > align-test.c <<EOF
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

int main() {
	uint8_t buf[6] = {0, 0, 0, 4, 0, 0};
	int32_t *number = (int32_t *) &buf[2];
	printf("%" PRIi32 "\n", *number);
	return 0;
}
EOF
$ cc align-test.c -o align-test
$ ./align-test
1024

The code is meant to resemble a cheap way of reading data from a file, and then getting a 32-bit integer at offset 2. However, on SPARC this code will not work as expected:

$ ./align-test
Bus error (core dumped)

As you can probably guess, there is a fair number of programs suffering from issues like that simply because they don’t crash on x86, and it’s easy to silence the normal compiler warnings (e.g. by type punning, as used it in the example). However, as noted before, this code will not only cause a crash on SPARC — it may also cause a performance penalty everywhere else.

Stack size

As low-level C programmers tend to learn, there are two main kinds of memory available to the program: the heap and the stack. The heap is the main memory area from which explicit allocations are done. The stack is a relatively small area of memory that is given to the program for its immediate use.

The main difference is that the use of heap is controlled — a well-written written program allocates as much memory as it needs, and doesn’t access areas outside of that. On the other hand, stack use is “uncontrolled” — programs generally don’t check stack bounds. As you may guess, this means that if a program uses it too much, it’s going to exceed the available stack — i.e. hit a stack overflow, which generally manifests itself as a “weird” segmentation fault.

And how do you actually use a lot of stack memory? In C, local function variables are kept on stack — so the more variables you use, the more stack you fill. Furthermore, some ABIs use stack to pass function parameters and return values — e.g. x86 (but not the newer amd64 or x32 ABIs). But most importantly, stack frames are used to record the function call history — and this means the deeper you call, the larger the stack use.

This is precisely why programmers are cautioned against recursive algorithms — especially if built without protection against deep recursion, they provide a trivial way to cause a stack overflow. And this last problem is not limited to C — recursive function calls in Python also result in recursive function calls in C. Python comes with a default recursion limit to prevent this from happening. However, as we recently found out the hard way, this limit needs to be adjusted across different architectures and compiler configurations, as their stack frame sizes may differ drastically: from a baseline of 8–16 bytes on common architectures such as x86 or ARM, through 112–128 bytes on PPC64, up to 160–176 bytes on s390x and SPARC64.

On top of that, the default thread stack size varies across the standard C libraries. On glibc, it is usually between 2 MiB and 10 MiB, whereas on musl it is 128 KiB. Therefore, in some cases you may actually need to explicitly request a larger stack.

The wondrous world of floating-point types

x87 math

The x86 platform supports two modes of floating-point arithmetic:

  1. The legacy 387 floating-point arithmetic that utilizes 80-bit precision registers (-mfpmath=387).
  2. The more modern SSE arithmetic that supports all of 32-bit, 64-bit and 80-bit precision types (-mfpmath=sse).

The former is the default on 32-bit x86 platforms using the System V ABI, the latter everywhere else. And why does that matter? Because the former may perform some computations using the extended 80-bit precision before converting the result back to the original type, which effectively yields a smaller intermediate rounding error than performing the same computations directly on the original type.

Consider the following example:

$ cat > float-demo.c <<EOF
#include <stdio.h>

__attribute__((noipa))
double fms(double a, double b, double c) {
	return a * b - c;
}

int main() {
	printf("%+.40f\n", fms(1./3, 1./3, 1./9));
	return 0;
}
EOF
$ cc -mfpmath=sse float-demo.c -o float-demo
$ ./float-demo
+0.0000000000000000000000000000000000000000
$ cc -mfpmath=387 float-demo.c -o float-demo
$ ./float-demo
-0.0000000000000000061663998560113064684174

What’s happening here? The program is computing 1/3 * 1/3 - 1/9, which we know should be zero. Except that it isn’t when using x87 FPU instructions. Why?

Normally, this computation is done in two steps. First, the multiplication 1/3 * 1/3 is done. Afterwards, 1/9 is subtracted from the result. In SSE mode, both steps are done directly on the double type. However, in x87 mode the doubles are converted to 80-bit floats first, both computations are done on these and then the result is converted back to double. We can see that looking at the respective assembly fragments:

$ cc -mfpmath=sse float-demo.c -S -o -
[…]
	movsd	-8(%rbp), %xmm0
	mulsd	-16(%rbp), %xmm0
	subsd	-24(%rbp), %xmm0
[…]
$ cc -mfpmath=387 float-demo.c -S -o -
[…]
	fldl	-8(%rbp)
	fmull	-16(%rbp)
	fsubl	-24(%rbp)
	fstpl	-32(%rbp)
[…]

Now, neither ⅓ nor ⅑ can be precisely expressed in binary. So 1./3 is actually ⅓ + some error, and 1./9 is ⅑ + another error. It happens that 1./3 * 1./3 after rounding gives the same value as 1./9 — so subtracting one from the other yields zero. However, when computations are done using an intermediate type of higher precision, the squared error from 1./3 * 1./3 is rounded at a higher precision — and therefore different from the one in 1./9. So counter-intuitively, higher precision here amplifies a rounding error and yields the “incorrect” result!

Of course, this is not that big of a deal — we are talking about 17 decimal places, and user-facing programs will probably round that down to 0. However, this can lead to problems in programs written to expect an exact value — e.g. in test suites.

Gentoo has already switched amd64 multilib profiles to force -mfpmath=sse for 32-bit builds, and it is planning to switch the x86 profiles as well. While this doesn’t solve the underlying issue, it yields more consistent results across different architectures and therefore reduces the risk of our users hitting these bugs. However, this has a surprising downside: some packages actually adapted to expect different results on 32-bit x86, and now fail when SSE arithmetic is used there.

It doesn’t take two architectures to make a rounding problem

Actually, you don’t have to run a program on two different architectures to see rounding problems — different optimization flags, particularly those enabling extra CPU instruction sets, can also result in different rounding errors. Let’s try compiling the previous example with and without FMA instructions:

$ cc -mno-fma -O2 float-demo.c -o float-demo
$ ./float-demo
+0.0000000000000000000000000000000000000000
$ cc -mfma -O2 float-demo.c -o float-demo
$ ./float-demo
-0.0000000000000000061679056923619804377437

The first invocation is roughly the same as before. The second one enables use of the FMA instruction set that performs the multiplication and subtraction in one step:

$ cc -mfma -O2 float-demo.c -S -o -
[…]
	vfmsub132sd	%xmm1, %xmm2, %xmm0
[…]

Again, this means that the intermediate value is not rounded down to double — and therefore doesn’t carry the same error as 1./9.

The bottom line is this: never match floating-point computation results exactly; allow for some error. Even if something works for you, it may fail not only on a different architecture, but even with different optimization flags. And counter-intuitively, more precise intermediate results may amplify errors and yield intuitively “wrong” values.

The long double type

As you can probably guess by now, the C standard doesn’t define precisely what float, double and long double types are. Fortunately, it seems that the first two types are uniformly implemented as, respectively, a single-precision (32-bit) and a double-precision (64-bit) IEEE 754 floating point number. However, as far as the third type is concerned, we might find it to be any of:

  • the same type as double — on architectures such as 32-bit ARM,
  • the 80-bit x87 extended precision type — on amd64 and x86,
  • a type implementing double-double arithmetic — i.e. representing the number as a sum of two double values, giving roughly 106-bit precision, e.g. on PowerPC,
  • the quadruple precision (128-bit) IEEE 754 type — e.g. on SPARC.

Once again, this is primarily a matter of precision, and therefore it only breaks test suites that assume specific precision for the type. To demonstrate the differences in precision, we can use the following sample program:

#include <stdio.h>

int main() {
	printf("%0.40Lf\n", 1.L/3);
	return 0;
}

Running it across different architectures, we’re going to see:

arm64: 0.3333333333333333333333333333333333172839
ppc64: 0.3333333333333333333333333333333292246828
amd64: 0.3333333333333333333423683514373792036167
arm32: 0.3333333333333333148296162562473909929395

Summary

Portability is no trivial matter, that’s clear. What’s perhaps more surprising is that portability problems aren’t limited to C and similar low-level languages — I have shown multiple examples of how they leak into Python.

Perhaps the most common portability issues these days come from 32-bit architectures. Many projects today are tested only on 64-bit systems, and therefore face regressions on 32-bit platforms. Perhaps surprisingly, most of the issues stem not from incorrect type use in C, but rather from platform limitations — available address space, lack of support for large files or large time_t. All of these limitations apply to non-C programs that are built on C runtime as well, and sometimes require non-trivial fixes. Notably, switching to a 64-bit time_t is going to be a major breaking change (and one that I’ll cover in a separate post).

Other issues may be more obscure, and specific to individual architectures. On PPC64 or SPARC, we hit issues related to big endian byte order. On MIPS and PowerPC, we may be surprised by char being unsigned. On SPARC, we’re going to hit crashes if we don’t align types properly. Again, on PPC64 and SPARC we are also more likely to hit stack overflows. And on i386, we may discover problems due to different precision in floating-point computations.

These are just some examples, and they definitely do not exhaust the possible issues. Furthermore, sometimes you may discover a combination of two different problems, furthering your confusion — just like the package that was broken only on big endian systems with signed char.

On the other hand, all these differences provide an interesting opportunity: by testing the package on a bunch of architectures and knowing their characteristics, you can guess what could be wrong with it. Say, if it fails on PPC64 but passes on PPC64LE, you may guess it’s a byte order issue — and then it turns out it was actually a stack overflow, because big endian PPC64 happens to default to ELFv1 ABI that uses slightly larger stack frames. But hey, usually it does help.

Portability is important. The problematic architectures may constitute a tiny portion of your user base — in fact, sometimes I do wonder if some of the programs we’re fixing are actually going to be used by any real user of these architectures, or if we’re merely cargo culting keywords added a long time ago. You may even argue that it’s better for the environment if people discarded these machines rather than kept having them burn energy. However, portability makes for good code. What may seem like bothering for a tiny minority today, may turn out to prevent a major security incident for all your users tomorrow.

September 11 2024

Much improved MIPS and Alpha support in Gentoo Linux

Gentoo News (GentooNews) September 11, 2024, 5:00

Over the last years, MIPS and Alpha support in Gentoo has been slowing down, mostly due to a lack of volunteers keeping these architectures alive. Not anymore however! We’re happy to announce that thanks to renewed volunteer interest both arches have returned to the forefront of Gentoo Linux development, with a consistent dependency tree checked and enforced by our continuous integration system. Up-to-date stage builds and the accompanying binary packages are available for both, in the case of MIPS for all three ABI variants o32, n32, and n64 and for both big and little endian, and in the case of Alpha also with a bootable installation CD.


August 31 2024

KDE Plasma 6 upgrade for stable Gentoo Linux

Gentoo News (GentooNews) August 31, 2024, 5:00

Exciting news for stable Gentoo users: It’s time for the upgrade to the new “megaversion” of the KDE community desktop environment, KDE Plasma 6! Together with KDE Gear 24.05.2, where now most of the applications have been ported, and KDE Frameworks 6.5.0, the underlying library architecture, KDE Plasma 6.1.4 will be stabilized over the next days. The base libraries of Qt 6 are already available.

More technical information on the upgrade, which should be fairly seamless, as well as architecture-specific notes can be found in a repository news item. Enjoy!


August 20 2024

Gentoo: profiles and keywords rather than releases

Michał Górny (mgorny) August 20, 2024, 18:44

Different distributions have different approaches to releases. For example, Debian simultaneously maintains multiple releases (branches). The “stable” branch is recommended for production use, “testing” for more recent software versions. Every two years or so, the branches “shift” (i.e. the previous “testing” becomes the new “stable”, and so on) and users are asked to upgrade to the next release.

Fedora releases aren’t really branched like Debian. Instead, they make a new release (with potentially major changes for an upgrade) every half a year, and maintain old releases for 13 months. You generally start with the newest release, and periodically upgrade.

Arch Linux follows a rolling release model instead. There is just one branch that all Arch users use, and releases are made periodically only for the purpose of installation media. Major upgrades are done in-place (and I have to say, they don’t always go well).

Now, Gentoo is something of a hybrid, as it combines the best of both worlds. It is a rolling release distribution with a single shared repository that is available to all users. However, within this repository we use a keywording system to provide a choice between stable and testing packages, to facilitate both production and development systems (with some extra flexibility), and versioned profiles to tackle major lock-step upgrades.

Architectures

Before we go into any details, we need to clarify what an architecture (though I suppose platform might be a better term) is in Gentoo. In Gentoo, architectures provide a coarse (and rather arbitrary) way of classifying different supported processor families.

For example, the amd64 architecture indicates 64-bit x86 processors (also called x86-64) running a 64-bit userland, while x86 indicates a 32-bit userland for x86 processors (both 32-bit and 64-bit in capability). Similarly, a 64-bit AArch64 (ARMv8) userland is covered by arm64, while the 32-bit userland on all ARM architecture versions is covered by arm. This is best seen in the ARM stage downloads — a single architecture is split into subarchitectures there.

For some architectures, the split is even coarser. For example, mips and riscv (at least for the moment) cover both 32-bit and 64-bit variations of the architecture. ppc64 covers both big-endian and little-endian (PPC64LE) variations — and the default big-endian variation tends to cause more issues with software.

Why does the split matter? Primarily because architectures define keywords, and keywords indicate whether the package works. A coarser split means that a single keyword may be used to cover a wide variety of platforms — not all of which work equally well. But more on that further on.

By the way, I’ve mentioned “platforms” earlier. Why? Because besides the usual architectures, we are using names such as amd64-linux and x64-macos for Prefix — i.e. running Gentoo inside another operating system (or Linux distribution). Historically, we also had a Gentoo/FreeBSD variation.

Profiles

The simplest way of thinking of profiles would be as different Gentoo configurations. Gentoo provides a number of profiles for every supported architecture. Profiles serve multiple purposes.

The most obvious purpose is providing suitable defaults for different, well, profiles of Gentoo usage. So we have base profiles that are better suited for headless systems, and desktop profiles that are optimized for desktop use. Within desktop profiles, we have subprofiles for GNOME and Plasma desktops. We have base profiles for OpenRC, and subprofiles for systemd; base profiles for the GNU toolchain and subprofiles for the LLVM toolchain. Of course, these merely control defaults — you aren’t actually required to use a specific subprofile to use the relevant software; you can adjust your configuration directly instead. However, using a profile that fits well makes things easier, and increases the chances of finding Gentoo binary packages that match your setup.

But there’s more to profiles than that. Profiles also control non-trivial system configuration aspects that cannot be easily changed. We have separate profiles for systems that have undergone the “/usr merge”, and for systems that haven’t — and you can’t switch between the two without actually migrating your system first. On some architectures we have profiles with and without multilib; this is e.g. necessary to run 32-bit executables on amd64. On ARM, separate profiles are provided for different architecture versions. The implication of all that is that profiles also control which packages can actually be installed on a system. You can’t install 32-bit software on an amd64 non-multilib system, or packages requiring newer ARM instructions on a system using a profile for older processors.

Finally, profiles are versioned to carry out major changes in Gentoo. This is akin to how Debian or Fedora do releases. When we introduce major changes that require some kind of migration, we do that via a new profile version. Users are provided with upgrade instructions, and are asked to migrate their systems. And we do support both old and new profiles for some time. To list two examples:

  1. 17.1 amd64 profiles changed the multilib layout from using lib64 + lib32 (+ a compatibility lib symlink) to lib64 + lib.
  2. 23.0 profiles featured hardening- and optimization-related toolchain changes.

Every available profile has one of three stability levels: stable, dev or exp. As you can guess, “stable” profiles are the ones that are currently considered safe to use on production systems. “Dev” profiles should be good too, but they’re not as well tested yet. Then, “exp” profiles come with no guarantees, not even of dependency graph integrity (to be explained further on).

Keywords

While profiles can control which packages can be installed to some degree, keywords are at the core of that. Keywords are specified for every package version separately (inside the ebuild), and are specified (or not) for every architecture.

A keyword can effectively have one of four states:

  1. stable (e.g. amd64), indicating that the package should be good to be used on production;
  2. testing (often called ~arch, e.g. ~amd64), indicating that the package should work, but we don’t give strong guarantees;
  3. unkeyworded (i.e. no keyword for given architecture is present), usually indicating that the package has not been tested yet;
  4. disabled (e.g. -amd64), indicating that the package can’t work on the given architecture. This is rarely used, usually for prebuilt software.

Now, the key point is that users have control over which keywords their package manager accepts. If you’re running a production system, you may want to set it to accept stable keywords only — in which case only stable packages will normally be allowed to be installed, and your packages will only be upgraded once the next version is marked stable. Or you may set your system to accept both stable and testing keywords, and help us test them.

Of course, this is not just a binary global switch. At the cost of increased risk and reduced chances of getting support, you can adjust allowed keywords per package, and run a mix of stable and testing. Or you can install packages that have no keywords at all, including live packages built straight from a VCS repository. Or you can even set your system to follow keywords for another architecture — the sky is the limit!
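As a concrete illustration (the package atom and file name here are just examples), per-package keyword acceptance is a one-line entry in Portage’s configuration:

```shell
# /etc/portage/package.accept_keywords/example
# Accept testing (~amd64) keywords for one package only:
dev-vcs/git ~amd64

# Accept any version, even an unkeyworded live one,
# using the special ** keyword:
dev-vcs/git **
```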

Note that not all Gentoo architectures use stable keywords. There are so-called “pure ~arch arches” that use testing keywords only. Examples of such architectures are alpha, loong and riscv.

Bad terminology: stable and stable

Time for a short intermezzo: as you may have noticed, we have used the term “stable” twice already: once for profiles, and once for keywords. Combined with the fact that not all architectures actually use stable keywords, this can get really confusing. Unfortunately, it’s a historical legacy that we have to live with.

So to clarify. A stable profile is a profile that should be good to use on production systems. A stable package (i.e. a package [version] with stable keywords) is a package version that should be good to use on production systems.

However, the two aren’t necessarily linked. You can use a dev or even exp profile, but only accept stable keywords — and the other way around. Furthermore, architectures that don’t use stable keywords at all do have stable profiles.

Visibility and dependency graph integrity

Equipped with all that information, now we can introduce the concept of package visibility. Long story short, a package (version) is visible if it is installable on a given system. The primary reasons why a package couldn’t be installed are insufficient keywords, or an explicit mask. Let’s consider these cases in detail.

As I’ve mentioned earlier, a particular system can be configured to accept either stable, or both stable and testing keywords. Therefore, on a system set to accept stable keywords, only packages featuring stable keywords can be visible (the remaining packages are masked by “missing keyword”). On a system set to accept both stable and testing keywords, all packages featuring either stable or testing keywords can be visible.

Additionally, packages can be explicitly masked either globally in the repository, or in profiles. These masks are used for a variety of reasons: when a particular package is incompatible with the configuration of a given profile (say, 32-bit packages on a non-multilib 64-bit profile), when it turns out to be broken or when we believe that it needs more testing before we let users install it (even on testing-keyword systems).

The considerations of package visibility here are limited to the package itself. However, in order for the package to be installable, all its dependencies need to be installable as well. For packages with stable keywords, this means that all their dependencies (including optional dependencies conditional to USE flags that can be enabled on a stable system) have a matching version with stable keywords as well. Conversely, for packages with testing keywords, this means that all dependencies need to have either stable or testing keywords. Furthermore, said dependency versions must not be masked on any profile, on which the package in question is visible.

This is precisely what dependency graph integrity checks are all about. They are performed for all profiles that are either stable or dev (i.e. exp profiles are excluded, and don’t guarantee integrity), for all package versions with stable or testing keywords — and for each of these kinds of keywords separately. And when integrity is not maintained, we get automated reports about it, and the deployment pipeline is blocked, so ideally users don’t have to experience the problem firsthand.

The life of a keyword

Now that we have all the fundamental ideas covered, we can start discussing how packages get their keywords in the first place.

The default state for a keyword is “unspecified”. For a package to gain a testing keyword, it needs to be tested on the architecture in question. This can either be done by a developer directly, or via a keywording request filed on Gentoo Bugzilla, which will be processed by an arch tester. Usually, only the newest version of the package is handled, but in special circumstances testing keywords can be added to older versions as well (e.g. when required to satisfy a dependency). Any dependencies that are lacking a matching keyword need to be tested as well.

And what happens if the package does not pass testing? Ideally, we file a bug upstream and get it fixed. But realistically, we can’t always manage that. Sometimes the bug remains open for quite some time, waiting for someone to take action or for a new release that might happen to start working. Sometimes we decide that keywording a particular package at the time is not worth the effort — and if it is required as an optional dependency of something else, we instead mask the relevant USE flags in the profiles corresponding to the given architecture. In extreme cases, we may actually add a negative -arch flag, to indicate that the package can’t work on the given architecture. However, this is really rare and we generally do it only as a hint, so that people don’t spend their time trying to keyword it over and over again.

Once a package gains a testing keyword, it “sticks”. Whenever a new version is added, all the keywords from the previous version are copied into it, and stable keywords are lowered into testing keywords. This is done even though the developer only tested it on one of the architectures. Packages generally lose testing keywords only if we either have a justified suspicion that they have stopped working, or if they gained new dependencies that are lacking the keywords in question. Most of the time, we request readding the testing keywords (rekeywording) immediately afterwards.

Now, stable requests follow a stricter routine. The maintainer must decide that a particular package version is ready to become stable first. A rule of thumb is that it’s been in testing for a month, and no major regressions have been reported. However, the exact details differ. For example, some projects make separate “stable branch” and “testing branch” releases, and we mark only the former stable. And when vulnerabilities are found in software, we tend to proceed with adding stable keywords to the fixed versions immediately.

Then, a stabilization request is filed, and the package is tested on every architecture before the respective stable keyword is added. Testing is generally done on a system that is set to accept only stable keywords, therefore it may provide a slightly different environment than the original testing done when the package was keyworded. Note that there is an exception to that rule — if we believe that particular packages are unlikely to exhibit different behavior across different architectures, we do ALLARCHES stabilization and add all the requested stable keywords after testing on one system.

Unlike with testing keywords, stable keywords need to be added to every version separately. When a new package version is added, all stable keywords in it are replaced by the corresponding testing keywords.

This process pretty much explains the difference between the guarantees given by testing and stable keywords. The testing keywords indicate that some version of the package has been tested on the given architecture at some point, and that we have good reasons to believe that it still works. The stable keywords indicate that this particular version has been tested on a system running stable keywords, and therefore it is less likely to turn out broken. Unfortunately, whether it actually is free of bugs is largely dependent on the quality of test suites, dependencies and so on. So yeah, it’s a mess.

The cost of keywords

I suppose that from a user’s perspective it would be best if all packages that work on a given architecture had keywords for it; and ideally, all versions suitable for it would have stable keywords on all relevant architectures. However, every keyword comes with a cost. And that’s not only the cost of the actual testing, but also a long-term maintenance cost.

For the most important architectures, Gentoo developers have access to one or more dedicated machines. These machines are used for various purposes: arch testing (i.e. processing keywording and stabilization requests, usually semi-automated), building stage archives, building binary packages, and last but not least: providing development environments that are needed to debug and fix bugs. For other architectures, we are entirely dependent on volunteers doing the testing — a few prominent volunteers worthy of the highest praise, I must add.

The cost incurred by testing keywords is comparatively small, but contrary to what you might think, it’s not a one-time cost. Once a package gains a testing keyword, we generally want to keep it going forward. This means that if it gains new dependencies, we’re going to have to retest it — and its new dependencies. However, that’s the easy part.

The hard part is that stuff can actually break over time. The package itself can start exhibiting test failures, or stop working entirely. Its new dependencies may turn out to be broken on the architecture in question. In these cases, it’s not just the cost of testing — but actually reporting bugs, and possibly debugging and writing patches when upstream authors don’t have access to the relevant hardware (and/or don’t care). Sometimes you even learn that the author never intended to support the given architecture, and is unwilling to accept well-written patches.

And if it turns out that it really isn’t feasible to keep the keyword going forward anymore, sometimes removing it may also turn out to be a lot of effort — especially if multiple packages depending on this one have been keyworded as well.

Of course, the cost for stable keywords is much higher. After all, it’s no longer a case of one-time testing, but we actually have to test every single version that’s going stable. This is somewhat amortized by ALLARCHES packages that need to be tested on a single architecture only (and therefore usually are tested on one of the “fast” architectures), but still it’s a lot. On top of that, frequent testing is more likely to reveal problems, and therefore require immediate fixes. This is actually a good thing, but also a future cost to consider. And removing keywords from packages that used to be stable is likely to have a greater impact than from those that never were.

Struggling architectures

All the costs considered, it shouldn’t come as a surprise that we sometimes find ourselves struggling with some of the less popular architectures. We may have limited access to hardware, the hardware itself may not be very performant, and the hardware and the operating system may be susceptible to breakage. So if we keyword too much, the arch teams can no longer keep up, the queue gets long, and requests aren’t handled in a timely manner. In the extreme case, we may lose the last machine for a given architecture and become stuck, unable to go forward. These are all things to consider.

For these reasons, we periodically discuss the state of architectures in Gentoo. If we determine that some of them are finding it hard to cope, we look for solutions. Of course, one possibility to weigh in is getting more hardware — but that’s not always justified, or even possible. Sometimes we need to actually reduce the workload.

For architectures that use stable keywords, the obvious possibility is to reduce the number of packages using them — i.e. destabilize packages. Ordinarily, the best targets for this effort would be packages that are old, particularly problematic or unpopular, as they can reduce our effective maintenance cost while minimizing the potential discomfort to users. However, we might need to go deeper than that. In extreme cases, we can go as far as to reduce the stable package set to core system packages. At some point, this kind of reduction forces users to run a mixed stable-testing keyword system, but that at least permits them to limit risk of regressions in the most important packages.

If even that is insufficient, there are more options at our disposal. We can look into removing keywords entirely from packages, particularly packages that require further rekeywording work. We can decide to remove stable keywords from an architecture entirely. In the worst case, we can decide to mark all profiles exp, effectively abandoning dependency graph integrity (at this point, some dependencies may start missing keywords and packages may not be trivially installable), or we can decide to remove the support for a given architecture entirely.

Summary

Gentoo uses a combined profile and keyword system to facilitate user needs on top of a single ebuild repository. This is in contrast with many other distributions that use multiple repositories, make releases, sometimes maintain multiple release branches simultaneously. In fact, some distributions actually split into multiple versions to facilitate different user profiles. Gentoo does all that in a single, coherent product with rolling releases and profile upgrade paths.

The system of keywords is aimed at providing good user experience while keeping the maintenance affordable. On most of the supported architectures, we provide stable keywords to help keeping production systems on reasonably tested software. Before packages becomes stable, we offer them to more adventurous users via testing keywords. Gentoo also offers great flexibility — users can mix stable and testing keywords freely (though at the risk of hitting unexpected issues), or run experimental packages that aren’t ready to get testing keywords yet.

Unfortunately, there are limits to how much support for various architectures we can provide. We are largely reliant on either having appropriate machines available, or volunteers with the hardware to test stuff for us, not to mention developers having skills and energy to debug and fix architecture-specific problems. Sometimes this turns out to be insufficient to cope with all the work, and we need to give up on some of the architecture support.

Still, I think the system works pretty well here, and it is one of Gentoo’s strong suits. Sure, it occasionally needs a push here and there, or a policy change, but it’s been one of Gentoo’s foundations for years, and it doesn’t look as if it’s going to be replaced anytime soon.

Different distributions have different approaches to releases. For example, Debian simultaneously maintains multiple releases (branches). The “stable” branch is recommended for production use, “testing” for more recent software versions. Every two years or so, the branches “shift” (i.e. the previous “testing” becomes the new “stable”, and so on) and users are asked to upgrade to the next release.

Fedora releases aren’t really branched like Debian’s. Instead, a new release (with potentially major changes for an upgrade) is made every half a year, and old releases are maintained for 13 months. You generally start with the newest release, and upgrade periodically.

Arch Linux follows a rolling release model instead. There is just one branch that all Arch users use, and releases are made periodically only for the purpose of installation media. Major upgrades are done in-place (and I have to say, they don’t always go well).

Now, Gentoo is something of a hybrid, as it combines the best of both worlds. It is a rolling release distribution with a single shared repository that is available to all users. However, within this repository we use a keywording system to provide a choice between stable and testing packages, to facilitate both production and development systems (with some extra flexibility), and versioned profiles to tackle major lock-step upgrades.

Architectures

Before we get into any details, we need to clarify what an architecture (though I suppose platform might be a better term) is in Gentoo. In Gentoo, architectures provide a coarse (and rather arbitrary) way of classifying the different supported processor families.

For example, the amd64 architecture indicates 64-bit x86 processors (also called x86-64) running a 64-bit userland, while x86 indicates a 32-bit userland on x86 processors (whether 32-bit or 64-bit capable). Similarly, a 64-bit AArch64 (ARMv8) userland is covered by arm64, while a 32-bit userland on all ARM architecture versions is covered by arm. This is best seen in the ARM stage downloads — a single architecture is split into subarchitectures there.

For some architectures, the split is even coarser. For example, mips and riscv (at least for the moment) cover both 32-bit and 64-bit variations of the architecture. ppc64 covers both big-endian and little-endian (PPC64LE) variations — and the default big-endian variation tends to cause more issues with software.

Why does the split matter? Primarily because architectures define keywords, and keywords indicate whether a package works. A coarser split means that a single keyword may cover a wide variety of platforms — not all of which work equally well. But more on that further on.

By the way, I’ve mentioned “platforms” earlier. Why? Because besides the usual architectures, we use names such as amd64-linux and x64-macos for Prefix — i.e. for running Gentoo inside another operating system (or Linux distribution). Historically, we also had a Gentoo/FreeBSD variant.

Profiles

The simplest way of thinking of profiles would be as different Gentoo configurations. Gentoo provides a number of profiles for every supported architecture. Profiles serve multiple purposes.

The most obvious purpose is providing suitable defaults for different, well, profiles of Gentoo usage. So we have base profiles that are better suited for headless systems, and desktop profiles that are optimized for desktop use. Within desktop profiles, we have subprofiles for GNOME and Plasma desktops. We have base profiles for OpenRC, and subprofiles for systemd; base profiles for the GNU toolchain and subprofiles for the LLVM toolchain. Of course, these merely control defaults — you aren’t actually required to use a specific subprofile to use the relevant software; you can adjust your configuration directly instead. However, using a profile that is a good fit makes things easier, and increases the chances of finding Gentoo binary packages that match your setup.

But there’s more to profiles than that. Profiles also control non-trivial system configuration aspects that cannot be easily changed. We have separate profiles for systems that have undergone the “/usr merge”, and for systems that haven’t — and you can’t switch between the two without actually migrating your system first. On some architectures we have profiles with and without multilib; this is e.g. necessary to run 32-bit executables on amd64. On ARM, separate profiles are provided for different architecture versions. The implication of all that is that profiles also control which packages can actually be installed on a system. You can’t install 32-bit software on an amd64 non-multilib system, or packages requiring newer ARM instructions on a system using a profile for older processors.

Finally, profiles are versioned to carry out major changes in Gentoo. This is akin to how Debian or Fedora do releases. When we introduce major changes that require some kind of migration, we do that via a new profile version. Users are provided with upgrade instructions, and are asked to migrate their systems. And we do support both old and new profiles for some time. To list two examples:

  1. 17.1 amd64 profiles changed the multilib layout from using lib64 + lib32 (+ a compatibility lib symlink) to lib64 + lib.
  2. 23.0 profiles featured hardening- and optimization-related toolchain changes.
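On an installed system, profiles are listed and switched via eselect. A hedged example of such a migration (the exact profile path depends on your system; always follow the official upgrade instructions):

```shell
# List the profiles available for this architecture.
eselect profile list
# Switch to a new versioned profile as part of a migration
# (illustrative path for an amd64 system moving to the 23.0 profiles).
eselect profile set default/linux/amd64/23.0
```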

Every available profile has one of three stability levels: stable, dev or exp. As you can guess, “stable” profiles are the ones that are currently considered safe to use on production systems. “Dev” profiles should be good too, but they’re not as well tested yet. Then, “exp” profiles come with no guarantees, not even of dependency graph integrity (to be explained further on).

Keywords

While profiles can control to some degree which packages can be installed, keywords are at the core of that. Keywords are specified separately for every package version (inside the ebuild), and separately (or not at all) for every architecture.

A keyword can effectively have one of four states:

  1. stable (e.g. amd64), indicating that the package should be good to be used on production;
  2. testing (often called ~arch, e.g. ~amd64), indicating that the package should work, but we don’t give strong guarantees;
  3. unkeyworded (i.e. no keyword for given architecture is present), usually indicating that the package has not been tested yet;
  4. disabled (e.g. -amd64), indicating that the package can’t work on given architecture. This is rarely used, usually for prebuilt software.
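In an ebuild, these states correspond directly to entries in the KEYWORDS variable. A hypothetical example covering all four states:

```shell
# Hypothetical ebuild snippet: stable on amd64, testing on arm64,
# known not to work on sparc; no riscv entry means untested there.
KEYWORDS="amd64 ~arm64 -sparc"
```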

Now, the key point is that users have control over which keywords their package manager accepts. If you’re running a production system, you may want to set it to accept stable keywords only — in which case only stable packages will normally be allowed to be installed, and your packages will only be upgraded once the next version is marked stable. Or you may set your system to accept both stable and testing keywords, and help us test them.

Of course, this is not just a binary global switch. At the cost of increased risk and reduced chances of getting support, you can adjust the accepted keywords per package, and run a mix of stable and testing. Or you can install packages that have no keywords at all, including live packages built straight from a VCS repository. Or you can even set your system to follow keywords for another architecture — the sky is the limit!
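In Portage, this configuration lives in make.conf and package.accept_keywords. An illustrative setup (the package names are hypothetical):

```shell
# /etc/portage/make.conf: accept testing keywords globally.
ACCEPT_KEYWORDS="~amd64"

# /etc/portage/package.accept_keywords/example: per-package overrides
# on an otherwise stable system.
# Accept the testing version of a single package:
dev-python/foo ~amd64
# Accept a package with no keywords at all (e.g. a live ebuild):
dev-vcs/bar **
```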

Note that not all Gentoo architectures use stable keywords. There are so-called “pure ~arch arches” that use testing keywords only. Examples of such architectures are alpha, loong and riscv.

Bad terminology: stable and stable

Time for a short intermezzo: as you may have noticed, we have used the term “stable” twice already: once for profiles, and once for keywords. Combined with the fact that not all architectures actually use stable keywords, this can get really confusing. Unfortunately, it’s a historical legacy that we have to live with.

So to clarify. A stable profile is a profile that should be good to use on production systems. A stable package (i.e. a package [version] with stable keywords) is a package version that should be good to use on production systems.

However, the two aren’t necessarily linked. You can use a dev or even exp profile but accept only stable keywords, and the other way around. Furthermore, architectures that don’t use stable keywords at all do have stable profiles.

Visibility and dependency graph integrity

Equipped with all that information, now we can introduce the concept of package visibility. Long story short, a package (version) is visible if it is installable on a given system. The primary reasons why a package couldn’t be installed are insufficient keywords, or an explicit mask. Let’s consider these cases in detail.

As I’ve mentioned earlier, a particular system can be configured to accept either stable, or both stable and testing keywords. Therefore, on a system set to accept stable keywords, only packages featuring stable keywords can be visible (the remaining packages are masked by “missing keyword”). On a system set to accept both stable and testing keywords, all packages featuring either stable or testing keywords can be visible.
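As a toy illustration (this is not Portage’s actual implementation), the matching rule boils down to checking whether any keyword the system accepts appears in the package’s KEYWORDS:

```shell
# Toy sketch of keyword acceptance; not Portage's real logic.
# Succeeds (package visible) if any accepted keyword is present
# in the package's KEYWORDS.
is_visible() {
    local keywords=$1  # package KEYWORDS, e.g. "amd64 ~arm64"
    local accepted=$2  # keywords the system accepts, e.g. "amd64 ~amd64"
    local a k
    for a in $accepted; do
        for k in $keywords; do
            [ "$k" = "$a" ] && return 0
        done
    done
    return 1
}

is_visible "amd64 ~arm64" "amd64" && echo "stable-only system: visible"
is_visible "~amd64" "amd64" || echo "stable-only system: masked"
is_visible "~amd64" "amd64 ~amd64" && echo "stable+testing system: visible"
```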

Additionally, packages can be explicitly masked either globally in the repository, or in profiles. These masks are used for a variety of reasons: when a particular package is incompatible with the configuration of a given profile (say, 32-bit packages on a non-multilib 64-bit profile), when it turns out to be broken or when we believe that it needs more testing before we let users install it (even on testing-keyword systems).

The considerations of package visibility above are limited to the package itself. However, in order for a package to be installable, all its dependencies need to be installable as well. For packages with stable keywords, this means that all their dependencies (including optional dependencies conditional on USE flags that can be enabled on a stable system) must have a matching version with stable keywords as well. Conversely, for packages with testing keywords, all dependencies need to have either stable or testing keywords. Furthermore, said dependency versions must not be masked on any profile on which the package in question is visible.

This is precisely what dependency graph integrity checks are all about. They are performed for all profiles that are either stable or dev (i.e. exp profiles are excluded, and don’t guarantee integrity), for all package versions with stable or testing keywords — and for each of these kinds of keywords separately. When integrity is not maintained, we get automated reports about it, and the deployment pipeline is blocked, so ideally users don’t have to experience the problem firsthand.
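The same class of checks can be run locally with pkgcheck, the QA tool behind Gentoo’s CI. A minimal sketch of an invocation, from within an ebuild repository checkout:

```shell
# Scan the repository for QA issues, including visibility and
# dependency graph problems reported by Gentoo's CI.
pkgcheck scan
```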

The life of a keyword

Now that we have all the fundamental ideas covered, we can start discussing how packages get their keywords in the first place.

The default state for a keyword is “unspecified”. For a package to gain a testing keyword, it needs to be tested on the architecture in question. This can either be done by a developer directly, or via a keywording request filed on Gentoo Bugzilla, that will be processed by an arch tester. Usually, only the newest version of the package is handled, but in special circumstances testing keywords can be added to older versions as well (e.g. when required to satisfy a dependency). Any dependencies that are lacking a matching keyword need to be tested as well.

And what happens if the package does not pass testing? Ideally, we file a bug upstream and get it fixed. But realistically, we can’t always manage that. Sometimes the bug remains open for quite some time, waiting for someone to take action, or for a new release that might happen to start working. Sometimes we decide that keywording a particular package is not worth the effort at the time — and if it is required as an optional dependency of something else, we instead mask the relevant USE flags in the profiles corresponding to the given architecture. In extreme cases, we may actually add a negative -arch keyword, to indicate that the package can’t work on the given architecture. However, this is really rare, and we generally do it only as a hint when people repeatedly spend their time trying to keyword the package.

Once a package gains a testing keyword, it “sticks”. Whenever a new version is added, all the keywords from the previous version are copied into it, and stable keywords are lowered into testing keywords. This is done even though the developer only tested it on one of the architectures. Packages generally lose testing keywords only if we either have a justified suspicion that they have stopped working, or if they gained new dependencies that are lacking the keywords in question. Most of the time, we request readding the testing keywords (rekeywording) immediately afterwards.
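To illustrate with a hypothetical package, a version bump copies keywords and lowers the stable ones; the ekeyword tool from app-portage/gentoolkit can automate the downgrade:

```shell
# foo-1.ebuild (old):  KEYWORDS="amd64 ~arm64 ppc"
# foo-2.ebuild (new):  KEYWORDS="~amd64 ~arm64 ~ppc"
# i.e. keywords are copied, with stable ones lowered to testing:
ekeyword ~all foo-2.ebuild
```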

Now, stable requests follow a stricter routine. The maintainer must decide that a particular package version is ready to become stable first. A rule of thumb is that it’s been in testing for a month, and no major regressions have been reported. However, the exact details differ. For example, some projects make separate “stable branch” and “testing branch” releases, and we mark only the former stable. And when vulnerabilities are found in software, we tend to proceed with adding stable keywords to the fixed versions immediately.

Then, a stabilization request is filed, and the package is tested on every architecture before the respective stable keyword is added. Testing is generally done on a system set to accept only stable keywords, so it may provide a slightly different environment than the original testing done when the package was first keyworded. Note that there is an exception to that rule — if we believe that a particular package is unlikely to exhibit different behavior across architectures, we do an ALLARCHES stabilization and add all the requested stable keywords after testing on a single system.

Unlike with testing keywords, stable keywords need to be added to every version separately. When a new package version is added, all stable keywords in it are replaced by the corresponding testing keywords.

This process pretty much explains the difference between the guarantees given by testing and stable keywords. The testing keywords indicate that some version of the package has been tested on the given architecture at some point, and that we have good reasons to believe that it still works. The stable keywords indicate that this particular version has been tested on a system running stable keywords, and therefore it is less likely to turn out broken. Unfortunately, whether it actually is free of bugs is largely dependent on the quality of test suites, dependencies and so on. So yeah, it’s a mess.

The cost of keywords

I suppose that from a user’s perspective it would be best if all packages that work on a given architecture had keywords for it; and ideally, all suitable versions would have stable keywords on all relevant architectures. However, every keyword comes with a cost. And that’s not only the cost of the actual testing, but also a long-term maintenance cost.

For the most important architectures, Gentoo developers have access to one or more dedicated machines. These machines are used for various purposes: arch testing (i.e. processing keywording and stabilization requests, usually semi-automated), building stage archives, building binary packages, and last but not least, providing development environments that are needed to debug and fix bugs. For the other architectures, we are entirely dependent on volunteers doing the testing — a few prominent volunteers worthy of the highest praise, I must add.

The cost incurred by testing keywords is comparatively small, but contrary to what you might think, it’s not a one time cost. Once a package gains a testing keyword, we generally want to keep it going forward. This means that if it gains new dependencies, we’re going to have to retest it — and its new dependencies. However, that’s the easy part.

The hard part is that stuff can actually break over time. The package itself can start exhibiting test failures, or stop working entirely. Its new dependencies may turn out to be broken on the architecture in question. In these cases, it’s not just the cost of testing, but also of reporting bugs, and possibly debugging and writing patches when upstream authors don’t have access to the relevant hardware (and/or don’t care). Sometimes you even learn that the author never intended to support the given architecture, and is unwilling to accept well-written patches.

And if it turns out that it really isn’t feasible to keep the keyword going forward anymore, sometimes removing it may also turn out to be a lot of effort — especially if multiple packages depending on this one have been keyworded as well.

Of course, the cost of stable keywords is much higher. After all, it’s no longer a case of one-time testing: we actually have to test every single version that goes stable. This is somewhat amortized by ALLARCHES packages that need to be tested on a single architecture only (and therefore are usually tested on one of the “fast” architectures), but it’s still a lot. On top of that, frequent testing is more likely to reveal problems, and therefore require immediate fixes. That is actually a good thing, but also a future cost to consider. And removing keywords from packages that used to be stable is likely to have a greater impact than removing them from those that never were.

Struggling architectures

All the costs considered, it shouldn’t come as a surprise that we sometimes find ourselves struggling with some of the less popular architectures. We may have limited access to hardware, the hardware itself may not be very performant, and both the hardware and the operating system may be susceptible to breakage. So if we keyword too much, the arch teams can no longer keep up, the queue grows long, and requests aren’t handled in a timely manner. In the extreme case, we may lose the last machine for a given architecture and become stuck, unable to go forward. These are all things to consider.

For these reasons, we periodically discuss the state of architectures in Gentoo. If we determine that some of them are finding it hard to cope, we look for solutions. Of course, one possibility to weigh is getting more hardware — but that’s not always justified, or even possible. Sometimes we need to actually reduce the workload.

For architectures that use stable keywords, the obvious possibility is to reduce the number of packages using them — i.e. destabilize packages. Ordinarily, the best targets for this effort would be packages that are old, particularly problematic or unpopular, as they can reduce our effective maintenance cost while minimizing the potential discomfort to users. However, we might need to go deeper than that. In extreme cases, we can go as far as to reduce the stable package set to core system packages. At some point, this kind of reduction forces users to run a mixed stable-testing keyword system, but that at least permits them to limit risk of regressions in the most important packages.

If even that is insufficient, there are more options at our disposal. We can look into removing keywords entirely from packages, particularly packages that require further rekeywording work. We can decide to remove stable keywords from an architecture entirely. In the worst case, we can decide to mark all profiles exp, effectively abandoning dependency graph integrity (at this point, some dependencies may start missing keywords and packages may not be trivially installable), or we can decide to remove the support for a given architecture entirely.

Summary

Gentoo uses a combined profile and keyword system to facilitate user needs on top of a single ebuild repository. This is in contrast with many other distributions that use multiple repositories, make releases, sometimes maintain multiple release branches simultaneously. In fact, some distributions actually split into multiple versions to facilitate different user profiles. Gentoo does all that in a single, coherent product with rolling releases and profile upgrade paths.

The system of keywords is aimed at providing a good user experience while keeping the maintenance affordable. On most of the supported architectures, we provide stable keywords to help keep production systems on reasonably tested software. Before packages become stable, we offer them to more adventurous users via testing keywords. Gentoo also offers great flexibility — users can mix stable and testing keywords freely (though at the risk of hitting unexpected issues), or run experimental packages that aren’t ready to get testing keywords yet.

Unfortunately, there are limits to how much support for various architectures we can provide. We are largely reliant on either having appropriate machines available, or volunteers with the hardware to test stuff for us, not to mention developers having skills and energy to debug and fix architecture-specific problems. Sometimes this turns out to be insufficient to cope with all the work, and we need to give up on some of the architecture support.

Still, I think the system works pretty well here, and it is one of Gentoo’s strong suits. Sure, it occasionally needs a push here and there, or a policy change, but it’s been one of Gentoo’s foundations for years, and it doesn’t look as if it’s going to be replaced anytime soon.

August 14 2024

Gentoo Linux drops IA-64 (Itanium) support

Gentoo News (GentooNews) August 14, 2024, 5:00

Following the removal of IA-64 (Itanium) support in the Linux kernel and glibc, and subsequent discussions on our mailing list, as well as a vote by the Gentoo Council, Gentoo will discontinue all ia64 profiles and keywords. The primary reason for this decision is the inability of the Gentoo IA-64 team to support this architecture without kernel support, glibc support, and a functional development box (or even a well-established emulator). In addition, there have been only very few users interested in this type of hardware.

As also announced in a news item, in one month, i.e. in the first half of September 2024, all ia64 profiles will be removed, all ia64 keywords will be dropped from all packages, and all IA-64 related Gentoo bugs will be closed.

Intel Itanium logo


July 23 2024

Optimizing distutils-r1.eclass via wheel reuse

Michał Górny (mgorny) July 23, 2024, 18:06

Yesterday I enabled a new distutils-r1.eclass optimization: wheel reuse. Without this optimization, the eclass would build a separate wheel for every enabled Python implementation, and then install every one of these wheels. In many cases, this meant repeatedly building the same thing. With the optimization enabled, under some circumstances the eclass will be able to build just one (or two) wheels, and install them for all implementations.

This change brings the eclass behavior closer to the behavior of package managers such as pip. While this will cause no change for users who build packages for a single Python version only, it can bring some nice speedup when building for multiple interpreters. Particularly, pure Python packages using setuptools will no longer incur the penalty of having to start setuptools multiple times (which is quite slow), and packages using the stable ABI won’t have to build roughly identical extensions multiple times.

In this post, I’m going to shortly go over a few design considerations of the new feature.

Pure Python wheels, and partial C extension compatibility

The obvious candidates for wheel reuse are pure Python wheels, i.e. packages using the *-py3-none-any.whl (or *-py2.py3-none-any.whl) suffix. Therefore, the algorithm would be roughly this: build a wheel; if you get a pure Python wheel, use it for all implementations.

[Well, to be more precise, the eclass works more like this: check if any of the previously built wheels can be used; if one can, use it; otherwise build a new wheel, add it to the list and use that.]

However, there is a problem with that approach: some packages feature extensions that aren’t used across all supported implementations. In particular, some packages don’t enable extensions for PyPy (often simply because pure Python code with JIT tends to be faster than calling into the C/Rust extension). Since we’re building for PyPy3 first, the pure Python package created for PyPy would end up being reused across all implementations!

Fortunately, a simple way around the problem was already available — for multiple reasons, we already expect DISTUTILS_EXT to be set for all ebuilds featuring (at least optional) compiled extensions. Therefore, I’ve modified the logic to reuse pure Python wheels only if we don’t expect extensions. If we do, then pure Python wheels are ignored.

Of course, this is not a perfect solution. If a package supports more than one implementation that uses the pure Python version, the wheel won’t be reused. In fact, if a package features a native-extensions flag and it’s disabled, so that no extensions are built at all, pure Python wheel reuse is also disabled! But that’s just a missed optimization, and it’s better to stay on the safe side here.

Still, there are some risks left here. In particular, if a developer misses the CPython-only extension and includes PyPy3 from day one, wheel reuse will prevent the eclass from immediately reporting missing DISTUTILS_EXT. Fortunately, I think we can reasonably expect that someone will build it with PyPy3 target disabled and report the problem. In fact, I’m pretty sure our CI will catch that very fast.

Stable ABI wheels

Stable ABI wheels are the second candidate for wheel reuse. Long story short, normally Python extensions are only guaranteed to be compatible with the single version of Python they were built for. However, should one use the so-called limited API, the resulting extensions will be forward-compatible with all CPython versions newer than the specified minimal version. The advantage of reusing stable ABI wheels is much greater than for pure Python wheels, since we avoid repeatedly building the same C or Rust code, which can be quite resource-consuming.

Normally, reusing stable ABI wheels requires determining whether a particular ABI/platform tag is compatible with the implementation in question. For example, a stable ABI wheel could be suffixed *-cp38-abi3-linux_x86_64.whl. This means that the particular wheel is compatible with CPython 3.8 and newer, on the Linux x86_64 platform. Unfortunately, these tags can get quite complex, and the packaging library features quite extensive code for determining tag compatibility.

The good news is that we don’t really need to do that. Since we’re building wheels locally, we don’t need to be concerned about the platform tag at all. Furthermore, since we are building from the oldest to the newest Python version, we can also ignore the ABI tag (beyond checking for abi3) and assume that a wheel built for the previous (i.e. older) CPython version will be compatible with the newer one. That said, we need to take special care that the stable ABI is supported only by CPython, and not PyPy.

Multiple wheels per package

One final problem with wheel reuse is that a single Gentoo package may be building multiple wheels. For example, dev-python/sqlglot builds a main Python package and a Rust extension. A “dumb” wheel reuse would mean that the first wheel built would be used for all subsequent calls, even if these were supposed to build completely different packages!

To resolve this issue, I’ve converted the DISTUTILS_WHEELS variable into an associative array, mapping wheels into directory paths. For every wheel built, we are recording both wheel path and the source directory — and reusing the wheel only if the directory matches.
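A rough sketch of that bookkeeping (helper names are assumed for illustration; the actual eclass code differs):

```shell
# Sketch only: map each built wheel to the source directory it came from.
declare -A DISTUTILS_WHEELS=()

record_wheel() {
    # $1 = wheel path, $2 = source directory it was built from
    DISTUTILS_WHEELS[$1]=$2
}

find_reusable_wheel() {
    # Print a previously built wheel for directory $1, if any.
    local wheel
    for wheel in "${!DISTUTILS_WHEELS[@]}"; do
        if [[ ${DISTUTILS_WHEELS[$wheel]} == "$1" ]]; then
            echo "$wheel"
            return 0
        fi
    done
    return 1
}
```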

Summary

The resulting code in distutils-r1.eclass implements all that was mentioned above. I had been using it for 2 months prior to enabling it by default, and found no issues. During this period, the eclass was additionally verifying that Python packages don’t install files with different contents when they declare that they produce universal wheels.

I’m really proud of how simple the logic is. If wheel reuse is enabled, scan recorded wheel list for wheels matching the current directory. For all matching wheels, check their tags. If we do not expect extensions, and we’ve got a pure Python wheel, use it. If we are installing for CPython, and we’ve got a stable ABI wheel, use it. Otherwise (no matching wheel or reuse disabled), build and install a new wheel (this is actually a call to the old function) and add it to the list.
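The per-wheel decision above can be sketched roughly as follows (only DISTUTILS_EXT comes from the eclass; the function and its arguments are illustrative, not the actual eclass code):

```shell
# Sketch of the wheel reuse decision; not the actual eclass code.
# DISTUTILS_EXT is set when the package builds compiled extensions.
can_reuse_wheel() {
    local wheel=$1 impl=$2  # impl: "cpython" or "pypy"
    case $wheel in
        *-py3-none-any.whl|*-py2.py3-none-any.whl)
            # Pure Python wheel: reusable unless extensions are expected.
            [ -z "${DISTUTILS_EXT}" ] ;;
        *-abi3-*.whl)
            # Stable ABI wheel: forward-compatible (we build from the
            # oldest CPython up), but never valid for PyPy.
            [ "$impl" = cpython ] ;;
        *)
            # Version-specific wheel: never reused.
            return 1 ;;
    esac
}
```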

Hope this helps you save some time and save some energy. I definitely don’t need the extra heating in this hell of a summer.


July 11 2024

The review-work balance, and other dilemmas

Michał Górny (mgorny) July 11, 2024, 14:11

One of the biggest problems of working in a project as large as Gentoo is that there’s always a lot of work to be done. Once you get engaged deeply enough, no matter how hard you try, the backlog will just keep growing. There are just so many things that need to be done, and someone has to do them.

Sooner or later, you are going to start facing some dilemmas, such as:

  • This befell me because nobody else was willing to do it. Should I continue overburdening myself with this, or should I leave it and let it rot?
  • I have more time than other people on the team. Should I continue doing the bulk of the work, or leave more of it to them?
  • What is the right balance between reviewing contributions, and doing the work myself?

In this post, I’d like to discuss these problems from my perspective as a long-time Gentoo developer.

Doing what needs to be done

There are things I’ve taken up in Gentoo simply because I’ve found them interesting or enjoyable. However, there are also some things that I’ve taken up because they needed to be done and nobody was doing them. And then there are things that fall somewhere in the middle — like Python, where I enjoy a lot of the work, but also end up with a lot of thankless tasks. And I don’t believe it’s fair to do just the nice part and ignore the hard part.

The immediate reasons for taking up these jobs vary. Sometimes a particular problem affected me directly, so I stepped up to resolve it — this is basically how people end up joining the Gentoo Infrastructure team. Sometimes I’ve noticed something early that would be a major hassle for users later on, and I’ve taken it up. Sometimes I’ve noticed that many users are already complaining about something, and that something needs to be done.

But then, what next? Let’s say I’ve ended up doing something that’s not really a good fit for me. I keep sending calls for help, but receive no offers. Now I’m facing said dilemma: Should I continue overburdening myself with this, or should I leave it and let it rot?

The truth is, sometimes abandoning stuff is the right thing to do. It has major drawbacks: it affects people, and it makes the work pile up. However, it also makes the problem more visible. Sometimes that’s the only way to get another person to pick it up; at other times, more users become aware of the problem and offer to help.

However, it’s never an easy choice to make, and should you make it, you are never sure if it will actually work. It may turn out that you will eventually have to pick it up yourself, and have to deal with all the resulting backlog.

Doing work yourself, or letting others do it

There’s a proverb: if you want something done right, you have to do it yourself. Perhaps it’s not the nicest way of putting it. Let’s frame the problem differently: you are the person in the best position to do something. You are flexible; you can do things the same day, while others need two or three days.

So, there are advantages and disadvantages to doing things yourself. On one hand, it means things get done sooner (so users benefit), and people who are more busy with their lives aren’t distracted by stuff that you can do. On the other hand, it means that if you are already overburdened with work, you spend time on things that others can do for you, instead of on things that only you can do. Others have less opportunity to practice doing stuff, and in the end may even get discouraged from actively contributing.

The last part is actually the biggest problem; it’s a bit of a chicken-or-egg problem. If you’re more experienced, you’re better equipped to deal with problems. However, this means that others don’t get an opportunity to gain the experience and become better. And this in turn means that the actual bus factor is not as high as it could be — you have people interested in doing stuff, but they don’t have the training.

This is where the dilemma comes in: Should I continue doing the bulk of the work, or leave more of it to them? Sometimes it’s not that big of a deal — in the Python team, the version bumps are mostly a sliding-window kind of work. I do the bulk of the bumps every morning, and others join in at different times. But sometimes it isn’t that easy.

And in the end, you never really know whether things will work as expected: if you start doing things less often, giving more time to others, will they actually find time to do them? Or will it just mean you’re going to end up doing more the next time, and at the same time lose your perfect track record of response time?

The balance between doing and reviewing

Like any Free Software project, Gentoo has a thriving community. Part of being a developer is accepting contributions from this community. Unfortunately, reviewing them is not always easy.

Sometimes it is, for example, when a pull request is addressing a very specific problem, and you just have to look at the diff, and perhaps test it. At other times, reviewing actually takes more work than doing things yourself. And then you have to strive for balance.

For example, let’s consider an average version bump. How you do it yourself is, roughly: run a script to copy the ebuild, check the diff between the sources, update the ebuild, test. Sometimes it’s trivial, sometimes it’s not — but it’s all pretty streamlined. Now, if you’re reviewing a version bump done by someone, you need to merge their commits, diff the packages, diff the ebuilds, test. Most of the time, it means you’re actually doing more work than if you were doing it yourself — but this is fine so far.

The problem is, sometimes the user doesn’t do some extra maintenance tasks you’d do (not blaming them, but it’s something you want done anyway). Sometimes there are mistakes to be fixed. All these things multiply the work involved, and delay the actual bump (effectively affecting users negatively). You need to leave review comments, wait for the user to update and try again. Rinse and repeat.

The worst part is that you’re never sure if it’s worth it. Sometimes you don’t even know if the user is really interested in working on this, or just wanted to get it bumped. You spend your time pointing out issues, the user spends theirs fixing them, and in the end you both would have preferred that you’d done it all yourself.

The flip side is that there are actually promising contributors, and if you go through the whole effort, you’ll end up with people who you can actually trust to do things right, and it pays back in the end. Perhaps people who are going to become Gentoo developers. But you have to put a lot of effort in and take a lot of risk for this. And that isn’t easy when you are already overburdened.

If you get the balance wrong on one side, you get things done, but you get no new help and the project eventually dies. If you get it wrong on the other side, you waste your time, get no benefit and don’t get other things done.

And then LLMs come and promise a new hell for you: people who could be (unintentionally) making pull requests with plagiarized, poor-quality code. They submit stuff with minimal effort, you spend a lot of effort reviewing it, only to discover that the submitters have no clue what they’ve sent in the first place. That’s one of the reasons Gentoo has banned LLM contributions — and added an explicit checkbox to the pull request template. But will this suffice?

Summary

In this post, I’ve summarized some of the biggest dilemmas I’m facing as a Gentoo developer. In fact, we’re all facing them. We’re all forced to make decisions, and see their outcome. Sometimes we see that what we did was right, and it pays off. Sometimes, it turns out that things end up on fire, and again we have to make a choice — should we give up and run with the fire extinguisher, and go back to square one? Or should we just let it burn? Perhaps somebody else will extinguish it then, or perhaps it’s actually better if it burns to the ground… Maybe it will turn out to be a phoenix?


June 23 2024

Evolving QA tooling

Tim Harder (pkgcraft) June 23, 2024, 3:09

QA support in Gentoo has been a fluid, amorphous goal over the project’s history. Throughout the years, developers have invented their own scripts and extensions to work around the limitations of official tooling. More recently, the relaxed standards have been tightened up a fair amount, but there is still room to achieve more with better tooling.

When I began my tenure as an ebuild maintainer between 2005 and 2010, much of the development process revolved around CVS and repoman, both of which felt slow and antiquated even at the outset. Thankfully, CVS was swapped out for git in 2015, but repoman stuck around for years after that. While work was done on repoman over the years that followed, its overall design flaws were never corrected, leading to it being officially retired in 2022 in favor of pkgcheck (and pkgdev).

Comparatively speaking, pkgcheck is much better designed than repoman; however, it still lags in many areas, due in part to relying on pkgcore and to using an amalgamation of caching and hacks to obtain a modicum of performance via parallelization. In short, performance can still be drastically improved, but the work required to achieve such results is not easy.

Pkgcraft support

Similar to how pkgcheck builds on top of pkgcore, pkgcraft provides its core set of QA tooling via pkgcruft, an ebuild linter featuring a small subset of pkgcheck’s functionality along with several extensions. As the project is quite new, its limited set of checks runs the spectrum from bash parsing to dependency scanning.

An API for scanning and reports is also provided, allowing language bindings for pkgcruft or building report (de)serialization into web interfaces and other tools. For example, a client/server combination could be constructed that creates and responds to queries related to reports generated by given packages between certain commit ranges.

Looking towards the future, the current design allows extending its ability past ebuild repos to any viable targets that make sense on Gentoo systems. For example, it could be handy to scan binary repos for outdated packages, flag installed packages removed from the tree, or warn about USE flag settings in config files that aren’t relevant anymore. These types of tasks are often handled in a wide variety of places (or left to user implementation) at varying quality and performance levels.

Install

For those running Gentoo, it can be found in the main tree at dev-util/pkgcruft. Alternatively, it can be installed via cargo using the following commands:

Current release: cargo install pkgcruft

From git: cargo install pkgcruft --git https://github.com/pkgcraft/pkgcraft.git

Pre-built binaries are also provided for releases on supported platforms.

Metadata issues

Before going through usage patterns, it should be noted that pkgcraft currently doesn’t handle metadata generation in threaded contexts, so pkgcruft will often crash when run against ebuilds with outdated metadata. Fixing this requires redesigning how pkgcraft interacts with its embedded bash interpreter, probably forcing the use of a process-spawning daemon similar to pkgcore’s ebd (ebuild daemon), but handled natively instead of in bash.

A simple workaround is to incrementally generate metadata by running pk pkg metadata from any ebuild repo directory. If that command completes successfully, then pkgcruft can be run from the same directory as well. On failure, the related errors should be fixed and the metadata regenerated before attempting to run pkgcruft. As a reference, pkgcruft can safely be run on writable repos via a command similar to the following:

  • pk pkg metadata && pkgcruft scan

Until pkgcraft’s metadata generation issue with threads is solved, it might be easiest to add a shell alias or function that allows options to be specified for pkgcruft scan.
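For example, a small wrapper function in one’s shell configuration would do; the name pcscan is just a suggestion, not an established convention:

```shell
# Wrapper that regenerates metadata before scanning; any extra options
# are forwarded to `pkgcruft scan`. The function name is made up here.
pcscan() {
	pk pkg metadata && pkgcruft scan "$@"
}
```

A function is slightly more flexible than an alias, since the forwarded arguments land exactly where `"$@"` is placed rather than being appended to the end of the alias text.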

Usage

Much of pkgcruft’s command-line interface mirrors that of pkgcheck, as there are only so many ways to construct a linter, and the similarity helps map existing knowledge to a new tool. See the following commands for example usage:

Scanning

Scan the current directory assuming it’s inside an ebuild repo:

  • pkgcruft scan

Scan an unconfigured, external repo:

  • pkgcruft scan path/to/repo

Scan the configured gentoo repo:

  • pkgcruft scan --repo gentoo
  • pkgcruft scan '*::gentoo'

Scan all dev-python/* ebuilds in the configured gentoo repo:

  • pkgcruft scan --repo gentoo 'dev-python/*'
  • pkgcruft scan 'dev-python/*::gentoo'

See the help output for other scan-related options such as reporter support or report selection. Man pages and online documentation will also be provided in the future.

  • pkgcruft scan --help

Filtering

Native filtering support is included via the -f/--filters option, allowing package versions matching various conditions to be targeted. Note that filters can be chained and inverted to further narrow targets. Finally, only checks that operate on individual package versions can be run when filters are used; all others are automatically disabled.

Restrict to the latest version of all packages:

  • pkgcruft scan -f latest

Restrict to packages with only stable keywords:

  • pkgcruft scan -f stable

Restrict to unmasked packages:

  • pkgcruft scan -f '!masked'

Restrict to the latest, non-live version:

  • pkgcruft scan -f '!live' -f latest

Beyond statically defined filters, much more powerful package restrictions are supported and can be defined using a declarative query format that allows logical composition. More information relating to valid package restrictions will be available once better documentation is written for them and pkgcraft in general. Until that work has been done, see the following commands for example usage and syntax:

Restrict to non-live versions maintained by the python project:

  • pkgcruft scan -f '!live' -f "maintainers any email == 'python@gentoo.org'"

Restrict to packages without maintainers:

  • pkgcruft scan -f "maintainers is none"

Restrict to packages with RDEPEND containing dev-python/* and empty BDEPEND:

  • pkgcruft scan -f "rdepend any 'dev-python/*' && bdepend is none"

Replay

Similar to pkgcheck, replay support is provided, enabling workflows that cache results and replay them later, potentially using custom filters. At this time, pkgcruft only supports serializing reports to newline-delimited JSON objects, which can be done via the following command:

  • pkgcruft scan -R json > reports.json

The serialized reports file can then be passed to the replay subcommand to deserialize the reports.

  • pkgcruft replay reports.json

This functionality can be used to perform custom package filtering, sort the reports, or filter the report variants. See the following commands for some examples:

Replay all dev-python/* related reports, returning the total count:

  • pkgcruft replay -p 'dev-python/*' reports.json -R simple | wc -l

Replay all report variants generated by the Whitespace check:

  • pkgcruft replay -c Whitespace reports.json

Replay all python update reports:

  • pkgcruft replay -r PythonUpdate reports.json

Replay all reports in sorted order:

  • pkgcruft replay --sort reports.json

Benchmarks and performance

Rough benchmarks comparing pkgcruft and pkgcheck, targeting a related check run over a semi-recent gentoo repo checkout on a modest laptop with 8 cores/16 threads (AMD Ryzen 7 5700U) and a midline SSD, are as follows:

  • pkgcheck: pkgcheck scan -c PythonCompatCheck -j16 – approximately 5s
  • pkgcruft: pkgcruft scan -c PythonUpdate -j16 – approximately 0.56s

For comparative parallel efficiency, pkgcruft achieves the following with different job counts:

  • pkgcruft: pkgcruft scan -c PythonUpdate -j8 – approximately 0.65s
  • pkgcruft: pkgcruft scan -c PythonUpdate -j4 – approximately 1s
  • pkgcruft: pkgcruft scan -c PythonUpdate -j2 – approximately 2s
  • pkgcruft: pkgcruft scan -c PythonUpdate -j1 – approximately 4s

Note that these results are approximated averages for multiple runs without flushing memory caches. Initial runs of the same commands will be slower due to additional I/O latency.

While the python update check isn’t overly complex it does require querying the repo for package matches which is the most significant portion of its runtime. Little to no work has been done on querying performance for pkgcraft yet, so it may be possible to decrease the runtime before resorting to drastic changes such as a more performant metadata cache format.

While there is still room for improvement, pkgcruft already runs faster on a single thread than pkgcheck does on all available cores. Most of this probably comes from the implementation language, which is further exhibited when restricting runs to single category and package targets, where process startup time dominates. See the following results for the same check run in those contexts:

Targeting dev-python/*:

  • pkgcheck: pkgcheck scan -c PythonCompatCheck -j16 – approximately 1s
  • pkgcruft: pkgcruft scan -c PythonUpdate -j16 – approximately 0.13s

Targeting dev-python/jupyter-server:

  • pkgcheck: pkgcheck scan -c PythonCompatCheck -j16 – approximately 0.38s
  • pkgcruft: pkgcruft scan -c PythonUpdate -j16 – approximately 0.022s

Note that in the case of targeting a single package with multiple versions, pkgcruft currently doesn’t parallelize per version, and thus could possibly halve its runtime if that work is done.

Finally, in terms of memory usage, pkgcruft usually consumes about an order of magnitude less than pkgcheck, mostly because rust’s ownership model lets it use immutable references where python more often has to clone objects. Also, pkgcheck’s parallel design uses processes instead of threads due to python’s weaker concurrency support, again a result of historical language design, leading to more inefficiency. This difference may increase as more intensive checks or query caching are implemented, as pkgcruft should be able to share writable objects between threads via locking or channels more readily than pkgcheck can manage performantly between processes.

But is the duplicated effort worth it?

Even with some benchmarks showing potential, it may be hard to convince others that reworking QA scanning yet again is a worthwhile endeavor. This is a fair assessment, as much work has gone into pkgcheck to bring it to its current state underpinning Gentoo’s QA. When weighing this opinion, it helps to revisit why repoman was supplanted and to discuss its relative performance difference compared to pkgcheck.

Disregarding the work done on enabling more extensive checks, it can be argued that pkgcheck’s performance differential allowed it to be more reasonably deployed at scale and is one of the main reasons Gentoo QA has noticeably improved in the last five to ten years. Instead of measuring a full tree scan in hours (or perhaps even days on slower machines) it can run in minutes. This has enabled Gentoo’s CI (continuous integration) setup to flag issues within a shorter time period after being pushed to the tree.

Pkgcheck’s main performance improvement over repoman came in terms of its design enabling much better internal parallelization support which repoman entirely lacked for the majority of its existence. However, single thread performance was much closer for similar use cases.

With that in mind, pkgcruft runs significantly faster than pkgcheck for single threaded comparisons of related checks before taking its more efficient parallelization design (threads vs processes) into account. Similar to the jump from repoman to pkgcheck, using pkgcruft could enable even more CI functionality that has never been seriously considered such as rejecting git pushes server-side due to invalid commits.

Whether this makes the reimplementation effort worthwhile is still debatable, but it’s hard to argue against a design that achieves similar results using an order of magnitude less time and space with little work done towards performance thus far. If nothing else, it exhibits a glimpse of potential gains if Gentoo can ever break free of its pythonic shackles.

Future work

As with all replacement projects, there are many features pkgcruft lacks when comparing it to pkgcheck. Besides the obvious check set differential, the following are a few ideas beyond what pkgcheck supports that could come to fruition if more work is completed.

Viable revdeps cache

Verifying reverse dependencies (revdeps) is related to many dependency-based checks most of which are limited in scope or have to run over the entire repo. For example, when removing packages pkgcheck needs to do a full tree visibility scan in order to verify package dependencies.

Leveraging a revdeps cache, this could be drastically simplified to checking a much smaller set of packages. The major issues with this feature are defining a cache format supporting relatively quick (de)serialization and restriction matching while also supporting incremental updates in a performant fashion.

Git commit hooks

None of the QA tools developed for Gentoo have been fast enough to run server-side per git push, rejecting invalid commits before they hit the tree. In theory, pkgcruft might be able to get there, running in the 50-500ms range depending on the set of checks enabled, amount of target packages, and hardware running them.

Properly supporting this while minding concurrent pushes requires a daemon that the git hook queues tasks on with some type of filtering to ignore commits that cause too many package metadata updates (as it would take too long to responsively update metadata and scan them for most systems). Further down the road, it could make sense to decouple pushing directly to the main branch and instead provide support for a merge queue backed by pkgcruft thus alleviating some of the runtime sensitive pressure allowing to move from sub-second goals to sub-minute especially if some sense of progress and status is provided for feedback.

Native git bisect support

Extending pkgcheck’s git support provided by pkgcheck scan --commits, it should be possible to natively support bisecting ebuild repo commit ranges to find a bad commit generating certain report variants. Gentoo CI supports this in some form for its notification setup, but implements it in a more scripted fashion preventing regular users from leveraging it without recreating a similar environment.

Pkgcruft could internally run the procedure using native git library support and expose it via a command such as pkgcruft bisect a..b. While this may be a workflow only used by more experienced devs, it would be handy to support natively instead of forcing users to roll their own scripts.

  1. Pkgcore ambles over the low bar set by portage’s design but has been showing its age since 2015 or so. It’s overly meta, leaning into python’s “everything is an object” tenet too much while hacking around the downsides of that approach for performance reasons. ↩︎

  2. Aiming to fight the neverending torrent of package cruft in ebuild repos. ↩︎

  3. Install pkgcraft-tools in order to use the pk command. ↩︎

  4. Python’s weaker threading support may be improved due to ongoing work to disable the GIL (global interpreter lock) in CPython 3.13; however, it’s still difficult to see how a language not designed for threading (outside usage such as asynchronous I/O) adapts while supporting both GIL and non-GIL functionality as currently, separate builds (having already gone through a compatibility fiasco during the py2 -> py3 era). ↩︎

QA support in Gentoo has been a fluid, amorphous goal over the project’s history. Throughout the years, developers have invented their own scripts and extensions to work around the limitations of official tooling. More recently, the relaxed standards have been tightened up a fair amount, but there is still room to achieve more with further improvement.

When I began my tenure as an ebuild maintainer between 2005 and 2010, much of the development process revolved around CVS and repoman, both of which felt slow and antiquated even then. Thankfully, CVS was swapped out for git in 2015, but repoman stuck around for years after that. While work was done on repoman over the years that followed, its overall design flaws were never corrected, leading to it being officially retired in 2022 in favor of pkgcheck (and pkgdev).

Comparatively speaking, pkgcheck is much better designed than repoman; however, it still lags in many areas, due in part to relying on pkgcore1 and using an amalgamation of caching and hacks to obtain a modicum of performance via parallelization. In short, performance can still be drastically improved, but the work required to achieve such results is not easy.

Pkgcraft support

Similar to how pkgcheck builds on top of pkgcore, pkgcraft provides its core set of QA tooling via pkgcruft2, an ebuild linter featuring a small subset of pkgcheck’s functionality with several extensions. As the project is quite new, its limited number of checks span the spectrum from bash parsing to dependency scanning.

An API for scanning and reports is also provided, allowing language bindings for pkgcruft or building report (de)serialization into web interfaces and other tools. For example, a client/server combination could be constructed that creates and responds to queries related to reports generated by given packages between certain commit ranges.

Looking towards the future, the current design allows extending its ability past ebuild repos to any viable targets that make sense on Gentoo systems. For example, it could be handy to scan binary repos for outdated packages, flag installed packages removed from the tree, or warn about USE flag settings in config files that aren’t relevant anymore. These types of tasks are often handled in a wide variety of places (or left to user implementation) at varying quality and performance levels.

Install

For those running Gentoo, it can be found in the main tree at dev-util/pkgcruft. Alternatively, it can be installed via cargo using the following commands:

Current release: cargo install pkgcruft

From git: cargo install pkgcruft --git https://github.com/pkgcraft/pkgcraft.git

Pre-built binaries are also provided for releases on supported platforms.

Metadata issues

Before going through usage patterns, it should be noted that pkgcraft currently doesn’t handle metadata generation in threaded contexts so pkgcruft will often crash when run against ebuilds with outdated metadata. Fixing this requires redesigning how pkgcraft interacts with its embedded bash interpreter, probably forcing the use of a process spawning daemon similar to pkgcore’s ebd (ebuild daemon), but handled natively instead of in bash.

A simple workaround involves incrementally generating metadata by running pk pkg metadata from any ebuild repo directory3. If that command completes successfully, then pkgcruft can be run from the same directory as well. On failure, the related errors should be fixed and metadata regenerated before attempting to run pkgcruft. As a reference, pkgcruft can safely be run on writable repos via a command similar to the following:

  • pk pkg metadata && pkgcruft scan

It might be easiest to add a shell alias allowing for options to be specified for pkgcruft scan until pkgcraft’s metadata generation issue with threads is solved.
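Until that is fixed, the two steps can be kept together in a small wrapper function rather than an alias; this is a minimal sketch (the function name is arbitrary) that passes any extra options through to pkgcruft scan:

```shell
# Hypothetical wrapper: regenerate metadata first, then scan.
# Any arguments are forwarded to `pkgcruft scan`; if metadata
# generation fails, the scan is skipped entirely.
pkgcruft_safe_scan() {
    pk pkg metadata && pkgcruft scan "$@"
}
```

With this in a shell rc file, pkgcruft_safe_scan -f latest behaves like pkgcruft scan -f latest while refusing to run against stale metadata.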

Usage

Much of pkgcruft’s command-line interface mirrors that of pkgcheck, as there are only so many ways to construct a linter and the similarity aids mapping existing knowledge to a new tool. See the following commands for example usage:

Scanning

Scan the current directory assuming it’s inside an ebuild repo:

  • pkgcruft scan

Scan an unconfigured, external repo:

  • pkgcruft scan path/to/repo

Scan the configured gentoo repo:

  • pkgcruft scan --repo gentoo
  • pkgcruft scan '*::gentoo'

Scan all dev-python/* ebuilds in the configured gentoo repo:

  • pkgcruft scan --repo gentoo 'dev-python/*'
  • pkgcruft scan 'dev-python/*::gentoo'

See the help output for other scan-related options such as reporter support or report selection. Man pages and online documentation will also be provided in the future.

  • pkgcruft scan --help

Filtering

Native filtering support is included via the -f/--filters option, allowing specific package versions matching various conditions to be targeted. Note that filters can be chained and inverted to further narrow targets. Finally, only checks that operate on individual package versions can be run when filters are used; all others are automatically disabled.

Restrict to the latest version of all packages:

  • pkgcruft scan -f latest

Restrict to packages with only stable keywords:

  • pkgcruft scan -f stable

Restrict to unmasked packages:

  • pkgcruft scan -f '!masked'

Restrict to the latest, non-live version:

  • pkgcruft scan -f '!live' -f latest

Beyond statically defined filters, much more powerful package restrictions are supported and can be defined using a declarative query format that allows logical composition. More information relating to valid package restrictions will be available once better documentation is written for them and pkgcraft in general. Until that work has been done, see the following commands for example usage and syntax:

Restrict to non-live versions maintained by the python project:

  • pkgcruft scan -f '!live' -f "maintainers any email == 'python@gentoo.org'"

Restrict to packages without maintainers:

  • pkgcruft scan -f "maintainers is none"

Restrict to packages with RDEPEND containing dev-python/* and empty BDEPEND:

  • pkgcruft scan -f "rdepend any 'dev-python/*' && bdepend is none"

Replay

Similar to pkgcheck, replay support is provided, enabling workflows that cache results and then replay them later, potentially using custom filters. Pkgcruft only supports serializing reports to newline-separated JSON objects at this time, which can be done via the following command:

  • pkgcruft scan -R json > reports.json

The serialized reports file can then be passed to the replay subcommand to deserialize the reports.

  • pkgcruft replay reports.json

This functionality can be used to perform custom package filtering, sort the reports, or filter the report variants. See the following commands for some examples:

Replay all dev-python/* related reports, returning the total count:

  • pkgcruft replay -p 'dev-python/*' reports.json -R simple | wc -l

Replay all report variants generated by the Whitespace check:

  • pkgcruft replay -c Whitespace reports.json

Replay all python update reports:

  • pkgcruft replay -r PythonUpdate reports.json

Replay all reports in sorted order:

  • pkgcruft replay --sort reports.json

Benchmarks and performance

Rough benchmarks comparing pkgcruft and pkgcheck targeting a related check run over a semi-recent gentoo repo checkout on a modest laptop with 8 cores/16 threads (AMD Ryzen 7 5700U) using a midline SSD are as follows:

  • pkgcheck: pkgcheck scan -c PythonCompatCheck -j16 – approximately 5s
  • pkgcruft: pkgcruft scan -c PythonUpdate -j16 – approximately .56s

For comparative parallel efficiency, pkgcruft achieves the following with different numbers of jobs:

  • pkgcruft: pkgcruft scan -c PythonUpdate -j8 – approximately .65s
  • pkgcruft: pkgcruft scan -c PythonUpdate -j4 – approximately 1s
  • pkgcruft: pkgcruft scan -c PythonUpdate -j2 – approximately 2s
  • pkgcruft: pkgcruft scan -c PythonUpdate -j1 – approximately 4s

Note that these results are approximated averages for multiple runs without flushing memory caches. Initial runs of the same commands will be slower due to additional I/O latency.

While the python update check isn’t overly complex, it does require querying the repo for package matches, which makes up the most significant portion of its runtime. Little to no work has been done on querying performance for pkgcraft yet, so it may be possible to decrease the runtime before resorting to drastic changes such as a more performant metadata cache format.

While there is still room for improvement, pkgcruft already runs faster using a single thread than pkgcheck running on all available cores. Most of this probably comes from the implementation language, which is further exhibited when restricting runs to single category and package targets where process startup time dominates. See the following results for the same check run in those contexts:

Targeting dev-python/*:

  • pkgcheck: pkgcheck scan -c PythonCompatCheck -j16 – approximately 1s
  • pkgcruft: pkgcruft scan -c PythonUpdate -j16 – approximately .13s

Targeting dev-python/jupyter-server:

  • pkgcheck: pkgcheck scan -c PythonCompatCheck -j16 – approximately .38s
  • pkgcruft: pkgcruft scan -c PythonUpdate -j16 – approximately .022s

Note that when targeting a single package with multiple versions, pkgcruft currently doesn’t parallelize per version and thus could roughly halve its runtime if that work is done.

Finally, in terms of memory usage, pkgcruft usually consumes about an order of magnitude less than pkgcheck, mostly because rust’s ownership model allows it to use immutable references where python tends to clone objects. Also, pkgcheck’s parallel design uses processes instead of threads due to python’s weaker concurrency support (again a result of historical language design4), leading to more inefficiency. This difference may grow as more intensive checks or query caching are implemented, since pkgcruft should be able to share writable objects between threads via locking or channels more readily than pkgcheck can share them between processes in a performant manner.

But is the duplicated effort worth it?

Even with some benchmarks showing potential, it may be hard to convince others that reworking QA scanning yet again is a worthwhile endeavor. This is a fair assessment, as much work has gone into pkgcheck to bring it to its current state underpinning Gentoo’s QA. When weighing this opinion, it helps to revisit why repoman was supplanted and discuss its relative performance compared to pkgcheck.

Disregarding the work done on enabling more extensive checks, it can be argued that pkgcheck’s performance differential allowed it to be deployed at scale more reasonably, and it is one of the main reasons Gentoo QA has noticeably improved in the last five to ten years. Instead of measuring a full tree scan in hours (or perhaps even days on slower machines), it can run in minutes. This has enabled Gentoo’s CI (continuous integration) setup to flag issues within a shorter time after they are pushed to the tree.

Pkgcheck’s main performance improvement over repoman came in terms of its design enabling much better internal parallelization support which repoman entirely lacked for the majority of its existence. However, single thread performance was much closer for similar use cases.

With that in mind, pkgcruft runs significantly faster than pkgcheck for single threaded comparisons of related checks before taking its more efficient parallelization design (threads vs processes) into account. Similar to the jump from repoman to pkgcheck, using pkgcruft could enable even more CI functionality that has never been seriously considered such as rejecting git pushes server-side due to invalid commits.

Whether this makes the reimplementation effort worthwhile is still debatable, but it’s hard to argue against a design that achieves similar results using an order of magnitude less time and space with little work done towards performance thus far. If nothing else, it exhibits a glimpse of potential gains if Gentoo can ever break free of its pythonic shackles.

Future work

As with all replacement projects, there are many features pkgcruft lacks compared to pkgcheck. Besides the obvious difference in check coverage, the following are a few ideas going beyond what pkgcheck supports that could come to fruition if more work is completed.

Viable revdeps cache

Verifying reverse dependencies (revdeps) is related to many dependency-based checks, most of which are limited in scope or have to run over the entire repo. For example, when removing packages, pkgcheck needs to do a full tree visibility scan in order to verify package dependencies.

Leveraging a revdeps cache, this could be drastically simplified to checking a much smaller set of packages. The major issues with this feature are defining a cache format supporting relatively quick (de)serialization and restriction matching while also supporting incremental updates in a performant fashion.

Git commit hooks

None of the QA tools developed for Gentoo have been fast enough to run server-side per git push, rejecting invalid commits before they hit the tree. In theory, pkgcruft might be able to get there, running in the 50-500ms range depending on the set of checks enabled, the number of target packages, and the hardware running them.

Properly supporting this while minding concurrent pushes requires a daemon that the git hook queues tasks on, with some type of filtering to ignore commits that cause too many package metadata updates (as it would take too long to responsively update metadata and scan them on most systems). Further down the road, it could make sense to decouple pushing directly to the main branch and instead provide a merge queue backed by pkgcruft. That would alleviate some of the runtime-sensitive pressure, allowing the goal to move from sub-second to sub-minute, especially if some sense of progress and status is provided for feedback.
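As a rough illustration of the filtering idea (not an actual pkgcruft or git feature), a server-side hook helper could cheaply count the ebuilds touched by a pushed range and bail out of synchronous scanning past some threshold. The threshold, the function name, and using changed-ebuild count as a proxy for metadata updates are all assumptions here:

```shell
# Hypothetical pre-receive helper: succeeds (exit 0) when the pushed
# range $1..$2 touches more ebuilds than we are willing to scan inline,
# in which case the hook would enqueue the task instead of blocking.
MAX_INLINE_EBUILDS=20

too_many_ebuilds() {
    # Count changed .ebuild files between the old and new commits.
    count=$(git diff --name-only "$1" "$2" | grep -c '\.ebuild$')
    [ "$count" -gt "$MAX_INLINE_EBUILDS" ]
}
```

A real hook would hand oversized pushes to the queueing daemon described above rather than merely skipping them.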

Native git bisect support

Extending pkgcheck’s git support provided by pkgcheck scan --commits, it should be possible to natively support bisecting ebuild repo commit ranges to find a bad commit generating certain report variants. Gentoo CI supports this in some form for its notification setup, but implements it in a more scripted fashion preventing regular users from leveraging it without recreating a similar environment.

Pkgcruft could internally run the procedure using native git library support and expose it via a command such as pkgcruft bisect a..b. While this may be a workflow only used by more experienced devs, it would be handy to support natively instead of forcing users to roll their own scripts.
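Until such a subcommand exists, a user-rolled approximation can be sketched on top of git bisect run, treating a revision as bad whenever it emits any report of the chosen variant. The function name is made up, and it assumes pkgcruft scan accepts a report filter akin to the -r option shown for replay above:

```shell
# Hypothetical helper: bisect the range $1 (bad) .. $2 (good) for the
# commit that first produces reports of variant $3.
# `git bisect run` treats exit 0 as good and nonzero as bad, so the
# probe inverts grep: no matching reports means the revision is good.
bisect_reports() {
    git bisect start "$1" "$2" &&
    git bisect run sh -c "! pkgcruft scan -r '$3' -R simple | grep -q ." &&
    git bisect reset
}
```

Something like bisect_reports HEAD some-good-tag PythonUpdate would then walk the range and stop at the first commit producing such reports.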


  1. Pkgcore ambles over the low bar set by portage’s design but has been showing its age since 2015 or so. It’s overly meta, leaning into python’s “everything is an object” tenet too much while hacking around the downsides of that approach for performance reasons. ↩︎

  2. Aiming to fight the neverending torrent of package cruft in ebuild repos. ↩︎

  3. Install pkgcraft-tools in order to use the pk command. ↩︎

  4. Python’s weaker threading support may be improved by ongoing work to disable the GIL (global interpreter lock) in CPython 3.13; however, it’s still difficult to see how a language not designed for threading (outside usage such as asynchronous I/O) adapts while supporting both GIL and non-GIL functionality as separate builds (having already gone through a compatibility fiasco during the py2 -> py3 era). ↩︎

May 30 2024

The dead weight of packages in Gentoo

Michał Górny (mgorny) May 30, 2024, 19:24

You’ve probably noticed it already: Gentoo developers are overwhelmed. There are a lot of unresolved bugs. There are a lot of unmaintained packages. There are a lot of open pull requests. This is all true, but it’s all part of a larger problem, and a problem that doesn’t affect Gentoo alone.

It’s a problem that any major project is going to face sooner or later, and especially a project that’s almost entirely relying on volunteer work. It’s a problem of bitrot, of different focus, of energy deficit. And it is a very hard problem to solve.

A lot of packages — a lot of effort

Packages are at the core of a Linux distribution. After all, what would be any of Gentoo’s advantages worth if people couldn’t actually use them to realize their goals? Some people even go as far as to say: the more packages, the better. Gentoo needs to have popular packages, because many users will want them. Gentoo needs to have unique packages, because that gives it an edge over other distributions.

However, having a lot of packages is also a curse. All packages require at least some maintenance effort. Some packages require very little of it, others require a lot. When packages aren’t being maintained properly, they stop serving users well. They become outdated, they accumulate bugs. Users spend time building dependencies just to discover that the package itself has been failing to build for months. Users try different alternatives only to discover that half of them don’t work at all, or perhaps are so outdated that they don’t actually have the functions upstream advertises, or even have data loss bugs.

Sometimes maintenance deficit is not that bad, but it usually is. Skipping every few releases of a frequently released package may have no ill effects, and save some work. Or it could mean that instead of dealing with trivial diffs (especially if upstream cared to make the changes atomic), you end up having to untangle a complex backlog. Or bisect bugs introduced a few releases ago. Or deal with an urgent security bump combined with major API changes.

If the demand for maintenance isn’t met for a long time, bitrot accumulates. And getting things going straight again becomes harder and harder. On top of that, if we can’t handle our current workload, how are we supposed to find energy to deal with all the backlog? Things quickly spiral out of control.

People want to do what they want to do

We all have packages that we find important. Sometimes, these packages require little maintenance; sometimes they are a pain in the ass. Sometimes, they end up being unmaintained, and we really wish someone would take care of them. Sometimes, we may end up going as far as to be angry that people are taking care of less important stuff, or that they keep adding new stuff while the existing packages rot.

The thing is, in a project that’s almost entirely driven by volunteer work, you can’t expect people to do what you want. The best you can achieve with that attitude is alienating them, and actively stopping them from doing anything. I’m not saying there aren’t cases where that’s actually preferable, but that’s beside the point. If you want something done, you either have to convince people to do it, do it yourself, or pay someone to do it. But even that might not suffice. People may agree with you, but not have the energy, time, or skills to do the work, or to review your work.

On top of that, there will always be an inevitable push towards adding new packages rather than dealing with abandoned ones. Users expect new software too. They don’t want to learn that Gentoo can’t have a single Matrix client, because we’re too busy keeping 20 old IRC clients alive. Or that they can’t have Fediverse software, because we’re overwhelmed with 30 minor window managers. And while this push is justified, it also means that the pile of unmaintained packages will still be there, and at the same time people will put effort into creating even more packages that may eventually end up on that pile.

The job market really sucks today

Perhaps it’s the nostalgia talking, but the situation in the job market is getting worse and worse. As I’ve mentioned before, the vast majority of Gentoo developers and contributors are volunteers. They are people who generally need to work full-time to keep themselves alive. Perhaps they work overtime. Perhaps they work in toxic workplaces. Perhaps they are sucked dry of their energy by problems. And they need to find time and energy to do Gentoo on top of that.

There are a handful of developers hired to do Gentoo. However, they are hired by corporations, and this obviously limits what they can do for Gentoo. To the best of my knowledge, there is no longer such a thing as “time to do random stuff in work time”. Their work can be beneficial to Gentoo users. Or it may not be. They may maintain important and useful packages, or they may end up adding lots of packages that they aren’t allowed to properly maintain afterwards, and that create extra work for others in the end.

Perhaps an option would be for Gentoo to actually pay someone to do stuff. However, this is a huge mess. Even provided that we’d be able to afford it, how do we choose what to pay for? And whom do we pay? In the end, the necessary proceedings also require a lot of effort and energy, and the inevitable bikeshedding is quite likely to drain it from anyone daring enough to try.

Proxy maintenance is not a long-term solution

Let’s be honest: proxy maintenance was supposed to make things better, but there’s only so much that it can do. In the end, someone needs to review stuff, and while it pays back greatly, it is more effort than “just doing it”. And there’s no guarantee that the contributor will respond timely, especially if we weren’t able to review stuff timely. Things can easily stretch out over time, or get stalled entirely, and that’s just one problem.

We’ve stopped accepting new packages via proxy-maint a long time ago, because we weren’t able to cope with it. I’ve created GURU to let people work together without being blocked by developers, but that’s not a perfect solution either.

And proxy-maint is just one facet of pull requests. Many pull requests affect packages maintained by a variety of developers, and handling them is even harder, as they require getting the developer to review or acknowledge the change.

So what is the long-term solution? Treecleaning?

I’m afraid it’s time to come to an unfortunate conclusion: the only real long-term solution is to keep removing packages. There are only so many packages that we can maintain, and we need to make hard decisions. Keeping unmaintained and broken packages is bad for users. Spending effort fixing them ends up biting us back.

The joke is, most of the time it’s actually less effort to fix the immediate problem than to last rite and remove a package. Especially when someone already provided a fix. However, fixing the immediate issue doesn’t resolve the larger problem of the package being unmaintained. There will be another issue, and then another, and you will keep pouring energy into it.

Of course, things can get worse. You can actually pour all that energy into last rites, just to have someone “rescue” the package last minute. Just to leave it unmaintained afterwards, and then you end up going through the whole effort again. And don’t forget that in the end you’re the “villain” who wants to take away a precious package from the users, and they were the “hero” who saved it, and now the users have to deal with a back-and-forth. It’s a thankless job.

However, there’s one advantage to removing packages: they can be moved to GURU afterwards. They can have another shot at finding an active maintainer there. There, they can actually be easily made available to users without adding to developers’ workload. Of course, I’m not saying that GURU should be a dump for packages removed from Gentoo — but it’s a good choice if someone actually wants to maintain it afterwards.

So there is hope — but it is also a lot of effort. But perhaps that’s a better way to spend our energy than trying to deal with an endless influx of pull requests, and with developers adding tons of new packages that nobody will be able to take over afterwards.

You’ve probably noticed it already: Gentoo developers are overwhelmed. There is a lot of unresolved bugs. There is a lot of unmaintained packages. There is a lot of open pull requests. This is all true, but it’s all part of a larger problem, and a problem that doesn’t affect Gentoo alone.

It’s a problem that any major project is going to face sooner or later, and especially a project that’s almost entirely relying on volunteer work. It’s a problem of bitrot, of different focus, of energy deficit. And it is a very hard problem to solve.

A lot of packages — a lot of effort

Packages are at the core of a Linux distribution. After all, what would be any of Gentoo’s advantages worth if people couldn’t actually use them to realize their goals? Some people even go as far as to say: the more packages, the better. Gentoo needs to have popular packages, because many users will want them. Gentoo needs to have unique packages, because that gives it an edge over other distributions.

However, having a lot of packages is also a curse. All packages require at least some maintenance effort. Some packages require very little of it, others require a lot. When packages aren’t being maintained properly, they stop serving users well. They become outdated, they accumulate bugs. Users spend time building dependencies just to discover that the package itself fails to build for months now. Users try different alternatives to discover that half of them don’t work at all, or perhaps are so outdated that they don’t actually have the functions upstream advertises, or even have data loss bugs.

Sometimes maintenance deficit is not that bad, but it usually is. Skipping every few releases of a frequently released package may have no ill effects, and save some work. Or it could mean that instead of dealing with trivial diffs (especially if upstream cared to make the changes atomic), you end up having to untangle a complex backlog. Or bisect bugs introduced a few releases ago. Or deal with an urgent security bump combined with major API changes.

If the demand for maintenance isn’t met for a long time, bitrot accumulates. And getting things going straight again becomes harder and harder. On top of that, if we can’t handle our current workload, how are we supposed to find energy to deal with all the backlog? Things quickly spiral out of control.

People want to do what they want to do

We all have packages that we find important. Sometimes, these packages require little maintenance, sometimes they are pain the ass. Sometimes, they end up being unmaintained, and we really wish someone would take care of them. Sometimes, we may end up going as far as to be angry that people are taking care of less important stuff, or that they keep adding new stuff while the existing packages rot.

The thing is, in a project that’s almost entirely driven by volunteer work, you can’t expect people to do what you want. The best you can achieve with that attitude is alienating them, and actively stopping them from doing anything. I’m not saying that there aren’t cases when this isn’t actually preferable but that’s beside the point. If you want something done, you either have to convince people to do it, do it yourself, or pay someone to do it. But even that might not suffice. People may agree with you, but not have the energy or time, or skills to do the work, or to review your work.

On top of that, there will always be an inevitable push towards adding new packages rather than dealing with abandoned ones. Users expect new software too. They don’t want to learn that Gentoo can’t have a single Matrix client, because we’re too busy keeping 20 old IRC clients alive. Or that they can’t have Fediverse software, because we’re overwhelmed with 30 minor window managers. And while this push is justified, it also means that the pile of unmaintained packages will still be there, and at the same time people will put effort into creating even more packages that may eventually end up on that pile.

The job market really sucks today

Perhaps it’s the nostalgia talking, but the situation in the job market is getting worse and worse. As I’ve mentioned before, the vast majority of Gentoo developers and contributors are volunteers. They are people who generally need to work full-time to keep themselves alive. Perhaps they work overtime. Perhaps they work in toxic workplaces. Perhaps their energy is sucked dry by other problems. And they need to find time and energy to do Gentoo on top of that.

There are a handful of developers hired to do Gentoo. However, they are hired by corporations, and this obviously limits what they can do for Gentoo. To the best of my knowledge, there is no longer such a thing as “time to do random stuff in work time”. Their work can be beneficial to Gentoo users. Or it may not be. They may maintain important and useful packages, or they may end up adding lots of packages that they aren’t allowed to properly maintain afterwards, and that create extra work for others in the end.

Perhaps an option would be for Gentoo to actually pay someone to do stuff. However, this is a huge mess. Even provided that we’d be able to afford it, how do we choose what to pay for? And whom do we pay? In the end, the necessary proceedings also require a lot of effort and energy, and the inevitable bikeshed is quite likely to drain the energy of anyone daring enough to try.

Proxy maintenance is not a long-term solution

Let’s be honest: proxy maintenance was supposed to make things better, but there’s only so much it can do. In the end, someone needs to review stuff, and while it pays back greatly, it is more effort than “just doing it”. And there’s no guarantee that the contributor will respond in a timely manner, especially if we weren’t able to review in a timely manner ourselves. Things can easily drag on, or get stalled entirely, and that’s just one problem.

We’ve stopped accepting new packages via proxy-maint a long time ago, because we weren’t able to cope with it. I’ve created GURU to let people work together without being blocked by developers, but that’s not a perfect solution either.

And proxy-maint is just one facet of pull requests. Many pull requests affect packages maintained by a variety of developers, and handling them is even harder, as it requires getting the developer to review or acknowledge the change.

So what is the long-term solution? Treecleaning?

I’m afraid it’s time to come to an unfortunate conclusion: the only real long-term solution is to keep removing packages. There are only so many packages that we can maintain, and we need to make hard decisions. Keeping unmaintained and broken packages is bad for users. Spending effort fixing them ends up biting us back.

The joke is, most of the time it’s actually less effort to fix the immediate problem than to last rite and remove a package. Especially when someone already provided a fix. However, fixing the immediate issue doesn’t resolve the larger problem of the package being unmaintained. There will be another issue, and then another, and you will keep pouring energy into it.

Of course, things can get worse. You can actually pour all that energy into last rites, just to have someone “rescue” the package last minute. Just to leave it unmaintained afterwards, and then you end up going through the whole effort again. And don’t forget that in the end you’re the “villain” who wants to take away a precious package from the users, and they were the “hero” who saved it, and now the users have to deal with a back-and-forth. It’s a thankless job.

However, there’s one advantage to removing packages: they can be moved to GURU afterwards. There, they can have another shot at finding an active maintainer, and they can easily be made available to users without adding to developers’ workload. Of course, I’m not saying that GURU should be a dump for packages removed from Gentoo — but it’s a good choice if someone actually wants to maintain a package afterwards.

So there is hope — but it is also a lot of effort. But perhaps that’s a better way to spend our energy than trying to deal with an endless influx of pull requests, and with developers adding tons of new packages that nobody will be able to take over afterwards.

April 10 2024

Gentoo Linux becomes an SPI associated project

Gentoo News (GentooNews) April 10, 2024, 5:00

As of this March, Gentoo Linux has become an Associated Project of Software in the Public Interest, see also the formal invitation by the Board of Directors of SPI. Software in the Public Interest (SPI) is a non-profit corporation founded to act as a fiscal sponsor for organizations that develop open source software and hardware. It provides services such as accepting donations, holding funds and assets, … SPI qualifies for 501(c)(3) (U.S. non-profit organization) status. This means that all donations made to SPI and its supported projects are tax deductible for donors in the United States. Read on for more details…

Questions & Answers

Why become an SPI Associated Project?

Gentoo Linux, as a collective of software developers, is pretty good at being a Linux distribution. However, becoming a US federal non-profit organization would increase the non-technical workload.

The current Gentoo Foundation has bylaws restricting its behavior to that of a non-profit, and is a recognized non-profit only in New Mexico; at the US federal level, it is a for-profit entity. A direct conversion to a federally recognized non-profit would be unlikely to succeed without significant effort and cost.

Finding Gentoo Foundation trustees to take care of the non-technical work is an ongoing challenge. Robin Johnson (robbat2), our current Gentoo Foundation treasurer, spent a huge amount of time and effort getting bookkeeping and taxes in order after the prior treasurers lost interest and retired from Gentoo.

For these reasons, Gentoo is moving the non-technical organization overhead to Software in the Public Interest (SPI). As noted above, SPI is already recognized at the US federal level as a full-fledged 501(c)(3) non-profit. It also handles several projects of similar type and size (e.g., Arch and Debian) and as such has exactly the experience and background that Gentoo needs.

What are the advantages of becoming an SPI Associated Project in detail?

Financial benefits to donors:

  • tax deductions [1]

Financial benefits to Gentoo:

  • matching fund programs [2]
  • reduced organizational complexity
  • reduced administration costs [3]
  • reduced taxes [4]
  • reduced fees [5]
  • increased access to non-profit-only sponsorship [6]

Non-financial benefits to Gentoo:

  • reduced organizational complexity, no “double-headed beast” any more
  • less non-technical work required

[1] Presently, almost no donations to the Gentoo Foundation provide a tax benefit for donors anywhere in the world. Becoming an SPI Associated Project enables tax benefits for donors located in the USA. Some other countries do recognize donations made to non-profits in other jurisdictions and provide similar tax credits.

[2] This also depends on jurisdictions and local tax laws of the donor, and is often tied to tax deductions.

[3] The Gentoo Foundation currently pays $1500/year in tax preparation costs.

[4] In recent fiscal years, through careful budgetary planning on the part of the Treasurer and advice of tax professionals, the Gentoo Foundation has used depreciation expenses to offset taxes owing; however, this is not a sustainable strategy.

[5] Non-profits are eligible for reduced fees, e.g., of Paypal (savings of 0.9-1.29% per donation) and other services.

[6] Some sponsorship programs are only available to verified 501(c)(3) organizations.

Can I still donate to Gentoo, and how?

Yes, of course, and please do so! To start, you can go to SPI’s Gentoo page and scroll down to the Paypal and Click&Pledge donation links. More information and more ways will be set up soon. Keep in mind, donations to Gentoo via SPI are tax-deductible in the US!

In time, Gentoo will contact existing recurring donors, to aid the transition to SPI’s donation systems.

What will happen to the Gentoo Foundation?

Our intention is to eventually transfer the existing assets to SPI and dissolve the Gentoo Foundation. The precise steps needed on the way to this objective are still under discussion.

Does this affect in any way the European Gentoo e.V.?

No. Förderverein Gentoo e.V. will continue to exist independently. It is also recognized to serve public-benefit purposes (§ 52 Fiscal Code of Germany), meaning that donations are tax-deductible in the E.U.

April 01 2024

The interpersonal side of the xz-utils compromise

Andreas K. Hüttel (dilfridge) April 01, 2024, 14:54

While everyone is busy analyzing the highly complex technical details of the recently discovered xz-utils compromise that is currently rocking the internet, it is worth looking at the underlying non-technical problems that make such a compromise possible. A very good write-up can be found on the blog of Rob Mensching...

"A Microcosm of the interactions in Open Source projects"

March 15 2024

Optimizing parallel extension builds in PEP517 builds

Michał Górny (mgorny) March 15, 2024, 15:41

The distutils (and therefore setuptools) build system supports building C extensions in parallel, through the use of the -j (--parallel) option, passed either to the build_ext or the build command. Gentoo’s distutils-r1.eclass has always passed these options to speed up builds of packages that feature multiple C files.

However, the switch to the PEP517 build backend made this problematic. While the backend uses the respective commands internally, it doesn’t provide a way to pass options to them. In this post, I’d like to explore the different ways we attempted to resolve this problem, trying to find an optimal solution that would let us benefit from parallel extension builds while preserving minimal overhead for packages that wouldn’t benefit from it (e.g. pure Python packages). I will also include fresh benchmark results to compare these methods.

The history

The legacy build mode utilized two ebuild phases: the compile phase, during which the build command was invoked, and the install phase, during which the install command was invoked. An explicit command invocation made it possible to simply pass the -j option.

When we initially implemented the PEP517 mode, we simply continued calling esetup.py build, prior to calling the PEP517 backend. The former call built all the extensions in parallel, and the latter simply reused the existing build directory.

This was a bit ugly, but it worked most of the time. However, it suffered from a significant overhead from calling the build command. This meant significantly slower builds in the vast majority of packages that did not feature multiple C source files that could benefit from parallel builds.

The next optimization was to replace the build command invocation with more specific build_ext. While the former also involved copying all .py files to the build directory, the latter only built C extensions — and therefore could be pretty much a no-op if there were none. As a side effect, we’ve started hitting rare bugs when custom setup.py scripts assumed that build_ext is never called directly. For a relatively recent example, there is my pull request to fix build_ext -j… in pyzmq.

I’ve followed this immediately with another optimization: skipping the call if there were no source files. To be honest, the code started looking messy at this point, but it was an optimization nevertheless. For the no-extension case, the overhead of calling esetup.py build_ext was replaced by the overhead of calling find to scan the source tree. Of course, this still had some risk of false positives and false negatives.
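To illustrate the idea, the heuristic could be sketched in Python roughly as follows. The suffix list and function name here are hypothetical; the eclass actually uses a find invocation with its own pattern list.

```python
from pathlib import Path

# Hypothetical suffix list -- the eclass's actual find expression differs.
C_SUFFIXES = {".c", ".cc", ".cpp", ".cxx"}

def has_extension_sources(srcdir):
    """Scan the source tree and report whether any files that look like
    C/C++ extension sources are present (a rough Python equivalent of
    the find-based check described above)."""
    return any(p.suffix in C_SUFFIXES
               for p in Path(srcdir).rglob("*") if p.is_file())
```

Like the find call, such a check can yield false positives (e.g. bundled C sources that the build never compiles) and false negatives.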

The next optimization was to call build_ext only if there were at least two files to compile. This mostly addressed the added overhead for packages building only one C file — but of course it couldn’t resolve all false positives.

One more optimization was to make the call conditional on the DISTUTILS_EXT variable. While the variable was introduced for another reason (to control adding the debug flag), it provided a nice solution to avoid both most of the false positives (even if they were extremely rare) and the overhead of calling find.

The last step wasn’t mine. It was Eli Schwartz’s patch to pass build options via DIST_EXTRA_CONFIG. This provided the ultimate optimization — instead of trying to hack a build_ext call around, we were finally able to pass the necessary options to the PEP517 backend. Needless to say, it meant not only no false positives and no false negatives, but it effectively almost eliminated the overhead in all cases (except for the cost of writing the configuration file).
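For illustration, a configuration file passed via DIST_EXTRA_CONFIG might look roughly like this. The path and values below are made up, and the file actually written by the eclass differs:

```ini
# Hypothetical config file; point DIST_EXTRA_CONFIG at its path before
# invoking the PEP517 backend.
[build]
build_base = /tmp/mypkg-build

[build_ext]
parallel = 8
```

Setuptools (through its vendored distutils) picks the file up from the environment variable, so no extra setup.py invocation is needed.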

The timings

Table 1. Benchmark results for various modes of package builds

Mode                        Django 5.0.3              Cython 3.0.9
                            step    PEP517  total     step    PEP517  total
Serial PEP517 build         n/a     5.4 s   5.4 s     n/a     46.7 s  46.7 s
build + PEP517              3.1 s   5.3 s   8.4 s     20.8 s  2.7 s   23.5 s
build_ext + PEP517          0.6 s   5.4 s   6 s       20.8 s  2.7 s   23.5 s
find + build_ext + PEP517   0.06 s  5.4 s   5.5 s     20.9 s  2.7 s   23.6 s
Parallel PEP517 build       n/a     5.4 s   5.4 s     n/a     22.8 s  22.8 s

(“step” denotes the extra setup.py invocation preceding the PEP517 build.)

Figure 1. Benchmark times plot (plot of the results from the table)

For a pure Python package (Django here), the table clearly shows how successive iterations have reduced the overhead from parallel build support, from roughly 3 seconds in the earliest approach, resulting in 8.4 s total build time, to the same 5.4 s as the regular PEP517 build.

For Cython, all but the ultimate solution result in roughly 23.5 s total, half of the time needed for a serial build (46.7 s). The ultimate solution saves another 0.8 s on the double invocation overhead, giving the final result of 22.8 s.

Test data and methodology

The methods were tested against two packages:

  1. Django 5.0.3, representing a moderate size pure Python package, and
  2. Cython 3.0.9, representing a package with a moderate number of C extensions.

Python 3.12.2_p1 was used for testing. The timings were done using the bash time builtin. The results were averaged from 5 warm cache test runs. Testing was done on an AMD Ryzen 5 3600, with pstates boost disabled.

The PEP517 builds were performed using the following command:

python3.12 -m build -nwx

The remaining commands and conditions were copied from the eclass. The test scripts, along with the results, spreadsheet and plot source can be found in the distutils-build-bench repository.
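The averaging described above could be done with a small helper along these lines. This is only a sketch under the stated methodology; the actual test scripts in the repository use bash’s time and differ in detail, and the helper name is made up:

```python
import statistics
import subprocess
import time

def mean_wall_time(argv, runs=5):
    """Run argv repeatedly (cache is warm after the first run) and
    return the mean wall-clock time in seconds."""
    samples = []
    for _ in range(runs):
        start = time.monotonic()
        subprocess.run(argv, check=True, capture_output=True)
        samples.append(time.monotonic() - start)
    return statistics.mean(samples)
```

For example, mean_wall_time(["python3.12", "-m", "build", "-nwx"]) would reproduce the PEP517 timing column.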

March 13 2024

The story of distutils build directory in Gentoo

Michał Górny (mgorny) March 13, 2024, 10:18

The Python distutils build system, as well as setuptools (that it was later merged into), used a two-stage build: first, a build command would prepare a built package version (usually just copy the .py files, sometimes compile Python extensions) into a build directory, then an install command would copy them to the live filesystem, or a staging directory. Curiously enough, distutils were an early adopter of out-of-source builds — when used right (which often enough wasn’t the case), no writes would occur in the source directory and all modifications would be done directly in the build directory.

Today, in the PEP517 era, two-stage builds aren’t really relevant anymore. Build systems were turned into black boxes that spew wheels. However, setuptools still internally uses the two-stage build and the build directory, and therefore it still remains relevant to Gentoo eclasses. In this post, I’d like to briefly tell how we dealt with it over the years.

Act 1: The first overrides

Normally, distutils would use a build directory of build/lib*, optionally suffixed for platform and Python version. This was reasonably good most of the time, but not good enough for us. On one hand, it didn’t properly distinguish CPython and PyPy (and it wouldn’t for a long time, until the Use cache_tag in default build_platlib dir PR). On the other, the directory name would be hard to get if ebuilds ever needed to do something about it (and we surely did).
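The cache_tag in question is exposed by the interpreter itself, which is what makes it suitable for keying build directories:

```python
import sys

# The tag names both the implementation and the version, e.g.
# "cpython-312" on CPython 3.12, or a pypy-prefixed tag on PyPy, so
# build directories derived from it cannot collide between the two.
print(sys.implementation.cache_tag)
```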

Therefore, the eclass would start overriding build directories quite early on. We would start by passing --build-base to the build command, then add --build-lib to make the lib subdirectory path simpler, then replace it with separate --build-platlib and --build-purelib to work around build systems overriding one of them (wxPython, if I recall correctly).

The eclass would call this mode “out-of-source build” and use a dedicated BUILD_DIR variable to refer to the dedicated build directory. Confusingly, “in-source build” would actually indicate a distutils-style out-of-source build in the default build subdirectory, and the eclass would create a separate copy of the sources for every Python target (effectively permitting in-source modifications).

The last version of code passing --build* options.

Act 2: .pydistutils.cfg

The big problem with the earlier approach is that you’d have to pass the options every time setup.py is invoked. Given the design of option passing in distutils, this effectively meant that you needed to repeatedly invoke the build commands (otherwise you couldn’t pass options to them).

The next step would be to replace this logic by using the .pydistutils.cfg configuration file. The file, placed in HOME (also overridden in the eclass), would allow us to set option values without actually having to pass specific commands on the command-line. The relevant logic, added in September 2013 (commit: Use pydistutils.cfg to set build-dirs instead of passing commands explicitly…), remains in the eclass even today. However, since the PEP517 build mode stopped using this file, it is used only in legacy mode.
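To illustrate, such a file might contain something along these lines. The option names mirror the --build-* flags from Act 1 (dashes and underscores are interchangeable in distutils config files); the paths are made up, and the eclass’s actual output differs:

```ini
# Sketch of a ~/.pydistutils.cfg as the eclass might write it.
[build]
build_base = /tmp/mypkg/build
build_platlib = /tmp/mypkg/build/lib
build_purelib = /tmp/mypkg/build/lib
```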

The latest version of the code writing .pydistutils.cfg.

Act 3: Messy PEP517 mode

One of the changes caused by building in PEP517 mode was that .pydistutils.cfg started being ignored. This implied that setuptools were using the default build directory again. It wasn’t such a big deal anymore — since we no longer used proper separation between the two build stages, and we no longer needed to have any awareness of the intermediate build directory, the path didn’t matter per se. However, it meant that CPython and PyPy started sharing the same build directory again — and since the setuptools install stage picks everything up from that directory, it meant that extensions built for PyPy3.10 would be installed to the CPython3.10 directory!

How did we deal with that? Well, at first I’ve tried calling setup.py clean -a. It was kinda ugly, especially as it meant combining setup.py calls with PEP517 invocations — but then, we were already calling setup.py build to take advantage of parallel build jobs when building extensions, and it worked. For a time.

Unfortunately, it turned out that some packages override the clean command and break our code, or even literally block calling it. So the next step was to stop being fancy and literally call rm -rf build. Well, this was ugly, but — again — it worked.

Act 4: Back to the config files

As I’ve mentioned before, we continued to call the build command in PEP517 mode, in order to enable building C extensions in parallel via the -j option. Over time, this code grew in complexity — we’ve replaced the call with more specific build_ext, then started adding heuristics to avoid calling it when unnecessary (a no-op setup.py build_ext call slowed pure Python package builds substantially).

Eventually, Eli Schwartz came up with a great alternative — using DIST_EXTRA_CONFIG to provide a configuration file. This meant that we could replace both setup.py invocations — by using the configuration file both to specify the job count for extension builds, and to use a dedicated build directory.

The change was originally done only for the explicit use of the setuptools build backend. As a result, we’ve missed a bunch of “indirect” setuptools uses — other setuptools-backed PEP517 backends (jupyter-builder, pbr), backends using setuptools conditionally (pdm-backend), custom wrappers over setuptools and… the dev-python/setuptools package itself (“standalone” backend). We’ve learned about it the hard way when setuptools stopped implicitly ignoring the build directory as a package name — and effectively a subsequent build collected a copy of the previous build as a build package. Yep, we’ve ended up with a monster of /usr/lib/python3.12/site-packages/build/lib/build/lib/setuptools.

So we arrive at the most recent change: enabling the config for all backends. After all, we’re just setting an environment variable, so other build backends will simply ignore it.

And so, we’ve come full circle. We enabled configuration files early on, switched to other hacks when PEP517 builds broke that, and eventually returned to unconditionally using configuration files.

The Python distutils build system, as well as setuptools (that it was later merged into), used a two-stage build: first, a build command would prepare a built package version (usually just copy the .py files, sometimes compile Python extensions) into a build directory, then an install command would copy them to the live filesystem, or a staging directory. Curious enough, distutils were an early adopter of out-of-source builds — when used right (which often enough wasn’t the case), no writes would occur in the source directory and all modifications would be done directly in the build directory.

Today, in the PEP517 era, two-stage builds aren’t really relevant anymore. Build systems were turned into black boxes that spew wheels. However, setuptools still internally uses the two-stage build and the build directory, and therefore it still remains relevant to Gentoo eclasses. In this post, I’d like to shortly tell how we dealt with it over the years.

Act 1: The first overrides

Normally, distutils would use a build directory of build/lib*, optionally suffixed for platform and Python version. This was reasonably good most of the time, but not good enough for us. On one hand, it didn’t properly distinguish CPython and PyPy (and it wouldn’t for a long time, until the “Use cache_tag in default build_platlib dir” PR). On the other, the directory name would be hard to determine if ebuilds ever needed to do something about it (and we surely did).
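For illustration, the historical default directory name can be reconstructed from the interpreter like this. This is a sketch of the pre-cache_tag behavior described above; the helper name is mine, and modern setuptools derives the suffix from sys.implementation.cache_tag instead:

```python
import sys
import sysconfig

def default_platlib_dir() -> str:
    """Sketch of the historical default: build/lib.<platform>-<X.Y>.

    Note that the bare X.Y version suffix does not distinguish CPython
    from PyPy -- which is exactly the problem described above.
    """
    plat = sysconfig.get_platform()  # e.g. "linux-x86_64"
    ver = f"{sys.version_info[0]}.{sys.version_info[1]}"
    return f"build/lib.{plat}-{ver}"
```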

Therefore, the eclass started overriding build directories quite early on. We would start by passing --build-base to the build command, then add --build-lib to make the lib subdirectory path simpler, then replace it with separate --build-platlib and --build-purelib to work around build systems overriding one of them (wxPython, if I recall correctly).

The eclass would call this mode “out-of-source build” and use a dedicated BUILD_DIR variable to refer to the build directory. Confusingly, “in-source build” would actually indicate a distutils-style out-of-source build in the default build subdirectory, and the eclass would create a separate copy of the sources for every Python target (effectively permitting in-source modifications).

The last version of code passing --build* options.

Act 2: .pydistutils.cfg

The big problem with the earlier approach was that you had to pass the options every time setup.py was invoked. Given the design of option passing in distutils, this effectively meant that you needed to repeatedly invoke the build commands (otherwise you couldn’t pass options to them).

The next step was to replace this logic with the .pydistutils.cfg configuration file. The file, placed in HOME (also overridden in the eclass), would allow us to set option values without actually having to pass specific commands on the command line. The relevant logic, added in September 2013 (commit: “Use pydistutils.cfg to set build-dirs instead of passing commands explicitly…”), remains in the eclass even today. However, since the PEP517 build mode stopped using this file, it is used only in legacy mode.
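As a sketch of how this works: distutils config files use one section per command, with option names mirroring the command-line options. The paths below are hypothetical, not what the eclass actually writes:

```ini
# Hypothetical ~/.pydistutils.cfg sketch; one section per distutils
# command, options matching --build-base/--build-platlib/--build-purelib.
[build]
build-base = /tmp/demo-build
build-platlib = /tmp/demo-build/lib
build-purelib = /tmp/demo-build/lib
```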

The latest version of the code writing .pydistutils.cfg.

Act 3: Messy PEP517 mode

One of the changes caused by building in PEP517 mode was that .pydistutils.cfg started being ignored. This implied that setuptools were using the default build directory again. It wasn’t such a big deal anymore — since we no longer used proper separation between the two build stages, and we no longer needed any awareness of the intermediate build directory, the path didn’t matter per se. However, it meant CPython and PyPy started sharing the same build directory again — and since the setuptools install stage picks everything up from that directory, extensions built for PyPy3.10 would be installed into the CPython 3.10 directory!

How did we deal with that? Well, at first I tried calling setup.py clean -a. It was kinda ugly, especially since it meant combining setup.py calls with PEP517 invocations — but then, we were already calling setup.py build to take advantage of parallel build jobs when building extensions, and it worked. For a time.

Unfortunately, it turned out that some packages override the clean command and break our code, or even literally block calling it. So the next step was to stop being fancy and literally call rm -rf build. Well, this was ugly, but — again — it worked.

Act 4: Back to the config files

As I’ve mentioned before, we continued to call the build command in PEP517 mode, in order to enable building C extensions in parallel via the -j option. Over time, this code grew in complexity — we replaced the call with the more specific build_ext, then started adding heuristics to avoid calling it when unnecessary (a no-op setup.py build_ext call slowed pure Python package builds substantially).

Eventually, Eli Schwartz came up with a great alternative — using DIST_EXTRA_CONFIG to provide a configuration file. This meant that we could replace both setup.py invocations — by using the configuration file both to specify the job count for extension builds, and to use a dedicated build directory.
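As a sketch, the extra file could look like the following (the path and values are hypothetical); DIST_EXTRA_CONFIG simply points setuptools at it:

```ini
# File pointed to by the DIST_EXTRA_CONFIG environment variable
# (hypothetical values, for illustration only).
[build]
build-base = /tmp/demo-build

[build_ext]
parallel = 4
```

With this in place, neither the explicit setup.py build_ext call nor the build-directory cleanup hacks are needed: setuptools picks up both the build directory and the job count from the file.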

The change was originally done only for explicit use of the setuptools build backend. As a result, we missed a bunch of “indirect” setuptools uses — other setuptools-backed PEP517 backends (jupyter-builder, pbr), backends using setuptools conditionally (pdm-backend), custom wrappers over setuptools and… the dev-python/setuptools package itself (the “standalone” backend). We learned about it the hard way when setuptools stopped implicitly ignoring the build directory as a package name — and a subsequent build effectively collected a copy of the previous build as a build package. Yep, we ended up with a monster of /usr/lib/python3.12/site-packages/build/lib/build/lib/setuptools.

So we arrive at the most recent change: enabling the config for all backends. After all, we’re just setting an environment variable, so other build backends will simply ignore it.

And so, we’ve come full circle. We enabled configuration files early on, switched to other hacks when PEP517 builds broke that, and eventually returned to unconditionally using configuration files.

February 23 2024

Gentoo RISC-V Image for the Allwinner Nezha D1

Florian Schmaus (flow) February 23, 2024, 0:00
Posted on February 23, 2024
Tags: gentoo

Motivation

The Allwinner Nezha D1 was one of the first available RISC-V single-board computers (SBCs), crowdfunded and released in 2021. According to the manufacturer, “it is the world’s first mass-produced development board that supports 64bit RISC-V instruction set and Linux system.”

Installing Gentoo on this system usually involved grabbing one existing image, like the Fedora one, and swapping the userland with a Gentoo stage3.

Bootstrapping via a third-party image is now no longer necessary.

A Gentoo RISC-V Image for the Nezha D1

I have uploaded a (for now experimental) Gentoo RISC-V image for the Nezha D1 at

https://dev.gentoo.org/~flow/gymage/

Simply dd (or ddrescue) the image onto an SD card and plug that card into your board.

Now, you can either connect to the UART or plug in an Ethernet cable to get to a login prompt.

UART

You typically want to connect a USB-to-UART adapter to the board. Unlike other SBCs, the debug UART on the Nezha D1 is clearly labeled with GND, RX, and TX. Using the standard ThunderFly color scheme, this resolves to black for ground (GND), green for RX, and white for TX.

Then fire up your favorite serial terminal

$ minicom --device /dev/ttyUSB0

and power on the board.

Note: Your mileage may vary. For example, you probably want your user to be a member of the ‘dialout’ group to access the serial port, and the device name of your USB-to-UART adapter may not be /dev/ttyUSB0.

SSH

The board’s Ethernet port is configured to use DHCP for network configuration. An SSH daemon is listening on port 22.

Login

The image comes with a ‘root’ user whose password is set to ‘root’. Note that you should change this password as soon as possible.

gymage

The image was created using the gymage tool.

I envision gymage becoming an easy-to-use tool that allows users to create up-to-date Gentoo images for single-board computers. The tool is in an early stage with some open questions. However, you are free to try it. The source code of gymage is hosted at https://gitlab.com/flow/gymage, and feedback is, as always, appreciated.

Stay tuned for another blog post about gymage once it matures further.


February 04 2024

Gentoo x86-64-v3 binary packages available

Gentoo News (GentooNews) February 04, 2024, 6:00

At the end of December 2023 we made our official announcement of binary Gentoo package hosting. The initial package set for amd64 was, and is, baseline x86-64, i.e., it should work on any 64-bit Intel or AMD machine. Now, we are happy to announce that there is also a separate package set using the extended x86-64-v3 ISA (i.e., microarchitecture level) available for the same software. If your hardware supports it, use it and enjoy the speed-up! Read on for more details…

Questions & Answers

How can I check if my machine supports x86-64-v3?

The easiest way to do this is to use glibc’s dynamic linker:

larry@noumea ~ $ ld.so --help
Usage: ld.so [OPTION]... EXECUTABLE-FILE [ARGS-FOR-PROGRAM...]
You have invoked 'ld.so', the program interpreter for dynamically-linked
ELF programs.  Usually, the program interpreter is invoked automatically
when a dynamically-linked executable is started.
[...]
[...]

Subdirectories of glibc-hwcaps directories, in priority order:
  x86-64-v4
  x86-64-v3 (supported, searched)
  x86-64-v2 (supported, searched)
larry@noumea ~ $ 

As you can see, this laptop supports x86-64-v2 and x86-64-v3, but not x86-64-v4.
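Another way to check, if you prefer not to rely on the dynamic linker, is to inspect the CPU flags directly. This sketch (the function name is mine) assumes Linux’s /proc/cpuinfo format; the flag set follows the psABI x86-64-v3 level, with “abm” being the kernel’s name for LZCNT support:

```python
# Sketch: check /proc/cpuinfo flags against the x86-64-v3 feature set.
# Flag names follow Linux /proc/cpuinfo conventions ("abm" covers lzcnt).
V3_FLAGS = {"avx", "avx2", "bmi1", "bmi2", "f16c", "fma", "movbe", "abm"}

def supports_x86_64_v3(cpuinfo_text: str) -> bool:
    """Return True if the first 'flags' line covers all v3 features."""
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            flags = set(line.split(":", 1)[1].split())
            return V3_FLAGS <= flags
    return False  # no x86 flags line found (e.g. non-x86 machine)
```

On a real system you would call it as `supports_x86_64_v3(open("/proc/cpuinfo").read())`.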

How do I use the new x86-64-v3 packages?

On your amd64 machine, edit the configuration file in /etc/portage/binrepos.conf/ that defines the URI from where the packages are downloaded, and replace x86-64 with x86-64-v3. E.g., if you have so far

sync-uri = https://distfiles.gentoo.org/releases/amd64/binpackages/17.1/x86-64/

then you change the URI to

sync-uri = https://distfiles.gentoo.org/releases/amd64/binpackages/17.1/x86-64-v3/

That’s all.
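For reference, a complete entry in /etc/portage/binrepos.conf/ would then look roughly like this. The file name, section name, and priority below are illustrative — check the file your system already ships before editing:

```ini
# /etc/portage/binrepos.conf/gentoobinhost.conf (sketch; names and
# priority are assumptions, only sync-uri comes from the announcement)
[gentoobinhost]
priority = 1
sync-uri = https://distfiles.gentoo.org/releases/amd64/binpackages/17.1/x86-64-v3/
```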

Why don’t you have x86-64-v4 packages?

There’s not yet enough hardware and people out there that could use them.

We could start building such packages at any time (our build host is new and shiny), but for now we recommend you build from source and use your own CFLAGS then. After all, if your machine supports x86-64-v4, it’s definitely fast…

Why is there recently so much noise about x86-64-v3 support in Linux distros?

Beats us. The ISA is 9 years old (just the tag x86-64-v3 was slapped onto it recently), so you’d think binaries would have been generated by now. With Gentoo you could’ve done (and probably have done) it all the time.

That said, in some processor lines (e.g., Atom), support for this instruction set was introduced rather late (2021).

