November 27 2021

Unexpected database server downtime, affecting bugs, forums, wiki

Gentoo News (GentooNews) November 27, 2021, 6:00

Due to an unexpected breakage on our database servers, several Gentoo websites are currently down. In particular, this includes Forums, Wiki, and Bugzilla. Please visit our Infrastructure status page for real-time monitoring and eventual outage notices.

November 07 2021

The future of Python build systems and Gentoo

Michał Górny (mgorny) November 07, 2021, 19:45

Anyone following my Twitter has probably seen me complaining frequently about things happening around Python build systems. The recent changes feel like the people around the Python packaging ecosystem have been strongly focused on building new infrastructure around Python-specific package managers such as pip and flit. Unfortunately, there seems to be very little concern for distribution packagers or backwards compatibility in this process.

In this post, I’d like to discuss how the Python packaging changes are going to affect Gentoo, and what is my suggested plan on dealing with them. In particular, I’d like to focus on three important changes:

  1. Python upstream deprecating the distutils module (and build system), and planning to remove it in Python 3.12.
  2. The overall rise of PEP 517-based build systems and the potential for setuptools dropping its CLI entirely.
  3. Setuptools upstream deprecating the setup.py install command, and potentially removing it in the future.

distutils deprecation

Over the years, the distutils stdlib module has been used as the basis of setup.py build scripts for Python packages. In addition to the baseline functionality of providing a build system CLI for the package, it provided the ability to easily extend the build system. This led both to the growth of heavily customized setup.py scripts in some packages, and to third-party build systems based on distutils, most notably setuptools.

This eventually led to the deprecation of distutils itself (see: PEP 632). Python 3.10 already warns about the distutils deprecation, and the current plan is to remove it in Python 3.12. Ahead of that, development has moved to a dedicated pypa/distutils repository, and a copy of it is bundled within setuptools.

setuptools still uses the stdlib distutils by default. However, some packages have already switched to the bundled copy, and upstream plans to use it by default in the future (see: Porting from Distutils).

At this point, I don’t think there is an explicit need for Gentoo to act here. However, it seems reasonable to avoid using distutils as the build system for Gentoo projects. Since the setuptools copy of distutils is different from the one included in CPython (and PyPy) and at the moment it does not carry the full set of historical Gentoo patches, it probably makes sense to test package compatibility with it nevertheless.

The use of the bundled distutils copy can be forced using the following environment variable:

SETUPTOOLS_USE_DISTUTILS=local

This can be set either in a specific ebuild or in make.conf to force it globally. However, please note that you can’t change the variable in place without a version bump (a revision bump is insufficient). This is because switching to the local variant involves replacing the .egg-info file with a directory, a change that is not supported by the PMS and is not handled well by Portage.
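
For example, a minimal sketch of forcing it globally (assuming a standard Portage setup; per the warning above, applying this to already-installed packages still requires the corresponding version bumps):

# /etc/portage/make.conf
SETUPTOOLS_USE_DISTUTILS="local"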

Presuming that upstream is going to change the default sooner rather than later (and therefore unleash the breakage upon us), I think the cleanest way forward is to:

  1. Perform some initial testing (via tinderboxes).
  2. Enable SETUPTOOLS_USE_DISTUTILS=local when DISTUTILS_USE_SETUPTOOLS!=no (variable name similarity is coincidental) via eclass.
  3. Deprecate DISTUTILS_USE_SETUPTOOLS=no, requesting maintainers to switch when bumping packages to new versions.

The purpose of this plan is to have a good chance of testing the new default and migrating as many packages as possible before upstream forces it on us. Changing the distutils provider for packages already using setuptools should be relatively safe. On the other hand, for packages using pure distutils the switch should happen through version bumps, in order to avoid the file-directory collisions mentioned before. At the same time, the DISTUTILS_USE_SETUPTOOLS value will have to change, since a setuptools dependency is now required to provide the distutils override.

I have requested the initial tinderbox testing already. If everything goes well and we decide to follow through with the plan, I will provide detailed instructions later. Please do not update the ebuilds yet.

The rise of PEP 517

PEP 517 (and a few more related PEPs) define a new infrastructure for installing Python packages. Long story short, they define a consistent API that can be exposed by an arbitrary build system to support using it from any package manager. Sounds great, right? Well, I’m not that enthusiastic.

Before I get to my reasons, let’s briefly summarize how building packages is supposed to work in the PEP 517 world. Every project supplies at least a minimal pyproject.toml file that specifies the package providing the build system and the path to a module providing its entry points. You read that file, install the necessary packages, then call the appropriate entry point to get a wheel. Then you install the wheel. Roughly.
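
A rough sketch of that flow from the command line, assuming the pypa/build front-end and the setuptools backend are available (these particular tools are only common choices picked for illustration, not something PEP 517 mandates):

# minimal pyproject.toml pointing at the setuptools PEP 517 backend
cat > pyproject.toml <<'EOF'
[build-system]
requires = ["setuptools", "wheel"]
build-backend = "setuptools.build_meta"
EOF

python -m build --wheel .          # reads pyproject.toml, installs build deps, calls the backend, produces dist/*.whl
python -m pip install dist/*.whl   # unpacks the wheel into site-packages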

Firstly, TOML. This is something I’ve been repeating for quite some time already, so I’ll just quickly go over it. I like TOML; I think it’s a reasonable choice of markup. However, without a TOML parser in the stdlib (and there’s no progress in providing one), every single build system now depends on tomli, which involves a circular dependency. A few months back, every single build system depended on toml instead, but that package became unmaintained. Does that make you feel confident?

Secondly, customization. We do pretty heavy customization of distutils/setuptools behavior at this point — build paths, install paths, the toolchain. It is understandable that PEP 517 takes a black-box approach and doesn’t attempt to cover it all. Unfortunately, the build systems built on top of PEP 517 so far seem to focus on providing an all-in-one package manager rather than a good build tool with customization support.

Thirdly, wheels. PEP 517 pretty much forces everyone into using the wheel package format, completely ignoring the fact that it’s neither the simplest solution, nor a good fit for distributions. What we lack is a trivial “put all files into a directory” entry point. What we get instead is “pack everything into a zip, and then use the next tool to unzip it”. Sure, that’s not a big deal for most packages, but I just hate the idea of wasting electricity and users’ time compressing something just so it gets uncompressed right back afterwards.

PEP 660 gives some hope of avoiding that by providing “editable install” support. Unfortunately, it’s so vague it practically doesn’t specify anything. In practice, a PEP 660 editable install is usually a .dist-info + .pth file that adds the source directory to sys.path — which means no files are actually installed, and it does not make it any easier for us to find the right files to install. In other words, it’s completely useless.

I have spent significant time looking for a good solution and found none so far. Back in the day, I wrote pyproject2setuppy as a stop-gap solution to install PEP 517-based packages via setuptools without having to package the new build systems (including their NIH dependencies) and figure out how to make them work sanely within our package framework. As of today, I still don’t see a better solution.

Given that setuptools seems to be aiming towards removing the CLI entirely and distutils is no longer maintained, I suspect that it is inevitable that at some point we’re going to have to bite the bullet one way or another. However, I don’t plan on making any changes for the time being — as long as setup.py install continues working, that is. When this is no longer feasible, we can research our options again.

setup.py install deprecation

At last, the final event that puts everything else into perspective: the setuptools upstream has deprecated the install command. While normally I would say “it’s not going to be removed anytime soon”, the indiscriminate use_2to3 removal suggests otherwise.

Just a quick recap: setuptools removed use_2to3 support after it had been deprecated for some time, summarizing it with “projects should port to a unified codebase or pin to an older version of Setuptools”. Surely nose, a project that hasn’t seen a single commit (or accepted user patches) since 2016, is going to suddenly make a release to fix this. In the end, all the breakage is dumped on distribution packagers.

The install command removal is a bigger deal than that. It’s not just a few old packages being broken; it’s whole workflows. I’ve been considering switching Gentoo to a different workflow for some time, without much effect. Even if we bite the bullet and go full PEP 517, there’s another major problem: there are projects that override the install command.

I mean, if we indiscriminately switched to installing without the install command, some packages would effectively be broken silently — they would e.g. stop installing some files. The biggest issue is that it’s non-trivial to find such packages. One I know about is called Portage.

At this point, I don’t think it’s worthwhile to put our effort into finding a replacement for setup.py install. We can cross that bridge when we get to it. Until then, it seems like unnecessary work with fair breakage potential.

In the end, it’s still unclear what the best solution would be. It is possible we’re going to continue converting flit and poetry packages to setuptools, to avoid having to maintain support for multiple build processes. It is possible we’re going to hack on top of existing PEP 517 tooling, or build something of our own. It’s quite probable that if I find no other solution, I’m going to try monkey-patching the build system to copy files instead of zipping them, or at least to disable compression.

Summary

The Python ecosystem is changing constantly, and the packaging aspect of it is no different. The original distutils build system eventually evolved into setuptools, and is now being subsumed by it. Setuptools seems to be moving in the direction of becoming yet another PEP 517 build backend while indiscriminately removing features.

Unfortunately, this is all happening without much concern for backwards compatibility or feature parity. The Python developers are focused on building their own packaging infrastructure and have no interest in providing a single good workflow for distribution packagers. It is really unfortunate, given that many of them rely on our work to build the environments they work in.

At this point, our immediate goal is to get ready for the distutils removal and the setuptools switch to the bundled distutils copy. This switch has real breakage potential for Gentoo users (because of the egg-info file/directory collision), and we need to handle the migration gracefully ahead of time. The other issues, notably the setup.py install removal, will also need to be handled in the future, but right now the gain does not justify the effort.

Update (2021-11-10): data file support

While writing this post, I missed an important limitation of PEP 517 builds. Distutils and setuptools both have a data_files feature that can be used to install arbitrary files into the system — either into subdirectories of sys.prefix (i.e. /usr) or via absolute paths. This was often used to install data files for the package, but also to install manpages, .desktop files and so on.

The wheel specification as of today simply doesn’t support installing files outside the few Python-specific directories. Setuptools/wheel/pip seem to include them in wheels but it’s outside the specification and therefore likely to suffer from portability problems.

Unfortunately, there doesn’t seem to be any interest in actually resolving this. Unless I’m mistaken, neither flit nor poetry supports installing files outside the standard Python directories.

October 17 2021

[Gentoo] Quick and dirty way to fix broken pam on a machine that runs fine otherwise

Thomas Raschbacher (lordvan) October 17, 2021, 13:48

Due to some unfortunate events I ended up with a broken pam library on a VM I am running. Everything else worked just fine .. except login, of course (a bit of an issue if you need to do stuff like update letsencrypt certificates quickly because you forgot and are on holiday...).

In the end I just used VNC to access the server, mounted the LVM volume directly on the host to get the new certificates onto the VM (the VM uses a raw LVM volume for data storage - fortunately, which made that part really easy), then rebooted the VM with init=/bin/sh and copied the certificates where they need to be .. and the holiday can continue.. (after a few reboots, since the first time I had accidentally copied the symlinks certbot creates XD)

Been thinking about the easiest way to fix this .. At first I considered updating pam from the single-user environment booted earlier, but there are just too many issues (proc, dev, ..) - more effort than it is worth .. It turned out to be easier than I thought: since the system booted just fine otherwise, I just (ab-)used Gentoo's local service, which runs scripts placed in /etc/local.d on bootup. A quick script in there to emerge pam (see the sketch below), and after it was done login worked fine again.
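
Something along these lines should do the trick (a minimal sketch; the file name and log path are made up, and the script should be removed again once login works):

#!/bin/sh
# saved as /etc/local.d/fix-pam.start and marked executable;
# OpenRC's local service runs *.start scripts from /etc/local.d at boot
emerge --oneshot sys-libs/pam > /root/fix-pam.log 2>&1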

September 21 2021

Experimental binary Gentoo package hosting (amd64)

Andreas K. Hüttel (dilfridge) September 21, 2021, 16:34

As an experiment, I've started assembling a simple binary package hosting mechanism for Gentoo. Right now this comes with some serious limitations and should not be used for security or mission critical applications (more on this below). The main purpose of this experiment is to find out how well it works and where we need improvements in Portage's binary package handling.

So what do we have, and how can you use it?

  • The server builds an assortment of stable amd64 packages, with the use-flags as present in an unmodified 17.1/desktop/plasma/systemd profile (the only necessary change is USE=bindist).
  • The packages can be used on all amd64 profiles that differ from desktop/plasma/systemd only by use-flag settings. This includes 17.1, 17.1/desktop/*, 17.1/no-multilib, 17.1/systemd, but not anything containing selinux, hardened, developer, musl, or a different profile version such as 17.0.
  • Right now, the package set includes kde-plasma/plasma-meta, kde-apps/kde-apps-meta, app-office/libreoffice, media-gfx/gimp, media-gfx/inkscape, and of course all their dependencies. More will possibly be added.
  • CFLAGS are chosen such that the packages will be usable on all amd64 (i.e., x86-64) machines. 

To use the packages, I recommend the following steps: First, create a file /etc/portage/binrepos.conf with the following content:

[binhost]
priority = 9999
sync-uri = https://gentoo.osuosl.org/experimental/amd64/binpkg/default/linux/17.1/x86-64/

You can pick a different mirror according to your preferences (but also see the remarks below). Then, edit /etc/portage/make.conf, and add the following EMERGE_DEFAULT_OPTS (in addition to flags that you might already have there):

EMERGE_DEFAULT_OPTS="--binpkg-respect-use=y --getbinpkg=y"

And that's it. Your next update should download the package index and use binary packages whenever the versions and use-flag settings match. Everything else is compiled as usual.
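
If you would like to try it out once before touching make.conf, the same flags can also be given directly on the command line (a sketch; adapt the update command to your usual routine):

emerge --ask --update --deep --newuse --getbinpkg --binpkg-respect-use=y @world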

What is still missing, and what are the limitations and caveats?

  • Obviously, the packages are not optimized for your processor.
  • Right now, the server only carries packages for the use-flag settings in an unmodified 17.1/desktop/plasma/systemd profile. If you use other settings, you will end up compiling part of your packages (which is not really a problem, you just lose the benefit of the binary download). It is technically possible to provide binary packages for different use-flag settings at the same URL, and eventually it will be implemented if this experiment succeeds.
  • At the moment, no cryptographic signing of the binary packages is in place yet. This is the main reason why I'm talking about an experiment. Effectively you trust our mirror admins and the https protocol. Package signing and verification is in preparation, and before the binary package hosting "moves into production", it will be enforced.
That's it. Enjoy! And don't forget to leave feedback in the comments.

September 08 2021

.1.gz? No thanks!

Sam James (sam) September 08, 2021, 17:00

Every so often, I’ll be working on updating a package for Gentoo, and suddenly I’ll see:

 * QA Notice: One or more compressed files were found in docompress-ed
 * directories. Please fix the ebuild not to install compressed files
 * (manpages, documentation) when automatic compression is used:
 *
 *   /usr/share/man/man6/warzone2100.6.gz

“What’s the problem?”, upstreams cry! They are trying to help us packagers out – it’s one less thing to worry about!

August 16 2021

The stablereq workflow for Python packages

Michał Górny (mgorny) August 16, 2021, 13:07

I have been taking care of periodic mass stabilization of Python packages in Gentoo for some time already. Per Guilherme Amadio‘s suggestion, I’d like to share the workflow I use for this. I think it could be helpful to others dealing with large sets of heterogeneous packages.

The workflow requires:

– app-portage/mgorny-dev-scripts, v10
– dev-util/pkgcheck

Grabbing candidate list from pkgcheck

One of the features of pkgcheck is that it can report ebuilds that haven’t been changed in 30 days and therefore are due for stabilization. This isn’t perfect but in the end, it gets the job done.

I start by opening two terminals side-by-side and entering the clone of ::gentoo on both. On one of them, I run:

stablereq-eshowkw 'dev-python/*'

On the other, I do:

stablereq-find-pkg-bugs 'dev-python/*'
stablereq-make-list 'dev-python/*'



This gets me three things:

1. An open Bugzilla search for all stabilization candidates.
2. A script to call file-stablereq for all stabilization candidates open in the editor.
3. eshowkw output for all stabilization candidates in the other terminal.

The three scripts pass their arguments through to pkgcheck. Instead of passing package specifications directly, you can use a simple pipeline to grab all packages with a specific maintainer:

git grep -l python@gentoo.org '**/metadata.xml' | cut -d/ -f1-2 | xargs stablereq-eshowkw

Filtering the candidate list

The candidate list given by pkgcheck is pretty rough. Now it’s time to mangle it a bit.

For a start, I go through the eshowkw list to see if the packages have any newer versions that can be stabilized. Roughly speaking, I ignore all packages that have only one stabilization candidate and I check the rest.

Checking usually means looking at git log and/or pkgdiff to see if a newer version would not be a better stabilization candidate. I update the list in the editor accordingly, either changing the desired version or removing some packages altogether (e.g. because they are release candidates or to go straight for a newer version later).

Then I close the eshowkw results and do the next round of filtering via Bugzilla. I look at the Bugzilla search for bugs affecting the stabilization candidates. Once again, I update the list accordingly. Most importantly, this means removing packages that have their stablereq filed already. This is also a good opportunity to resolve obsolete bugs.

I close the search result tabs but leave the browser open (e.g. with an empty tab) for the next step.

Filing the stablereqs

Now I save the list into a file and run it via the shell. This generally involves a lot of file-stablereq calls that open lots of browser tabs with pre-filled stablereqs. I suppose it would be much better to use the Bugzilla API to file the bugs directly, but I’ve never gotten around to implementing that.

I use bug-assign-user-js to assign the bugs, then submit them. With some skill, you can do it pretty fast. Point the mouse at the ‘A’ box for the package, click, shift-tab, enter, ctrl-tab, repeat.

If everything went correctly, you get a lot of new bugs filed. Now it’s a good time to look into your e-mail client and mark the mails for newly filed bugs read, before NATTkA starts processing them.

Post-processing the bugs

The last step is to go through bug mail resulting from NATTkA operations.

If the sanity check fails, it is necessary to either add dependencies on other bugs already filed, add additional packages to the package list, or file additional stablereqs.

For more complex problems, app-portage/nattka 0.2.15 provides a nattka make-package-list -s CATEGORY/PACKAGE-VERSION subcommand that can prepare a package list with dependencies. However, note that it unconditionally takes newest versions available, so you will need to verify the result and replace versions whenever necessary.
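
For example (the package name here is purely hypothetical):

nattka make-package-list -s dev-python/foo-1.2.3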

Additionally, I generally check whether the ALLARCHES keyword gets added to the bugs. If a bug is missing it, I verify whether the package is suitable, and add <stabilize-allarches/> to its metadata.xml.

July 25 2021

Getting DTS 5.1+ sound via S/PDIF or HDMI using PulseAudio

Michał Górny (mgorny) July 25, 2021, 17:16

While PCs still usually provide a full set of analog jacks capable of outputting 5.1 audio, other modern hardware (such as TVs) is usually limited to digital audio outputs (and sometimes analog outputs limited to stereo sound). These outputs are either S/PDIF (coaxial or optical) or HDMI. When the PC is connected to a TV, a pretty logical setup is to carry the sound via HDMI to the TV, and from there via S/PDIF or HDMI ARC to a 5.1 amplifier. However, it isn’t always as simple as it sounds.

For a start, S/PDIF is a pretty antiquated interface originally designed to carry stereo PCM audio. The modern versions of the interface have sufficient bandwidth for up to 192 kHz sampling rate and up to 24 bit audio depth. However, in order to support more than two audio channels, the transmitted sound needs to be compressed. S/PDIF hardware usually supports MPEG, AC3 and DTS formats.

HDMI is better there. HDMI 1.2 technically supports up to 8 channels of PCM audio, 2.0 up to 32 channels. However, not all hardware actually supports that. In particular, my TV seems to only support stereo PCM input, and ignores additional channels when passed 5.1 audio. Fortunately, additional audio channels work when compressed input is used. HDMI supports more audio formats, including DTS-HD MA and TrueHD.

In this post, I’d like to briefly explore our options for making a PulseAudio-enabled Linux system output compressed 5.1 sound over S/PDIF or HDMI (apparently both are treated the same from the ALSA/PulseAudio perspective).

Enabling S/PDIF / HDMI passthrough in mpv

It’s rather unlikely that you’ll be playing uncompressed audio these days. When playing movies, you’ll often find that the audio tracks are encoded using one of the formats supported by S/PDIF or HDMI. Rather than having mpv decode them just to have ALSA compress them again (naturally with a quality loss), why not pass the encoded audio through to the output?

If you’re using HDMI, the first prerequisite is to set PulseAudio’s configuration profile to digital stereo (found on the Configuration tab of pavucontrol). This could be a bit confusing, but it actually enables you to transfer compressed surround sound. Of course, this implies that you’ll no longer be able to output surround PCM sound via HDMI, but if you’re going to enable compressed audio output anyway, it doesn’t matter.

Then, you need to enable support for additional output formats. If you’re using pavucontrol, the relevant checkboxes can be found on the Output Devices tab, hidden under Advanced. Tick all the formats that your connected device supports (usually all of them).

Finally, you have to enable S/PDIF passthrough (the same option is used for HDMI) in mpv, via ~/.config/mpv/mpv.conf:

audio-spdif=ac3,dts,eac3
audio-channels=5.1

The full list of formats can be found in mpv(1) manpage.

If everything works fine, you’re going to see something like the following in mpv output:

AO: [alsa] 48000Hz stereo 2ch spdif-ac3

(ignore the stereo part, it is shown like this when passing compressed surround sound through)

Note that audio passthrough requires exclusive access to the sound card, i.e. you won’t be able to use it simultaneously with sound from other apps.

Enabling transparent AC3/DTS compression of audio output

While passthrough is often good enough for watching movies, it is not a universal solution. If, say, you’d like to play a game with surround sound, you need the regular audio output to support it. Fortunately, there is a relatively easy way to use ALSA plugins to enable transparent compression and make your S/PDIF / HDMI output 5.1-friendly.

For a start, you need to install an appropriate ALSA plugin. If you’d like to use AC3 audio, the plugin is found in media-plugins/alsa-plugins[ffmpeg]. For DTS audio, the package is media-sound/dcaenc[alsa].
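
On Gentoo, installing both (if you want both formats available) could look roughly like this; the package.use file name below is arbitrary:

# /etc/portage/package.use/digital-surround
media-plugins/alsa-plugins ffmpeg
media-sound/dcaenc alsa

# then build and install the plugins
emerge --ask media-plugins/alsa-plugins media-sound/dcaenc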

The next step is adding the appropriate configuration to /etc/asound.conf. The snippet for AC3 is:

pcm.a52 {
  @args [CARD]
  @args.CARD {
    type string
  }
  type rate
  slave {
    pcm {
      type a52
      bitrate 448
      channels 6
      card $CARD
    }
    rate 48000
  }
}

The version modified for DTS is:

pcm.dca {
  @args [CARD]
  @args.CARD {
    type string
  }
  type rate
  slave {
    pcm {
      type dca
      channels 6
      card $CARD
    }
    rate 48000
  }
}

Honestly, it’s some black magic how it works but somehow PulseAudio just picks it up and starts accepting 5.1 sound, and the TV happily plays it.
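
To sanity-check the new ALSA device itself, something like the following should play a test sound on all six channels (speaker-test is part of alsa-utils; replace 0 with your card number as shown by aplay -l, and note that the card must not be held by PulseAudio at that moment):

speaker-test -D a52:0 -c 6 -t wav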

Finally, the Ubuntu Community wiki suggests explicitly setting sampling rate in PA to avoid compatibility issues. In /etc/pulse/daemon.conf:

default-sample-rate = 48000

References

  • The Well-Tempered Computer: S/PDIF
  • Wikipedia: HDMI
  • Kodi Wiki: PulseAudio
  • Reddit: HD Audio HDMI passthrough setup
  • Ubuntu Community Help Wiki: DigitalAC-3Pulseaudio

July 20 2021

Additional stage downloads for amd64, ppc, x86, arm available

Gentoo News (GentooNews) July 20, 2021, 5:00

Following some technical reorganization and the introduction of new hardware, the Gentoo Release Engineering team is happy to offer a much-expanded set of stage files for download. Highlights are in particular the inclusion of musl-based stages and of POWER9-optimized ppc64 downloads, as well as additional systemd-based variants for many architectures.

For amd64, Hardened/SELinux stages are now available directly from the download page, as are stages based on the lightweight C standard library musl. Note that musl requires using the musl overlay, as described on the page of the Hardened musl project.

For ppc, little-endian stages optimized for the POWER9 CPU series have been added, as have been big- and little-endian Hardened musl downloads.

Additionally, for all of amd64, ppc64, x86, and arm, stages are now available in both an OpenRC and a systemd init system / service manager variant wherever that makes sense.

This all has become possible via the introduction of new build hosts. The amd64, x86 (natively), arm (via QEMU), and riscv (via QEMU) archives are built on an AMD Ryzen™ 7 3700X 8-core machine with 64GByte of RAM, located in Hetzner’s Helsinki datacentre. The ppc, ppc64, and ppc64le / power9le builds are handled by two 16-core POWER9 machines with 32GByte of RAM, provided by OSUOSL POWER Development Hosting.

Further, at the moment an arm64 (aka aarch64) machine with an 80-core Ampere Altra CPU and 256GByte of RAM, provided by Equinix through the Works On Arm program, is being prepared for improved native arm64 and arm support, so expect updates there soon!

June 16 2021

The ultimate guide to EAPI 8

Michał Górny (mgorny) June 16, 2021, 22:23

Three years ago, I had the pleasure of announcing EAPI 7 as a major step forward in our ebuild language. It introduced preliminary support for cross-compilation, it finally provided good replacements for the last Portagisms in ebuilds and it included many small changes that made ebuilds simpler.

Only a year and a half later, I have started working on the initial EAPI 8 feature set. Similarly to EAPI 6, EAPI 8 was supposed to focus on small changes and improvements. The two killer features listed below were already proposed at the time. I have prepared a few patches to the specification, as well as the initial implementation of the respective features for Portage. Unfortunately, the work stalled at the time.

Finally, as a result of surplus of free time last month, I was able to resume the work. Along with Ulrich Müller, we have quickly prepared the EAPI 8 feature set, got it pre-approved, prepared the specification and implemented all the features in Portage and pkgcore. Last Sunday, the Council has approved EAPI 8 and it’s now ready for ~arch use.

What’s there in EAPI 8? Well, for a start we have install-time dependencies (IDEPEND) that fill a gap in our cross-compilation design. Then, selective fetch/mirror restriction makes it easier to combine proprietary and free distfiles in a single package. PROPERTIES and RESTRICT are now accumulated across eclasses, reducing confusion for eclass writers. There’s dosym -r to create relative symlinks conveniently from dynamic paths. Plus a bunch of other improvements, updates and cleanups.
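
For instance, a hypothetical EAPI 8 ebuild fragment using the new helper (the paths are made up for illustration):

src_install() {
	default
	# dosym -r takes two absolute paths and computes the relative link target itself,
	# creating /usr/lib/libfoo.so -> foo/libfoo.so inside the image directory
	dosym -r /usr/lib/foo/libfoo.so /usr/lib/libfoo.so
}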

Read the full article

June 03 2021

Retiring the multilib project

Michał Górny (mgorny) June 03, 2021, 20:41

I created the Multilib project back in November 2013 (though the effort itself started roughly a year earlier) with the goal of maintaining the multilib eclasses and porting Gentoo packages to them. Back in the day, we were even requested to co-maintain a few packages whose maintainers were opposed to multilib ports. In June 2015, the last of the emul-linux-x86 packages were removed and our work concluded.

The project continued to exist for the purpose of maintaining the eclasses and providing advice. Today, I can say that the project has served its purpose and it is time to retire it. Most of the team members have already left, and the multilib knowledge we used to advise on is now common developer knowledge. I am planning to take care of the project-maintained eclasses personally, and to move the relevant documentation to the general wiki space.

At the same time, I would like to take this opportunity to tell the history of our little multilib project.

Gentoo before gx86-multilib

In the old days, multilib as seen by the majority of Gentoo users consisted of two components: multilib toolchain packages and emul-linux-x86 packages.

The toolchain multilib support exists pretty much in its original form to this day. It consists of a multilib USE flag and an ABI list stored in the selected profile. The rough idea is that bootstrapping a toolchain with a superset of its current ABIs is non-trivial, so the users generally choose a particular multilib or non-multilib variant when installing Gentoo, and do not change it afterwards. The multilib project didn’t really touch this part.

The emul-linux-x86 packages were specifically focused on non-toolchain packages. Back in the day, they consisted of a few sets of precompiled 32-bit libraries for amd64. If you needed to run a proprietary 32-bit app or compile wine, the respective packages had to depend on a few of these sets, e.g.:

amd64? (
    app-emulation/emul-linux-x86-xlibs
    app-emulation/emul-linux-x86-soundlibs
)

The sets generally included the current stable versions of packages and were rebuilt every few months.

Simultaneously, an alternative to this solution was developed (and is being developed to this day): multilib-portage, a Portage fork that was designed specifically to build all packages for multiple ABIs. Unlike the other solutions, multilib-portage minimized development effort and worked on top of regular Gentoo packages. However, it never reached production readiness.

The gx86-multilib design

The gx86-multilib effort was intended to provide a multilib solution entirely within the scope of the Gentoo repository (still named gentoo-x86 at the time, hence the name), i.e. without having to modify EAPI or package managers. It was supposed to be something between emul-linux-x86 and multilib-portage, building all non-native libraries from source but requiring explicit support from packages.

It only seemed natural to utilize USE_EXPAND the same way as PYTHON_TARGETS did for Python. At the same time, splitting ABIs per architecture made it possible to use USE_EXPAND_HIDDEN to hide irrelevant flags from users. So e.g. amd64 multilib users see only ABI_X86, ppc64 users see only ABI_PPC, and so on.
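
For example, on an amd64 multilib profile, enabling 32-bit builds globally boils down to a single make.conf entry along these lines (per-package USE settings work equally well):

# make.conf: build 32-bit libraries in addition to the native 64-bit ABI
ABI_X86="64 32"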

The default ABI for a given platform is always forced on. This made it possible to keep things working for non-multilib packages without adding any multilib awareness to them, and at the same time cleanly handle profiles that do not do multilib at all. Multilib packages use ${MULTILIB_USEDEP} to enforce an ABI match on their multilib dependencies; non-multilib packages just use plain deps and can expect the native ABI to be always enabled.
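
In ebuild terms, such a dependency looks roughly like this (the library name is made up for the example):

# require the dependency to be built for (at least) the same ABIs as this package
RDEPEND="dev-libs/libfoo[${MULTILIB_USEDEP}]"
DEPEND="${RDEPEND}"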

Eclasses were a natural place to implement all this logic. In the end, they formed a hierarchical structure. The pre-existing multilib.eclass already provided a few low-level functions needed to set up multilib builds. On top of it, multilib-build.eclass was created to provide the low-level functions specific to gx86-multilib: handling USE flags, running the builds and some specific helper functions. On top of that, the high-level sub-phase-based multilib-minimal.eclass was created to make writing generic ebuilds easy. Finally, the build-system-specific autotools-multilib.eclass and cmake-multilib.eclass sat on top of it all.
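
To give a feel for the high-level interface, a heavily trimmed ebuild using multilib-minimal.eclass might look roughly like this (package and dependency names invented, most metadata omitted; the sub-phases are run once per enabled ABI in a dedicated build directory):

EAPI=8
inherit multilib-minimal

# library dependency that must be built for the same ABIs
RDEPEND="dev-libs/libbar[${MULTILIB_USEDEP}]"
DEPEND="${RDEPEND}"

multilib_src_configure() {
    # out-of-source configure against the common source tree
    ECONF_SOURCE="${S}" econf
}

multilib_src_install() {
    emake DESTDIR="${D}" install
}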

Historically, the order was a little different. autotools-multilib.eclass came first. Then, the common logic was split into multilib-build.eclass and cmake-multilib.eclass came to be. Finally, multilib-minimal.eclass was introduced and a few months later the other eclasses started reusing it.

The reception and porting efforts

The eclasses had a mixed reception. They followed my philosophy of getting things done, today. This put me at odds with purists who believed we should look for a better solution. Many of the developers believed that multilib-portage was the way forward (after all, it did not require changing ebuilds), though they did not seem to be that much concerned about having a clear plan of action. When I pointed out that things needed to be formally specified, the answer was roughly to dump whatever was in multilib-portage at the time into the spec. As you can guess, no spec was ever written.

Nevertheless, porting ebuilds to the new framework proceeded over time. In some cases, we had to deal with varying levels of opposition. In the most extreme cases, we had to work out a compromise and become co-maintainers of these packages in order to provide direct support for any port-related problems. However, as time went by more people joined the cause, and today it is pretty natural for maintainers to add multilib support themselves. In fact, I believe that things went a bit out of control, as multilib is being added to packages where it is not really needed.

In its early years, gx86-multilib had to coexist with the older emul-linux-x86 packages. Since both groups of packages installed the same files, collisions were inevitable. Every few converted packages, we had to revbump the respective emul-linux-x86 sets to drop the colliding libraries. Later on, we had to start replacing old dependencies on emul-linux-x86 packages (by then metapackages) with the actual lists of needed libraries. Naturally, this meant that someone actually had to figure out what the dependencies were, often for fetch-restricted packages that we simply didn’t have distfiles for.

In the end, everything went fine. All relevant packages were ported, and the emul-linux-x86 sets were retired. The team stayed around for a few years, updating the eclasses as needed. Many new packages gained multilib support even though it wasn’t strictly needed for anything. Multilib-foo became common knowledge.

The future

Our multilib effort is still alive and kicking. At the very least, it serves as the foundation for 32-bit Wine. While the Multilib project itself has been disbanded, its legacy lives on and it is not likely to become obsolete anytime soon. From a quick grep, there are around 600 multilib-enabled packages in ::gentoo at the moment and it is quite likely that there will be more.

The multilib-portage project is still being developed but it does not seem likely to be able to escape its niche. The eclass approach is easier, more portable and more efficient. You don’t have to modify the package manager, you don’t have to build everything multiple times; ideally, you only build library parts for all ABIs.

Support for multilib on non-x86 platforms is an open question. After all, the whole multilib effort was primarily focused on providing compatibility with old 32-bit executables on x86. While some platforms technically can provide multilib, it is not clear how much of that is actually useful to the users, and how much is a cargo cult. Support for additional targets has historically proven troublesome by causing exponential explosion of USE flags.

Some people were proposing switching to Debian-style multiarch layout (e.g. /usr/lib/x86_64-linux-gnu instead of /usr/lib64). However, I have never seen a strong reason to do that. After all, traditional libdirs are well-defined in the ABI specifications while multiarch is a custom Debian invention. In the end, it would be about moving things around and then patching packages into supporting non-standard locations. It would go against one of the primary Gentoo principles of providing a vanilla development environment. And that only shortly after we’ve finally gotten rid of the custom /usr/lib32 in favor of backwards-compatible /usr/lib.

So, while the Multilib project has been retired now, multilib itself is anything but dead. We still use it, we still need it and we will probably still work on it in the future.

May 26 2021

Gentoo Freenode channels have been hijacked

Gentoo News (GentooNews) May 26, 2021, 5:00

Today (2021-05-26) a large number of Gentoo channels have been hijacked by Freenode staff, including channels that were not yet migrated to Libera.chat. We cannot perceive this as anything other than an open act of hostility, and we have effectively left Freenode.

Please note that at this point the only official Gentoo IRC channels, as well as developer accounts, can be found on Libera Chat.

2021-06-15 update

As a part of an unannounced switch to a different IRC daemon, the Freenode staff has removed all channel and nickname registrations. Since many Gentoo developers have left Freenode permanently and are not interested in registering their nicknames again, this opens up further possibilities of malicious impersonation.

Gentoo IRC presence moving to Libera Chat

Gentoo News (GentooNews) May 23, 2021, 5:00

The Gentoo Council held an emergency single-agenda-item meeting today. At this meeting, we decided to move the official IRC presence of Gentoo to the Libera Chat IRC network. We intend to have this move complete by June 13, 2021 at the latest. A full log of the meeting will be available for download soon.

At the moment it is unclear whether we will retain any presence on Freenode at all; we urge all users of the #gentoo channel namespace to move to Libera Chat immediately. IRC channel names will (mostly) remain identical. You will be able to recognize Gentoo developers on Libera Chat by their IRC cloak in the usual form gentoo/developer/*. All other technical aspects will feel rather familiar to all of us as well. Detailed instructions for setting up various IRC clients can be found on the help pages of the IRC network.

Libera.Chat logo

May 21 2021

From build-dir to venv — testing Python packages in Gentoo

Michał Górny (mgorny) May 21, 2021, 19:40

A lot of Python packages assume that their tests will be run after installing the package. This is quite a reasonable assumption if you consider that the tests are primarily run in dedicated testing environments such as CI deployments or test runners such as tox. However, this does not necessarily fit the Gentoo packaging model, where packages are installed system-wide and the tests are run between the compile and install phases.

In a great many cases, things work out of the box (because the modules are found relative to the current directory), or require only minimal PYTHONPATH adjustments. In others, we found it necessary to put a varying amount of effort into creating a local installation of the package that is suitable for testing.

In this post, I would like to briefly explore the various solutions to this problem that we’ve used over the years, from simple uses of the build directory to the newest ideas based on virtual environments.

Testing against the build directory

As I have indicated above, a great many packages work just fine with the correct PYTHONPATH setting. However, not all packages provide ready-to-use source trees, and even if they do, there’s the matter of either having to manually specify the path to them or having more or less reliable automation guess it. Fortunately, there’s a simple solution.

The traditional distutils/setuptools build process consists of two phases: the build phase and the install phase. The build phase is primarily about copying the files from their respective source directories to a unified package tree in a build directory, while the install phase is generally about installing the files found in the build directory. Besides just reintegrating sources, the build phase may also involve other important tasks: compiling the extensions written in C or converting sources from Python 2 to Python 3 (which is becoming rare). Given that the build command is run in src_compile, this makes the build directory a good candidate for use in tests.

This is precisely what distutils-r1.eclass does out of the box. It ensures that the build commands write to a predictable location, and it adds that location to PYTHONPATH. This ensures that the just-built package is found by Python when trying to import its modules (unless the package residing in the current directory takes precedence). In either case, it means that most of the time things just work, and sometimes we just have to resort to simple hacks such as changing the current directory.
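
Outside of the eclass, the same idea can be sketched by hand roughly as follows (paths chosen for the example rather than reflecting the eclass internals):

# build the package into a predictable directory ...
python setup.py build --build-lib build/lib
# ... and make the just-built modules importable while running the tests
PYTHONPATH="${PWD}/build/lib" python -m pytest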

distutils_install_for_testing (home layout)

While the build directory method worked for many packages, it had its limitations. To list a few I can think of:

  • Script wrappers for entry points were not created (and even regular scripts were not added to PATH due to a historical mistake), so tests that relied on being able to call installed executables did not work.
  • Package metadata (.egg-info) was not included, so pkg_resources (and now the more modern importlib.metadata) modules may have had trouble finding the package.
  • Namespace packages were not handled properly.

The last point was the deal breaker here. Remember that we’re talking of the times when Python 2.7 was still widely supported. If we were testing a zope.foo package that happened to depend on zope.bar, then we were in trouble. The top-level zope package that we’ve just added to PYTHONPATH had only the foo submodule but bar had to be gotten from system site-packages!

Back in the day, I did not know much about the internals of these things. I was looking for an easy working solution, and I found one. I discovered that using setup.py install --home=... (versus the setup.py install --root=... that we use to install into D) happened to produce a layout that made namespaces just work! This was just great!

This is how the original implementation of distutils_install_for_testing came around. The rough idea was to put this --home install layout on PYTHONPATH and reap all the benefits of having the package installed before running tests.
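
The trick itself is plain distutils behaviour and can be sketched like this (the prefix directory is arbitrary):

# install into a throw-away home-style prefix ...
python setup.py install --home="${PWD}/test-home"
# ... which puts modules into lib/python and scripts into bin
PYTHONPATH="${PWD}/test-home/lib/python" PATH="${PWD}/test-home/bin:${PATH}" python -m pytest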

Root layout

The original dift layout was good while it worked. But then it stopped. I don’t know the exact version of setuptools or the exact change, but the magic just stopped working. The good news is that this was just a few months ago, and we were already deep into removing Python 2.7, so we did not have to worry about namespaces that much (namespaces are much easier in Python 3, as they work via empty directories without special magic).

The simplest solution I could think of was to stop relying on the home layout, and instead use the same root layout as used for our regular installs. This did not include as much magic but solved the important problems nevertheless. Entry point wrappers were installed, namespaces worked of their own accord most of the time.

I’ve added a new --via-root parameter to switch dift to the new mode, and --via-home to force the old behavior. By the end of January, I flipped the default and we have been happily using the new layout since then. Except that it didn’t really solve all the problems.
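
For reference, a typical use in an ebuild is just a call from the test sub-phase; a minimal sketch (the choice of epytest as the runner is only an example):

python_test() {
    # install the package into a temporary root-style layout before running the tests
    distutils_install_for_testing --via-root
    epytest
}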

Virtualenv layout

The biggest limitation of both dift layouts is that they rely on PYTHONPATH. However, not everything in the Python world respects path overrides. To list just two examples: the test suite of werkzeug relies on overwriting PYTHONPATH for spawned processes, and tox fails to find its own installed package.

I have tried various hacks to resolve this, to no avail. The solution that somewhat worked was to require the package to be actually installed before running the tests but that was really inconvenient. Interestingly enough, virtualenvs rely on some internal Python magic to actually override module search path without relying on PYTHONPATH.

The most recent dift --via-venv variant that I’ve just submitted for mailing list review uses exactly this. That is, it uses the built-in Python 3 venv module (not to be confused with the third-party virtualenv).

Now, normally a virtualenv creates an isolated environment where all dependencies have to be installed explicitly. However, there is a --system-site-packages option that avoids this. The packages installed inside the virtualenv (i.e. the tested package) will take precedence but other packages will be imported from the system site-packages directory. That’s just what we need!
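
Sketched outside of the eclass, the approach boils down to something like this (the directory name is arbitrary; setuptools is assumed to be importable from the system site-packages):

# create a venv that falls back to the system site-packages for dependencies
python -m venv --system-site-packages venv
# install the tested package into the venv and run the tests with its interpreter
./venv/bin/python setup.py install
./venv/bin/python -m pytest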

I have so far tested this new method on two problematic packages (werkzeug and tox). It might be just the thing that resolves all the problems that were previously resolved via the home layout. Or it might not. I do not know yet whether we’ll be switching default again. Time will tell.

May 20 2021

Freenode IRC and Gentoo

Gentoo News (GentooNews) May 20, 2021, 5:00

According to the information published recently, there have been major changes in the way the Freenode IRC network is administered. This has resulted in a number of staff members raising concerns about the new administration and/or resigning. A large number of open source projects have already announced the transition to other IRC networks, or are actively discussing it.

It is not yet clear whether and how these changes will affect Gentoo. We are monitoring the situation as it develops. It is possible that we will decide to move the official Gentoo channels to another network in the best interest of our users. At the same time, we realize that such a move will be an inconvenience to them.

It has also come to our attention that certain individuals have been using the situation to impersonate Gentoo developers on other IRC networks. Official Gentoo developers can be identified on Freenode by their gentoo/developer cloak. If we move to another network, we will announce the respective cloak once we have claimed it.

Please check this page for future updates.

More information on the Freenode situation can be found at:

  • Christian (Fuchs)’s Freenode resignation
  • @freenodestaff tweet
  • Open Letter On freenode’s independence
  • Andrew Lee, We grew up with IRC. Let’s take it further.

2021-05-22 update

The Gentoo Council will be meeting tomorrow (Sunday, 2021-05-23) at 19:00 UTC to discuss the problem and the possible solutions.

The Gentoo Group Contacts team has been taking steps in order to ensure readiness for the most likely options.

May 18 2021

Google Summer of Code 2021 students welcome

Gentoo News (GentooNews) May 18, 2021, 5:00

We are glad to welcome Leo and Mark to the Google Summer of Code 2021.

Mark will work on improving Catalyst, our release building tool. Leo will work on improving our Java packaging support, with a special focus on big-data and scientific software.
