January 16 2021

Distribution Kernels: module rebuilds, better ZFS support and UEFI executables

Michał Górny (mgorny) January 16, 2021, 10:12

The primary goal of the Distribution Kernel project is to provide a seamless kernel upgrade experience to Gentoo users. Initially, this meant configuring, building and installing the kernel during the @world upgrade. However, you still had to rebuild the installed kernel modules manually (and @module-rebuild is still broken), and sometimes additionally rebuild the initramfs after doing that.

To address this, we have introduced a new dist-kernel USE flag. This flag is automatically added to all ebuilds installing kernel modules. When it is enabled, the linux-mod eclass adds a dependency on the virtual/dist-kernel package. This virtual, in turn, is bound to the newest installed version of the dist-kernel. As a result, whenever you upgrade your dist-kernel, all the module packages will also be rebuilt via slot rebuilds. A manual @module-rebuild should no longer be necessary!
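
For example, a user who wants the flag enabled explicitly for a module package could use something like the following (a hypothetical /etc/portage/package.use entry; on systems already using a dist-kernel the flag may be enabled for you):

```
# /etc/portage/package.use/dist-kernel (hypothetical example)
# Build the module against the distribution kernel, so that it is
# rebuilt automatically on every dist-kernel upgrade.
sys-fs/zfs-kmod dist-kernel
```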

ZFS users have pointed out that after rebuilding the sys-fs/zfs-kmod package, they need to rebuild the initramfs for Dracut to include the new module. We have combined the dist-kernel rebuild feature with pkg_postinst() to rebuild the initramfs whenever zfs-kmod is being rebuilt (and the dist-kernel is used). As a result, ZFS should no longer require any manual attention — as long as rebuilds succeed, the new kernel and initramfs should be capable of running on ZFS root once the @world upgrade finishes.

Finally, we have been asked to provide support for the uefi=yes Dracut option. When this option is enabled, Dracut combines the EFI stub, the kernel and the generated initramfs into a single UEFI executable that can be booted directly. The dist-kernels now detect this scenario and install the generated executable in place of the kernel, so everything works as expected. Note that due to implementation limitations, we also install an empty initramfs, as otherwise kernel-install.d scripts would insist on creating another initramfs. Also note that until Dracut is fixed to use the correct EFI stub path, you have to set the path manually in /etc/dracut.conf:

uefi_stub=/usr/lib/systemd/boot/efi/linuxx64.efi.stub
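
Putting the two settings together, a minimal /etc/dracut.conf for this scenario might read as follows (a sketch only; adjust to your setup):

```
# /etc/dracut.conf
uefi="yes"
uefi_stub="/usr/lib/systemd/boot/efi/linuxx64.efi.stub"
```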

January 15 2021

2020 in retrospect & happy new year 2021!

Gentoo News (GentooNews) January 15, 2021, 6:00

Happy New Year 2021! Due to the COVID pandemic, 2020 was a year unlike any other, and this has also impacted many open source projects. Nevertheless, at Gentoo we have made some great strides forward. While we now start into 2021 with fresh energy (and maybe soon antibodies), let’s also take a look back. We’re happy to share with our community the most exciting news of the past 12 months – including numbers on Gentoo activity, our new developers, and featured changes and improvements!

Gentoo in numbers

2020 has featured a major increase in commits to the ::gentoo repository, and especially commits from non-developers. The overall number of commits has grown from 73400 to 104500 (by 42%), while the number of commits made by non-developers has grown from 5700 (8% of total) to 11000 (10.5% of total). The latter group has featured 333 unique authors in 2019, and 391 in 2020.

The ::guru repository has thrived in 2020. While 2019 left it with merely 7 contributors and a total of 86 commits, 2020 has featured 55 different contributors and 2725 commits. GURU is a user-curated repository with a trusted user model. Come join us!

There was also a major increase in Bugzilla activity. 2020 featured almost 25500 bugs reported, compared to 15000 in 2019. This is probably largely thanks to Agostino Sarubbo’s new tinderboxing effort. The total number of bugs closed in 2020 was 23500, compared to 15000 in 2019.

New developers

We’ve finished 2020 with three significant additions to the Gentoo family (in chronological order):

  1. Max Magorsch (arzano)

    Max joined us in February to help out with Gentoo Infrastructure. Since then, he has already done tons of work. Just to list a few things, he has redesigned and modernized the Gentoo websites, and rewritten packages.gentoo.org into the super cool form we have today.

  2. Sam James (sam)

    Sam joined us in July, and has contributed to a lot of different projects since. He is known as an active member of the Security team and multiple arch teams, as well as someone who fixes lots of bugs in different packages.

  3. Stephan Hartmann (sultan)

    Stephan joined us in September, and immediately started working on our Chromium-related packages. He has pushed commits to upstream Chromium, and will hopefully handle the Gentoo-specific problems that come up here as well. Thanks to him, we have also finally caught up with Windows, offering our users a packaged version of Microsoft Edge.

Featured changes

The following major changes and improvements have happened in 2020:

Packages
  • Distribution Kernels: Gentoo now supports building and installing kernels entirely via the package manager. The new kernel packages also come with an (optional) stock configuration based on well-tested Fedora kernels, to lower the entry barrier and the maintenance effort of Gentoo systems.

  • Wayland: Wayland support in Gentoo has progressed greatly, making it possible to run an Xorg-free desktop. Wayland is supported with large desktop environments such as KDE Plasma and GNOME, as well as with lightweight alternatives such as Sway and Wayfire. The latter also make it possible to use Wayland to a large extent without resorting to XWayland.

  • Lua: A new framework has been created that permits multiple versions of Lua to be installed side-by-side. The vast majority of ~arch packages have already been migrated to this framework. This way, we have finally been able to unmask new (slotted!) Lua versions.

  • Python: We have managed to almost completely remove Python 2.7 from Gentoo, and upgrade the default to Python 3.8. Python 2.7 is still available as a build-time dependency for a few packages. We have additionally patched it against all the vulnerabilities known from later versions of Python.

Architectures
  • ARM64: ARM64 (AArch64) support has been elevated to stable status and is no longer experimental. The ARM64 project now provides automatically generated stage3 files, and is usually one of the fastest arch teams to test packages. We have worked to bring more packages to ARM64 and make it more feasible to run a full desktop!

  • PPC64: KDE Plasma is now available on PPC64, thanks to extensive testing and keywording efforts by Georgy Yakovlev.

  • RISC-V: Work on RISC-V support has started, with particular focus on the riscv64 architecture. The RISC-V project provides stage3 files and stable profiles for the soft-float (rv64imac/lp64) and hard-float (rv64gc/lp64d) ABIs, in both systemd and OpenRC variants. The arch team has managed to run Xorg already!

  • Prefix: Gentoo Prefix is once again capable of bootstrapping on the latest macOS releases, and work is underway to modernise prefix-specific ebuilds and merge them back into the main tree, ensuring that users get the latest software and that the maintenance burden is reduced.

  • Android: The Gentoo Android project has released a new 64-bit Android prefix tarball, featuring gcc-10.1.0, binutils-2.34 and glibc-2.31 in your pocket!

Infrastructure
  • packages.gentoo.org: The packages website has received many improvements towards being a central source of information on Gentoo packages. It now shows the results of QA checks, bugs, pull requests referencing a package, and a maintainer dashboard indicating stabilization candidates and outdated versions (according to Repology). Additionally, the display can be configured for your personal preferences!

  • Bugzilla: The Infrastructure team has implemented a major improvement to Gentoo Bugzilla performance. The database has been migrated to a newer database cluster, and the backend has been switched to mod_perl.

  • CI / Tinderbox: A second active tinderboxing (build testing) effort has been started, resulting in more bugs being detected and fixed early. This also includes running a variety of QA checks, as well as minimal environment builds that are helpful in detecting missing dependencies.

Other news
  • HPC adoption: The Prefix Project has published a conference proceeding on a case study of Gentoo in high energy physics. Gentoo also sees wider adoption in the HPC community, such as Compute Canada and EESSI.

Discontinued projects

While Gentoo would like to support as much as our users wish for, we could not manage to continue all of the projects we’ve started in the past. With limited resources, we had to divert our time and effort from projects showing little promise and activity. The most important projects discontinued in 2020 were:

  • Architectures: Alpha and IA64 keywords were reduced to ~arch (i.e. unstable/testing only). HPPA stable keywords were limited to the most important packages only. SH (SuperH) was removed entirely. With a very small number of users on these architectures, our arch teams decided that the effort of maintaining them was too great. In the case of SuperH, our last available hardware died.

  • LibreSSL: By the end of 2020, we have decided to discontinue support for LibreSSL. With little to no support from various upstream projects, the effort necessary to maintain package compatibility exceeded the gain, especially given that OpenSSL has made a lot of progress since the forking point.

Thank you!

We can only describe a few major items here, and these are by far not all that is going on. We would like to thank all Gentoo developers for their relentless everyday Gentoo work. While they are often not recognized for it, Gentoo could not exist without them. Cheers, and let’s make 2021 even more productive!

December 29 2020

OpenSSL, LibreSSL, LibreTLS and all the terminological irony

Michał Górny (mgorny) December 29, 2020, 18:11

While we’re discussing the fate of LibreSSL, it’s worth noting how confusing the names of these packages became. I’d like to take this opportunity to provide a short note on what’s what.

First of all, SSL and its successor TLS are protocols used to implement network connection security. For historical reasons, many libraries carry ‘SSL’ in their name (OpenSSL, LibreSSL, PolarSSL) but nowadays they all support TLS.

OpenSSL is the ‘original’ crypto/SSL/TLS library. It is maintained independently of a specific operating system. It provides two main libraries: libcrypto and libssl (that also implements TLS).

LibreSSL is a fork of OpenSSL. It is maintained by OpenBSD as part of its base system. However, upstream also maintains the LibreSSL-portable repository, which provides the build system and portability glue for using it on other systems. LibreSSL provides partially compatible versions of libcrypto and libssl, and a new libtls library. Both libssl and libtls can be used for TLS support in your applications.

LibreTLS is a lightweight fork of libtls from LibreSSL that builds it against OpenSSL. This makes it possible to build programs written for libtls against OpenSSL+LibreTLS instead of LibreSSL.

So, to summarize: OpenSSL is the original, while LibreSSL is the OpenBSD fork. libtls is LibreSSL’s original library, while LibreTLS is its fork for OpenSSL. Makes sense, right? And finally, despite their names, they all implement TLS.

November 06 2020

Renaming and reshaping Scylla tables using scylla-migrator

Alexys Jacob (ultrabug) November 06, 2020, 20:11

We have recently faced a problem where some of the first Scylla tables we created on our main production cluster were not in line any more with the evolved schemas that recent tables are using.

This typical engineering problem requires either keeping those legacy tables and data queries, or migrating them to the more optimal model, with the bandwagon of applications to be modified to query the data the new way… That’s something nobody likes doing, but hey, we don’t like legacy at Numberly, so let’s kill that one!

To overcome this challenge we used the scylla-migrator project and I thought it could be useful to share this experience.

How and why our schema evolved

When we first approached ID matching tables we chose to answer two problems at the same time: query the most recent data and keep the history of the changes per source ID.

This means that those tables included a date as part of their PRIMARY KEY, while the partition key was obviously the matching table ID we wanted to look up from:

CREATE TABLE IF NOT EXISTS ids_by_partnerid (
    partnerid text,
    id text,
    date timestamp,
    PRIMARY KEY ((partnerid), date, id)
)
WITH CLUSTERING ORDER BY (date DESC);

Making a table with an ever-changing date in the clustering key creates what we call a history table. In the schema above, the uniqueness of a row is not only defined by a partnerid / id couple but also by its date!

Quick caveat: you have to be careful about the actual date timestamp resolution, since you may not want to create a row for every second for the same partnerid / id couple (we use an hour resolution).
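
The hour resolution mentioned above boils down to truncating the timestamp before writing. As a sketch (a hypothetical client-side helper, not code from our actual pipeline):

```python
from datetime import datetime

def to_hour_bucket(ts: datetime) -> datetime:
    """Truncate a timestamp to the hour, so that repeated matches of the
    same partnerid / id couple within one hour land on the same row."""
    return ts.replace(minute=0, second=0, microsecond=0)

# Two events 48 minutes apart end up in the same hour bucket.
assert to_hour_bucket(datetime(2020, 11, 6, 20, 11)) == datetime(2020, 11, 6, 20, 0)
```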

History tables are good for analytics, and we also figured we could use them for batch and real-time queries where we would be interested in the “most recent ids for the given partnerid” (sometimes flavored with a LIMIT):

SELECT id FROM ids_by_partnerid WHERE partnerid = 'AXZAZLKDJ' ORDER BY date DESC;

As time passed, real-time Kafka pipelines started to query these tables hard and were mostly interested in “all the ids known for the given partnerid”.

A sort of DISTINCT(id) is out of the scope of our table! For this we need a table schema that represents a condensed view of the data. We call them compact tables, and the only difference from the history table is that the date timestamp is simply not part of the PRIMARY KEY:

CREATE TABLE IF NOT EXISTS ids_by_partnerid (
    partnerid text,
    id text,
    seen_date timestamp,
    PRIMARY KEY ((partnerid), id)
);

To make that transition happen we thus wanted to:

  • rename history tables with an _history suffix so that they are clearly identified as such
  • get a compacted version of the tables (keeping their old name) while renaming the date column to seen_date
  • do it as fast as possible since we will need to stop our feeding pipeline and most of our applications during the process…

STOP: it’s not possible to rename a table in CQL!

Scylla-migrator to the rescue

We decided to abuse the scylla-migrator to perform this perilous migration.

As it was originally designed to help users migrate from Cassandra to Scylla by leveraging Spark, it seemed like a good fit for the task, since we happen to own Spark clusters powered by Hadoop YARN.

Building scylla-migrator for Spark < 2.4

Recent scylla-migrator does not support older Spark versions. The trick is to look at the README.md git log and check out the (hopefully right) commit that supports your Spark cluster version.

In our case for Spark 2.3 we used git commit bc82a57e4134452f19a11cd127bd4c6a25f75020.

On Gentoo, make sure to use dev-java/sbt-bin, since the non-binary version is vastly out of date and won’t build the project. You need at least version 1.3.

The scylla-migrator plan

The documentation explains that we need a config file that points to a source cluster+table and a destination cluster+table as long as they have the same schema structure…

Renaming is then as simple as duplicating the schema using CQLSH and running the migrator!
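
Concretely, the duplicated destination schema for the history table could look like this (recreated from the CREATE TABLE shown earlier, with only the _history suffix added):

```
CREATE TABLE IF NOT EXISTS ids_by_partnerid_history (
    partnerid text,
    id text,
    date timestamp,
    PRIMARY KEY ((partnerid), date, id)
)
WITH CLUSTERING ORDER BY (date DESC);
```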

But what about our compacted version of our original table? The schema is different from the source table!…

Good news is that as long as all your columns remain present, you can also change the PRIMARY KEY of your destination table and it will still work!

This makes scylla-migrator an amazing tool to reshape or pivot tables!

  • the column date is renamed to seen_date: that’s okay, scylla-migrator supports column renaming (it’s a Spark dataframe after all)!
  • the PRIMARY KEY is different in the compacted table since we removed the date from the clustering columns: we’ll get a compacted table for free!

Using scylla-migrator

The documentation is a bit poor on how to submit your application to a Hadoop YARN cluster, but that’s kind of expected.

It also did not mention how to connect to an SSL-enabled cluster (are there really people not using SSL on the wire in their production environments?)… anyway, let’s not start a flame war!

The trick that will save you is to know that you can append all the usual Spark options that are available in the spark-cassandra-connector!

Submitting to a Kerberos protected Hadoop YARN cluster targeting a SSL enabled Scylla cluster then looks like this:

export JAR_NAME=target/scala-2.11/scylla-migrator-assembly-0.0.1.jar
export KRB_PRINCIPAL=USERNAME

spark2-submit \
 --name ScyllaMigratorApplication \
 --class com.scylladb.migrator.Migrator  \
 --conf spark.cassandra.connection.ssl.clientAuth.enabled=True  \
 --conf spark.cassandra.connection.ssl.enabled=True  \
 --conf spark.cassandra.connection.ssl.trustStore.path=jssecacerts  \
 --conf spark.cassandra.connection.ssl.trustStore.password=JKS_PASSWORD  \
 --conf spark.cassandra.input.consistency.level=LOCAL_QUORUM \
 --conf spark.cassandra.output.consistency.level=LOCAL_QUORUM \
 --conf spark.scylla.config=config.yaml \
 --conf spark.yarn.executor.memoryOverhead=1g \
 --conf spark.blacklist.enabled=true  \
 --conf spark.blacklist.task.maxTaskAttemptsPerExecutor=1  \
 --conf spark.blacklist.task.maxTaskAttemptsPerNode=1  \
 --conf spark.blacklist.stage.maxFailedTasksPerExecutor=1  \
 --conf spark.blacklist.stage.maxFailedExecutorsPerNode=1  \
 --conf spark.executor.cores=16 \
 --deploy-mode client \
 --files jssecacerts \
 --jars ${JAR_NAME}  \
 --keytab ${KRB_PRINCIPAL}.keytab  \
 --master yarn \
 --principal ${KRB_PRINCIPAL}  \
 ${JAR_NAME}

Note that we chose to apply a higher consistency level to our reads using a LOCAL_QUORUM instead of the default LOCAL_ONE. I strongly encourage you to do the same since it’s appropriate when you’re using this kind of tool!

Column renaming is simply expressed in the configuration file like this:

# Column renaming configuration.
renames:
  - from: date
    to: seen_date

Tuning scylla-migrator

While easy to use, tuning scylla-migrator to operate those migrations as fast as possible turned out to be a real challenge (remember we have some production applications shut down during the process).

Even using 300+ Spark executors, I couldn’t get my Scylla cluster utilization above 50%, and migrating a single table with a bit more than 1B rows took almost 2 hours…

We found the best knobs to play with thanks to the help of Lubos Kosco and this blog post from ScyllaDB:

  • Increase the splitCount setting: more splits means more Spark executors will be spawned and more tasks along with them. While it might be magic on a pure Spark deployment, it’s not that amazing on a Hadoop YARN one, where executors are scheduled in containers with 1 core by default. We simply moved it from 256 to 384.
  • Disable compaction on destination tables schemas. This gave us a big boost and saved the day since it avoids adding the overhead of compacting while you’re pushing down data hard!
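
For reference, the splitCount change is just a one-liner in the migrator configuration (excerpt of a hypothetical config.yaml; check the README of the commit you built for the exact layout):

```yaml
source:
  # more splits -> more Spark tasks (and thus executors) reading from Scylla
  splitCount: 384
```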

To disable compaction on a table simply:

ALTER TABLE ids_by_partnerid_history WITH compaction = {'class': 'NullCompactionStrategy'};

Remember to run a manual compaction (nodetool compact <keyspace> <table>) and to re-enable compaction on your tables once you’re done!
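
Re-enabling compaction is another ALTER TABLE; the sketch below assumes the table originally used the default SizeTieredCompactionStrategy (capture the real settings with DESCRIBE TABLE before disabling):

```
ALTER TABLE ids_by_partnerid_history
    WITH compaction = {'class': 'SizeTieredCompactionStrategy'};
```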

Happy Scylla tables mangling!

We have recently faced a problem where some of the first Scylla tables we created on our main production cluster were not in line any more with the evolved schemas that recent tables are using.

This typical engineering problem requires either to keep those legacy tables and data queries or to migrate it to the more optimal model with the bandwagon of applications to be modified to query the data the new way… That’s something nobody likes doing but hey, we don’t like legacy at Numberly so let’s kill that one!

To overcome this challenge we used the scylla-migrator project and I thought it could be useful to share this experience.

How and why our schema evolved

When we first approached ID matching tables we chose to answer two problems at the same time: query the most recent data and keep the history of the changes per source ID.

This means that those tables included a date as part of their PRIMARY KEY while the partition key was obviously the matching table ID we wanted to lookup from:

CREATE TABLE IF NOT EXISTS ids_by_partnerid(
partnerid text,
id text,
date timestamp,
PRIMARY KEY ((partnerid), date, id)
)
WITH CLUSTERING ORDER BY (date DESC)

Making a table with an ever changing date in the clustering key creates what we call a history table. In the schema above the uniqueness of a row is not only defined by a partner_id / id couple but also by its date!

Quick caveat: you have to be careful about the actual date timestamp resolution since you may not want to create a row for every second of the same partner_id / id couple (we use an hour resolution).

History tables are good for analytics and we also figured we could use them for batch and real time queries where we would be interested in the “most recent ids for the given partner_id” (sometimes flavored with a LIMIT):

SELECT id FROM ids_by_partnerid WHERE partnerid = 'AXZAZLKDJ' ORDER BY date DESC;

As time passed, real-time Kafka pipelines started to query these tables hard and were mostly interested in “all the ids known for the given partnerid”.

A sort of DISTINCT(id) is out of scope for our table! For this we need a table schema that represents a condensed view of the data. We call them compact tables, and the only difference from the history table is that the date timestamp is simply not part of the PRIMARY KEY:

CREATE TABLE IF NOT EXISTS ids_by_partnerid (
    partnerid text,
    id text,
    seen_date timestamp,
    PRIMARY KEY ((partnerid), id)
);
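The effect of dropping date from the PRIMARY KEY can be illustrated in a few lines of Python (toy data; in practice the condensing is done by Scylla’s upsert semantics, not application code): the latest row per (partnerid, id) wins.

```python
from datetime import datetime

# Toy history rows: (partnerid, id, date)
history = [
    ("AXZAZLKDJ", "id1", datetime(2020, 5, 1)),
    ("AXZAZLKDJ", "id1", datetime(2020, 6, 1)),  # newer sighting of id1
    ("AXZAZLKDJ", "id2", datetime(2020, 5, 2)),
]

# Compact view: one row per (partnerid, id), newest date kept as seen_date.
compact = {}
for partnerid, id_, date in history:
    key = (partnerid, id_)
    if key not in compact or date > compact[key]:
        compact[key] = date
```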

To make that transition happen we thus wanted to:

  • rename history tables with an _history suffix so that they are clearly identified as such
  • get a compacted version of the tables (by keeping their old name) while renaming the date column name to seen_date
  • do it as fast as possible since we will need to stop our feeding pipeline and most of our applications during the process…

STOP: it’s not possible to rename a table in CQL!

Scylla-migrator to the rescue

We decided to abuse the scylla-migrator to perform this perilous migration.

As it was originally designed to help users migrate from Cassandra to Scylla by leveraging Spark it seemed like a good fit for the task since we happen to own Spark clusters powered by Hadoop YARN.

Building scylla-migrator for Spark < 2.4

Recent scylla-migrator does not support older Spark versions. The trick is to look at the git log of README.md and check out the commit that (hopefully) still supports your Spark cluster version.

In our case for Spark 2.3 we used git commit bc82a57e4134452f19a11cd127bd4c6a25f75020.

On Gentoo, make sure to use dev-java/sbt-bin since the non binary version is vastly out of date and won’t build the project. You need at least version 1.3.

The scylla-migrator plan

The documentation explains that we need a config file that points to a source cluster+table and a destination cluster+table as long as they have the same schema structure…

Renaming is then as simple as duplicating the schema using CQLSH and running the migrator!

But what about our compacted version of our original table? The schema is different from the source table!…

Good news is that as long as all your columns remain present, you can also change the PRIMARY KEY of your destination table and it will still work!

This makes the scylla-migrator an amazing tool to reshape or pivot tables!

  • the column date is renamed to seen_date: that’s okay, scylla-migrator supports column renaming (it’s a Spark dataframe after all)!
  • the PRIMARY KEY is different in the compacted table since we removed the ‘date‘ from the clustering columns: we’ll get a compacted table for free!

Using scylla-migrator

The documentation is a bit poor on how to submit your application to a Hadoop YARN cluster but that’s kind of expected.

It also did not mention how to connect to a SSL enabled cluster (are there people really not using SSL on the wire in their production environment?)… anyway let’s not start a flame war 🙂

The trick that will save you is to know that you can append all the usual Spark options that are available in the spark-cassandra-connector!

Submitting to a Kerberos protected Hadoop YARN cluster targeting a SSL enabled Scylla cluster then looks like this:

export JAR_NAME=target/scala-2.11/scylla-migrator-assembly-0.0.1.jar
export KRB_PRINCIPAL=USERNAME

spark2-submit \
 --name ScyllaMigratorApplication \
 --class com.scylladb.migrator.Migrator  \
 --conf spark.cassandra.connection.ssl.clientAuth.enabled=True  \
 --conf spark.cassandra.connection.ssl.enabled=True  \
 --conf spark.cassandra.connection.ssl.trustStore.path=jssecacerts  \
 --conf spark.cassandra.connection.ssl.trustStore.password=JKS_PASSWORD  \
 --conf spark.cassandra.input.consistency.level=LOCAL_QUORUM \
 --conf spark.cassandra.output.consistency.level=LOCAL_QUORUM \
 --conf spark.scylla.config=config.yaml \
 --conf spark.yarn.executor.memoryOverhead=1g \
 --conf spark.blacklist.enabled=true  \
 --conf spark.blacklist.task.maxTaskAttemptsPerExecutor=1  \
 --conf spark.blacklist.task.maxTaskAttemptsPerNode=1  \
 --conf spark.blacklist.stage.maxFailedTasksPerExecutor=1  \
 --conf spark.blacklist.stage.maxFailedExecutorsPerNode=1  \
 --conf spark.executor.cores=16 \
 --deploy-mode client \
 --files jssecacerts \
 --jars ${JAR_NAME}  \
 --keytab ${KRB_PRINCIPAL}.keytab  \
 --master yarn \
 --principal ${KRB_PRINCIPAL}  \
 ${JAR_NAME}

Note that we chose to apply a higher consistency level, LOCAL_QUORUM instead of the default LOCAL_ONE, to both our reads and our writes. I strongly encourage you to do the same, since it’s appropriate when you’re using this kind of tool!

Column renaming is simply expressed in the configuration file like this:

# Column renaming configuration.
renames:
  - from: date
    to: seen_date

Tuning scylla-migrator

While easy to use, tuning scylla-migrator to operate those migrations as fast as possible turned out to be a real challenge (remember we have some production applications shut down during the process).

Even using 300+ Spark executors I couldn’t get my Scylla cluster utilization to more than 50% and migrating a single table with a bit more than 1B rows took almost 2 hours…

We found the best knobs to play with thanks to the help of Lubos Kosco and this blog post from ScyllaDB:

  • Increase the splitCount setting: more splits means more Spark executors will be spawned and more tasks out of it. While it might be magic on a pure Spark deployment it’s not that amazing on a Hadoop YARN one where executors are scheduled in containers with 1 core by default. We simply moved it from 256 to 384.
  • Disable compaction on destination tables schemas. This gave us a big boost and saved the day since it avoids adding the overhead of compacting while you’re pushing down data hard!

To disable compaction on a table simply:

ALTER TABLE ids_by_partnerid_history WITH compaction = {'class': 'NullCompactionStrategy'};

Remember to run a manual compaction (nodetool compact <keyspace> <table>) and to enable compaction back on your tables once you’re done!
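Assuming the table originally used the default SizeTieredCompactionStrategy (do check what your schema actually declares before copying this), re-enabling compaction would look something like:

```sql
ALTER TABLE ids_by_partnerid_history WITH compaction = {'class': 'SizeTieredCompactionStrategy'};
```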

Happy Scylla tables mangling!

October 21 2020

DISTUTILS_USE_SETUPTOOLS, QA spam and… more QA spam?

Michał Górny (mgorny) October 21, 2020, 19:52

I suppose that most of the Gentoo developers have seen at least one of the ‘uses a probably incorrect DISTUTILS_USE_SETUPTOOLS value’ bugs by now. Over 350 have been filed so far, and new ones are filed practically daily. The truth is, I’ve never intended for this QA check to result in bugs being filed against packages, and certainly not that many bugs.

This is not an important problem to be fixed immediately. The vast majority of Python packages depend on setuptools at build time (this is why the build-time dependency is the eclass’ default), and being able to unmerge setuptools is not a likely scenario. The underlying idea was that the QA check would make it easier to update DISTUTILS_USE_SETUPTOOLS when bumping packages.

Nobody has asked me for my opinion, and now we have hundreds of bugs that are not very helpful. In fact, the effort involved in going through all the bugmail, updating packages and closing the bugs greatly exceeds the negligible gain. Nevertheless, some people actually did it. I have bad news for them: setuptools upstream has changed entry point mechanism, and most of the values will have to change again. Let me elaborate on that.

The current logic

The current eclass logic revolves around three primary values:

  • no indicating that the package does not use setuptools
  • bdepend indicating that the package uses setuptools at build time only
  • rdepend indicating that the package uses setuptools at build- and runtime

There’s also support for pyproject.toml but it’s tangential to the problem at hand, so let’s ignore it.
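For reference, the value is set in the ebuild before inheriting the eclass; a hypothetical fragment (the value shown is illustrative):

```bash
# hypothetical ebuild fragment -- must come before the inherit line
DISTUTILS_USE_SETUPTOOLS=rdepend
inherit distutils-r1
```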

The setuptools package — besides the build system — includes a pkg_resources sub-package that can be used to access a package’s metadata and resources. The two primary uses of rdepend revolve around this. These are:

  1. console_scripts entry points — i.e. autogenerated executable scripts that call a function within the installed package rather than containing the program code itself.
  2. Direct uses of pkg_resources in the modules installed by the package.

Both of these cases used to be equivalent from a dependency standpoint. Well, not anymore.

Entry points via importlib.metadata

Well, the big deal is the importlib.metadata module that was added in Python 3.8 (there’s also a relevant importlib.resources module since Python 3.7). It is a built-in module that provides routines to access the installed package metadata, and therefore renders another part of pkg_resources redundant.

The big deal is that the new versions of setuptools have embraced it, and no longer require pkg_resources to run entry points. To be more precise, the new logic selects the built-in module as the first choice, with fallback to the importlib_metadata backport and finally to pkg_resources.
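The selection order can be sketched as follows (illustrative; the wrapper code setuptools actually generates differs in detail):

```python
# First choice: stdlib importlib.metadata (Python 3.8+); then the
# importlib_metadata backport from PyPI; finally setuptools' pkg_resources.
try:
    from importlib.metadata import distribution
except ImportError:
    try:
        from importlib_metadata import distribution
    except ImportError:
        from pkg_resources import get_distribution as distribution
```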

This means that the vast majority of packages that used to depend on setuptools at runtime no longer do, strictly speaking. With Python 3.8 and newer, they have no additional runtime dependencies and just require setuptools at build time. With older versions of Python, they prefer importlib_metadata over it. In both cases, the packages can still use pkg_resources directly, though.

How to resolve it via the eclass?

Now, technically speaking this means replacing rdepend with three new variants:

  • scripts — that means build-time dependency on setuptools + runtime impl-conditional dep on importlib_metadata, for pure entry point usage.
  • rdepend — that means runtime dependency on setuptools, for pure pkg_resources usage.
  • scripts+rdepend — for packages that combine both.

Of course, this means that the existing packages would get a humongous number of new bug reports, often requesting a change to the value that was updated recently. The number could be smaller if we changed the existing meaning of rdepend to mean importlib.metadata, and introduced a new value for pkg_resources.

Still, that’s not the best part. The real fun idea is that once we remove Python 3.7, all Python versions would have importlib.metadata built-in and the distinction will no longer be necessary. Eventually, everyone would have to update the value again, this time to bdepend. Great, right?

…or not to resolve it?

Now that we’ve discussed the solution recommended to me, let’s consider an alternative. For the vast majority of packages, the runtime dependency on setuptools is unnecessary. If the user uses Python 3.8+ or has importlib_metadata installed (which is somewhat likely, due to direct dependencies on it), pkg_resources will not be used by the entry points. Nevertheless, setuptools is still pretty common as a build-time dependency and, as I said before, it makes little sense to uninstall it.

We can simply keep things as-is. Sure, the dependencies will not be 100% optimal. Yet, the dependency on setuptools will ensure that entry points continue working even if the user does not have importlib_metadata installed. We will eventually want to update DISTUTILS_USE_SETUPTOOLS logic but we can wait for it till Python versions older than 3.8 become irrelevant, and we are back to three main variants.

October 06 2020

Speeding up emerge depgraph calculation using PyPy3

Michał Górny (mgorny) October 06, 2020, 8:01

WARNING: Some of the respondents were not able to reproduce my results. It is possible that this depends on the hardware or even a specific emerge state. Please do not rely on my claims that PyPy3 runs faster, and verify it on your system before switching permanently.

If you used Gentoo for some time, you’ve probably noticed that emerge is getting slower and slower. Before I switched to SSD, my emerge could take even 10 minutes before it figured out what to do! Even now it’s pretty normal for the dependency calculation to take 2 minutes. Georgy Yakovlev recently tested PyPy3 on PPC64, and noticed a great speedup, apparently due to very poor optimization of CPython on that platform. I’ve attempted the same on amd64, and measured a 35% speedup nevertheless.

PyPy is an alternative implementation of Python that uses a JIT compiler to run Python code. JIT can achieve greater performance on computation-intensive tasks, at the cost of slower program startup. This means that it could be slower for some programs, and faster for others. In case of emerge dependency calculation, it’s definitely faster. A quick benchmark done using dev-perl/Dumbbench (great tool, by the way) shows, for today’s @world upgrade:

  • Python 3.9.0: 111.42 s ± 0.87 s (0.8%)
  • PyPy3.7 7.3.2: 72.30 s ± 0.23 s (0.3%)
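That is where the 35% figure above comes from (quick sanity check):

```python
cpython_s = 111.42  # CPython 3.9.0 mean, seconds
pypy_s = 72.30      # PyPy3.7 7.3.2 mean, seconds
speedup_pct = (cpython_s - pypy_s) / cpython_s * 100  # ~35
```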

dev-python/pypy3 is supported on Gentoo on the amd64, arm64, ppc64 and x86 targets. The interpreter itself takes quite a while to build (35–45 minutes on a modern Ryzen), so you may want to tell emerge to grab dev-python/pypy3-exe-bin:

$ emerge -nv dev-python/pypy3 dev-python/pypy3-exe-bin

If you want to build it from source, it is recommended to grab dev-python/pypy first (possibly with dev-python/pypy-exe-bin for faster bootstrap), as building with PyPy itself is much faster:

# use prebuilt compiler for fast bootstrap
$ emerge -1v dev-python/pypy dev-python/pypy-exe-bin
# rebuild the interpreter
$ emerge -nv dev-python/pypy dev-python/pypy-exe
# build pypy3
$ emerge -nv dev-python/pypy3

Update 2020-10-07: Afterwards, you need to rebuild Portage and its dependencies with PyPy3 support enabled. The easiest way of doing it is to enable the PyPy3 target globally, and rebuilding relevant packages:

$ echo '*/* PYTHON_TARGETS: pypy3' >> /etc/portage/package.use
$ emerge -1vUD sys-apps/portage

Finally, you can use python-exec’s per-program configuration to use PyPy3 for emerge while continuing to use CPython for other programs:

$ echo pypy3 >> /etc/python-exec/emerge.conf
# yep, that's pypy3.7
$ emerge --info | head -1
Portage 3.0.7 (python 3.7.4-final-0, default/linux/amd64/17.1/desktop, gcc-9.3.0, glibc-2.32-r2, 5.8.12 x86_64)

September 16 2020

Console-bound systemd services, the right way

Marek Szuba (marecki) September 16, 2020, 17:40

Let’s say that you need to run on your system some sort of server software which, instead of daemonising, has a command console permanently attached to standard input. Let us also say that said console is the only way for the administrator to interact with the service, including requesting its orderly shutdown – whoever has written it has not implemented any sort of signal handling, so sending SIGTERM to the service process causes it to simply drop dead, potentially losing data in the process. And finally, let us say that the server in question is proprietary software, so it isn’t really possible for you to fix any of the above in the source code (yes, I am talking about a specific piece of software – which by the way is very much alive and kicking as of late 2020). What do you do?

According to the collective wisdom of the World Wide Web, the answer to this question is “use a terminal multiplexer like tmux or screen“, or at the very least a stripped-down variant of same such as dtach. OK, that sort of works – what if you want to run it as a proper system-managed service under e.g. OpenRC? The answer of the Stack Exchange crowd: have your init script invoke the terminal multiplexer. Oooooookay, how about under systemd, which actually prefers services it manages not to daemonise by itself? Nope, still “use a terminal multiplexer”.

What follows is my attempt to run a service like this under systemd more efficiently and elegantly, or at least with no extra dependencies beyond basic Unix shell commands.

Let us have a closer look at what systemd does with standard I/O of processes it spawns. The man page systemd.exec(5) tells us that what happens here is controlled by the directives StandardInput, StandardOutput and StandardError. By default the former is assigned to null while the latter two get piped to the journal; there are, however, quite a few other options here. According to the documentation, here is what systemd allows us to connect to standard input:

    • we are not interested in null (for obvious reasons) or any of the tty options (the whole point of this exercise is to run fully detached from any terminals);
    • data would work if we needed to feed some commands to the service when it starts, but is useless for triggering a shutdown;
    • file looks promising – just point it to a FIFO on the file system and we’re all set – but it doesn’t actually take care of creating the FIFO for us. While we could in theory work around that by invoking mkfifo (and possibly chown if the service is to run as a specific user) in ExecStartPre, let’s see if we can find a better option;
    • socket “is valid in socket-activated services only” and the corresponding socket unit must “have Accept=yes set”. What we want is the opposite, i.e. for the service to create its socket;
    • finally, there is fd – which seems to be exactly what we need. According to the documentation, all we have to do is write a socket unit creating a FIFO with appropriate ownership and permissions, make it a dependency of our service using the Sockets directive, and assign the corresponding named file descriptor to standard input.

Let’s try it out. To begin with, our socket unit “proprietarycrapd.socket”. Note that I have successfully managed to get this to work using unit templates as well; %i expansion works fine both here and while specifying unit or file-descriptor names in the service unit – but in order to avoid any possible confusion caused by the fact socket-activated services explicitly require being defined with templates, I have based my example on static units:

[Unit]
Description=Command FIFO for proprietarycrapd

[Socket]
ListenFIFO=/run/proprietarycrapd/pcd.control
DirectoryMode=0700
SocketMode=0600
SocketUser=pcd
SocketGroup=pcd
RemoveOnStop=true

Apart from the fact the unit in question has got no [Install] section (which makes sense given we want this socket to only be activated by the corresponding service, not by systemd itself), nothing out of the ordinary here. Note that since we haven’t used the directive FileDescriptorName, systemd will apply default behaviour and give the file descriptor associated with the FIFO the name of the socket unit itself.

And now, our service unit “proprietarycrapd.service”:

[Unit]
Description=proprietarycrap daemon
After=network.target

[Service]
User=pcd
Group=pcd
Sockets=proprietarycrapd.socket
StandardInput=socket
StandardOutput=journal
StandardError=journal
ExecStart=/opt/proprietarycrap/bin/proprietarycrapd
ExecStop=/usr/local/sbin/proprietarycrapd-stop

[Install]
WantedBy=multi-user.target

StandardInput=socket??? Whatever’s happened to StandardInput=fd:proprietarycrapd.socket??? Here is an odd thing. If I use the latter on my system, the service starts fine and gets the FIFO attached to its standard input – but when I try to stop the service the journal shows “Failed to load a named file descriptor: No such file or directory”, the ExecStop command is not run and systemd immediately fires a SIGTERM at the process. No idea why. Anyway, through trial and error I have found out that StandardInput=socket not only works fine in spite of being used in a service that is not socket-activated but actually does exactly what I wanted to achieve – so that is what I have ended up using.

Which brings us to the final topic, the ExecStop command. There are three reasons why I have opted for putting all the commands required to shut the server down in a shell script:

    • first and foremost, writing the shutdown command to the FIFO will return right away even if the service takes time to shut down. systemd sends SIGTERM to the unit process as soon as the last ExecStop command has exited so we have to follow the echo with something that waits for the server process to finish (see below)
    • systemd does not execute Exec commands in a shell so simply running echo > /run/proprietarycrapd/pcd.control doesn’t work, we would have to wrap the echo call in an explicit invocation of a shell
    • between the aforementioned two reasons and the fact the particular service for which I have created these units actually requires several commands in order to execute an orderly shutdown, I have decided that putting all those command in a script file instead of cramming them into the unit would be much cleaner.

The shutdown script itself is mostly unremarkable, so I’ll only quote the bit responsible for waiting for the server to actually shut down. At present I am still looking for a way to do this in blocking fashion without adding more dependencies (wait only works on child processes of the current shell, the server in question does not create any lock files to which I could attach inotifywait, and attaching the latter to the relevant directory in /proc does not work) but in the meantime, the loop

while kill -0 "${MAINPID}" 2> /dev/null; do
    sleep 1s
done

keeps the script ticking along until either the process has exited or the script has timed out (see the TimeoutStopSec directive in systemd.service(5)) and systemd has killed both it and the service itself.
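The whole FIFO handshake can be re-enacted in a self-contained shell snippet (throwaway paths, a background subshell standing in for the daemon, and a made-up ‘shutdown’ command – this is not the actual proprietarycrapd-stop script):

```shell
#!/bin/sh
# Toy re-enactment of the stop sequence: send a command down a control FIFO,
# then poll until the "daemon" is done.
fifo=$(mktemp -u)
mkfifo "$fifo"
( read -r cmd < "$fifo"; echo "$cmd" > "$fifo.result" ) &  # stand-in daemon
daemon=$!
echo shutdown > "$fifo"    # what ExecStop would write to pcd.control
# kill -0 polling as in the loop above; the extra file test just guarantees
# this toy loop terminates even if the shell is slow to reap its child.
while kill -0 "$daemon" 2> /dev/null && [ ! -s "$fifo.result" ]; do
    sleep 1
done
got=$(cat "$fifo.result")
rm -f "$fifo" "$fifo.result"
```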

Acknowledgements: with many thanks to steelman for having figured out the StandardInput=socket bit in particular and having let me bounce my ideas off him in general.

Let’s say that you need to run on your system some sort server software which instead of daemonising, has a command console permanently attached to standard input. Let us also say that said console is the only way for the administrator to interact with the service, including requesting its orderly shutdown – whoever has written it has not implemented any sort of signal handling so sending SIGTERM to the service process causes it to simply drop dead, potentially losing data in the process. And finally, let us say that the server in question is proprietary software so it isn’t really possible for you to fix any of the above in the source code (yes, I am talking about a specific piece of software – which by the way is very much alive and kicking as of late 2020). What do you do?

According to the collective wisdom of World Wide Web, the answer to this question is “use a terminal multiplexer like tmux or screen“, or at the very least a stripped-down variant of same such as dtach. OK, that sort of works – what if you want to run it as a proper system-managed service under e.g. OpenRC? The answer of the Stack Exchange crowd: have your init script invoke the terminal multiplexer. Oooooookay, how about under systemd, which actually prefers services it manages not to daemonise by itself? Nope, still “use a terminal multiplexer”.

What follows is my attempt to run a service like this under systemd more efficiently and elegantly, or at least with no extra dependencies beyond basic Unix shell commands.

Let us have a closer look at what systemd does with standard I/O of processes it spawns. The man page systemd.exec(5) tells us that what happens here is controlled by the directives StandardInput, StandardOutput and StandardError. By default the former is assigned to null while the latter two get piped to the journal, there are however quite a few other options here. According to the documentation, here is what systemd allows us to connect to standard input:

    • we are not interested in null (for obvious reasons) or any of the tty options (the whole point of this exercise is to run fully detached from any terminals);
    • data would work if we needed to feed some commands to the service when it starts but is useless for triggering a shutdown;
    • file looks promising – just point it to a FIFO on the file system and we’re all set – but it doesn’t actually take care of creating the FIFO for us. While we could in theory work around that by invoking mkfifo (and possibly chown if the service is to run as a specific user) in ExecStartPre, let’s see if we can find a better option
    • socket “is valid in socket-activated services only” and the corresponding socket unit must “have Accept=yes set”. What we want is the opposite, i.e. for the service to create its socket
    • finally, there is fd – which seems to be exactly what we need. According to the documentation all we have to do is write a socket unit creating a FIFO with appropriate ownership and permissions, make it a dependency of our service using the Sockets directive, and assign the corresponding named file descriptor to standard input.

Let’s try it out. To begin with, our socket unit “proprietarycrapd.socket”. Note that I have successfully managed to get this to work using unit templates as well – %i expansion works fine both here and while specifying unit or file-descriptor names in the service unit – but in order to avoid any possible confusion caused by the fact that socket-activated services explicitly require being defined with templates, I have based my example on static units:

[Unit]
Description=Command FIFO for proprietarycrapd

[Socket]
ListenFIFO=/run/proprietarycrapd/pcd.control
DirectoryMode=0700
SocketMode=0600
SocketUser=pcd
SocketGroup=pcd
RemoveOnStop=true

Apart from the fact the unit in question has got no [Install] section (which makes sense given we want this socket to only be activated by the corresponding service, not by systemd itself), nothing out of the ordinary here. Note that since we haven’t used the directive FileDescriptorName, systemd will apply default behaviour and give the file descriptor associated with the FIFO the name of the socket unit itself.

And now, our service unit “proprietarycrapd.service”:

[Unit]
Description=proprietarycrap daemon
After=network.target

[Service]
User=pcd
Group=pcd
Sockets=proprietarycrapd.socket
StandardInput=socket
StandardOutput=journal
StandardError=journal
ExecStart=/opt/proprietarycrap/bin/proprietarycrapd
ExecStop=/usr/local/sbin/proprietarycrapd-stop

[Install]
WantedBy=multi-user.target

StandardInput=socket??? Whatever’s happened to StandardInput=fd:proprietarycrapd.socket??? Here is an odd thing. If I use the latter on my system, the service starts fine and gets the FIFO attached to its standard input – but when I try to stop the service the journal shows “Failed to load a named file descriptor: No such file or directory”, the ExecStop command is not run and systemd immediately fires a SIGTERM at the process. No idea why. Anyway, through trial and error I have found out that StandardInput=socket not only works fine in spite of being used in a service that is not socket-activated but actually does exactly what I wanted to achieve – so that is what I have ended up using.

Which brings us to the final topic, the ExecStop command. There are three reasons why I have opted for putting all the commands required to shut the server down in a shell script:

    • first and foremost, writing the shutdown command to the FIFO will return right away even if the service takes time to shut down. systemd sends SIGTERM to the unit process as soon as the last ExecStop command has exited, so we have to follow the echo with something that waits for the server process to finish (see below);
    • systemd does not execute Exec commands in a shell, so simply running echo > /run/proprietarycrapd/pcd.control doesn’t work – we would have to wrap the echo call in an explicit invocation of a shell;
    • between the aforementioned two reasons and the fact that the particular service for which I have created these units actually requires several commands in order to execute an orderly shutdown, I have decided that putting all those commands in a script file instead of cramming them into the unit would be much cleaner.
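For the record, if a single inline command had been enough, the echo would have had to be wrapped in an explicit shell invocation along these lines (the “stop” command written to the FIFO is a made-up example standing in for whatever the daemon actually understands):

```ini
ExecStop=/bin/sh -c 'echo stop > /run/proprietarycrapd/pcd.control'
```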

The shutdown script itself is mostly unremarkable so I’ll only quote the bit responsible for waiting for the server to actually shut down. At present I am still looking for a way of doing this in a blocking fashion without adding more dependencies (wait only works on child processes of the current shell, the server in question does not create any lock files to which I could attach inotifywait, and attaching the latter to the relevant directory in /proc does not work) but in the meantime, the loop

while kill -0 "${MAINPID}" 2> /dev/null; do
    sleep 1s
done

keeps the script ticking along until either the process has exited or the script has timed out (see the TimeoutStopSec directive in systemd.service(5)) and systemd has killed both it and the service itself.
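To see the loop in action outside systemd, here is a self-contained sketch with a short-lived stand-in process instead of the real daemon (in the actual script, MAINPID is provided by systemd in the ExecStop environment):

```shell
#!/bin/sh
# Block until the process with the given PID has exited.
wait_for_exit() {
    while kill -0 "$1" 2> /dev/null; do
        sleep 1s
    done
}

# Demonstration: a background sleep stands in for the daemon.
sleep 2 &
demo_pid=$!
wait_for_exit "$demo_pid"
echo "process ${demo_pid} has exited"
```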

Acknowledgements: with many thanks to steelman for having figured out the StandardInput=socket bit in particular and having let me bounce my ideas off him in general.

September 15 2020

Distribution kernel for Gentoo

Gentoo News (GentooNews) September 15, 2020, 5:00

The Gentoo Distribution Kernel project is excited to announce that our new Linux Kernel packages are ready for a wide audience! The project aims to create a better Linux Kernel maintenance experience by providing ebuilds that can be used to configure, compile, and install a kernel entirely through the package manager as well as prebuilt binary kernels. We are currently shipping three kernel packages:

  • sys-kernel/gentoo-kernel - providing a kernel with genpatches applied, built using the package manager with either a distribution default or a custom configuration
  • sys-kernel/gentoo-kernel-bin - prebuilt version of gentoo-kernel, saving time on compiling
  • sys-kernel/vanilla-kernel - providing a vanilla (unmodified) upstream kernel

All the packages install the kernel as part of the package installation process — just like the rest of your system! More information can be found in the Gentoo Handbook and on the Distribution Kernel project page. Happy hacking!
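For instance, switching to the prebuilt kernel can be a single package-manager step (one possible invocation; see the Handbook for the complete procedure, including bootloader configuration):

```
# Install the prebuilt distribution kernel through the package manager:
emerge --ask sys-kernel/gentoo-kernel-bin
```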

September 12 2020

New vulnerability fixes in Python 2.7 (and PyPy)

Michał Górny (mgorny) September 12, 2020, 20:13

As you probably know (and aren’t necessarily happy about it), Gentoo is actively working on eliminating Python 2.7 support from packages by the end of 2020. Nevertheless, we are going to keep the Python 2.7 interpreter around much longer because of some build-time dependencies. While we do that, we consider it important to keep Python 2.7 as secure as possible.

The last Python 2.7 release was in April 2020. Since then, at least Gentoo and Fedora have backported the CVE-2019-20907 (infinite loop in tarfile) fix to it, mostly because the patch from Python 3 applied cleanly to Python 2.7. I’ve indicated before that Python 2.7 may contain more vulnerabilities, and two days ago I finally got to audit it properly as part of bumping PyPy.

The result was finding two more vulnerabilities that had been discovered in Python 3.6, and backporting fixes for them: CVE-2020-8492 (ReDoS in basic HTTP auth handling) and bpo-39603 (header injection via HTTP method). I am pleased to announce that Gentoo is probably the first distribution to address these issues, and our Python 2.7.18-r2 should not contain any known vulnerabilities. Of course, this doesn’t mean it’s safe from undiscovered problems.

While at it, I’ve also audited PyPy. Sadly, all current versions of PyPy2.7 were vulnerable to all the aforementioned issues, plus partially to CVE-2019-18348 (header injection via hostname, fixed in 2.7.18). PyPy3.6 was even worse, missing 12 fixes from CPython 3.6. All these issues have now been fixed in PyPy’s Mercurial repository, and the fixes should be part of the final 7.3.2 release.

September 09 2020

New Packages site features

Gentoo News (GentooNews) September 09, 2020, 5:00

Our packages.gentoo.org site has recently received major feature upgrades thanks to the continued efforts of Gentoo developer Max Magorsch (arzano). Highlights include:

  • Tracking Gentoo bugs of specific packages (Bugzilla integration)
  • Tracking available upstream package versions (Repology integration)
  • QA check warnings for specific packages (QA reports integration)

Additionally, an experimental command-line client for packages.gentoo.org named “pgo” is in preparation, particularly for our users with accessibility needs.

September 07 2020

py3status v3.29

Alexys Jacob (ultrabug) September 07, 2020, 20:39

Almost 5 months after the previous release (thank you COVID) I’m pleased and relieved to have finally packaged and pushed py3status v3.29 to PyPI and Gentoo portage!

This release comes with a lot of interesting contributions from quite a bunch of first-time contributors so I thought that I’d thank them first for a change!

Thank you contributors!

  • Jacotsu
  • lasers
  • Marc Poulhiès
  • Markus Sommer
  • raphaunix
  • Ricardo Pérez
  • vmoyankov
  • Wilmer van der Gaast
  • Yaroslav Dronskii

So what’s new in v3.29?

Two new exciting modules are in!

  • prometheus module: to display your promQL queries on your bar
  • watson module: for the watson time-tracking tool

Then, some interesting bug fixes and enhancements are worth noting

  • py3.requests: return empty json on remote server problem fix #1401
  • core modules: remove deprecated function, fix type annotation support (#1942)

Some modules also got improved

  • battery_level module: add power consumption placeholder (#1939) + support more battery paths detection (#1946)
  • do_not_disturb module: change pause default from False to True
  • mpris module: implement broken chromium mpris interface workaround (#1943)
  • sysdata module: add {mem,swap}_free, {mem,swap}_free_unit, {mem,swap}_free_percent + try to use default intel/amd sensors first
  • google_calendar module: fix imports for newer google-python-client-api versions (#1948)
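As an illustration, the new sysdata placeholders could be used in a py3status configuration along these lines (a sketch; the format string here is my own example, not the module’s default):

```
sysdata {
    format = "free: {mem_free} {mem_free_unit} ({mem_free_percent}%)"
}
```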

Next version of py3status will certainly drop support for EOL Python 3.5!

September 05 2020

Portage 3.0 stabilized

Gentoo News (GentooNews) September 05, 2020, 5:00

We have good news! Gentoo’s Portage project has recently stabilized version 3.0 of the package manager.

What’s new? Well, this third version of Portage removes support for Python 2.7, which has been an ongoing effort across the main Gentoo repository by Gentoo’s Python project during 2020 (see this blog post).

In addition, thanks to a user-provided patch, updating to the latest version of Portage can speed up dependency calculations by around 50-60%. We love to see our community engaging in our software! For more details, see this Reddit post from the community member who provided the patch. Stay healthy and keep cooking with Gentoo!

September 02 2020

New tools to help with package cleanups

Michał Górny (mgorny) September 02, 2020, 7:12

Have you ever had Croaker shout at you because you removed an old version that happened to be still required by some other package? Did you have to run your cleanups past (slow-ish) CI just to avoid that? If you did, I have just released app-portage/mgorny-dev-scripts version 6, which has a tool just for that!

check-revdep to check depgraph of reverse dependencies

If you have used mgorny-dev-scripts before, you may already know about the rdep tool, which prints reverse dependency information collected from qa-reports.gentoo.org. Now I’ve put it into a trivial pipeline with pkgcheck and made check-revdep. The idea is really simple: it fetches the list of reverse dependencies from the server, filters it through qatom (from app-portage/portage-utils) and passes it to pkgcheck scan -c VisibilityCheck.

So you do something like:


$ cd dev-python/unidecode
$ git rm unidecode-0.04.21.ebuild 
rm 'dev-python/unidecode/unidecode-0.04.21.ebuild'
$ check-revdep 
== rdep of dev-python/unidecode ==
== ddep of dev-python/unidecode ==
== bdep of dev-python/unidecode ==
== pdep of dev-python/unidecode ==
cat: /tmp/pindex/dev-python/unidecode: No such file or directory
dev-python/awesome-slugify
  NonexistentDeps: version 1.6.5: RDEPEND: nonexistent package: <dev-python/unidecode-0.05
  NonsolvableDepsInDev: version 1.6.5: nonsolvable depset(rdepend) keyword(~amd64) dev profile (default/linux/amd64/17.0/no-multilib/prefix/kernel-3.2+) (33 total): solutions: [ <dev-python/unidecode-0.05 ]
  NonsolvableDepsInStable: version 1.6.5: nonsolvable depset(rdepend) keyword(~amd64) stable profile (default/linux/amd64/17.0) (38 total): solutions: [ <dev-python/unidecode-0.05 ]
[...]
$ git restore --staged --worktree .

…and you know you can’t clean it up.

Warning: the tooling uses data from qa-reports that is updated periodically. If the data is not up-to-date (read: someone just added a dependency on your package), check-revdep may miss something.

Enable cache to speed things up

rdep also supports using a local cache to avoid fetching everything from the server (= 4 requests per package). To populate the cache with the current data from the server, just run:

$ rdep-fetch-cache 
--2020-09-02 09:00:05--  https://qa-reports.gentoo.org/output/genrdeps/rdeps.tar.xz
Resolving qa-reports.gentoo.org (qa-reports.gentoo.org)... 140.211.166.190, 2001:470:ea4a:1:230:48ff:fef8:9fdc
Connecting to qa-reports.gentoo.org (qa-reports.gentoo.org)|140.211.166.190|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 1197176 (1,1M) [application/x-xz]
Saving to: ‘STDOUT’

-                                    100%[=====================================================================>]   1,14M   782KB/s    in 1,5s    

2020-09-02 09:00:08 (782 KB/s) - written to stdout [1197176/1197176]

The script will fetch the data as a tarball from the server, and unpack it to /tmp/*index.

pkgcheck also has its own caching, so successive checks will run faster if packages don’t change.

Combining rdep with other tools

You can also pass rdep output through to other tools, such as eshowkw (app-portage/gentoolkit) or gpy-showimpls (app-portage/gpyutils). The recommended pipeline is:

$ rdep $(pkg) | grep -v '^\[B' | xargs qatom -C -F '%{CATEGORY}/%{PN}' | sort -u | xargs gpy-showimpls | less
== rdep of dev-python/unidecode ==
== ddep of dev-python/unidecode ==
== bdep of dev-python/unidecode ==
== pdep of dev-python/unidecode ==
cat: /tmp/pindex/dev-python/unidecode: No such file or directory
app-misc/khard:0
          0.13.0: S             3.7 3.8
          0.17.0: ~             3.7 3.8 3.9
app-text/pelican:0
           3.7.1: S   #     3.6
           4.0.0: ~   #     3.6
           4.0.1: ~   #     3.6
           4.1.2: ~   #     3.6
           4.2.0: S         3.6 3.7
            9999:           3.6 3.7
dev-python/awesome-slugify:0
           1.6.5: ~         3.6 3.7 3.8
dev-python/pretty-yaml:0
          20.4.0: S         3.6 3.7 3.8 3.9
dev-python/python-slugify:0
           1.2.6: S   #     3.6 3.7 3.8 3.9
           4.0.1: S         3.6 3.7 3.8 3.9
media-sound/beets:0
        1.4.9-r2: ~ s       3.6 3.7 3.8
            9999:   s       3.6 3.7 3.8
www-apps/nikola:0
          7.8.15: S         3.6
       7.8.15-r1: ~   #     3.6
           8.0.4: ~   #     3.6 3.7 3.8
           8.1.0: ~   #     3.6 3.7 3.8
           8.1.1: ~   #     3.6 3.7 3.8
        8.1.1-r1: ~         3.6 3.7 3.8

Getting redundant versions from pkgcheck

Another nice trick is to have pkgcheck scan for redundant versions, and output them in a format convenient for machine use.

For example, I often use:

$ git grep -l mgorny@ '**/metadata.xml' | cut -d/ -f1-2 | uniq | xargs pkgcheck scan -c RedundantVersionCheck -R FormatReporter --format '( cd {category}/{package} && eshowkw -C )' | sort -u | bash - |& less

that gives eshowkw output for all packages maintained by me that have potentially redundant versions. Plus, in another terminal:

$ git grep -l mgorny@ '**/metadata.xml' | cut -d/ -f1-2 | uniq | xargs pkgcheck scan -c RedundantVersionCheck -R FormatReporter --format 'git rm {category}/{package}/{package}-{version}.ebuild' | less

that gives convenient commands to copy-paste-execute.
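To see what that FormatReporter template expands to, here is the substitution for a single hypothetical result (the package name and version are made up; pkgcheck itself does the substitution via Python string formatting, the shell variables below merely reproduce the result):

```shell
# Simulate the placeholder substitution done by FormatReporter
# for one report; the values are invented for illustration.
category=dev-python
package=unidecode
version=0.04.21
echo "git rm ${category}/${package}/${package}-${version}.ebuild"
# prints: git rm dev-python/unidecode/unidecode-0.04.21.ebuild
```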

August 25 2020

Is an umbrella organization a good choice for Gentoo?

Michał Górny (mgorny) August 25, 2020, 17:59

The talk of joining an umbrella organization and disbanding the Gentoo Foundation (GF) has been recurring over the last years. To the best of my knowledge, some unofficial talks were held even earlier. However, so far the major obstacle to joining one was the bad standing of the Gentoo Foundation with the IRS. Now that this is hopefully out of the way, we can start actively working towards it.

But why would we want to join an umbrella in the first place? Isn’t having our own dedicated Foundation better? I believe that an umbrella is better for three reasons:

  1. Long-term sustainability. A dedicated professional entity that supports multiple projects has better chances than a small body run by volunteers from the developer community.
  2. Cost efficiency. Less money spent on organizational support, more money for what really matters to Gentoo.
  3. Added value. Umbrellas can offer us services and status that we currently haven’t been able to achieve.

I’ll expand on all three points.

Long-term sustainability

As you probably know by now, the Gentoo Foundation was not handled properly in the past. For many years, we have failed to file the necessary paperwork or pay due taxes. Successive boards of Trustees have either ignored the problem or were unable to resolve it. Only recently have we finally managed to come clean.

Now, many people point out that since we’re clean now, the problem is solved. However, I would like to point out that our good standing currently depends on one person doing the necessary bookkeeping, and a professional CPA doing the filings for us. The former means a bus factor of one, the latter means expenses. So far all efforts to train a backup have failed.

My point is that as long as the Foundation exists, we need to rely either on volunteers or on commercial support to keep it running. If we fail, it could be a major problem for Gentoo. We might not get away with it the next time. What’s more important, if we get into bad standing again, the chances of an umbrella taking us in would decrease.

Remember that the umbrellas that interest us were founded precisely to support open source projects. They have professional staff to handle the legal and financial affairs of their members. The Gentoo Foundation, on the other hand, is staffed by Gentoo developers: programmers and scientists, but not really bookkeepers or lawyers. Sure, many of us run small companies, but so far we have lacked volunteers equipped and willing to seriously handle the GF.

Cost efficiency

So far I’ve been focusing on the volunteer-run Foundation. However, if we lack capable volunteers, we can always rely on commercial support. The problem is that it’s really expensive. Admittedly, being part of an umbrella is not free either, but so far it seems that even the costliest umbrellas are cheaper than being on our own. Let’s crunch some numbers!

Right now we’re already relying on a CPA to handle our filings. For a commercial company (we are one now), the cost is $1500 a year. If we wanted to go for a proper non-profit, the estimated cost is between $2000 and $3000 a year.

If we were to pass full accounting to an external company, the rough estimate I’ve been given by the Trustees is $2400. So once our volunteer bookkeeper retires, we’re talking about around $4000 plus larger taxes for a corporation, or $4500 to $5500 plus very little tax for a non-profit.

How does that compare to our income? I’ve created the following chart according to the financial reports.

The chart is focused on estimating the expected cash income within each particular year. Therefore, commission back payments were omitted from it. In the full version, GSoC back payments were moved to their respective years too.

Small donations are the key point here, as they are more reliable than the other sources of income. Over the years, they have varied between $5000 and $12000, amounting to $7200 on average. We have also had a few larger (>$1000) donations, but we can’t rely on those in the coming years (especially since there were none in FY2020). The next major source of income was Google Summer of Code, which I’ve split into cash and travel reimbursement. Only the former counts towards actual cash, and again, we can’t rely on it happening in the future. Interest and commission have minimal impact.

The point is, full bookkeeping services come dangerously close to our baseline annual income. On average, they would eat half of our budget! In 2014, if not for large donations (which are pretty much a 0/1 thing), we would have ended up with a loss. We’re talking about a situation where we could end up spending more on organizational overhead than on Gentoo!

Even if we take the optimistic approach, we’re talking about costs of around 20% to 45% of income, going by the past years. This is much more than the 10% taken by the SFC (and the SFC isn’t exactly cheap).

Added value

So far I’ve been focusing on the effort/money necessary to keep the Gentoo Foundation as-is. That is, a for-profit corporation that spends some money on Infrastructure and CPA, and whose biggest non-infra investment in Gentoo was the Nitrokey giveaway.

Over the recent years, the possibility of becoming a non-profit was discussed. The primary advantages of that would be tax deductions for the Foundation, and tax deductions for donors in the USA (hopefully convincing more people to donate). However, becoming a non-profit is non-trivial, requires additional effort and most likely increases maintenance costs. That is, if our application is not rejected like the Yorba Foundation’s was. On the other hand, if we join a non-profit umbrella (such as the SFC), we get that as part of the deal!

Another interesting point is increasing actual spending on Gentoo, particularly by issuing bounties on actual development work. If we were to become a non-profit, some legal advice would be greatly desirable here and again, that’s something umbrellas offer. On the other hand, if we spend more and more money on keeping the Gentoo Foundation alive we probably won’t have much to spend on this anyway.

So why keep GF alive?

That’s precisely the question. Some developers argue that an external umbrella could try to take control of Gentoo, and limit our freedom. However, given that we’re going to sign a specific contract with an umbrella, I don’t see this as very likely.

On the other hand, keeping the GF alive doesn’t guarantee Gentoo autonomy either — given the lack of interest in becoming a Trustee, it is possible that the Foundation will eventually be taken over by people who want to aggressively take control of Gentoo against the will of the greater community. In fact, until very recently you could become a Trustee without getting a single vote of support if there were not enough candidates to compete over the seats (and there usually weren’t).

Then, there are snarky people who believe that the GF exists so that non-developers could reap negligible profits from Foundation membership, and people who would never be voted into the Council could win Trustee elections and enhance their CVs.

In any case, I think that the benefits of an umbrella organization outweigh the risks. I believe sustainability is the most important value here — a reasonable guarantee that Gentoo will not get into trouble in a few years because we couldn’t manage to find volunteers to run the Foundation or money to cover the accounting costs.

The talk of joining an umbrella organization and disbanding the Gentoo Foundation (GF) has been recurring over the last years. To the best of my knowledge, some unofficial talks were even held earlier. However, so far the major obstacle to joining one has been the bad standing of the Gentoo Foundation with the IRS. Now that this is hopefully out of the way, we can start actively working towards it.

But why would we want to join an umbrella in the first place? Isn’t having our own dedicated Foundation better? I believe that an umbrella is better for three reasons:

  1. Long-term sustainability. A dedicated professional entity that supports multiple projects has better chances than a small body run by volunteers from the developer community.
  2. Cost efficiency. Less money spent on organizational support, more money for what really matters to Gentoo.
  3. Added value. Umbrellas can offer us services and status that we currently haven’t been able to achieve.

I’ll expand on all three points.

Long-term sustainability

As you probably know by now, the Gentoo Foundation was not handled properly in the past. For many years, we have failed to file the necessary paperwork or pay due taxes. Successive boards of Trustees have either ignored the problem or were unable to resolve it. Only recently have we finally managed to come clean.

Now, many people point out that since we’re clean now, the problem is solved. However, I would like to point out that our good standing currently depends on one person doing the necessary bookkeeping, and a professional CPA doing the filings for us. The former means a bus factor of one, the latter means expenses. So far all efforts to train a backup have failed.

My point is, as long as the Foundation exists, we need to rely either on volunteers or on commercial support to keep it running. If we fail, it could be a major problem for Gentoo; we might not get away with it the next time. More importantly, if we get into bad standing again, the chances of an umbrella taking us in would decrease.

Remember that the umbrellas that interest us were founded precisely to support open source projects. They have professional staff to handle the legal and financial affairs of their members. The Gentoo Foundation, on the other hand, is staffed by Gentoo developers: programmers and scientists, but not really bookkeepers or lawyers. Sure, many of us run small companies, but so far we have lacked volunteers equipped and willing to seriously handle the GF.

Cost efficiency

So far I’ve been focusing on the volunteer-run Foundation. However, if we lack capable volunteers, we can always rely on commercial support. The problem is that it’s really expensive. Admittedly, being part of an umbrella is not free either, but so far it seems that even the costliest umbrellas are cheaper than being on our own. Let’s crunch some numbers!

Right now we’re already relying on a CPA to handle our filings. For a commercial company (which we are now), the cost is $1500 a year. If we wanted to go for a proper non-profit, the estimated cost is between $2000 and $3000 a year.

If we were to pass full accounting to an external company, the rough estimate I’ve been given by the Trustees is $2400. So once our volunteer bookkeeper retires, we’re talking about around $4000 plus larger taxes for a corporation, or $4500 to $5500 plus very little tax for a non-profit.

How does that compare to our income? I’ve created the following chart according to the financial reports.

Gentoo Foundation income chart

The chart is focused on estimating expected cash income within each particular year. Therefore, commission back payments were omitted from it. In the full version (linked from the chart), GSoC back payments were moved to their respective years too.

Small donations are the key point here, as they are more reliable than other sources of income. Over the years, they have varied between $5000 and $12000, amounting to $7200 on average. We have had a few larger (>$1000) donations, but we can’t rely on these in the coming years (especially since there were none in FY2020). The next major source of income was Google Summer of Code, which I’ve split into cash and travel reimbursement. Only the former counts towards actual cash, and again, we can’t rely on it happening in the future. Interest and commission have minimal impact.

The point is, full bookkeeping services come dangerously close to our baseline annual income. On average, they would eat half of our budget! In 2014, if not for large donations (which are pretty much an all-or-nothing thing), we would have ended up with a loss. We’re talking about a situation where we could end up spending more on organizational overhead than on Gentoo!

Even if we take the optimistic approach, we’re talking about costs of around 20% to 45% of income, judging by past years. This is much more than the 10% taken by the SFC (and the SFC isn’t exactly cheap).
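To make that range concrete, here is a rough back-of-the-envelope sketch. The cost figures come from the text above; the income range is my own assumption, combining small donations ($5000–$12000) with the other income sources mentioned:

```python
# Estimated annual organizational costs (figures from the post)
cpa_filing = 1500      # CPA filings for a for-profit corporation
bookkeeping = 2400     # rough estimate for external bookkeeping
total_cost = cpa_filing + bookkeeping   # ~$4000 in round figures

# Assumed total annual income range (NOT from the official reports)
income_good_year = 20000   # donations plus GSoC and large gifts
income_lean_year = 9000    # little beyond small donations

print(f"optimistic: {total_cost / income_good_year:.0%} of income")
print(f"pessimistic: {total_cost / income_lean_year:.0%} of income")
```

Under these assumed bounds the overhead lands at roughly 20% to 43% of income, consistent with the range quoted above.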

Added value

So far I’ve been focusing on the effort/money necessary to keep the Gentoo Foundation as-is. That is, a for-profit corporation that spends some money on Infrastructure and CPA, and whose biggest non-infra investment in Gentoo was the Nitrokey giveaway.

Over recent years, the possibility of becoming a non-profit has been discussed. The primary advantages would be tax exemption for the Foundation, and tax deductions for donors in the USA (hopefully convincing more people to donate). However, becoming a non-profit is non-trivial, requires additional effort, and most likely increases maintenance costs. That is, if our application is not rejected as the Yorba Foundation’s was. On the other hand, if we join a non-profit umbrella (such as the SFC), we get that as part of the deal!

Another interesting point is increasing actual spending on Gentoo, particularly by issuing bounties for actual development work. If we were to become a non-profit, some legal advice would be very desirable here, and again, that’s something umbrellas offer. On the other hand, if we spend more and more money on keeping the Gentoo Foundation alive, we probably won’t have much left to spend on this anyway.

So why keep GF alive?

That’s precisely the question. Some developers argue that an external umbrella could try to take control of Gentoo and limit our freedom. However, given that we would sign a specific contract with an umbrella, I don’t see this as very likely.

On the other hand, keeping the GF alive doesn’t guarantee Gentoo autonomy either. Given the lack of interest in becoming a Trustee, it is possible that the Foundation will eventually be taken over by people who want to aggressively take control of Gentoo against the will of the greater community. In fact, until very recently you could become a Trustee without getting a single vote of support if there were not enough candidates to compete for seats (and there usually weren’t).

Then there are snarky people who believe that the GF exists so that non-developers can reap negligible profits from Foundation membership, and so that people who would never be voted onto the Council can win Trustee elections and enhance their CVs.

In any case, I think that the benefits of an umbrella organization outweigh the risks. I believe sustainability is the most important value here — a reasonable guarantee that Gentoo will not get into trouble in a few years because we couldn’t manage to find volunteers to run the Foundation or money to cover the accounting costs.

August 02 2020

Why proactively clean Python 2 up?

Michał Górny (mgorny) August 02, 2020, 10:18

It seems a recurring complaint that we’re too aggressive on cleaning Python 2 up from packages. Why remove it if the package’s upstream still supports py2? Why remove it when it still works? Why remove it when somebody’s ready to put in some work to keep it working?

I’m pretty sure that you’re aware that Python 2 has finally reached its end-of-life. It’s past its last release, and the current version is most likely vulnerable. We know we can’t remove it entirely just yet (but the clock is ticking!), so why remove its support here and there instead of keeping it some more?

This is best explained with the example of dev-python/twisted — but dev-python/pillow is also quite similar. Twisted upstream removed support for Python 2 in version 20. This means that we ended up having to keep two versions of Twisted — 19, which still supports Python 2, and 20, which does not. What does that mean for our users?

Firstly, they can’t normally upgrade Twisted if at least one of its reverse dependencies supports Python 2 and is installed. What’s important is that the user does not have to meaningfully need or use Python 2 in that reverse dependency. It is entirely sufficient that it supports Python 2 and the user is using default PYTHON_TARGETS.

Of course, you could argue that changing the default PYTHON_TARGETS would resolve the problem without having to proactively remove Python 2 from Twisted revdeps. Today, I’m not sure which of the two options is better. However, back when the cleanup started, changing the default PT would have involved a lot of pain for the users. We’d have had to reenable 2.7 via package.use for many packages (but which ones?), or the users would have had to reenable it themselves. But that’s really tangential now.
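For illustration, re-enabling Python 2.7 per package via package.use would have looked something like this (the file name and package choice are examples, not taken from the post):

```shell
# /etc/portage/package.use/python2 (any file under package.use works)
# keep building Twisted against Python 2.7 in addition to the defaults
dev-python/twisted python_targets_python2_7
```

Alternatively, setting PYTHON_TARGETS in make.conf changes the default for all packages at once, which is exactly the global switch discussed above.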

Secondly, when upstream stops supporting the old version, the maintenance cost rises quickly. Since we don’t allow mixing two versions easily (and I don’t really want to go down that path), a single version must provide all implementations that the union of its reverse dependencies requires. This meant that I had to put significant effort into fixing Python 3.8 and 3.9 support in Twisted 19.

Thirdly, old versions tend to end up becoming vulnerable. This is now the case both with Twisted and Pillow! In both cases, we can’t clean up vulnerable versions yet because they still have unresolved Python 2 reverse dependencies. We have a pretty descriptive phrase for this kind of situation in Polish: «to wake up with your hand in the potty».

What’s my point here? Removing Python 2 proactively means removing it at our leisure. We start with packages that don’t need it (because they fully support Python 3), we unlock the removal in their dependencies, we clean these dependencies… and when one of the upstreams decides to remove it, we don’t have to do anything because we’ve already done that and resolved all the issues. And we don’t have to worry about having to quickly clean up the depgraph and remove vulnerable versions or perform non-trivial backports.

July 20 2020

Updated Gentoo RISC-V stages

Andreas K. Hüttel (dilfridge) July 20, 2020, 16:28
I finally got around to updating the experimental riscv stages. You can find the result on our webserver. All stages use the rv64gc instruction set; there is a multilib stage with both lp64 and lp64d support, and there are non-multilib stages for both the lp64 and lp64d ABIs. Please test, and report bugs if anything doesn't work.
As for the technical details, the stages are built using qemu-user on a big and beefy Gentoo amd64 AWS instance. We are currently working on automating that process, so that riscv (and potentially also arm and others) gets the same level of support as amd64 and friends. Thanks a lot to Amazon for the credits via their open source promotional program!

July 07 2020

Gentoo on Android 64-bit release

Gentoo News (GentooNews) July 07, 2020, 5:00

Gentoo Project Android is pleased to announce a new 64-bit release of the stage3 Android prefix tarball. This is a major release after 2.5 years of development, featuring gcc-10.1.0, binutils-2.34 and glibc-2.31. Enjoy Gentoo in your pocket!

July 04 2020

gentoo tinderbox

Agostino Sarubbo (ago) July 04, 2020, 13:03

If you are visiting this page, it is very likely that the software you maintain has been analyzed by my tinderbox system.

What is a tinderbox?

It is a machine that compiles 24/7, aiming to find build failures, test failures, QA issues, and so on in the portage tree.
It can be differentiated into:

– tinderbox
– ci


TINDERBOX:

It compiles the entire portage tree against a particular change like:
– a new version of compiler/libc/linker
– a new C/CXX/LD FLAG
– a different toolchain like clang/llvm/lld
– and so on

In short it uses uncommon but supported settings and looks for breakage.

CI:

It is a continuous integration system; it compiles packages after they have been touched in gentoo.git.

The CI system uses a standard set of settings, so if you get a bug report from it, it is very likely that the failure is reproducible for users too.

What are the rules you should know about when you see a report from these systems?

1) The reports are filed automatically.
2) Because of that, it is not possible for me to set an exact error in the bug summary; a general error is used instead.
3) Because of the above, the maintainer is encouraged to set an appropriate summary at their convenience.
4) Common additional logs (like test-suite.log, testlog.txt, CMakeOutput.log, CMakeError.log, LastTest.log, config.log, testsuite.log, autoconf.out) are automatically attached, but because of point 1, if you need something else, please ask for it.
5) If you ask for another log, I have to stop the tinderbox service, so there may be a delay between your request and my reaction.
6) There may be an internal reference between round brackets on the “Discovered on” line. This is for me to understand where that failure was reproduced.
7) If you see ‘ci’ as the internal reference after you pushed a fix, it is very probable that the bug still exists, or that there is another failure in the same ebuild phase. Please inspect the build log deeply; point 8 may help you with that.
8) At the beginning of the build log, the git SHA of the repository at the time of emerging is provided. For convenience there is a link.
9) To avoid making a separate attachment on bugzilla, the ‘emerge --info’ output is included at the beginning of the build log; please check it DEEPLY to understand the system configuration and what differs with respect to a more ‘standard’ system.
10) If you see a compressed build log, it is because the plain text version exceeds the limit on our bugzilla (1MB).
11) This system is not perfect. There may be duplicates or invalid bugs.
12) My best suggestion is to try to reproduce the issue on an empty stage3 (or docker for convenience).
13) When you close the bug with a resolution different from RESOLVED/FIXED, please do not be cryptic.
14) If new points are added, there may be a mention like “Valid from YY:MM:DD”.
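As a sketch of point 12, a clean environment can be had via the official Gentoo Docker image. The image name and CATEGORY/PACKAGE below are placeholders; match the emerge --info from the report before emerging:

```shell
# Start a throwaway container from the official stage3 image
docker run -it --rm gentoo/stage3 /bin/bash

# Inside the container: fetch the ebuild repository, then try to
# reproduce the failure (CATEGORY/PACKAGE is a placeholder)
emerge-webrsync
emerge --oneshot CATEGORY/PACKAGE
```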

How to fix the common errors:

1) Compile/build failure:
It depends on the error. Please get in touch with upstream if you are unsure.

2) Test failure:
It depends on the error. Please get in touch with upstream if you are unsure.

3) CFLAGS/LDFLAGS not respected:
You can touch the build system or inject the flags in the ebuild where possible. There are a lot of examples in the tracker.

4) -Wformat-security failure:
TBD

5) Metainfo installed in /usr/share/appdata:
Install metainfo files into /usr/share/metainfo instead of /usr/share/appdata.

6) Python modules that are not byte-compiled
TBD

7) Unrecognized configure options:
Remove the configure options from the ebuild where possible. Sometimes there are false positives related to the option passed to configure in subdirectories.

8) Compressed manpages and documentation:
Decompress documentation and install it as plain text.

9) Icon cache not updated:
TBD

10) Deprecated configure.in:
TBD

11) .desktop files that do not pass validation:
TBD

12) Paths that should be created at runtime:
TBD

13) Libraries that lack NEEDED entries:
TBD

14) Libraries that lack a SONAME:
TBD

15) Text relocation:
TBD

16) Toolchain binaries called directly (cc/gcc/g++/c++/nm/ar/ranlib/cpp/ld/strip/objcopy/objdump/size/as/strings/readelf and so on):
TBD

17) Files with names not encoded in UTF-8:
TBD

18) Files with broken symlinks:
TBD

19) Commands that do not exist:
TBD

20) Pkg-config files with wrong LDFLAGS:
TBD

21) Pre-stripped files:
TBD

22) File collision:
TBD

23) Compile failure if CPP is set to CC -E:
TBD

24) Compile failure with -fno-common:
TBD

25) Files with unresolved SONAME dependencies:
TBD

26) Files that contain insecure RUNPATHs:
TBD

27) Files installed into unexpected paths:
TBD

28) LD usage instead of CC/CXX:
TBD

29) Link failure with LLD because of /usr/lib:
TBD

30) Compilation in src_install phase:
TBD

31) Automake usage in maintainer-mode:
TBD

32) Mimeinfo cache not updated when .desktop files with MimeType are installed:
TBD

33) Broken png files installed:
TBD

34) Mime-info files installed without updating the mime-info cache:
TBD

35) Udev rules installed into wrong directory:
TBD
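Returning to fix 3 (CFLAGS/LDFLAGS not respected), one common pattern for Makefile-based packages is to force the variables on the make command line. A sketch of a hypothetical ebuild fragment, assuming the package's Makefile honors CC/CFLAGS/LDFLAGS as make variables:

```shell
# Fragment of a hypothetical ebuild (inherit normally goes at the top)
inherit toolchain-funcs

src_compile() {
	# Override the build system's hardcoded toolchain and flags
	emake CC="$(tc-getCC)" CFLAGS="${CFLAGS}" LDFLAGS="${LDFLAGS}"
}
```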

Official Gentoo Docker images

Gentoo News (GentooNews) July 04, 2020, 5:00

Did you already know that we have official Gentoo Docker images available on Docker Hub?! The most popular one is based on the amd64 stage. Images are created automatically; you can peek at the source code for this on our git server. Thanks to the Gentoo Docker project!

June 04 2020

Baïkal (CalDAV) 0.7.0 in Gentoo

Nathan Zachary (nathanzachary) June 04, 2020, 3:30

Just this past week, the new version of Baïkal (0.7.0)—a PHP CalDAV and CardDAV server based on Sabre—was released, and one of the key changes was added support for more modern versions of PHP (like 7.4).

Since my personal Gentoo server is running the ~amd64 branch, I had to wait for this release in order to get my CalDAV server up and running. For the most part, installing Baïkal 0.7.0 was a straightforward process, but there were a couple of “gotchas” along the way.

The first (and most confusing) problem came after the installation/initial configuration when I tried to access my newly-created user’s calendar via the URL:

https://dav.MYDOMAIN.com/html/dav.php/calendar/MYUSERNAME/default

I knew that something was wrong when it wouldn’t even prompt me for credentials. Instead, the logs indicated the following error message:

[Tue Jun 02 14:13:05.529805 2020] [proxy_fcgi:error] [pid 32165:tid 139743908050688] [client 71.81.87.208:38910] AH01071: Got error 'PHP message: LogicException: Requested uri (/html/dav.php) is out of base uri (/s/html/dav.php/) in /var/www/domains/MYDOMAIN/dav/htdocs/vendor/sabre/http/lib/Request.php:184

I couldn’t figure out where the “/s/” was coming in before the “/html” portion, but that was certainly the cause of the error message. I filed an issue for it, and though I still don’t know the source of the problem, I was able to work around it by adding a trailing slash to the DocumentRoot for that particular vhost:

# pwd && diff -Nut dav.MYDOMAIN.conf.PRE-20200602_docroot dav.MYDOMAIN.conf
/etc/apache2/vhosts.d/includes
--- dav.MYDOMAIN.conf.PRE-20200602_docroot	2020-06-02 17:23:20.246281195 -0400
+++ dav.MYDOMAIN.conf	2020-06-02 17:20:59.892270352 -0400
@@ -1,7 +1,7 @@
- DocumentRoot "/var/www/domains/MYDOMAIN/dav/htdocs"
+ DocumentRoot "/var/www/domains/MYDOMAIN/dav/htdocs/"

After solving that strange problem, I was at least prompted for credentials when I accessed the calendar URL from above. After logging in, I ran into one more problem, though:

Class 'XMLWriter' not found

This problem was much easier to fix. I simply needed to add the ‘xmlwriter‘ USE flag to dev-lang/php (I also added ‘xmlreader‘ for good measure), emerge it again, and restart PHP-FPM. Other distributions (like CentOS) will likely need to install the ‘php-xml’ package (or something similar).
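On Gentoo, the fix above can be sketched as follows (the package.use file name is arbitrary, and how you restart PHP-FPM depends on your init system):

```shell
# Enable the XML writer/reader extensions for PHP
echo "dev-lang/php xmlwriter xmlreader" >> /etc/portage/package.use/php

# Rebuild PHP, then restart PHP-FPM via your service manager
emerge --ask --oneshot dev-lang/php
```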

After that fix, I am happy to report that Baïkal 0.7.0 is working beautifully, and I have my calendars synced across all my devices. I personally use Thunderbird with Lightning on my computers, and a combination of DAVx5 with Simple Calendar Pro on my Android devices.
