May 05 2023

Gentoo Google Summer of Code (GSoC) for 2023

Gentoo Google Summer of Code (GSoC) May 05, 2023, 3:48

Gentoo is excited to announce that the Gentoo Google Summer of Code has accepted a group of talented contributors to participate in this year’s program. We extend our congratulations and welcome them aboard!

Google Summer of Code is a global program that provides a unique opportunity for students and young professionals to work on open-source projects under the guidance of experienced mentors.

We received a high volume of impressive applications from individuals around the world, each demonstrating their passion and skills for open-source projects. The selection process was challenging, but we are pleased to have accepted the following four contributors:

  • Alfred Persson Forsberg – IRC Handle: catcream
  • Berin Aniesh – IRC Handle: hyperedge
  • Stepan Kulikov – IRC Handle: labbrat
  • Brahmajit Das – IRC Handle: listout

Each of the accepted contributors displayed a commitment to their chosen project, as well as an eagerness to learn and grow. We have no doubt that they will make significant contributions to their respective projects and Gentoo over the course of the program.

We would like to extend our gratitude to all the applicants who took the time to apply. We were humbled by the enthusiasm and passion displayed in their applications and encourage them to continue contributing to the open-source community.

We asked the contributors to say something about themselves as part of their introduction to the Gentoo community.

  • Alfred Persson Forsberg (catcream)
    • My name is Alfred and I’ve been a Gentoo user for around 2.5 years. This year I will be working on initial support for the LLVM C Library on Gentoo, with the goal of having a working terminal-based desktop and an experimental tarball for others to play with. Last year I participated in a GSoC project about getting KDE Plasma and all its dependencies working on Gentoo musl, and I gained a lot of knowledge doing it. I am currently 20 years old and studying electrical engineering,
      previously science and mathematics. I like Kinder eggs and syntax
      highlighting, and I am very happy to get the opportunity to do this again!
  • Berin Aniesh (hyperedge)
    • My name is Berin Aniesh. I am from a small town called “Kanyakumari” in India. I dabbled in a few domains, like mechanical and nuclear engineering, before choosing software engineering. I am a generalist and I get excited about learning new stuff. I spend a lot of time writing simple scripts that help automate my workflow. I have been fascinated by computers for as long as I can remember. I used to play around, getting Linux installed and deleting my mom’s files; ah, fun times! I have been using Gentoo for about two years. I really like it because of the trust it places in its users and the power it gives them. It is an honor to work with the people who make it possible.
  • Stepan Kulikov (labbrat)
    • My name is Stepan. I switched to IT a couple of years ago when I realized that a career in academia wasn’t as fulfilling (though Metallurgy sure sounds cool). Linux systems are a foundation and a passion on which I’m building my career. Apart from work, I enjoy chess and swimming, just to keep everything well-rounded. Finding out that I got accepted to GSoC was quite an ordeal, since I was participating in a pub quiz at that time, and I repeated ‘WHAT?!’ about five times in a row.
  • Brahmajit Das (listout)
    • I am a student at the University of Calcutta, pursuing a master’s degree (M.Sc.) in computer science. I am a fan of embedded systems, Linux, and Warhammer 40K. I spend a lot of time tinkering with various embedded development boards, Gentoo, and my Neovim config. I’m very excited about being selected for the Summer of Code for the second time, and also very happy that I’ll be contributing to an important part of Gentoo (porting Gentoo packages for Clang 16 and C23).

If you are a Gentoo developer, please feel free to stop by the #gentoo-soc channel to say hello, and to offer your expertise to these contributors as they work with their mentors to complete their projects and contribute to Gentoo.

Congratulations once again to our accepted contributors, and welcome to the Gentoo Google Summer of Code 2023 program!

April 28 2023

Bubblewrap cross-architecture chroot

Maciej Barć (xgqt) April 28, 2023, 17:06
System preparation

Qemu

Emerge qemu with static-user USE enabled and your wanted architectures.

app-emulation/qemu      QEMU_SOFTMMU_TARGETS: aarch64 arm x86_64
app-emulation/qemu      QEMU_USER_TARGETS: aarch64 arm x86_64

app-emulation/qemu      static-user
dev-libs/glib           static-libs
sys-apps/attr           static-libs
sys-libs/zlib           static-libs
dev-libs/libpcre2       static-libs

OpenRC

Enable qemu-binfmt:

rc-update add qemu-binfmt default

Start qemu-binfmt:

rc-service qemu-binfmt start

Chrooting

  • select the chroot location (e.g. /chroots/gentoo-arm64-musl-stable)
  • unpack the desired rootfs
  • create the needed directories
    • mkdir -p /chroots/gentoo-arm64-musl-stable/var/cache/distfiles
  • execute bwrap
    • with the last ro-bind, mount the qemu emulator binary (e.g. qemu-aarch64)
    • execute the mounted emulator binary, giving it a shell program (e.g. bash)

Chroot with bwrap:

bwrap                                                       \
    --bind /chroots/gentoo-arm64-musl-stable /              \
    --dev /dev                                              \
    --proc /proc --perms 1777                               \
    --tmpfs /dev/shm                                        \
    --tmpfs /run                                            \
    --ro-bind /etc/resolv.conf /etc/resolv.conf             \
    --bind /var/cache/distfiles /var/cache/distfiles        \
    --ro-bind /usr/bin/qemu-aarch64 /usr/bin/qemu-aarch64   \
    /usr/bin/qemu-aarch64 /bin/bash -l

April 07 2023

Installing PowerShell modules via Portage

Maciej Barć (xgqt) April 07, 2023, 2:26
Building PowerShell

As part of my work on modernizing the way .NET SDK packages are distributed in Gentoo, I delved into packaging a from-source build of PowerShell for Gentoo using the dotnet-pkg eclass.

Packaging pwsh was a little tricky, but I got a lot of help from reading the Alpine Linux APKBUILD. I had to generate special C# code bindings with ResGen and repackage the PowerShell tarball. Other than this trick, restoring and building PowerShell was pretty straightforward with the NuGet package management support from the dotnet-pkg.eclass.

Alternatively, if you do not want to build PowerShell, you can install the binary package. I plan to keep that package around even after we get the non-binary app-shells/pwsh into the official Gentoo ebuild repository.

Why install modules via Portage?

But why stop at PowerShell when we can also package multiple PS modules?

Installing modules via Portage has many benefits:

  • better version control,
  • more control over global install,
  • no need to enable PS Gallery,
  • sandboxed builds,
  • using system .NET runtime.

Merging the modules

PowerShell’s method of finding modules is as follows: check the paths from the PSModulePath environment variable for directories containing valid .psd1 files, which define the PS modules.

By default pwsh tries to find modules in paths:

  • user’s modules directory — ~/.local/share/powershell/Modules
  • system modules directory — /usr/local/share/powershell/Modules
  • the Modules directory inside the pwsh home — for example /usr/share/pwsh-7.3/Modules

Because we do not want to touch either /usr/local or the pwsh home, we embed a special environment variable inside the pwsh launcher script to extend the path where pwsh looks for PS modules. The new module directory is located at /usr/share/GentooPowerShell/Modules.

dotnet-pkg-utils_append_launchervar \
    'PSModulePath="${PSModulePath}:/usr/share/GentooPowerShell/Modules:"'

So every PowerShell module will install its files inside /usr/share/GentooPowerShell/Modules.
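
The effect of that launcher variable can be illustrated with a plain-shell sketch (the user path is an example; on Linux, pwsh splits PSModulePath on colons, much like PATH):

```shell
# Sketch: PSModulePath is a colon-separated search list on Linux.
# The launcher appends the Gentoo-wide module directory to it.
PSModulePath="${HOME}/.local/share/powershell/Modules"
PSModulePath="${PSModulePath}:/usr/share/GentooPowerShell/Modules"
# Print each search entry on its own line:
printf '%s\n' "${PSModulePath}" | tr ':' '\n'
```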

To follow the PS module location convention, we append to that path a segment for the real module name and a segment for the module version. This also enables us to have proper multi-slotting, because most of the time the modules will not block installing other versions.

Take a look at this example from the app-pwsh/posh-dotnet-1.2.3 ebuild:

src_install() {
    insinto /usr/share/GentooPowerShell/Modules/${PN}/${PV}
    doins ${PN}.psd1 ${PN}.psm1

    einstalldocs
}
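
For that ebuild, ${PN} and ${PV} expand as follows (a plain-shell sketch, not ebuild code; PN and PV are the package name and version variables set by Portage):

```shell
# Expand the install path the way Portage would for this ebuild.
PN=posh-dotnet
PV=1.2.3
# doins places the module manifest here:
echo "/usr/share/GentooPowerShell/Modules/${PN}/${PV}/${PN}.psd1"
```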

And that is it. Some packages do not even need to be compiled; they just need their files placed into a specific location. But when compilation of C# code is needed, we have dotnet-pkg to help.

Fixing Intel Wi-Fi 6 AX200 latency and ping spikes in Linux

Nathan Zachary (nathanzachary) April 07, 2023, 0:01

Recently, I purchased a mini-PC with an embedded Intel Wi-Fi 6 AX200 wireless chipset and installed Gentoo Linux on it. The reason I purchased the mini-PC was to replace my ageing music server (running MPD), but I also wanted to use it for playing retro video games through my entertainment centre. Everything went well with the OS installation, but during that time, I had it plugged into my router via a wired connection. Once I moved the mini-PC into place in my sitting room and started using the wireless connection, I noticed that there was substantial lag even when doing something simple like typing in a terminal via SSH.

A quick ping test showed that there was clearly a problem with the wireless. For my home network, I consider any ping response time of >=5ms to be unacceptable and indicative of an underlying network problem:

$ ping -c 20 192.168.1.120
PING 192.168.1.120 (192.168.1.120) 56(84) bytes of data.
64 bytes from 192.168.1.120: icmp_seq=1 ttl=64 time=214 ms
64 bytes from 192.168.1.120: icmp_seq=2 ttl=64 time=30.7 ms
64 bytes from 192.168.1.120: icmp_seq=3 ttl=64 time=54.4 ms
64 bytes from 192.168.1.120: icmp_seq=4 ttl=64 time=75.1 ms
64 bytes from 192.168.1.120: icmp_seq=5 ttl=64 time=97.8 ms
64 bytes from 192.168.1.120: icmp_seq=6 ttl=64 time=122 ms
64 bytes from 192.168.1.120: icmp_seq=7 ttl=64 time=142 ms
64 bytes from 192.168.1.120: icmp_seq=8 ttl=64 time=2.46 ms
64 bytes from 192.168.1.120: icmp_seq=9 ttl=64 time=2.30 ms
64 bytes from 192.168.1.120: icmp_seq=10 ttl=64 time=4.72 ms
64 bytes from 192.168.1.120: icmp_seq=11 ttl=64 time=26.3 ms
64 bytes from 192.168.1.120: icmp_seq=12 ttl=64 time=2.30 ms
64 bytes from 192.168.1.120: icmp_seq=13 ttl=64 time=71.7 ms
64 bytes from 192.168.1.120: icmp_seq=14 ttl=64 time=94.6 ms
64 bytes from 192.168.1.120: icmp_seq=15 ttl=64 time=116 ms
64 bytes from 192.168.1.120: icmp_seq=16 ttl=64 time=139 ms
64 bytes from 192.168.1.120: icmp_seq=17 ttl=64 time=161 ms
64 bytes from 192.168.1.120: icmp_seq=18 ttl=64 time=184 ms
64 bytes from 192.168.1.120: icmp_seq=19 ttl=64 time=205 ms
64 bytes from 192.168.1.120: icmp_seq=20 ttl=64 time=23.5 ms

Though 4 of the 20 ping response times were under my 5-millisecond threshold, the other 16 were not only above it, but many of them were nonsensically high for a small home network (e.g. 214ms).
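
That threshold check can be automated with a throwaway awk one-liner (not part of the original workflow; it assumes the standard iputils “time=X ms” reply format shown above):

```shell
# Sketch: count ping replies at or above a 5 ms threshold.
ping_log='64 bytes from 192.168.1.120: icmp_seq=1 ttl=64 time=214 ms
64 bytes from 192.168.1.120: icmp_seq=8 ttl=64 time=2.46 ms
64 bytes from 192.168.1.120: icmp_seq=9 ttl=64 time=2.30 ms'
# Split each line on "time="; $2 then starts with the numeric RTT.
echo "$ping_log" | awk -F'time=' '$2+0 >= 5 { n++ } END { print n+0, "spike(s)" }'
```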

In this type of scenario, the first thing that I consider is the driver and/or firmware for the wireless adapter (namely, the Intel Wi-Fi 6 AX200). For nearly all modern Intel wireless chips the Linux driver is the in-kernel iwlwifi driver, so I didn’t pay too much attention there. That driver is used in conjunction with one of two possible modules:

  • DVM (iwldvm)
    • The module that supports the firmware for a specific group of (primarily) AGN chips
  • MVM (iwlmvm)
    • The module that supports the firmware for a much broader scope of Intel wireless chips

I had chosen the iwlmvm module and built it into my kernel for convenience. I also then chose to load the corresponding firmware directly into the kernel as well. The gigantic linux-firmware package contains all the various options for the iwlwifi supporting firmware, and from that table, I saw that the original firmware for the Intel Wi-Fi 6 AX200 was named:

iwlwifi-cc-46.3cfab8da.0.ucode

Looking at the current linux-firmware git tree (at the time of this writing), the relevant firmware packages were:

  • iwlwifi-cc-a0-50.ucode
  • iwlwifi-cc-a0-59.ucode
  • iwlwifi-cc-a0-66.ucode
  • iwlwifi-cc-a0-72.ucode
  • iwlwifi-cc-a0-73.ucode
  • iwlwifi-cc-a0-74.ucode
  • iwlwifi-cc-a0-77.ucode

Using my trial-and-error approach of rebooting and looking at the output of dmesg | grep iwlwifi to find the first version the kernel attempted (but failed) to load, I found that the correct version was iwlwifi-cc-a0-72.ucode, which I then passed via the kernel’s firmware loader.

After trying various options (such as switching from wpa_supplicant to iwd, and using an older firmware blob), I finally found the fix for the latency and ping spikes: power saving. By issuing iw wlan0 set power_save off, and then starting a new ping test, I could immediately see that the problem was fixed:

$ ping -c 20 192.168.1.120
PING 192.168.1.120 (192.168.1.120) 56(84) bytes of data.
64 bytes from 192.168.1.120: icmp_seq=1 ttl=64 time=2.90 ms
64 bytes from 192.168.1.120: icmp_seq=2 ttl=64 time=1.68 ms
64 bytes from 192.168.1.120: icmp_seq=3 ttl=64 time=2.43 ms
64 bytes from 192.168.1.120: icmp_seq=4 ttl=64 time=2.68 ms
64 bytes from 192.168.1.120: icmp_seq=5 ttl=64 time=3.08 ms
64 bytes from 192.168.1.120: icmp_seq=6 ttl=64 time=2.80 ms
64 bytes from 192.168.1.120: icmp_seq=7 ttl=64 time=3.25 ms
64 bytes from 192.168.1.120: icmp_seq=8 ttl=64 time=3.17 ms
64 bytes from 192.168.1.120: icmp_seq=9 ttl=64 time=2.83 ms
64 bytes from 192.168.1.120: icmp_seq=10 ttl=64 time=3.01 ms
64 bytes from 192.168.1.120: icmp_seq=11 ttl=64 time=2.77 ms
64 bytes from 192.168.1.120: icmp_seq=12 ttl=64 time=2.80 ms
64 bytes from 192.168.1.120: icmp_seq=13 ttl=64 time=3.37 ms
64 bytes from 192.168.1.120: icmp_seq=14 ttl=64 time=2.52 ms
64 bytes from 192.168.1.120: icmp_seq=15 ttl=64 time=2.71 ms
64 bytes from 192.168.1.120: icmp_seq=16 ttl=64 time=2.83 ms
64 bytes from 192.168.1.120: icmp_seq=17 ttl=64 time=3.25 ms
64 bytes from 192.168.1.120: icmp_seq=18 ttl=64 time=2.78 ms
64 bytes from 192.168.1.120: icmp_seq=19 ttl=64 time=2.48 ms
64 bytes from 192.168.1.120: icmp_seq=20 ttl=64 time=3.37 ms

--- 192.168.1.120 ping statistics ---
20 packets transmitted, 20 received, 0% packet loss, time 19036ms
rtt min/avg/max/mdev = 1.679/2.834/3.371/0.379 ms

Now that I had found the solution to the problem, the next task was to make the changes persistent across reboots. Of course, I could throw that iw command into an rc.local script or something like that, but that seemed hackish to me. Instead, I decided to change my approach for loading the iwlwifi driver and firmware. Rather than having both the driver and the firmware built into the kernel, I chose to load them as kernel modules. Doing so allowed me to pass configuration options to the modules when they load. I made a configuration file at /etc/modprobe.d/iwlwifi.conf with the following contents:

$ cat /etc/modprobe.d/iwlwifi.conf 

## Has the same effect as running `iw wlan0 set power_save off`
## Both option sets are needed as iwlmvm will override iwlwifi :(
options iwlwifi power_save=0
## iwlmvm 1=always on, 2=balanced, 3=low-power
options iwlmvm power_scheme=1

As I mentioned in the comments there, BOTH options need to be set: one option passed to the iwlwifi module and the other option passed to the iwlmvm module. Also note the available parameters for the iwlmvm power_scheme:

  • 1 = always on
  • 2 = balanced
  • 3 = low-power

I have validated that these settings work across reboots, and that I no longer see the latency or ping spikes when connecting to this mini-PC over the Intel Wi-Fi 6 AX200. Some of these instructions may be specific to Gentoo and/or the OpenRC init system that I choose to use, but they should be readily adaptable to other distributions and init systems.

Cheers,
Nathan Zachary

March 23 2023

Binary packages in Gentoo

Maciej Barć (xgqt) March 23, 2023, 9:01
Binpkgs generated by the user

The binary packages generated by the user can have architecture-specific optimizations because they are created after the packages were compiled by the host Portage installation.

In addition, binpkgs are generated from ebuilds, so if there is a USE flag incompatibility on the consumer system, then the binpkg will not be installed on the host and Portage will fall back to from-source compilation.

Those binary packages can use two formats: XPAK and GPKG.

XPAK has many issues and is being superseded by the GPKG format. Be aware of the upcoming GPKG transition; if you must use XPAK, then you should explicitly enable it in your system’s Portage configuration.

To host a binary package distribution server, see the Binary package guide on the Gentoo wiki.
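On the build host, enabling binpkg generation and selecting the format is a small make.conf change. A minimal sketch (the values shown are examples, not the only possibilities):

```shell
# /etc/portage/make.conf (fragment)
FEATURES="${FEATURES} buildpkg"   # build a binary package for every package emerged
BINPKG_FORMAT="gpkg"              # use the newer GLEP 78 format; "xpak" selects the legacy one
```

With this in place, every emerge also drops a binpkg into the host's package directory, ready to be served to consumer systems.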

Bin packages in a repository

Binary packages in ::gentoo (the official Gentoo repository) have the -bin suffix.

Those packages might have USE flags, but generally they offer very limited customization or code optimization because they were compiled either by a Gentoo developer or by a given package’s upstream maintainer (or their CI/CD system).

Those packages land in ::gentoo mostly because it is too hard (or even impossible) to compile them natively with Portage. Most of the time those packages use very complicated build systems, do not play nicely with the network sandbox (e.g. Scala-based projects), or use very large frameworks/libraries (e.g. Electron).

They can also be added to the repository because they are highly desirable, either by normal users (e.g. www-client/firefox-bin) or for (from-source) package bootstrapping purposes (e.g. dev-java/openjdk-bin). Such packages are sometimes generated from the regular source packages inside ::gentoo and later repackaged.


February 24 2023

Ebuild lit tests

Maciej Barć (xgqt) February 24, 2023, 0:00
Patching

The file lit.site.cfg has to be inspected for any incorrect calls to executables. For example, see the src_prepare function from dev-lang/boogie.

Eclasses

Because we will need to specify how many threads lit should run, we inherit multiprocessing to detect how many parallel jobs the Portage configuration allows.

inherit multiprocessing
Dependencies

Ensure that dev-python/lit is in BDEPEND; additional packages may also be needed, for example dev-python/OutputCheck.

BDEPEND="
    ${RDEPEND}
    test? (
        dev-python/lit
        dev-python/OutputCheck
    )
"
Bad tests

To deal with bad tests, you can simply remove the files causing the failures.

local -a bad_tests=(
    civl/inductive-sequentialization/BroadcastConsensus.bpl
    civl/inductive-sequentialization/PingPong.bpl
    livevars/bla1.bpl
)
local bad_test
for bad_test in "${bad_tests[@]}" ; do
    rm "${S}/Test/${bad_test}" || die
done
Test phase

--threads $(makeopts_jobs) specifies how many parallel tests to run.

The --verbose option shows the output of failed tests.

The last lit argument specifies where lit should look for lit.site.cfg and the tests.

src_test() {
    lit --threads $(makeopts_jobs) --verbose "${S}"/Test || die
}
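Putting the pieces above together, the relevant ebuild fragments might look like the following sketch (the package itself, the DESCRIPTION-style metadata, and the bad-test path are illustrative, not a real in-tree ebuild):

```shell
# Sketch of an ebuild wiring up lit-based tests
EAPI=8

inherit multiprocessing

BDEPEND="
    test? (
        dev-python/lit
        dev-python/OutputCheck
    )
"

src_prepare() {
    default
    # Drop a known-bad test (path is only an example)
    rm "${S}/Test/livevars/bla1.bpl" || die
}

src_test() {
    lit --threads "$(makeopts_jobs)" --verbose "${S}"/Test || die
}
```

Note that the test-only tools live in BDEPEND behind the test USE conditional, so regular users never pull them in.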

February 22 2023

Gentoo accepted into Google Summer of Code 2023

Gentoo News (GentooNews) February 22, 2023, 6:00

Do you want to learn more about Gentoo and contribute to your favourite free software project?! Once again, now for the 11th time, we have been accepted as a mentoring organization for this year’s Google Summer of Code!

The GSoC is an excellent opportunity for gaining real-world experience in software design and making oneself known in the broader open source community. It also looks great on a resume. Some initial project ideas can be found here, but new project ideas are also welcome. For new projects, time is of the essence: they have to be worked out, discussed with the mentors, and submitted before the April 4th deadline. It is strongly recommended that contributors refine new project ideas with a mentor before proposing the idea formally.

Potential GSoC contributors are encouraged to e-mail the GSoC admins with their name, IRC nickname, and the desired project, and discuss ideas in the #gentoo-soc IRC channel on Libera Chat. Further information can be found on the Gentoo GSoC 2023 wiki page. Those with unanswered questions should also not hesitate to contact the Summer of Code mentors via their mailing list.

GSoC logo

2022 in retrospect & late happy new year 2023!

Gentoo News (GentooNews) February 09, 2023, 6:00

A quite late Happy New Year 2023 to all of you!

Once again with 2022 an eventful year has passed, and Gentoo is still alive and kicking! 2023 already started some time ago and some of us have even already been meeting up and networking at FOSDEM 2023. Still, we are happy to present once more a review of the Gentoo news of the past year 2022. Read on for new developers, distribution wide initiatives and improvements, up-to-date numbers on Gentoo development, tales from the infrastructure, and all the fresh new packages you can emerge now.

Gentoo in numbers

The number of commits to the main ::gentoo repository has remained at a high level in 2022, going from 126920 to 126682. This is also true for the number of commits by external contributors, 10492, now across an even larger 440 unique external authors, compared to 435 last year.

GURU, our user-curated repository with a trusted user model, is clearly growing further. We have had 5761 commits in 2022, up by 12% from 5131 in 2021. The number of contributors to GURU has increased similarly, from 125 in 2021 to 144 in 2022. Please join us there and help package the latest and greatest software. That’s the ideal preparation for becoming a full Gentoo developer!

On the Gentoo bugtracker bugs.gentoo.org, both the number of reported and of resolved bugs have increased clearly. We’ve had 26362 bug reports created in 2022, compared to 24056 in 2021. The number of resolved bugs shows a similar trend, with 24499 in 2022 compared to 24076 in 2021.

New developers

In 2022 we have gained four new Gentoo developers. They are in chronological order:

  1. Matthew Smith (matthew): Matthew joined us already in February from the North East of England. An embedded software developer by trade, he helps with a diverse set of packages, from mold to erlang and from nasm to tree-sitter.

  2. WANG Xuerui (xen0n): A long-time Gentoo user, Xuerui joined us as a developer in March from Shanghai, China. He jumped right into the deep end, bringing LoongArch support to Gentoo as well as lots of toolchain and qemu expertise (as long as his cat lets him).

  3. Kenton Groombridge (concord): Kenton comes from the US and from a real Gentoo family (yes, such a thing exists!); he joined up in May. His speciality is Gentoo Hardened and SELinux, and he has already collected quite some commits there!

  4. Viorel Munteanu (ceamac): In November, Viorel joined us from Bucharest, Romania. He’s active in the virtualization and proxy maintainers teams, and takes care of the VirtualBox stack and, e.g., TigerVNC.

Featured changes and news

Let’s now look at the major improvements and news of 2022 in Gentoo.

Distribution-wide Initiatives
  • LiveGUI Gentoo ISO download: For an instant, full-fledged Gentoo experience we now have a weekly-built 3.7GByte amd64 LiveGUI ISO ready for download. It is suitable for booting from DVDs or USB sticks, and boots into a full KDE Plasma desktop based on stable Gentoo. A ton of ready-to-use software is included, from dozens of system utilities, LibreOffice, Inkscape, and TeXLive all the way to Firefox and Chromium. Also, all build dependencies are installed and you can emerge additional packages as you like!

  • Modern C porting: This recent cross-distribution initiative has as its objective to port as much open source software as possible to modern C standards. Upcoming versions of GCC and Clang will eventually lose support for constructs that have been deprecated for decades, and we will have to be prepared for that. Together with Fedora we have taken the lead here, and a lot of effort has already gone into fixing and modernization.

  • Clang / LLVM as primary system compiler: Closely related, support for using Clang as the primary system compiler in Gentoo has never been better than now. For the most popular architectures, we have LLVM stages available which replace the GNU toolchain as far as possible (also using libc++, compiler-rt, lld, …). While glibc at the moment still requires GCC to build, the LLVM/musl stages come fully without GNU toolchain.

  • New binary package format gpkg: Gentoo’s package manager Portage now supports a new binary package format defined in GLEP 78. Besides many minor improvements, the most important new feature of the file format is that it fully supports cryptographic signing of packages. This was one of the most important roadblocks for more extensive binary package support in Gentoo.

  • merged-usr profiles and systemd merged-usr stages: All systemd profiles have now gained a merged-usr subprofile, corresponding to a filesystem layout where, e.g., /bin is a symbolic link to /usr/bin. The migration procedure has been described in detail in a news item. With this, we prepare for the time when systemd will only support the merged-usr layout, as already announced by the upstream developers. Across all architectures, we now also consistently offer, in addition to openrc downloads, systemd stages with and without merged-usr layout. Merged-usr openrc stages will follow for completeness.

Architectures
  • LoongArch64: In the meantime, LoongArch64, a Chinese development by Loongson Co. based in parts on MIPS and on RISC-V, has become a fully supported Gentoo architecture, with toolchain support, widespread keywording, and up-to-date stages for download. The first server-type chipsets based on these chips are currently being sold. (Outside mainland China, hardware is difficult to obtain though.)

  • AArch64: An exotic variant of AArch64 (arm64) has been added to our download portfolio: Big-endian AArch64. Enjoy!

  • PA-RISC: Weekly stage builds for the hppa architecture (PA-RISC) are back, including systemd images for both hppa-1.1 and hppa-2.0 and an installation CD.

  • MIPS: The weekly builds for MIPS are back as well! Here, we can now offer downloads for the o32, n32, and n64 ABI plus multilib stages - and all that for both endianness variants and init systems. No matter what your hardware is, you should find a starting point.

  • Hardened: With more and more hardening becoming de-facto standard, the compiler settings in the hardened profiles have been tightened again to include additional experimental switches. In particular, in Gentoo Hardened, gcc and clang both now default to _FORTIFY_SOURCE=3, C++ standard library assertions, and enabled stack-clash-protection.

Packages
  • Modern Java: A huge amount of work was done by our Java project to revive the language ecosystem and in particular recent Java versions in Gentoo. Additionally, OpenJDK 11 and OpenJDK 17 were bootstrapped for big-endian ppc64, as well as for x86, riscv, and arm64 with musl as C library, enabling the usage of modern Java on those configurations.

  • GNU Emacs: Emacs ebuild-mode has seen a flurry of activity in 2022. New features include a new ebuild-repo-mode, inserting of user’s name and date stamp in package.mask and friends, support for pkgdev and pkgcheck commands, support for colors in ebuild command output, and a major refactoring of the code for keyword highlighting. Additionally, there’s flycheck-pkgcheck for on-the-fly linting and company-ebuild for automatic completion.

  • Mathematics: The sci-mathematics category has grown with the addition of theorem provers such as lean, yices2, cadabra, or picosat. Further, the Coq Proof Assistant ecosystem support has been improved with new Coq versions, Emacs support via company-coq, and packages such as coq-mathcomp, coq-serapi, flocq, gappalib-coq …

  • Alternatives: Many base system utilities exist in different flavours that are more or less drop-in replacements. One example of this is the compressor bzip2, with lbzip2 and pbzip2 as parallelizing alternatives; another tar, which exists both as gtar (GNU tar) and as bsdtar in libarchive. With alternatives we now have a clean system in place to use either of these options as default program via a symlinked binary.

  • Racket: An ongoing project aims to bring first-class support for Racket, a modern dialect of Lisp and a descendant of Scheme, and the Racket language ecosystem to Gentoo.

  • Python: In the meantime the default Python version in Gentoo has reached Python 3.10. Additionally we have also Python 3.11 available stable, which means we’re fully up to date with upstream. Gentoo testing provides the alpha releases of Python 3.12, so we can easily prepare for what comes next.

Physical and Software Infrastructure
  • Hardware: Our infrastructure team has set up two beefy new servers as Ganeti nodes hosted at OSUOSL, with 2x AMD EPYC 7543, 1TiB RAM, 22TiB NVME, and 25Gbit networking each. These will provide virtual machines for various services in the future. A new 1/10/25Gbit switch was also added to better support new and existing servers.

  • Gitlab: We are now running an experimental self-hosted Gitlab instance, gitlab.gentoo.org. It will slowly take over and serve more and more git repositories.

  • Pkgcore: Building on existing coding efforts, an official Gentoo PkgCore project was created to improve this set of QA and commit tools for Gentoo developers. Repoman was deprecated and removed from the Portage code base, and pkgcheck, part of PkgCore, has become the official QA tool for commits to the main Gentoo repository. It is also the code running our automated continuous integration system.

  • Tattoo: The new tattoo arch testing system now manages and automates large parts of the architecture testing process. This has simplified and streamlined the stabilization process, shortening developer response times and “saving” arch stabilization.

  • Devmanual: The Gentoo Development Manual has seen major improvements in 2022. More documentation is good!

Finances of the Gentoo Foundation
  • Income: The Gentoo Foundation took in approximately $16,500 in fiscal year 2022; the majority (over 90%) were individual cash donations from the community.

  • Expenses: Our expenses in 2022 were, as split into the usual three categories, operating expenses (for services, fees, …) $11,000, capital expenses (for bought assets) $55,000 (servers, networking gear, SSDs, …), and depreciation expenses (value loss of existing assets) $9,500.

  • Balance: We have about $97,000 in the bank as of July 1, 2022 (which is when our fiscal year 2022 ends for accounting purposes). The draft financial report for 2022 is available on the Gentoo Wiki.

Thank you!

Our end of year review of course cannot cover everything that happened in Gentoo in 2022 in detail, and if you look closely you will find much more. We would like to thank all Gentoo developers and all who have submitted contributions for their relentless everyday Gentoo work. As a volunteer project, Gentoo could not exist without them.

And now let’s look forward to the new year 2023, with hopefully fewer unpleasant surprises than the last one!


January 13 2023

FOSDEM 2023

Gentoo News (GentooNews) January 13, 2023, 6:00

Finally, after a long break, it’s FOSDEM time again! Join us at Université Libre de Bruxelles, Campus du Solbosch, in Brussels, Belgium. This year’s FOSDEM 2023 will be held on February 4th and 5th.

Our developers will be happy to greet all open source enthusiasts at our Gentoo stand in building H, level 1! Visit this year’s wiki page to see who’s coming.

FOSDEM logo

December 30 2022

Ebuild-mode

Maciej Barć (xgqt) December 30, 2022, 0:00
Portage

Add the following USE flag configuration for Portage (for example in /etc/portage/package.use):

dev-util/pkgcheck emacs
Emerge

Emerge the following packages:

  • app-emacs/company-ebuild
  • dev-util/pkgcheck

Company-Ebuild should pull in app-emacs/ebuild-mode; if that does not happen, then report a bug ;-D
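A typical invocation for installing both packages at once (the --ask flag is optional and merely prompts for confirmation):

```shell
emerge --ask app-emacs/company-ebuild dev-util/pkgcheck
```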

Standard

Add the following to your user's Emacs initialization file. The initialization file is either ~/.emacs.d/init.el or ~/.config/emacs/init.el for newer versions of GNU Emacs.

(require 'ebuild-mode)
(require 'company-ebuild)
(require 'flycheck)
(require 'flycheck-pkgcheck)

(add-hook 'ebuild-mode-hook 'company-ebuild-setup)
(add-hook 'ebuild-mode-hook 'flycheck-mode)
(add-hook 'ebuild-mode-hook 'flycheck-pkgcheck-setup)
Use-Package

We can also configure our environment using the use-package macro, which simplifies the setup a little bit.

To use the configuration below, the app-emacs/use-package package has to be installed.

(require 'use-package)

(use-package ebuild-mode
  :defer t
  :mode "\\.\\(ebuild\\|eclass\\)\\'"
  :hook
  ((ebuild-mode . company-ebuild-setup)
   (ebuild-mode . flycheck-mode)
   (ebuild-mode . flycheck-pkgcheck-setup)))

The :defer t and :mode "..." settings enable deferred loading, which theoretically speeds up GNU Emacs initialization time at the cost of running the whole use-package block of ebuild-mode configuration when the :mode condition is first met.


src_snapshot

Maciej Barć (xgqt) December 30, 2022, 0:00

Prototype

Recently, while browsing the Alpine git repo, I noticed they have a function called snapshot, see: https://git.alpinelinux.org/aports/tree/testing/dart/APKBUILD#n45. I am not 100% sure how that works, but a wild guess is that developers can run that function to fetch the sources and maybe later upload them to the Alpine repo or some sort of (cloud?) storage.

In Portage there exists a pkg_config function used to run miscellaneous configuration for packages. The only major difference between src_snapshot and pkg_config would of course be that users would never run snapshot.

Sandbox

Probably only the network sandbox would have to be lifted… to fetch the sources, of course.

But a few (at least one?) special directories and variables would also be useful.
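As a purely hypothetical sketch of the idea (neither the phase name nor this exact shape exists in Portage; EGIT_REPO_URI is borrowed from git-r3.eclass, and the use of DISTDIR as the output location is just an illustration):

```shell
# Hypothetical src_snapshot phase, modeled on Alpine's snapshot() function.
# It would be run manually by a developer, never by users, and the network
# sandbox would have to be lifted for this phase only.
src_snapshot() {
    cd "${WORKDIR}" || die
    # Fetch the sources from upstream version control
    git clone --depth 1 "${EGIT_REPO_URI}" "${P}" || die
    # Pack a tarball that could later be uploaded to a mirror
    tar --sort=name -caf "${DISTDIR}/${P}.tar.xz" "${P}" || die
}
```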


September 12 2022

Refining ROCm Packages in Gentoo — project summary

Gentoo Google Summer of Code (GSoC) September 12, 2022, 13:07

12 weeks quickly slipped away, and I’m proud to say that the packaging quality of ROCm in Gentoo has indeed improved through this project.

Two sets of major deliverables were achieved: new ebuilds of the ROCm-5.1.3 tool-chain that depend purely on vanilla llvm/clang, and rocm.eclass along with the ROCm-5.1.3 libraries utilizing it. Each brings a great QA improvement compared to the original ROCm packaging method.

Beyond these, I also maintained rocprofiler and rocm-opencl-runtime, bumping their versions with nontrivial changes. I discovered several bugs and talked to upstream. I also wrote ROCm wiki pages, which started my journey on the Gentoo wiki.

By writing rocm.eclass, I learned a lot about eclass writing: how to design, how to balance needs and QA concerns, how to write comments and examples well, etc. I’m really grateful to those Gentoo developers who pointed out my mistakes and helped me polish my eclass.

Since I’m working on top of the Gentoo repo, my work is scattered around rather than living in a repo of its own. My major products can be seen in [0], where all my PRs to ::gentoo are located. My weekly reports can be found on the Gentoo GSoC blogs.

[0] My finished PRs for gentoo during GSoC 2022

Details are as follows:

First, it’s about ROCm on vanilla llvm/clang

Originally, ROCm had its own llvm fork, with some modifications not yet upstreamed. In the original Gentoo ROCm packaging roadmap, sys-devel/llvm-roc was introduced as the ROCm-forked llvm/clang. This is the simple way, and it worked well for ROCm-only packages [1]. But it brings trouble if a large project like Blender pulls in dependencies using vanilla llvm, resulting in symbol collisions [2].

So, when I noticed [1] in week 1, I began my journey of porting ROCm to vanilla clang. I was very lucky, because at that time clang-14.0.5 had just been released, eliminating the major obstacles to porting (previous versions more or less had bugs). After some quick hacking I succeeded, which is recorded in the week 1 report [3]. In that week I successfully built Blender with HIP Cycles (GPU-accelerated render code written in HIP), and rendered some example projects on a Radeon RX 6700XT.

While I was thrilled to be porting the ROCm tool-chain to vanilla clang, my mentor pointed out that I had carelessly introduced some serious bugs into ::gentoo. In week 2, I managed to fix the bugs I had created, and set up a reproducible test ground using Docker, to make testing easier and cleaner and to keep such bugs from happening again. Details can be found in week 2’s report [4].

After that there was no non-trivial progress in porting to vanilla clang, only bug fixes and ebuild polishing, until I met MIOpen in the last week.

The story of debugging MIOpen assemblies

In week 12 rocm.eclass was almost in its final shape, so I began to land ROCm libraries [1], including sci-libs/miopen. ROCm libraries are usually written in “high level” languages like HIP, and dev-util/hip was already ported to vanilla clang in good shape, so there was no need to worry about compilation problems. However, MIOpen has various hand-written assembly kernels for JIT, which caused several test failures [5]. It was frustrating because I’m unfamiliar with AMDGPU assembly, so I was close to giving up (my mentor also suggested giving up on it within GSoC). Thus, I reported my problem to upstream in [5], attached with my debugging attempts.

Thanks to my testing system mentioned previously, I had set up not only standard environments, but also one snapshot with full llvm/clang debug symbols. I quickly located the problem and reported it to upstream via an issue, but I still didn’t know why the error was happening.

On the second day, I decided to look at the assembly and the debugging results once again. This time fortune was on my side, and I discovered the key issue: LLVM treats Y and N in the metadata as boolean values, not strings (they should be kernel parameter names) [6]. I provided a fix in [7], and all tests passed on both a Radeon VII and a Radeon RX 6700XT. Amazing! I also mentioned how excited I was in week 12’s report [8].

[1] For example, ROCm libraries in https://github.com/ROCmSoftwarePlatform
[2] https://bugs.gentoo.org/693200
[3] Week 1 Report for Refining ROCm Packages in Gentoo
[4] Week 4 Report for Refining ROCm Packages in Gentoo
[5] https://github.com/ROCmSoftwarePlatform/MIOpen/issues/1731
[6] https://github.com/ROCmSoftwarePlatform/MIOpen/issues/1731#issuecomment-1236913096
[7] https://github.com/littlewu2508/gentoo/commit/40eb81f151f43eb5d833dc7440b02f12dab04b89
[8] Week 12 Report for Refining ROCm Packages in Gentoo

The second deliverable is rocm.eclass

The most challenging part for me was writing rocm.eclass. I started writing it in week 4 [9], and finished my design in week 8 [10] (including 10 days of temporary leave). In weeks 9-12, I posted 7 revisions of rocm.eclass to the gentoo-dev mailing list [10,11], and received many helpful comments. On the GitHub PR [12], I also got lots of suggestions from Gentoo developers.

Eventually, I finished rocm.eclass, providing the amdgpu_targets USE_EXPAND, ROCM_REQUIRED_USE, and ROCM_USE_DEP to control which GPU targets to compile for and to keep coherency among dependencies. The eclass provides get_amdgpu_flags for src_configure and check_amdgpu for ensuring AMDGPU device accessibility in src_test. Finally, rocm.eclass was merged into ::gentoo in [13].
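As a conceptual illustration (not the eclass’s actual code), get_amdgpu_flags essentially turns the list of enabled GPU targets into per-architecture offload flags for clang:

```shell
# Sketch: map a list of enabled AMDGPU targets to clang --offload-arch
# flags, the way get_amdgpu_flags conceptually works. In the real eclass
# the target list comes from the amdgpu_targets_* USE flags.
amdgpu_targets=(gfx906 gfx1031)

get_offload_flags() {
    local t flags=()
    for t in "${amdgpu_targets[@]}"; do
        flags+=("--offload-arch=${t}")
    done
    echo "${flags[*]}"
}

get_offload_flags  # prints: --offload-arch=gfx906 --offload-arch=gfx1031
```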

[9] Week 9 Report for Refining ROCm Packages in Gentoo
[10] https://archives.gentoo.org/gentoo-dev/threads/2022-08/
[11] https://archives.gentoo.org/gentoo-dev/threads/2022-09/
[12] https://github.com/gentoo/gentoo/pull/26784
[13] https://gitweb.gentoo.org/repo/gentoo.git/commit/?id=cf8a6a845b68b578772f2ae0d2703f203c6dec33

Other coding products

Merged ebuilds

rocprofiler

I bumped dev-util/rocprofiler and its dependencies to version 5.1.3, and fixed the proprietary aql profiler lib loading, so the ROCm stack on Gentoo stays fully open source without losing most profiling functionality [14].

[14] https://github.com/ROCm-Developer-Tools/rocprofiler/issues/38

Unmerged ebuilds

Due to limited time and the long testing period, the ebuilds of the ROCm-5.1.3 libraries (the ones using rocm.eclass) did not get merged. They can be found in this PR.

dev-libs/rocm-opencl-runtime is a critical package because it provides OpenCL, and many users still use OpenCL for GPGPU since HIP is still new. I bumped it to 5.1.3 to match the vanilla clang tool-chain, and enabled its src_test, so users can make sure that vanilla clang isn’t breaking anything. The PR is located here.

Bug fixes

Fixing existing bugs was also a part of my GSoC. I created various PRs and closed the corresponding bugs on Gentoo Bugzilla: #822828, #853718, #851795, #851792, #852236, #850937, #836248, #836274, #866839. Also, much of the bug fixing happened before new packages entered the Gentoo main repo, or the bugs were found by myself in the first place, so there is no record on Bugzilla.

Last but not least, the wiki page

I created 3 pages [15-17], filling in important information about ROCm. I also received a lot of help from the Gentoo community, mainly focused on refining my wiki pages to meet the standards.

[15] https://wiki.gentoo.org/wiki/ROCm
[16] https://wiki.gentoo.org/wiki/HIP
[17] https://wiki.gentoo.org/wiki/Rocprofiler

Comparison with original plan

The original plan in my proposal also contained rocm.eclass, but it only allocated the last week for “investigation on vanilla clang”. In week 1, my mentor and I added “porting ROCm to vanilla clang” to the plan, and this became the new major deliverable. Due to the time limit, packaging high-level frameworks like PyTorch and TensorFlow was abandoned. I only worked on getting CuPy working [18], showing rocm.eclass functionality on packages that depend on ROCm libraries.

I think the change of plan and deliverables better reflects the project title “Refining”, because what I did greatly improves the quality of existing ebuilds, rather than introducing more ebuilds.

[18] https://github.com/littlewu2508/gentoo/commit/3d142fa4b4ada560c053c2fd3c8c1501c82aace2


September 11 2022

Week 12 Report for Refining ROCm Packages in Gentoo

Gentoo Google Summer of Code (GSoC) September 11, 2022, 10:16

Although this is the final week, I would like to say that it is as exciting as the first week.

I kept polishing rocm.eclass with the help of Michał and my mentor, and it is now in good shape [1]. I must admit that writing an eclass took a beginner like me much longer than I expected. In my proposal, I left 4 weeks to finish it: a 2-week implementation and 2 weeks of polishing. In reality, I implemented it within 2 weeks, but polished it for 4 weeks. I introduced a lot of QA issues without being aware of them, which increased the number of review-modify cycles. During this process, I learned a lot:

1. Always re-read the eclass, especially the comments and examples, thoroughly after a modification. Many times I forgot there was an example far from the change that should have been updated because one function changed its behavior.

2. Read the bash manual carefully, because proper usage of features like bash arrays can greatly simplify code.

3. Consider the maintenance difficulty of the eclass. I wrote an oddly specific `src_test` which could cover all the cases of ROCm packages. But it wasn’t worth it, because specialized code should be placed into ebuilds, not one eclass. So instead, I kept the most common part, `check_amdgpu`, and got rid of the phase functions, which made the eclass much cleaner.

I also found some bugs and their solutions. As I mentioned in week 10’s report, I observed many test failures in sci-libs/miopen based on vanilla clang. This week, I figured out that they have 3 different causes, and I’ve provided fixes for two of the failures ([2, 3]). For the third issue, I’ve found its root cause [4]. I believe there will be a simple solution to it.

For the gcc-12 issues, I also came up with a brutal workaround [5]: undef the __noinline__ macro before including the stdc++ headers and re-define it afterwards. I also observed that clang-15 does not fix this issue as expected, and provided an MWE at [6].

I’m also writing wiki pages, filling in the installation and development guides.

In this 12-week project, I proposed to deliver rocm.eclass, and packages like PyTorch and TensorFlow with ROCm enabled. Instead, I delivered rocm.eclass as proposed, but migrated the ROCm toolchain to vanilla clang. I think porting the ROCm toolchain to vanilla clang is closer to my project title “Refining ROCm Packages” 🙂

[1] https://github.com/gentoo/gentoo/pull/26784
[2] https://github.com/littlewu2508/gentoo/commit/2bfae2e26a23d78b634a87ef4a0b3f0cc242dbc4
[3] https://github.com/littlewu2508/gentoo/commit/cd11b542aec825338ec396bce5c63bbced534e27
[4] https://github.com/ROCmSoftwarePlatform/MIOpen/issues/1731
[5] https://github.com/littlewu2508/gentoo/commit/2a49b4db336b075f2ac1fdfbc907f828105ea7e1
[6] https://github.com/llvm/llvm-project/issues/57544


Week 11 Report for Refining ROCm Packages in Gentoo

Gentoo Google Summer of Code (GSoC) September 11, 2022, 10:14

My progress this week is mainly writing wiki and refining rocm.eclass.

Although the current eclass works with my new ebuilds [1], Michał Górny has pointed out various flaws on the GitHub PR [2]. He also questioned the necessity of rocm.eclass, because it looks like a combination of two eclasses. In my opinion, rocm.eclass has its value, mainly for handling the USE_EXPAND and common phase functions. The ugly part is mainly in rocm_src_test: due to the inconsistency of test methods across the packages in [3], I have to detect which method is in use and act accordingly. So my plan is to split the one-size-fits-all rocm_src_test into two functions, corresponding to the two scenarios (cmake test or standalone binary), and let each ebuild decide which to use. This avoids the detailed detection code that makes rocm_src_test bloated.
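A rough sketch of how that split might look (the function names and the test-binary path here are illustrative, not the final eclass code; check_amdgpu verifies device access and cmake_src_test comes from cmake.eclass):

```shell
# Illustrative sketch of splitting rocm_src_test into two helpers; each
# ebuild would call the one matching its upstream test layout.

# For packages whose tests are wired into CTest:
rocm_test_cmake() {
    check_amdgpu          # ensure an AMDGPU device is accessible
    cmake_src_test        # run the upstream CTest suite
}

# For packages shipping a standalone gtest binary:
rocm_test_binary() {
    check_amdgpu
    # hypothetical location of the standalone test binary
    "${BUILD_DIR}"/clients/staging/"${PN}"-test || die "tests failed"
}
```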

Wiki writing: I think the main parts of the ROCm [4] and HIP [5] wiki pages are nearly finished. But due to the delay of rocm.eclass, the related information has not been appended (ROCm#Developing guide). There is also a reserved section: ROCm#Installation guide. I have little clue how to write this part, because ROCm is a wide collection of packages. Maybe a meta package (there are users working on this) would be helpful.

To be honest I’m a bit anxious, because there is only one week left, but there is still a lot to be determined and tested in rocm.eclass along with the sci-libs/roc* ebuilds. I hope I can resolve these core issues in the last week.

[1] https://github.com/littlewu2508/gentoo/tree/rocm-5.1.3-scilibs
[2] https://github.com/gentoo/gentoo/pull/26784
[3] https://github.com/ROCmSoftwarePlatform
[4] https://wiki.gentoo.org/wiki/ROCm
[5] https://wiki.gentoo.org/wiki/HIP


Week 10 Report for Refining ROCm Packages in Gentoo

Gentoo Google Summer of Code (GSoC) September 11, 2022, 10:10

This week I learned a lot from Ulrich’s comments on rocm.eclass. I polished the eclass to v3 and sent it to the gentoo-dev mailing list. However, I observed another error introduced in v3, and I’ll include a fix for it in v4 in the following days.

The other half of my time was spent testing the sci-libs/roc-* packages on various platforms, utilizing rocm.eclass. I can say that rocm.eclass did its job as expected, so I believe after v4 it can be merged.

With src_test enabled, I have found various test failures. rocBLAS-5.1.3 fails 3 tests on a Radeon RX 6700XT, slightly exceeding tolerances, which does not seem to be a big issue; rocFFT-5.1.3 fails 16 suites on a Radeon VII [1], which is serious and confirmed by upstream, so I suggest masking the `amdgpu_targets_gfx906` USE flag for rocFFT-5.1.3; and just today I observed MIOpen failing many tests, probably due to vanilla clang. I’ll open issues and report those test failures to upstream. Running the test suites takes a lot of time, and often drains the GPU. It may take more than 15 hours to test rocBLAS, even on a performant CPU like the Ryzen 5950X. If I use the GPU to render graphics (run a desktop environment) and run tests simultaneously, it often results in an amdgpu driver failure. I hope one day we can have a testing farm for ROCm packages, but that would be expensive because there are a lot of GPU architectures, and the compilation takes a lot of time.

I planned to finish the drafts of the wiki pages [2,3], but it turns out I’m running out of time. I’ll catch up in week 11. My mentor was also busy in week 10, so my PR for rocm-opencl-runtime is still pending review. Now we are working on solving a dependency issue of ROCm packages: gcc-12 and gcc-11.3.0 incompatibilities. Due to two bugs, the current stable gcc, 11.3.0, cannot compile some ROCm packages [4], and the current unstable gcc, gcc-12, is unable to compile nearly all ROCm packages [5].

I’ll continue with what was postponed in week 10: landing rocm.eclass and the sci-libs packages, preparing cupy, fixing bugs, and writing the wiki pages. I’ll investigate MIOpen’s situation as well.

[1] https://github.com/ROCmSoftwarePlatform/rocFFT/issues/369
[2] https://wiki.gentoo.org/wiki/ROCm
[3] https://wiki.gentoo.org/wiki/HIP
[4] https://bugs.gentoo.org/842405
[5] https://bugs.gentoo.org/857660


Week 9 Report for Refining ROCm Packages in Gentoo

Gentoo Google Summer of Code (GSoC) September 11, 2022, 10:07

This week I mainly focused on dev-libs/rocm-opencl-runtime.

I bumped dev-libs/rocm-opencl-runtime to 5.1.3. That was relatively easy; the difficult part was enabling its tests. I came across a major problem: the oclgl test requires an X server. I compiled with debug options and used gdb to dive into the code, but found there is no simple solution. Currently the test needs an X server whose OpenGL vendor is AMD; Xvfb only provides llvmpipe, which does not meet the requirement. I consulted some friends, who said NVIDIA recommends using EGL when there is no X [1], but apparently ROCm can only get OpenGL from X [2]. So my workaround is to let the user pass an X display into the ebuild via the environment variable OCLGL_DISPLAY (the DISPLAY variable is wiped when calling emerge, while this one survives). If no display is detected, or glxinfo shows the OpenGL vendor is not AMD, then src_test dies with instructions about running an X server using the amdgpu driver.
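The guard described above can be sketched roughly like this. This is a simplified standalone version, not the actual ebuild code; the helper name check_oclgl_display is hypothetical, and the vendor string is passed in as an argument instead of being queried live with glxinfo:

```shell
#!/bin/bash
# Hypothetical sketch of the src_test display check described above.
# vendor_line would normally come from: glxinfo | grep "OpenGL vendor"
check_oclgl_display() {
    local vendor_line="$1"
    # DISPLAY is wiped by emerge, so a dedicated variable is read instead
    if [[ -z "${OCLGL_DISPLAY}" ]]; then
        echo "error: set OCLGL_DISPLAY to an X display running the amdgpu driver"
        return 1
    fi
    if [[ "${vendor_line}" != *"AMD"* ]]; then
        echo "error: OpenGL vendor is not AMD (got: ${vendor_line})"
        return 1
    fi
    echo "ok"
}
```

In the real src_test a failure would call die rather than just returning nonzero.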

I was also trapped by a CRLF problem in src_test of dev-libs/rocm-opencl-runtime. Tests listed in oclperf.exclude should be skipped by the oclperf test, but they were not. After numerous trials, I finally found that this file uses CRLF line endings, not LF, which caused the exclusion to fail 🙁
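The failure mode is easy to reproduce in isolation: a trailing carriage return makes an exact-line match fail until it is stripped. A minimal illustration (my own example with a made-up test name, not the actual test harness):

```shell
#!/bin/bash
# A CRLF-terminated entry does not match an exact LF-based comparison
printf 'some_test\r\n' > oclperf.exclude

if grep -qx 'some_test' oclperf.exclude; then
    echo "excluded"
else
    echo "not excluded"   # this branch is taken: the line is really 'some_test\r'
fi

# Stripping the carriage returns fixes the lookup
sed -i 's/\r$//' oclperf.exclude
grep -qx 'some_test' oclperf.exclude && echo "excluded after fix"
```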

Nevertheless, the rocm-opencl-runtime tests passed on the Radeon RX 6700XT! That is good news, because I know many Gentoo users rely on this package to provide OpenCL for their computations, and correctness is vital. Previously we did not have src_test enabled. The PR is now at [6].

Other work included starting the wiki pages [3,4], refining rocm.eclass according to feedback (not much; see the gentoo-dev mailing list), and finding a bug in dev-util/hip: the FindHIP.cmake module is not in the correct place. A fix can be found at [5], but I need to polish the patch further before opening a PR.

If there are no further suggestions on rocm.eclass, I'll land it in ::gentoo next week, and start bumping the sci-libs versions already done locally.

[1] https://developer.nvidia.com/blog/egl-eye-opengl-visualization-without-x-server/
[2] https://github.com/RadeonOpenCompute/ROCm-OpenCL-Runtime/blob/bbdc87e08b322d349f82bdd7575c8ce94d31d276/tests/ocltst/module/common/OCLGLCommonLinux.cpp
[3] https://wiki.gentoo.org/wiki/ROCm
[4] https://wiki.gentoo.org/wiki/HIP
[5] https://github.com/littlewu2508/gentoo/tree/hip-correct-cmake
[6] https://github.com/gentoo/gentoo/pull/26870

Week 8 Report for Refining ROCm Packages in Gentoo

Gentoo Google Summer of Code (GSoC) September 11, 2022, 10:04

This week there was major progress on two fronts: dev-util/rocprofiler and rocm.eclass.

I have implemented all the functions I consider necessary for rocm.eclass. I have just sent the rocm.eclass draft to the gentoo-dev mailing list (also as a GitHub PR at [1]); please have a look. In the following weeks, I will collect feedback and continue to polish it.

In summary, I have implemented the functions listed in my proposal:
USE_EXPAND of amdgpu_targets_, and ROCM_USEDEP to keep the USE flags coherent among dependencies;
rocm_src_configure, which contains the common arguments for src_configure;
rocm_src_test, which checks the permissions on /dev/kfd and /dev/dri/render*.

There are also some things listed in the proposal that I decided not to implement for now:
rocm_src_prepare: although there are some similarities among ebuilds, src_prepare is highly customized to each ROCm component. Unifying them would take extra work.
SRC_URI: currently the SRC_URI is already specified in each ebuild. It does not hurt to keep the status quo.

Moreover, during implementation I found another feature to be necessary:
rocm_src_test correctly handles different scenarios. ROCm packages may have CMake tests, which can be run using cmake_src_test, or may only compile some test binaries which require execution from the command line. I made rocm_src_test detect the method automatically, so ROCm packages can just call this function directly without doing anything else.
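The device-access part of rocm_src_test can be pictured roughly as below. This is a simplified standalone sketch under my own naming (check_rocm_device_access is not the eclass's actual function); devices are passed as arguments here so the logic is easy to exercise:

```shell
#!/bin/bash
# Hypothetical sketch of the pre-test device check: ROCm userspace needs
# read/write access to the KFD node and the DRI render nodes, otherwise
# GPU tests fail with confusing errors rather than a clear message.
check_rocm_device_access() {
    local dev
    for dev in "$@"; do
        if [[ ! -r ${dev} || ! -w ${dev} ]]; then
            echo "no read/write access to ${dev}; check device permissions"
            return 1
        fi
    done
    echo "ok"
}

# In an ebuild this would be invoked with the real device nodes, e.g.:
# check_rocm_device_access /dev/kfd /dev/dri/renderD*
```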

Actually, I had never imagined rocm.eclass would end up in this shape. Initially I thought it would just provide some utilities, mainly src_test and USE_EXPAND, but while implementing it I found all these features require careful treatment. The comments (mainly examples) also take up half of the length. It ends up at 278 lines, which is middle-sized among current eclasses. Maybe it can be trimmed down further after polishing, because there could be awkward implementations or re-inventions in it.

Based on my draft rocm.eclass, I have prepared sci-libs/roc*-5.1.3, sci-libs/hip*-5.1.3 and dev-python/cupy ebuilds making use of it. It feels great to simplify the ebuilds, and Portage handles the USE_EXPAND and dependencies just as expected. Once rocm.eclass is in the tree, I'll push those ROCm-5.1.3 ebuilds.

Another thing to mention is that the ROCm-5.1.3 toolchain finally got merged [5], with the fixed dev-util/rocprofiler-{4.3.0,5.0.2,5.1.3}. rocprofiler was actually buggy before: I thought I had committed the patch which stripped the libhsa-amd-aqlprofile.so loading (I even claimed it in the commit message), but it was not committed and got lost in history, so I reproduced the patch. I also did some research on this proprietary library. Supposedly, not loading it means tracing HSA/HIP is not possible: you only get basic information like the name and time of each GPU kernel execution, but do not know the pipeline of kernel execution (which one has spawned which kernel). AQL should be the HSA Architected Queuing Language (HSA AQL), documented at https://llvm.org/docs/AMDGPUUsage.html#hsa-aql-queue, which does sound related to the pipeline of kernel dispatching; by its description, libhsa-amd-aqlprofile.so is an extension API for AQL profiling. But in practice, patching the source code so that rocprofiler does not load libhsa-amd-aqlprofile.so does not break the tracing of HSA/HIP. So I'm not sure why libhsa-amd-aqlprofile.so is needed, and raised a question at [2]. I completed the fix in [3,4].

According to the renewed proposal (I was away for two weeks, so there are changes in the plan), I should collect feedback and refine rocm.eclass, and prepare dev-python/cupy and sci-libs/rocWMMA. I'll investigate ROCgdb, too. Also, rocm-device-libs is a major package because many users rely on it to provide OpenCL, so I'll work on bumping its version as well. What's more, with hip-5.1.3 built against vanilla clang, ROCm support for Blender can land in ::gentoo.

[1] https://github.com/gentoo/gentoo/pull/26784
[2] https://github.com/RadeonOpenCompute/ROCm/issues/1781
[3] https://github.com/gentoo/gentoo/pull/26755
[4] https://github.com/gentoo/gentoo/pull/26771
[5] https://github.com/gentoo/gentoo/pull/26441

September 05 2022

Week 12 Report for RISC-V Support for Gentoo Prefix

Gentoo Google Summer of Code (GSoC) September 05, 2022, 9:34

Hello all,
Hope you are all doing well; this is my report for the 12th week of my Google Summer of Code project.

I got documentation on Porting Prefix reviewed and I have added the suggested changes.

My GSoC deliverables have been completed, so I played around with the compatibility layer and Ansible. I synced the latest changes to the bootstrap script from upstream and used it for installing Prefix, and I am working on updating main.yml [1] accordingly. The process has been smooth so far; within the next few weeks we might have a working compatibility layer for RISC-V.

I will start working on the final report and update the blogs on the Gentoo blog site. Although the official period is over, I will continue working on the compatibility layer, and there are a few other things, like pkgcraft, on my bucket list that I will get my hands on.

The 12 weeks of GSoC have been super fun, thanks to mentors and the community.

[1] https://github.com/EESSI/compatibility-layer/blob/main/ansible/playbooks/roles/compatibility_layer/defaults/main.yml

Regards,
wiredhikari

September 04 2022

Gentoo musl Support Expansion for Qt/KDE Week 12

Gentoo Google Summer of Code (GSoC) September 04, 2022, 22:41

This week has mostly been spent writing documentation and fixing up some leftover things.

I started by looking over the *-standalone libraries. It turns out that tree.h is provided by libbsd, and because libbsd works just fine on musl I removed the standalone. The second thing I did was remove error.h, because it caused issues with some builds, and we suspect it works on Void Linux because they build packages inside a clean chroot (without error.h). The only one left is now cdefs.h. This header is an internal glibc header, and using it is basically a bug, so upstreaming fixes should be very easy. Therefore I feel this doesn't need to be added either, so I closed the pull request for now.

Next I rewrote Sam's musl porting notes, moving them from his personal page to a “real” wiki page (https://wiki.gentoo.org/wiki/Musl_porting_notes). It's now more like a proper wiki page and less like a list of errors with attached fixes. I've also added several things of my own to it.

Another wiki page I've added material to is Chroot (https://wiki.gentoo.org/wiki/Chroot#Sound_and_graphics). In my GSoC planning I wanted to write documentation about using Gentoo musl, including information on how to work around glibc programs that do not work on musl, e.g. proprietary programs. Instead, I wrote documentation on running graphical applications with sound into the Chroot article, as it helps every Gentoo user. I don't think Gentoo musl users will have any issues finding the Chroot wiki page. 🙂

I have also tested gettext-tiny on Gentoo musl. This is a smaller implementation of gettext with some functionality stubbed out. gettext-tiny is built for musl and makes use of the libintl provided by musl. For users that only want English this makes a lot of sense, because it is much smaller than gettext but still allows most packages to be built. When replacing gettext, Portage complained about two packages using uninstalled libraries from GNU gettext: bison and poxml. When re-emerging bison it errored out, and I was sure it was because of gettext, but after debugging bison I found it was caused by error-standalone. After unmerging error-standalone, bison detected that the library was not installed and compiled correctly. poxml, on the other hand, hard-depends on libgettextpo, a library not provided by gettext-tiny. Running “equery d -a poxml”, however, we can see that nothing important actually depends on poxml, so gettext-tiny should for the most part be fine.

$ equery d -a poxml
* These packages depend on poxml:
kde-apps/kdesdk-meta-22.04.3-r1 (>=kde-apps/poxml-22.04.3:5)
kde-apps/kdesdk-meta-22.08.0 (>=kde-apps/poxml-22.08.0:5)

Next week I will write my final evaluation and then I am done with GSoC! I will, however, continue working on things like ebuildshell and crossdev when I have time.

August 29 2022

Gentoo musl Support Expansion for Qt/KDE Week 11

Gentoo Google Summer of Code (GSoC) August 29, 2022, 21:32

This week has mostly been dedicated to fixing old and harder problems that I had previously put off. I spent a whole lot of time learning the AccountsService codebase and setting up systems with LDAP authentication, but it turned out it didn't need a rewrite after I read a couple of issues on the GitLab page; more on that later.

To start with, I added a CMAKE_SKIP_TESTS variable to cmake.eclass. Currently you need to specify skipped tests by doing myctestargs=( -E '(test1|test2|test3)' ). This works fine for the most part, but if you need to specify skipped tests multiple times it gets really messy, because ctest does not allow you to pass -E multiple times. Personally I ran into this when fixing tests for kde-apps/okular. Most tests for Okular only pass when it's installed (#653618), but the ebuild already skips some tests for other reasons, so I needed to first disable some tests unconditionally, and then conditionally with "has_version ${P} || append tests". To solve it I introduced an array and then parsed it with myctestargs+=( -E '('$( IFS='|'; echo "${CMAKE_SKIP_TESTS[*]}")')' ), but as this was useful for a lot more ebuilds than just Okular, I decided to implement it in the eclass.
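The array-joining trick can be seen in isolation below; a standalone sketch of the same IFS-based join with made-up test names (the variable skip_regex is my own name for the intermediate result):

```shell
#!/bin/bash
# Join an array of test names into a single ctest -E regex, since ctest
# accepts only one -E argument. IFS is changed inside the command
# substitution's subshell, so the caller's IFS is untouched.
CMAKE_SKIP_TESTS=( mytest1 mytest2 mytest3 )   # hypothetical test names

skip_regex='('$( IFS='|'; echo "${CMAKE_SKIP_TESTS[*]}" )')'
echo "${skip_regex}"   # prints: (mytest1|mytest2|mytest3)

# This would then be passed along as: ctest -E "${skip_regex}"
```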

The second thing I worked on was AccountsService, a daemon that retrieves a list of users on the system and presents them over a DBus interface. It's used for showing users in things like login managers and account settings panels. I actually worked on this a long time ago, but put it off for a while because it seemed to require a bigger rewrite, and I had more important things to do back then.
AccountsService has two issues on musl: it uses the glibc function fgetspent_r, and wtmp, which is only implemented as stubs in musl (https://wiki.musl-libc.org/faq.html#Q:-Why-is-the-utmp/wtmp-functionality-only-implemented-as-stubs?). I asked in #musl to figure out an fgetspent_r replacement, but we then discussed why it is bad to enumerate /etc/passwd to get a list of users; for example, it does not respect non-local (LDAP/NIS) users. So AS needed a bigger rewrite, we thought :).
So I started by setting up two virtual machines, one LDAP client and one server. Having never used LDAP before, this was a little hard, but I got it working. I also needed to set up QEMU networking so that my VMs could connect to each other, and I set up an LDAP web UI called ldap-ui so I could easily get an overview of my LDAP instance. Because AS works by providing a DBus interface, I also learned to use the qdbusviewer and dbus-send tools. Before taking a deep dive into the AS source code, I wrote some small test programs to get comfortable with the DBus C API, the passwd and shadow database functions, and GLib.
I then started reading the AccountsService source code to understand it better. Its main.c just sets up a daemon that's mostly defined in daemon.c; the rest of the source files are mostly helpers and wrappers. When the daemon initializes, it sets up user enumerators using the entry_generator_* functions. The main one is entry_generator_fgetpwent; this generator uses fgetspent_r to enumerate /etc/passwd, and my idea was to replace it with getpwent + getspnam. But there are two other generators, requested_users and cachedir. requested_users takes a requested user (e.g. when manually entering a username and password in the login manager) and adds it to /var/lib/AccountsService/users; cachedir looks at that directory and adds those entries to the daemon. It turns out that requesting a non-local LDAP user with the requested_users generator works completely fine, and the login information will be cached in the directory so that the cachedir generator can expose it for future logins. I then looked at some issues in the AccountsService GitLab, and it turns out that enumerating /etc/passwd was intentional, for example to avoid blowing up the login screen with thousands of users on a big LDAP domain. So the rewrite was sadly not needed, but I learned a lot! Still, fixing fgetspent_r and wtmp needs to get done, but I already have a fix for that.

Another thing I spent a lot of time on this week was poxml. This is also an old issue that I had put off, mostly because it was too hard at the time. The build crashes because it can't find the function gl_get_setlocale_null_lock in libgettextpo.so. This shared object belongs to GNU gettext, so something was wrong there. Looking at it with nm --dynamic /usr/lib/libgettextpo.so, I could see that the function was undefined; bad! We reported this issue upstream and got into a long conversation. Apparently Bruno (GNU) used Alpine Linux, which packages GNU libintl, while Gentoo uses the musl libintl implementation. GNU libintl actually provides gl_get_setlocale_null_lock, which explains why it worked on Alpine without issue. After grepping for gl_get_setlocale_null_lock I found this:
/* When it is known that the gl_get_setlocale_null_lock function is defined by a dependency library, it should not be defined here. */
#if OMIT_SETLOCALE_LOCK
*do nothing*
#else
*define gl_get_setlocale_null_lock*
#endif

So I tried just forcing the check to false, and it worked! I then looked at the build system and expected something like AC_SEARCH_LIBS([gl_get_setlocale_null_lock], [intl], ...) *set OMIT_SETLOCALE_LOCK*, but it turns out that autotools just forces OMIT_SETLOCALE_LOCK to 1. This is clearly wrong, so I sent another comment upstream and temporarily fixed it in the Gentoo tree. Instead of doing it properly I made an ugly hack, to avoid getting complacent (Sam's idea), and hopefully we can get it resolved upstream instead :D.

To summarize, I feel like this week has gone pretty well. I've solved everything that was left, and now I'm ready to start writing a lot of documentation. A lot of the AccountsService setup and work was ultimately unnecessary, but I still learned a lot.

This week has mostly been dedicated to fixing old, and harder problems that I had previously put off. I spent a whole lot of time learning about the AccountsService codebase and setting up systems with LDAP authentication, but it turned out it didn’t need a rewrite after reading a couple of issues on the GitLab page, more on that later.

To start with I added a CMAKE_SKIP_TESTS variable to cmake.eclass. Currently you need to specify skipped tests by doing myctestargs=( -E ‘(test1|test2|test3)’ ). This works fine for the most part, but if you need to specify skipped tests multiple times it gets really messy, because ctest does not allow you to pass -E multiple times. Personally I ran into this when fixing tests for kde-apps/okular. Most tests for Okular only pass when it’s installed (#653618), but the ebuild already skips some tests for other reasons. So I needed to first unconditionally disable some tests, and then conditionally with “has_version ${P} || append tests”. To solve it I introduced an array and then parsed it with myctestargs+=( -E '('$( IFS='|'; echo "${CMAKE_SKIP_TESTS[*]}")')' ), but as this was useful for a lot more ebuilds than just Okular I decided to implement it in the eclass.

The second thing I worked on was AccountsService, it’s a daemon that retrieves a list of users on the system and presents them with a DBus interface. It’s used for showing users in things like login managers and accounts settings panels. I actually worked on this a long time ago but I put it off for a while because it required a bigger rewrite, and I had more important things to do back then.
AccountsService has two issues on musl. It uses the glibc function fgetspent_r, and wtmp which is just implemented as stubs in musl (https://wiki.musl-libc.org/faq.html#Q:-Why-is-the-utmp/wtmp-functionality-only-implemented-as-stubs?). I asked in #musl to figure out a fgetspent_r replacement, but we then discussed why it was bad to enumerate /etc/passwd to get a list of users, for example it does not respect non-local (LDAP/NIS users), so AS needed a bigger rewrite, we thought :).
So I started with setting up two virtual machines, one LDAP client, and one server. Having never used LDAP before this was a little hard but I got it working. I also needed to set up QEMU networking so that my VMs could connect to each other, and I also set up an LDAP webui called ldap-ui so I could easily get an overview of my LDAP instance. Because AS works by providing a DBus interface I also learned using the qdbusviewer and dbus-send tools. Before taking a deep dive into the AS source code I wrote some small test programs to get comfortable with the DBus C API, passwd+shadow database functions, and GLib.
I then started reading the AccountsService source code to understand it better, its main.c just sets up a daemon that’s mostly defined in daemon.c, the rest of the source files are mostly just helpers and wrappers. When the daemon initializes it sets up user enumerators using the entry_generator_* functions. The main one is entry_generator_fgetpwent, this generator uses fgetspent_r to enumerate /etc/passwd, and my idea was to replace it with getpwent + getspnam. But there are two other generators, requested_users and cachedir. requested_users takes a requested user (ex. when manually entering username+password in the login manager), and adds it into /var/lib/AccountsService/users. cachedir looks at that directory and adds these entries into the daemon. It turns out that requesting a non-local LDAP user with the requested_users generator is completely fine, and the login information will be cached in the dir so that the cachedir generator can expose it for future logins. I then looked at some issues in the AccountsService GitLab, and it turns out that enumerating /etc/passwd was intentional to not blow up the login screen with thousands of users on a big LDAP domain for example. So, the rewrite was sadly not needed, but I learned a lot! Still, fixing fgetspent_r and wtmp needs to get done, but I already have a fix for that.

Another thing I spent a lot of time on this week was poxml. This is also an old issue that I put off, mostly because it was too hard at the time. The build crashes because it can't find the function gl_get_setlocale_null_lock in libgettextpo.so. This shared object belongs to GNU gettext, so I figured something was wrong with that. Looking at the shared object with nm --dynamic /usr/lib/libgettextpo.so I could see that the function was undefined. Bad! We reported this issue to upstream and got into a long conversation. Apparently Bruno (GNU) used Alpine Linux, which packages GNU libintl, while Gentoo uses the musl libintl implementation. GNU libintl actually provides gl_get_setlocale_null_lock, which explains why it worked on Alpine without issue. After grepping for gl_get_setlocale_null_lock I found this:
/* When it is known that the gl_get_setlocale_null_lock function is
   defined by a dependency library, it should not be defined here. */
#if OMIT_SETLOCALE_LOCK
/* do nothing */
#else
/* define gl_get_setlocale_null_lock */
#endif

So I tried just forcing the check to false, and it worked! I then looked at the build system, expecting something like AC_SEARCH_LIBS([gl_get_setlocale_null_lock], [intl], ...) to set OMIT_SETLOCALE_LOCK, but it turns out that autotools just forces OMIT_SETLOCALE_LOCK to 1. This is clearly wrong, so I sent another comment upstream and temporarily fixed it in the Gentoo tree. Instead of doing it properly I made an ugly hack so we don't get complacent (Sam's idea), and hopefully we can get it resolved upstream instead :D.
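As an aside, the undefined-symbol check done above with nm --dynamic can also be approximated at runtime: if a symbol was left undefined, looking it up through the dynamic linker simply fails. A small Python sketch using ctypes (libc's setlocale stands in for a symbol that is certain to exist; this is not what was done here, just an alternative way to check):

```python
import ctypes

# Load the symbols already visible to this process (includes libc).
libc = ctypes.CDLL(None)

def symbol_defined(lib, name):
    """Return True if `name` resolves in `lib`, i.e. the symbol is
    actually defined rather than left undefined the way
    gl_get_setlocale_null_lock was in Gentoo's libgettextpo.so."""
    try:
        getattr(lib, name)
        return True
    except AttributeError:
        return False

print(symbol_defined(libc, "setlocale"))              # True
print(symbol_defined(libc, "gl_no_such_symbol_xyz"))  # False
```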

To summarize, I feel like this week has gone pretty well. I've solved everything that was left, and now I'm ready to start writing a lot of documentation. A lot of the AccountsService setup and work was ultimately unnecessary, but I still learned a lot.

August 28 2022

Week 11 Report for RISC-V Support for Gentoo Prefix

Gentoo Google Summer of Code (GSoC) August 28, 2022, 9:32

Hello all,

Hope everyone is fine. This is my report for the 11th week of my GSoC project. This week I worked on documentation, closed dangling PRs, and looked into bootstrapping the EESSI compat layer for RISC-V. I spent some of my time learning Ansible as a part of the process.

The documentation[1] is almost complete; I will work on my mentors' feedback, pass it through some review tools, and fix it accordingly. In the upcoming week I will look into the EESSI compat layer for RISC-V and a blog post for the end-term evaluations.

[1] https://github.com/wiredhikari/prefix_on_riscv/blob/main/docs/porting.md

Regards,

wiredhikari


August 06 2022

Pkgcheck-Flycheck

Maciej Barć (xgqt) August 06, 2022, 0:00
News

Repository

With this commit the first GNU Emacs integration was merged into the pkgcheck repository.

History
  • github.com/pkgcore/pkgcheck/issues/417
  • github.com/pkgcore/pkgcheck/pull/420
  • github.com/gentoo/gentoo/pull/26700
Thanks

Huge thanks to Sam James and Arthur Zamarin for support and interest in getting this feature done.

Installation

Unmasking

The Flycheck integration is unreleased as of now; this will (hopefully) change in the future, but for now You need live versions of snakeoil, pkgcore and pkgcheck.

File: /etc/portage/package.accept_keywords/pkgcore.conf

dev-python/snakeoil  **
sys-apps/pkgcore     **
dev-util/pkgcheck    **

Also You will need to unmask app-emacs/flycheck and its dependencies.

File: /etc/portage/package.accept_keywords/emacs.conf

app-emacs/epl
app-emacs/pkg-info
app-emacs/flycheck
Emerging

Install pkgcheck with the emacs USE flag enabled.

File: /etc/portage/package.use/pkgcore.conf

dev-util/pkgcheck    emacs

Afterwards run:

emerge -1av dev-python/snakeoil sys-apps/pkgcore dev-util/pkgcheck
emerge -av --noreplace dev-util/pkgcheck
Configuration

The following is what I would suggest putting into your Emacs config file:

(require 'ebuild-mode)
(require 'flycheck)
(require 'flycheck-pkgcheck)

(setq flycheck-pkgcheck-enable t)

(add-hook 'ebuild-mode-hook 'flycheck-mode)
(add-hook 'ebuild-mode-hook 'flycheck-pkgcheck-setup)

If You are using use-package:

(use-package flycheck
  :ensure nil)

(use-package ebuild-mode
  :ensure nil
  :hook ((ebuild-mode . flycheck-mode)))

(use-package flycheck-pkgcheck
  :ensure nil
  :custom ((flycheck-pkgcheck-enable t))
  :hook ((ebuild-mode . flycheck-pkgcheck-setup)))

The lines with :ensure nil are there to prevent use-package from trying to download the given package from ELPA (because we use system packages in this configuration).

June 29 2022

Binding the World

Tim Harder (pkgcraft) June 29, 2022, 18:29

One of Gentoo’s major weaknesses is the lack of a shared implementation that natively supports bindings to other languages for core, specification-level features such as dependency format parsing. Due to this deficiency, over the years I’ve seen the same algorithms implemented in Python, C, Bash, Go, and more at varying levels of success.

Now, I must note that I don’t mean to disparage these efforts especially when done for fun or to learn a new language; however, it often seems they end up in tools or services used by the wider community. Then as the specification slowly evolves and authors move on, developers are stuck maintaining multiple implementations if they want to keep the related tools or services relevant.

In an ideal world, the canonical implementation for a core feature set is written in a language that can be easily bound by other languages offering developers the choice to reuse this support without having to write their own. To exhibit this possibility, one of pkgcraft’s goals is to act as a core library supporting language bindings.

Design

Interfacing rust code with another language often requires a C wrapper library to perform efficiently while sidestepping rust’s lifetime model that clashes with ownership-based languages. Bindings build on top of this C layer, allowing ignorance of the rust underneath.

For pkgcraft, this C library is provided via pkgcraft-c, currently wrapping pkgcraft’s core depspec functionality (package atoms) in addition to providing the initial interface for config, repo, and package interactions.

For some languages it’s also possible to develop bindings or support directly in rust. There are a decent number of currently evolving, language-specific projects that allow non-rust language development including pyo3 for python, rutie for ruby, neon for Node.js, and others. These projects generally wrap the unsafe C layer internally, allowing for simpler development. Generally speaking, I recommend going this route if performance levels and project goals can be met.

Originally, pkgcraft used pyo3 for its python bindings. If one is familiar with rust and python, the development experience is relatively pleasant and allows simpler builds using maturin rather than the pile of technical debt that distutils, setuptools, and their extensions provide when trying to do anything outside the ordinary.

However, pyo3 has a couple of currently unresolved issues that led me to abandon it. First, the speed of its class instantiation is slower than the native python implementation, even for simple classes. It should be noted this is only important if your design involves creating thousands of native object instances at the python level. It's often preferable to avoid this overhead by exposing functionality to interact with large groups of rust objects. In addition, for most developers coming from native python the performance hit won't be overly noticeable. In any case, class instantiation overhead will probably decrease as the project matures and more work is done on optimization.

More importantly, pyo3 does not support exposing any object that contains fields using explicit lifetimes. This means any struct that contains borrowed fields can’t be directly exported due to the clashes between the memory models and ownership designs of rust and python. It’s quite possible to work around this, but that often means copying data in order for the python side to obtain ownership or redesigning the data structures used on the rust side. Whether this is acceptable will depend on how large the performance hit is or how much work the redesign takes.

For my part, having experience writing native extensions using the CPython API as well as cython, the workarounds necessary to avoid exposing borrowed objects weren't worth the effort, especially because pkgcraft requires a C API anyway to support C itself and languages lacking compatibility-layer projects. Thus I rewrote pkgcraft's python bindings using cython instead, which immediately raised performance to near the levels I was initially expecting; however, the downside is quite apparent, since the bindings have to manually handle all the type conversions and resource deallocation while calling through the C wrapper. It's a decent amount more work, but I think the performance benefits are worth it.
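The manual bookkeeping such bindings have to do can be illustrated with ctypes rather than cython; here libc's strdup/free merely stand in for pkgcraft-c's allocation and free pairs (a hypothetical mapping, not the real pkgcraft-c API):

```python
import ctypes

# Load the symbols visible to this process (includes libc).
libc = ctypes.CDLL(None)
# Declare the C signatures explicitly so pointers aren't truncated.
libc.strdup.restype = ctypes.c_void_p
libc.strdup.argtypes = [ctypes.c_char_p]
libc.free.argtypes = [ctypes.c_void_p]

def roundtrip(s: str) -> str:
    """Convert to C, call through the C layer, convert back, and
    explicitly free the C-side allocation -- the same per-call
    chores hand-written bindings over a C wrapper must handle."""
    ptr = libc.strdup(s.encode())
    try:
        return ctypes.cast(ptr, ctypes.c_char_p).value.decode()
    finally:
        libc.free(ptr)  # every C allocation must be released manually

print(roundtrip("cat/pkg-1.0"))
```

Every exported object needs this kind of paired allocate/convert/free dance, which is exactly the extra work (and the extra control) the cython rewrite traded for its performance gains.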

Development

First, the tools for building the code should be installed. This includes a recent rust compiler and C compiler. I leave it up to the reader to make use of rustup and/or their distro’s package manager to install the required build tools (and others such as git that are implied).

Next, the code must be pulled down. The easiest way to do this is to recursively clone pkgcraft-workspace which should include semi-recent submodules for all pkgcraft projects:

git clone --recurse-submodules https://github.com/pkgcraft/pkgcraft-workspace.git
cd pkgcraft-workspace

From this workspace, pkgcraft-c can be built and various shell variables set in order to build python bindings via the following command:

$ source ./build pkgcraft-c

This builds pkgcraft into a shared library that is exposed to the python build via setting $LD_LIBRARY_PATH and $PKG_CONFIG_PATH. Once that completes the python bindings can be built and tested via tox:

$ cd pkgcraft-python
$ tox -e python

When developing bindings built on top of a C library, it's wise to run the same test suite under valgrind to look for seemingly inevitable memory leaks, exacerbated by rust requiring all allocations to be returned to it in order to be freed safely, since it historically didn't use the system allocator. For pkgcraft, this is provided via another tox target:

$ tox -e valgrind

If you're familiar with valgrind, we mainly care about the definitely and indirectly lost categories of memory leaks; the other types relate to global objects or caches that aren't explicitly deallocated on exit. The valgrind target for tox should error out if any memory leaks are detected, so if it completes successfully, no leaks were found.

Benchmarking vs pkgcore and portage

Stepping away from regular development towards more interesting data, pkgcraft provides rough processing and memory benchmark suites in order to compare its nascent python bindings with pkgcore and portage. Currently these only focus on atom object instantiation, but may be extended to include other functionality if the API access isn’t too painful for pkgcore and/or portage.

To run the processing time benchmarks that use pytest-benchmark:

$ tox -e bench

For a summary of benchmark results only including the mean and standard deviation:

----------------- benchmark 'test_bench_atom_random': 4 tests ------------------
Name (time in us) Mean StdDev
--------------------------------------------------------------------------------
test_bench_atom_random[pkgcraft-Atom]  4.5395 (1.0)  0.3722 (1.0) 
test_bench_atom_random[pkgcraft-cached]  6.2360 (1.37)  1.3386 (3.60) 
test_bench_atom_random[pkgcore-atom]  30.9767 (6.82)  1.1428 (3.07) 
test_bench_atom_random[portage-Atom]  50.2636 (11.07)  19.7562 (53.07) 
--------------------------------------------------------------------------------
--------------------- benchmark 'test_bench_atom_static': 4 tests ----------------------
Name (time in ns) Mean StdDev
----------------------------------------------------------------------------------------
test_bench_atom_static[pkgcraft-cached]  217.2820 (1.0)  5.9821 (1.0) 
test_bench_atom_static[pkgcraft-Atom]  725.2229 (3.34)  41.6775 (6.97) 
test_bench_atom_static[pkgcore-atom]  28,331.4369 (130.39)  942.0003 (157.47) 
test_bench_atom_static[portage-Atom]  33,794.6625 (155.53)  14,358.8390 (>1000.0)
----------------------------------------------------------------------------------------
----------------- benchmark 'test_bench_atom_sorting_best_case': 2 tests ----------------
Name (time in us) Mean StdDev
-----------------------------------------------------------------------------------------
test_bench_atom_sorting_best_case[pkgcraft-Atom]  6.1195 (1.0)  0.2011 (1.0) 
test_bench_atom_sorting_best_case[pkgcore-atom]  936.9403 (153.11)  5.5534 (27.61) 
-----------------------------------------------------------------------------------------
---------------- benchmark 'test_bench_atom_sorting_worst_case': 2 tests -----------------
Name (time in us) Mean StdDev
------------------------------------------------------------------------------------------
test_bench_atom_sorting_worst_case[pkgcraft-Atom]  6.2702 (1.0)  0.3301 (1.0) 
test_bench_atom_sorting_worst_case[pkgcore-atom]  924.1410 (147.39)  6.9942 (21.19) 
------------------------------------------------------------------------------------------

As seen above, pkgcraft is able to instantiate atom objects about 5-6x faster than pkgcore and about 10x faster than portage. For static atoms when using the cached implementation this increases to about 150x faster, meaning portage should look into using an LRU cache for directly created atom objects. With respect to pkgcore’s static result, it also appears to not use caching; however, it does support atom instance caching internally so the benchmark is avoiding that somehow.
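The LRU-cache suggestion is cheap to prototype; in Python it is one decorator over the parse step. The atom grammar below is a deliberately tiny toy, not portage's or pkgcraft's real parser:

```python
from functools import lru_cache
import re

# Toy atom grammar: optional version operator, category/package,
# optional trailing version. Illustrative only.
_ATOM_RE = re.compile(
    r"(?P<op>[<>=~]+)?(?P<cat>[\w+-]+)/(?P<pkg>[\w+-]+?)"
    r"(?:-(?P<ver>\d[\w.]*))?$")

@lru_cache(maxsize=10_000)
def parse_atom(s):
    """Parse an atom string; repeated strings return the cached result
    instead of reparsing."""
    m = _ATOM_RE.match(s)
    if m is None:
        raise ValueError(f"invalid atom: {s}")
    return m.groupdict()

a = parse_atom(">=dev-util/pkgcheck-0.10")
b = parse_atom(">=dev-util/pkgcheck-0.10")
assert a is b  # second call was a cache hit, no reparse
```

With mostly repeated inputs, as in the static-atom benchmark, every call after the first skips parsing entirely, which is where the roughly 150x cached speedup above comes from.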

When comparing sorting, pkgcraft is well over two orders of magnitude ahead of pkgcore, and I imagine portage would fare even worse, but it doesn't natively support atom object comparisons, so it isn't included here.

Beyond processing time it's often useful to track memory use, especially for languages such as python that are designed more for ease of development than memory efficiency. There are a number of techniques to track memory use, such as guppy3, but they often work with native python objects, ignoring or misrepresenting allocations done in underlying implementations. Instead, pkgcraft includes a simple script that creates a list of a million objects for three different atom types while tracking elapsed time and overall memory use (using resident set size) in separate processes.
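A single-process simplification of that RSS tracking might look like the following (pkgcraft's actual script runs each implementation in its own process, which this sketch does not do):

```python
import resource
import sys

def max_rss_mb():
    """Peak resident set size of this process, in MB.
    ru_maxrss is reported in KB on Linux and in bytes on macOS."""
    rss = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
    if sys.platform == "darwin":
        rss //= 1024
    return rss / 1024.0

before = max_rss_mb()
# Allocate a pile of objects, standing in for a million atom objects.
atoms = ["cat/pkg-%d" % i for i in range(200_000)]
after = max_rss_mb()
print(f"{after - before:.1f} MB for {len(atoms)} strings")
```

Because RSS measures the whole process, implementations must be measured in separate, fresh processes to compare fairly, which is why the real benchmark script does so.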

To run the memory benchmarks use:

1
$ tox -e membench

Which produces output similar to:

Static atoms (1000000)
----------------------------------------------
implementation memory (elapsed)
----------------------------------------------
pkgcraft 474.2 MB (0.94s)
pkgcraft-cached 8.7 MB (0.27s)
pkgcore 8.4 MB (1.12s)
portage 795.5 MB (10.62s)
Dynamic atoms (1000000)
----------------------------------------------
implementation memory (elapsed)
----------------------------------------------
pkgcraft 955.2 MB (2.93s)
pkgcraft-cached 957.9 MB (3.56s)
pkgcore 1.3 GB (31.01s)
portage 4.0 GB (56.22s)
Random atoms (1000000)
----------------------------------------------
implementation memory (elapsed)
----------------------------------------------
pkgcraft 945.4 MB (3.75s)
pkgcraft-cached 21.3 MB (1.30s)
pkgcore 20.9 MB (2.67s)
portage 3.6 GB (46.77s)

For static atoms, note that pkgcraft-cached and pkgcore’s memory usage is quite close with pkgcore slightly edging ahead due to the extra data pkgcraft stores to speed up comparisons. Another point of interest is that the uncached implementation still beats pkgcore in processing time. This is because the underlying rust implementation has its own cache allowing it to skip unnecessary parsing, leaving the majority of overhead from cython’s object instantiation. Portage is last by a large margin since it doesn’t directly cache atom objects.

Every dynamic atom is different making caching irrelevant so no implementation has a substantial memory usage edge. Without cache speedups, the uncached pkgcraft implementation is the fastest as it has the least overhead. Pkgcore’s memory usage is comparatively respectable, but uses about an order of magnitude more processing time for parsing and instantiation. Portage is again last by an increased margin and appears to perform inefficiently when storing more complex atoms.

Finally, random atoms try to model more closely what is found across the tree in terms of cache hits. As the results show, using cached implementations is probably a good idea for large sets of atoms with occasional overlap, in order to save both processing time and memory usage; otherwise, both attributes suffer, as seen from portage's uncached implementation results.

Looking to the future

From the rough benchmarks above, it seems apparent both pkgcore and portage could decrease their overall processing time and/or memory usage by moving to using package atom support from pkgcraft python bindings. While I’m unsure how much of a performance difference it would make, it should at least be noticeably worthwhile when processing large amounts of data, e.g. scanning the entire tree with pkgcheck or sorting atoms during large dependency resolutions.

It’s also clear that using cython’s extension types and C support on top of rust code yield relatively sizeable wins over native python code. From my perspective, it seems worthwhile to implement all core functionality in a similar fashion for projects that last decades like portage already has. The downside of implementing support in a more difficult language should decrease the longer a project remains viable.

In terms of feasibility, it’s probably easier to inject the pkgcraft bindings into portage since its atom support subclasses string objects while pkgcore’s subclasses an internal restriction class, but both should be possible with some redesign. Realistically speaking, neither is likely to occur because both projects lack maintainers with the required combination of skill, time, and interest to perform the rework. In addition, currently doing so in a non-optional fashion would generally restrict projects to fewer targets due to rust’s lack of support for older architectures, but this downside may be somewhat resolved if a viable GCC rust implementation is released in the future.

Other than python, pkgcraft has more basic support available for go supporting package atom and version object interactions. As the core library gains more features, I’ll try to keep working on exposing the same functionality via bindings since I think initial interactions with pkgcraft may be easiest when leveraging it for data processing from scripting languages.

One of Gentoo’s major weaknesses is the lack of a shared implementation that natively supports bindings to other languages for core, specification-level features such as dependency format parsing. Due to this deficiency, over the years I’ve seen the same algorithms implemented in Python, C, Bash, Go, and more at varying levels of success.

Now, I must note that I don’t mean to disparage these efforts especially when done for fun or to learn a new language; however, it often seems they end up in tools or services used by the wider community. Then as the specification slowly evolves and authors move on, developers are stuck maintaining multiple implementations if they want to keep the related tools or services relevant.

In an ideal world, the canonical implementation for a core feature set is written in a language that can be easily bound by other languages offering developers the choice to reuse this support without having to write their own. To exhibit this possibility, one of pkgcraft’s goals is to act as a core library supporting language bindings.

Design

Interfacing rust code with another language often requires a C wrapper library to perform efficiently while sidestepping rust’s lifetime model that clashes with ownership-based languages. Bindings build on top of this C layer, allowing ignorance of the rust underneath.

For pkgcraft, this C library is provided via pkgcraft-c, currently wrapping pkgcraft’s core depspec functionality (package atoms) in addition to providing the initial interface for config, repo, and package interactions.

For some languages it’s also possible to develop bindings or support directly in rust. There are a decent number of currently evolving, language-specific projects that allow non-rust language development including pyo3 for python, rutie for ruby, neon for Node.js, and others. These projects generally wrap the unsafe C layer internally, allowing for simpler development. Generally speaking, I recommend going this route if performance levels and project goals can be met.

Originally, pkgcraft used pyo3 for its python bindings. If one is familiar with rust and python, the development experience is relatively pleasant and allows simpler builds using maturin rather then the pile of technical debt that distutils, setuptools, and its extensions provide when trying to do anything outside the ordinary.

However, pyo3 has a couple, currently unresolved issues that lead me to abandon it. First, the speed of its class instantiation is slower than then native python implementation, even for simple classes. It should be noted this is only important if your design involves creating thousands of native object instances at a python level. It’s often preferable to avoid this overhead by exposing functionality to interact with large groups of rust objects. In addition, for most developers coming from native python the performance hit won’t be overly noticeable. In any case, class instantiation overhead will probably decrease as the project matures and more work is done on optimization.

More importantly, pyo3 does not support exposing any object that contains fields using explicit lifetimes. This means any struct that contains borrowed fields can’t be directly exported due to the clashes between the memory models and ownership designs of rust and python. It’s quite possible to work around this, but that often means copying data in order for the python side to obtain ownership or redesigning the data structures used on the rust side. Whether this is acceptable will depend on how large the performance hit is or how much work the redesign takes.

For my part, having experience writing native extensions using the CPython API as well as cython, the workarounds necessary to avoid exposing borrowed objects weren’t worth the effort, especially because pkgcraft requires a C API anyway to support C itself and languages lacking compatibility layer projects. Thus I rewrote pkgcraft’s python bindings using cython instead which immediately raised performance near to levels I was initially expecting; however, the downside is quite apparent since the bindings have to manually handle all the type conversions and resource deallocation while calling through the C wrapper. It’s a decent amount more work, but I think the performance benefits are worth it.

Development

First, the tools for building the code should be installed. This includes a recent rust compiler and C compiler. I leave it up to the reader to make use of rustup and/or their distro’s package manager to install the required build tools (and others such as git that are implied).

Next, the code must be pulled down. The easiest way to do this is to recursively clone pkgcraft-workspace which should include semi-recent submodules for all pkgcraft projects:

1
2
git clone --recurse-submodules https://github.com/pkgcraft/pkgcraft-workspace.git
cd pkgcraft-workspace

From this workspace, pkgcraft-c can be built and various shell variables set in order to build python bindings via the following command:

1
$ source ./build pkgcraft-c

This builds pkgcraft into a shared library that is exposed to the python build via setting $LD_LIBRARY_PATH and $PKG_CONFIG_PATH. Once that completes the python bindings can be built and tested via tox:

1
2
$ cd pkgcraft-python
$ tox -e python

When developing bindings built on top of a C library it’s wise to run the same testsuite under valgrind looking for seemingly inevitable memory leaks, exacerbated by rust requiring all allocations to be returned in order to be freed safely since it historically didn’t use the system allocator. For pkgcraft, this is provided via another tox target:

1
$ tox -e valgrind

If you’re familiar with valgrind, we mainly care about the definitely and indirectly lost categories of memory leaks, the other types relate to global objects or caches that aren’t explicitly deallocated on exit. The valgrind target for tox should error out if any memory leaks are detected so if it completes successfully no leaks were detected.

Benchmarking vs pkgcore and portage

Stepping away from regular development towards more interesting data, pkgcraft provides rough processing and memory benchmark suites in order to compare its nascent python bindings with pkgcore and portage. Currently these only focus on atom object instantiation, but may be extended to include other functionality if the API access isn’t too painful for pkgcore and/or portage.

To run the processing time benchmarks that use pytest-benchmark:

1
$ tox -e bench

For a summary of benchmark results only including the mean and standard deviation:

----------------- benchmark 'test_bench_atom_random': 4 tests ------------------
Name (time in us) Mean StdDev
--------------------------------------------------------------------------------
test_bench_atom_random[pkgcraft-Atom]  4.5395 (1.0)  0.3722 (1.0) 
test_bench_atom_random[pkgcraft-cached]  6.2360 (1.37)  1.3386 (3.60) 
test_bench_atom_random[pkgcore-atom]  30.9767 (6.82)  1.1428 (3.07) 
test_bench_atom_random[portage-Atom]  50.2636 (11.07)  19.7562 (53.07) 
--------------------------------------------------------------------------------
--------------------- benchmark 'test_bench_atom_static': 4 tests ----------------------
Name (time in ns) Mean StdDev
----------------------------------------------------------------------------------------
test_bench_atom_static[pkgcraft-cached]  217.2820 (1.0)  5.9821 (1.0) 
test_bench_atom_static[pkgcraft-Atom]  725.2229 (3.34)  41.6775 (6.97) 
test_bench_atom_static[pkgcore-atom]  28,331.4369 (130.39)  942.0003 (157.47) 
test_bench_atom_static[portage-Atom]  33,794.6625 (155.53)  14,358.8390 (>1000.0)
----------------------------------------------------------------------------------------
----------------- benchmark 'test_bench_atom_sorting_best_case': 2 tests ----------------
Name (time in us) Mean StdDev
-----------------------------------------------------------------------------------------
test_bench_atom_sorting_best_case[pkgcraft-Atom]  6.1195 (1.0)  0.2011 (1.0) 
test_bench_atom_sorting_best_case[pkgcore-atom]  936.9403 (153.11)  5.5534 (27.61) 
-----------------------------------------------------------------------------------------
---------------- benchmark 'test_bench_atom_sorting_worst_case': 2 tests -----------------
Name (time in us) Mean StdDev
------------------------------------------------------------------------------------------
test_bench_atom_sorting_worst_case[pkgcraft-Atom]  6.2702 (1.0)  0.3301 (1.0) 
test_bench_atom_sorting_worst_case[pkgcore-atom]  924.1410 (147.39)  6.9942 (21.19) 
------------------------------------------------------------------------------------------

As seen above, pkgcraft is able to instantiate atom objects about 5-6x faster than pkgcore and about 10x faster than portage. For static atoms when using the cached implementation this increases to about 150x faster, meaning portage should look into using an LRU cache for directly created atom objects. With respect to pkgcore’s static result, it also appears to not use caching; however, it does support atom instance caching internally so the benchmark is avoiding that somehow.

When comparing sorting, pkgcraft is well over two orders of magnitude ahead of pkgcore and I imagine portage would fare even worse, but it doesn’t natively support atom object comparisons so isn’t included here.

Beyond processing time it’s often useful to track memory use, especially for languages such as python that are designed more for ease of development than memory efficiency. There are a number of different techniques to track memory use such as projects like guppy3 but they often work with native python objects, ignoring or misrepresenting allocations done in underlying implementations. Instead, pkgcraft includes a simple script that creates a list of a million objects for three different atom types while tracking elapsed time and overall memory use (using resident set size) in separate processes.

To run the memory benchmarks use:

1
$ tox -e membench

Which produces output similar to:

Static atoms (1000000)
----------------------------------------------
implementation     memory       (elapsed)
----------------------------------------------
pkgcraft           474.2 MB     (0.94s)
pkgcraft-cached      8.7 MB     (0.27s)
pkgcore              8.4 MB     (1.12s)
portage            795.5 MB     (10.62s)

Dynamic atoms (1000000)
----------------------------------------------
implementation     memory       (elapsed)
----------------------------------------------
pkgcraft           955.2 MB     (2.93s)
pkgcraft-cached    957.9 MB     (3.56s)
pkgcore              1.3 GB     (31.01s)
portage              4.0 GB     (56.22s)

Random atoms (1000000)
----------------------------------------------
implementation     memory       (elapsed)
----------------------------------------------
pkgcraft           945.4 MB     (3.75s)
pkgcraft-cached     21.3 MB     (1.30s)
pkgcore             20.9 MB     (2.67s)
portage              3.6 GB     (46.77s)

For static atoms, note that pkgcraft-cached and pkgcore's memory usage is quite close, with pkgcore slightly edging ahead due to the extra data pkgcraft stores to speed up comparisons. Another point of interest is that the uncached pkgcraft implementation still beats pkgcore in processing time: the underlying rust implementation has its own cache that lets it skip unnecessary parsing, leaving most of the overhead in cython's object instantiation. Portage is last by a large margin since it doesn't directly cache atom objects.

Every dynamic atom is different, making caching irrelevant, so no implementation has a substantial memory usage edge. Without cache speedups, the uncached pkgcraft implementation is the fastest as it has the least overhead. Pkgcore's memory usage is comparatively respectable, but it uses about an order of magnitude more processing time for parsing and instantiation. Portage is again last by an even larger margin and appears to store more complex atoms inefficiently.

Finally, random atoms more closely model the cache-hit patterns found across the tree. As the results show, using a cached implementation is probably a good idea for large sets of atoms with occasional overlap, saving both processing time and memory usage; otherwise both suffer, as portage's uncached implementation results demonstrate.
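The effect of overlap on cache hit rate can be demonstrated with `functools.lru_cache` statistics. This sketch is illustrative only: the pool of atom strings and the `parse_atom` helper are invented, and real tree-wide atom sets have a less uniform distribution.

```python
import random
from functools import lru_cache

@lru_cache(maxsize=None)
def parse_atom(s):
    """Pretend parse: split an atom string into (category, package)."""
    cat, _, pkg = s.partition("/")
    return (cat, pkg)

random.seed(0)
# a small pool of unique atoms drawn many times -> frequent overlap
pool = [f"cat/pkg{i}" for i in range(1000)]
for _ in range(100_000):
    parse_atom(random.choice(pool))

info = parse_atom.cache_info()
# misses is bounded by the pool size; everything else is a cache hit
print(f"hits={info.hits} misses={info.misses}")
```

With 100,000 lookups over 1,000 unique strings, roughly 99% of calls skip parsing entirely, which is the same mechanism behind pkgcraft-cached's large memory and time savings on the random atom benchmark.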

Looking to the future

From the rough benchmarks above, it seems both pkgcore and portage could decrease their overall processing time and/or memory usage by adopting the package atom support from pkgcraft's python bindings. While I'm unsure how large the difference would be in practice, it should at least be noticeable when processing large amounts of data, e.g. scanning the entire tree with pkgcheck or sorting atoms during large dependency resolutions.

It's also clear that using cython's extension types and C support on top of rust code yields relatively sizeable wins over native python code. From my perspective, it seems worthwhile to implement all core functionality this way for projects that, like portage, last for decades. The downside of implementing support in a more difficult language diminishes the longer a project remains viable.

In terms of feasibility, it's probably easier to inject the pkgcraft bindings into portage, since its atom support subclasses string objects, while pkgcore's subclasses an internal restriction class; both should be possible with some redesign. Realistically, neither is likely to happen, as both projects lack maintainers with the required combination of skill, time, and interest to perform the rework. In addition, making pkgcraft a hard dependency would currently restrict those projects to fewer targets due to rust's lack of support for older architectures, though this downside may be somewhat resolved if a viable GCC rust implementation is released in the future.
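For context on the str-subclass pattern mentioned above, here is a toy sketch of an atom type that is also a plain string, so code expecting strings keeps working unchanged. This is a hypothetical simplification, not portage's actual `Atom` class, which performs full validation.

```python
class Atom(str):
    """Toy atom that is also a plain str, mirroring the pattern where
    an atom type subclasses str (simplified; no syntax validation)."""
    def __new__(cls, s):
        # str subclasses must do their work in __new__ since str is immutable
        self = super().__new__(cls, s)
        self.category, _, self.package = s.partition("/")
        return self

a = Atom("dev-lang/python")
# ordinary string operations still work, so callers that treat atoms
# as strings (dict keys, startswith checks, etc.) are unaffected
assert a.startswith("dev-lang") and a.category == "dev-lang"
```

A drop-in replacement backed by pkgcraft would need to preserve this string behavior, which is why injecting the bindings into portage looks more mechanical than doing the same for pkgcore's restriction-based class.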

Beyond python, pkgcraft also has more basic go bindings supporting package atom and version object interactions. As the core library gains features, I'll keep working on exposing the same functionality via bindings, since initial interactions with pkgcraft may be easiest when leveraging it for data processing from scripting languages.
