
. Aaron W. Swenson
. Agostino Sarubbo
. Alec Warner
. Alex Alexander
. Alex Legler
. Alexey Shvetsov
. Alexis Ballier
. Alexys Jacob
. Amadeusz Żołnowski
. Andreas K. Hüttel
. Andreas Proschofsky
. Anthony Basile
. Arun Raghavan
. Bernard Cafarelli
. Bjarke Istrup Pedersen
. Brent Baude
. Brian Harring
. Christian Faulhammer
. Christian Ruppert
. Christopher Harvey
. Chí-Thanh Christopher Nguyễn
. Daniel Gryniewicz
. David Abbott
. Denis Dupeyron
. Detlev Casanova
. Diego E. Pettenò
. Domen Kožar
. Donnie Berkholz
. Doug Goldstein
. Eray Aslan
. Fabio Erculiani
. Gentoo Haskell Herd
. Gentoo Monthly Newsletter
. Gentoo News
. Gilles Dartiguelongue
. Greg KH
. Hanno Böck
. Hans de Graaff
. Ian Whyman
. Ioannis Aslanidis
. Jan Kundrát
. Jason Donenfeld
. Jauhien Piatlicki
. Jeffrey Gardner
. Jeremy Olexa
. Joachim Bartosik
. Johannes Huber
. Jonathan Callen
. Jorge Manuel B. S. Vicetto
. Joseph Jezak
. Kenneth Prugh
. Kristian Fiskerstrand
. Lance Albertson
. Liam McLoughlin
. LinuxCrazy Podcasts
. Luca Barbato
. Luis Francisco Araujo
. Mark Loeser
. Markos Chandras
. Mart Raudsepp
. Matt Turner
. Matthew Marlowe
. Matthew Thode
. Matti Bickel
. Michael Palimaka
. Michal Hrusecky
. Michał Górny
. Mike Doty
. Mike Gilbert
. Mike Pagano
. Nathan Zachary
. Ned Ludd
. Nirbheek Chauhan
. Pacho Ramos
. Patrick Kursawe
. Patrick Lauer
. Patrick McLean
. Pavlos Ratis
. Paweł Hajdan, Jr.
. Petteri Räty
. Piotr Jaroszyński
. Rafael Goncalves Martins
. Raúl Porcel
. Remi Cardona
. Richard Freeman
. Robin Johnson
. Ryan Hill
. Sean Amoss
. Sebastian Pipping
. Steev Klimaszewski
. Stratos Psomadakis
. Sune Kloppenborg Jeppesen
. Sven Vermeulen
. Sven Wegener
. Theo Chatzimichos
. Thomas Kahle
. Tiziano Müller
. Tobias Heinlein
. Tobias Klausmann
. Tom Wijsman
. Tomáš Chvátal
. Victor Ostorga
. Vikraman Choudhury
. Vlastimil Babka
. Zack Medico

Last updated:
September 05, 2015, 12:05 UTC

Views expressed in the content published here do not necessarily represent the views of Gentoo Linux or the Gentoo Foundation.

Bugs? Comments? Suggestions? Contact us!

Powered by:
Planet Venus

Welcome to Planet Gentoo, an aggregation of Gentoo-related weblog articles written by Gentoo developers. For a broader range of topics, you might be interested in Gentoo Universe.

September 02, 2015
Sven Vermeulen a.k.a. swift (homepage, bugs)
Maintaining packages and backporting (September 02, 2015, 18:33 UTC)

A few days ago I committed a small update to policycoreutils, a SELinux related package that provides most of the management utilities for SELinux systems. The fix was to get two patches (which are committed upstream) into the existing release so that our users can benefit from the fixed issues without having to wait for a new release.

Getting the patches

To capture the patches, I used git together with the commit id:

~$ git format-patch -n -1 73b7ff41
~$ git format-patch -n -1 4fbc6623

The two generated patch files contain all information about the commit. Thanks to the epatch support in the eutils.eclass, these patch files are immediately usable within Gentoo's ebuilds.

Updating the ebuilds

The SELinux userspace ebuilds in Gentoo all have live ebuilds available which are immediately usable for releases. The idea with those live ebuilds is that we can simply copy them and commit in order to make a new release.

So, in case of the patch backporting, the necessary patch files are first moved into the files/ subdirectory of the package. Then, the live ebuild is updated to use the new patches:

@@ -88,6 +85,8 @@ src_prepare() {
                epatch "${FILESDIR}/0070-remove-symlink-attempt-fails-with-gentoo-sandbox-approach.patch"
                epatch "${FILESDIR}/0110-build-mcstrans-bug-472912.patch"
                epatch "${FILESDIR}/0120-build-failure-for-mcscolor-for-CONTEXT__CONTAINS.patch"
+               epatch "${FILESDIR}/0130-Only-invoke-RPM-on-RPM-enabled-Linux-distributions-bug-534682.patch"
+               epatch "${FILESDIR}/0140-Set-self.sename-to-sename-after-calling-semanage-bug-557370.patch"

        # rlpkg is more useful than fixfiles

The patches are not applied in the live ebuilds (they are ignored there), as we want the live ebuilds to track the upstream project as closely as possible. But because the ebuilds are immediately usable for releases, we add the necessary information there first.

Next, the new release is created:

~$ cp policycoreutils-9999.ebuild policycoreutils-2.4-r2.ebuild

Testing the changes

The new release is then tested. I have a couple of scripts that I use for automated testing. So first I update these scripts to also try out the functionality that was failing before. On existing systems, these tests should fail:

Running task semanage (Various semanage related operations).
    Executing step "perm_port_on   : Marking portage_t as a permissive domain                              " -> ok
    Executing step "perm_port_off  : Removing permissive mark from portage_t                               " -> ok
    Executing step "selogin_modify : Modifying a SELinux login definition                                  " -> failed

Then, on a test system where the new package has been installed, the same testset is executed (together with all other tests) to validate if the problem is fixed.

Pushing out the new release

Finally, with the fixes in and validated, the new release is pushed out (into ~arch first of course) and the bugs are marked as RESOLVED:TEST-REQUEST. Users can confirm that it works (which would move it to VERIFIED:TEST-REQUEST) or we stabilize it after the regular testing period is over (which moves it to RESOLVED:FIXED or VERIFIED:FIXED).

I do still have to get used to Gentoo using git as its repository now. Luckily, the workflow to use is documented, because I often find that the git push fails (due to changes to the tree since my last pull). When that happens I need to run git pull --rebase=preserve followed by repoman full and then the push again, in quick succession.

This simple flow is easy to get used to. Thanks to the existing foundation for package maintenance (such as epatch for patching, live ebuilds that can be immediately used for releases and the ability to just cherry pick patches towards our repository) we can serve updates with just a few minutes of work.

Bernard Cafarelli a.k.a. voyageur (homepage, bugs)
LLVM 3.7.0 released, Gentoo package changes (September 02, 2015, 11:53 UTC)

As announced here, the new LLVM release is now available. If you are interested after reading the release notes (or the clang ones), I just added the corresponding packages in Gentoo, although still masked for some additional testing. mesa-11.0 release candidates compile fine with it though.
2015/09/04 edit: llvm 3.7.0 is now unmasked!

On the Gentoo side, this version bump has some nice changes:

  • We now use the cmake/ninja build system: huge thanks to fellow developer mgorny for his hard work on this rewrite. This is the recommended build system upstream now, and some features will only be available with it
  • LLDB, the debugger, is now available via USE=lldb. This is not a separate package, as the build system for llvm and its subprojects is quite monolithic. clang is in the same situation: the sys-devel/clang package currently does almost nothing, the magic is done in llvm[clang]
  • These new ebuilds include the usual series of fixes, including some recent ones by gienah
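
While the packages were still masked, testing them required lifting the mask first. The portage entries might look like this (a sketch; the standard /etc/portage file locations are assumed, and the exact atoms may differ on your system):

```
# /etc/portage/package.unmask — lift the temporary testing mask
=sys-devel/llvm-3.7.0
=sys-devel/clang-3.7.0

# /etc/portage/package.use — clang and the new LLDB debugger are USE flags of llvm
sys-devel/llvm clang lldb
```

After that, a regular emerge of sys-devel/llvm pulls in the new release.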

On the TODO list, some tests are still failing on 32bit multilib (bug #558604), ocaml doc compilation/installation has a strange bug (see the additional call on docs/ocaml_doc target in the ebuild). And let’s not forget adding other tools, like polly. Another recent suggestion: add llvm’s OpenMP library needed for OpenMP with clang.

Also, if people were still using the dragonegg GCC plugin, note that it is not released anymore upstream. As 3.6.x versions had compilation problems, this package received its last rites.

August 28, 2015
Michael Palimaka a.k.a. kensington (homepage, bugs)


For those that are not already aware, KDE’s release structure has evolved. The familiar all-in-one release of KDE SC 4 has been split into three distinct components, each with their own release cycles – KDE Frameworks 5, KDE Plasma 5, and KDE Applications 5.

This means there’s no such thing as KDE 5!

KDE Frameworks 5

KDE Frameworks 5 is a collection of libraries upon which Plasma and Applications are built. Each framework is distinct in terms of functionality, allowing consumers to depend on smaller individual libraries. This has driven adoption in other Qt-based projects such as LXQt as they no longer have to worry about “pulling in KDE”.

We ship the latest version of KDE Frameworks 5 in the main tree, and plan to target it for stabilisation shortly.

KDE Plasma 5

KDE Plasma 5 is the next generation of the Plasma desktop environment. While some might not consider it as mature as Plasma 4, it is in a good state for general use and is shipped as stable by a number of other distributions.

We ship the latest version of KDE Plasma 5 in the main tree, and would expect to target it for stabilisation within a few months.

KDE Applications 5

KDE Applications 5 consists of the remaining applications and supporting libraries. Porting is a gradual process, with each new major release containing more KF5-based and fewer KDE 4-based packages.

Unfortunately, current Applications releases are not entirely coherent – some packages have features that require unreleased dependencies, and some cannot be installed at the same time as others. This situation is expected to improve in future releases as porting efforts progress.

Because of this, it’s not possible to ship KDE Applications in its entirety. Rather, we are in the process of cherry-picking known-good packages into the main tree. We have not discussed any stabilisation plan yet.


As Frameworks are just libraries, they are automatically pulled in as required by consuming packages, and no user intervention is required.

To upgrade to Plasma 5, please follow the upgrade guide. Unfortunately it’s not possible to have both Plasma 4 and Plasma 5 installed at the same time, due to an upstream design decision.

Applications appear in the tree as a regular version bump, so will upgrade automatically.

Ongoing KDE 4 support

Plasma 4 has reached end-of-life upstream, and no further releases are expected. As per usual, we will keep it for a reasonable time before removing it completely.

As each Applications upgrade should be invisible, there's less need to retain old versions. It is likely that the existing policy of removing old versions shortly after stabilisation will continue.

What is this 15.08.0 stuff, or, why is it upgrading to KDE 5?

As described above, Applications are now released separately from Plasma, following a date-based versioning scheme (15.08 being the August 2015 release). This means that, regardless of whether they are KDE 4 or KF5-based, they will work correctly in both Plasma 4 and Plasma 5, or any other desktop environment.

It is the natural upgrade path, and there is no longer a “special relationship” between Plasma and Applications the way there was in KDE SC 4.


As always, feedback is appreciated – especially during major transitions like this. Sharing your experience will help improve it for the next person, and substantial improvements have already been made thanks to the contributions of early testers.

Feel free to file a bug, send a mail, or drop by #gentoo-kde for a chat any time. Thanks for flying Gentoo KDE!

August 27, 2015
Alexys Jacob a.k.a. ultrabug (homepage, bugs)
py3status v2.6 (August 27, 2015, 17:04 UTC)

Ok, I was a bit too hasty in my legacy module support code clean up and I broke quite a few things in the latest version 2.5 release, sorry ! :(


  • make use of pkgutil to detect properly installed modules even when they are zipped in egg files (manual install)
  • add back legacy modules output support (tuple of position / response)
  • new uname module inspired from issue 117 thanks to @ndalliard
  • remove dead code

thanks !

  • @coelebs on IRC for reporting, testing and the good spirit :)
  • @ndalliard on github for the issue, debug and for inspiring the uname module
  • @Horgix for responding to issues faster than me !

August 25, 2015
Sven Vermeulen a.k.a. swift (homepage, bugs)
Slowly converting from GuideXML to HTML (August 25, 2015, 09:30 UTC)

Gentoo has removed its support of the older GuideXML format in favor of using the Gentoo Wiki and a new content management system for the main site (or is it static pages, I don't have the faintest idea to be honest). I do still have a few GuideXML pages in my development space, which I am going to move to HTML pretty soon.

In order to do so, I make use of the guidexml2wiki stylesheet I developed. But instead of migrating it to wiki syntax, I want to end with HTML.

So what I do is first convert the file from GuideXML to MediaWiki with xsltproc.

Next, I use pandoc to convert this to restructured text. The idea is that the main pages on my devpage are now restructured text based. I was hoping to use markdown, but the conversion from markdown to HTML is not what I hoped it was.

The restructured text is then converted to HTML using docutils' rst2html. In the end, I use the following function (for a one-time conversion):

# Convert GuideXML to RestructuredText and to HTML
gxml2html() {
  basefile=${1%%.xml};

  # Convert to MediaWiki syntax
  xsltproc ~/dev-cvs/gentoo/xml/htdocs/xsl/guidexml2wiki.xsl $1 > ${basefile}.mediawiki

  if [ -f ${basefile}.mediawiki ] ; then
    # Convert to restructured text
    pandoc -f mediawiki -t rst -s -S -o ${basefile}.rst ${basefile}.mediawiki;
  fi

  if [ -f ${basefile}.rst ] ; then
    # Convert to HTML with docutils' rst2html.
    # Use your own stylesheet links (use full https URLs for this)
    rst2html.py --stylesheet=link-to-bootstrap.min.css,link-to-tyrian.min.css --link-stylesheet ${basefile}.rst ${basefile}.html
  fi
}
Is it perfect? No, but it works.

August 17, 2015
Alexys Jacob a.k.a. ultrabug (homepage, bugs)
py3status v2.5 (August 17, 2015, 15:05 UTC)

This new py3status comes with an amazing number of contributions and new modules !

24 files changed, 1900 insertions(+), 263 deletions(-)

I’m also glad to say that py3status becomes my first commit in the new git repository of Gentoo Linux !


Please note that this version has deprecated the legacy implicit module loading support to favour and focus on the generic i3status order += module loading/ordering !
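
The explicit ordering works through the regular i3status configuration; for instance (a sketch using two modules from this release, plus the core i3status time module):

```
# ~/.i3status.conf — py3status modules are now loaded and ordered explicitly
order += "whatismyip"
order += "static_string"
order += "time"
```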

New modules

  • new aws_bill module, by Anthony Brodard
  • new dropboxd_status module, by Tjaart van der Walt
  • new external_script module, by Dominik
  • new nvidia_temp module for displaying NVIDIA GPUs’ temperature, by J.M. Dana
  • new rate_counter module, by Amaury Brisou
  • new screenshot module, by Amaury Brisou
  • new static_string module, by Dominik
  • new taskwarrior module, by James Smith
  • new volume_status module, by Jan T.
  • new whatismyip module displaying your public/external IP as well as your online status


As usual, full changelog is available here.


Along with all those who reported issues and helped fix them, a quick and surely not exhaustive list:

  • Anthony Brodard
  • Tjaart van der Walt
  • Dominik
  • J.M. Dana
  • Amaury Brisou
  • James Smith
  • Jan T.
  • Zopieux
  • Horgix
  • hlmtre

What’s next ?

Well something tells me @Horgix is working hard on some standardization and on the core of py3status ! I’m sure some very interesting stuff will emerge from this, so thank you !

August 13, 2015
Sven Vermeulen a.k.a. swift (homepage, bugs)
Finding a good compression utility (August 13, 2015, 17:15 UTC)

I recently came across a wiki page written by Herman Brule which gives a quick benchmark on a couple of compression methods / algorithms. It gave me the idea of writing a quick script that tests out a wide number of compression utilities available in Gentoo (usually through the app-arch category), with also a number of options (in case multiple options are possible).

The currently supported packages are:

app-arch/bloscpack      app-arch/bzip2          app-arch/freeze
app-arch/gzip           app-arch/lha            app-arch/lrzip
app-arch/lz4            app-arch/lzip           app-arch/lzma
app-arch/lzop           app-arch/mscompress     app-arch/p7zip
app-arch/pigz           app-arch/pixz           app-arch/plzip
app-arch/pxz            app-arch/rar            app-arch/rzip
app-arch/xar            app-arch/xz-utils       app-arch/zopfli

The script should keep the best compression information: duration, compression ratio, compression command, as well as the compressed file itself.

Finding the "best" compression

It is not my intention to find the most optimal compression, as that would require heuristic optimizations (which has triggered my interest in seeking such software, or writing it myself) while trying out various optimization parameters.

No, what I want is to find the "best" compression for a given file, with "best" being either

  • most reduced size (which I call compression delta in my script)
  • best reduction obtained per time unit (which I call the efficiency)
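
Both metrics are easy to reproduce by hand. Plugging in the numbers of the winning 7za run shown further down (original 20958430 bytes, compressed 18518330 bytes, 10.58 seconds):

```shell
# Compression delta = reduction relative to the original size;
# efficiency = that reduction per second of compression time
awk -v o=20958430 -v c=18518330 -v d=10.58 \
  'BEGIN { delta = (o - c) / o; printf "delta=%.5f effic=%.5f\n", delta, delta / d }'
# → delta=0.11643 effic=0.01100
```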

For me personally, I think I would use it for the various raw image files that I have through the photography hobby. Those image files are difficult to compress (the Nikon DS3200 I use is an entry-level camera which applies lossy compression already for its raw files) but their total size is considerable, and it would allow me to better use the storage I have available both on my laptop (which is SSD-only) as well as backup server.

But next to the best compression ratio, the efficiency is also an important metric as it shows how efficient the algorithm works in a certain time aspect. If one compression method yields 80% reduction in 5 minutes, and another one yields 80,5% in 45 minutes, then I might want to prefer the first one even though that is not the best compression at all.

Although the script could be used to get the most compression (without resorting to an optimization algorithm for the compression commands) for each file, this is definitely not the use case. A single run can take hours for files that are compressed in a handful of seconds. But it can show the best algorithms for a particular file type (for instance, do a few runs on a couple of raw image files and see which method is most successful).

Another use case I'm currently looking into is how much improvement I can get when multiple files (all raw image files) are first grouped in a single archive (.tar). Theoretically, this should improve the compression, but by how much?
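
A quick way to test this hypothesis on any set of files is a sketch like the following, here using xz (whose large dictionary benefits most from grouping) and two generated files as stand-ins for the raw images:

```shell
# Compare per-file compression against compressing one grouped archive
mkdir -p /tmp/grouptest && cd /tmp/grouptest
seq 1 20000 > a.raw        # two similar files standing in for similar raw images
seq 1 20000 > b.raw
xz -k -f a.raw b.raw       # individual compression (-k keeps the inputs)
tar -cf group.tar a.raw b.raw
xz -k -f group.tar         # grouped compression
echo "individual: $(( $(stat -c%s a.raw.xz) + $(stat -c%s b.raw.xz) )) bytes"
echo "grouped:    $(stat -c%s group.tar.xz) bytes"
```

With similar inputs, the grouped archive should come out noticeably smaller than the sum of the individually compressed files.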

How the script works

The script does not contain much intelligence. It iterates over a wide set of compression commands that I tested out, checks the final compressed file size, and if it is better than a previous one it keeps this compressed file (and its statistics).
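
That core loop can be sketched in miniature (this is my own illustration, not the actual script, and it only tries three common compressors):

```shell
# Try a few compressors on a file and keep whichever result is smallest
infile=/tmp/comprdemo.txt
seq 1 5000 > "$infile"
best=""; bestsize=$(stat -c%s "$infile")
for c in gzip bzip2 xz; do
  "$c" -c "$infile" > "$infile.$c"        # compress to stdout, redirect to a file
  size=$(stat -c%s "$infile.$c")
  if [ "$size" -lt "$bestsize" ]; then bestsize=$size; best=$c; fi
done
echo "best: $best ($bestsize bytes)"
```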

I tried to group some of the compressions together based on the algorithm used, but as I don't really know the details of the algorithms (it's based on manual pages and internet sites) and some of them combine multiple algorithms, it is more of a high-level selection than anything else.

The script can also only run the compressions of a single application (which I use when I'm fine-tuning the parameter runs).

A run shows something like the following:

Original file (test.nef) size 20958430 bytes
      package name                                                 command      duration                   size compr.Δ effic.:
      ------------                                                 -------      --------                   ---- ------- -------
app-arch/bloscpack                                               blpk -n 4           0.1               20947097 0.00054 0.00416
app-arch/bloscpack                                               blpk -n 8           0.1               20947097 0.00054 0.00492
app-arch/bloscpack                                              blpk -n 16           0.1               20947097 0.00054 0.00492
    app-arch/bzip2                                                   bzip2           2.0               19285616 0.07982 0.03991
    app-arch/bzip2                                                bzip2 -1           2.0               19881886 0.05137 0.02543
    app-arch/bzip2                                                bzip2 -2           1.9               19673083 0.06133 0.03211
    app-arch/p7zip                                      7za -tzip -mm=PPMd           5.9               19002882 0.09331 0.01592
    app-arch/p7zip                             7za -tzip -mm=PPMd -mmem=24           5.7               19002882 0.09331 0.01640
    app-arch/p7zip                             7za -tzip -mm=PPMd -mmem=25           6.4               18871933 0.09955 0.01551
    app-arch/p7zip                             7za -tzip -mm=PPMd -mmem=26           7.7               18771632 0.10434 0.01364
    app-arch/p7zip                             7za -tzip -mm=PPMd -mmem=27           9.0               18652402 0.11003 0.01224
    app-arch/p7zip                             7za -tzip -mm=PPMd -mmem=28          10.0               18521291 0.11628 0.01161
    app-arch/p7zip                                       7za -t7z -m0=PPMd           5.7               18999088 0.09349 0.01634
    app-arch/p7zip                                7za -t7z -m0=PPMd:mem=24           5.8               18999088 0.09349 0.01617
    app-arch/p7zip                                7za -t7z -m0=PPMd:mem=25           6.5               18868478 0.09972 0.01534
    app-arch/p7zip                                7za -t7z -m0=PPMd:mem=26           7.5               18770031 0.10442 0.01387
    app-arch/p7zip                                7za -t7z -m0=PPMd:mem=27           8.6               18651294 0.11008 0.01282
    app-arch/p7zip                                7za -t7z -m0=PPMd:mem=28          10.6               18518330 0.11643 0.01100
      app-arch/rar                                                     rar           0.9               20249470 0.03383 0.03980
      app-arch/rar                                                 rar -m0           0.0               20958497 -0.00000        -0.00008
      app-arch/rar                                                 rar -m1           0.2               20243598 0.03411 0.14829
      app-arch/rar                                                 rar -m2           0.8               20252266 0.03369 0.04433
      app-arch/rar                                                 rar -m3           0.8               20249470 0.03383 0.04027
      app-arch/rar                                                 rar -m4           0.9               20248859 0.03386 0.03983
      app-arch/rar                                                 rar -m5           0.8               20248577 0.03387 0.04181
    app-arch/lrzip                                                lrzip -z          13.1               19769417 0.05673 0.00432
     app-arch/zpaq                                                    zpaq           0.2               20970029 -0.00055        -0.00252
The best compression was found with 7za -t7z -m0=PPMd:mem=28.
The compression delta obtained was 0.11643 within 10.58 seconds.
This file is now available as test.nef.7z.

In the above example, the test file was around 20 MByte. The best compression command that the script found was:

~$ 7za -t7z -m0=PPMd:mem=28 a test.nef.7z test.nef

The resulting file (test.nef.7z) is 18 MByte, a reduction of 11,64%. The compression command took almost 11 seconds to do its thing, which gave an efficiency rating of 0,011, which is definitely not a fast one.

Some other algorithms don't do bad either with a better efficiency. For instance:

   app-arch/pbzip2                                                  pbzip2           0.6               19287402 0.07973 0.13071

In this case, the pbzip2 command got almost 8% reduction in less than a second, which is considerably more efficient than the 11-seconds long 7za run.

Want to try it out yourself?

I've pushed the script to my github location. Do a quick review of the code first (to see that I did not include anything malicious) and then execute it to see how it works:

~$ sw_comprbest -h
Usage: sw_comprbest --infile=<inputfile> [--family=<family>[,...]] [--command=<cmd>]
       sw_comprbest -i <inputfile> [-f <family>[,...]] [-c <cmd>]

Supported families: blosc bwt deflate lzma ppmd zpaq. These can be provided comma-separated.
Command is an additional filter - only the tests that use this base command are run.

The output shows
  - The package (in Gentoo) that the command belongs to
  - The command run
  - The duration (in seconds)
  - The size (in bytes) of the resulting file
  - The compression delta (percentage) showing how much is reduced (higher is better)
  - The efficiency ratio showing how much reduction (percentage) per second (higher is better)

When the command supports multithreading, we use the number of available cores on the system (as told by /proc/cpuinfo).
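
That core detection can be done like this (a sketch; the script's exact invocation may differ):

```shell
# Count available cores from /proc/cpuinfo
cores=$(grep -c ^processor /proc/cpuinfo)
echo "detected $cores core(s)"
# A multithreaded compressor can then be told to use them all, e.g.:
#   pigz -p "$cores" somefile
```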

For instance, to try it out against a PDF file:

~$ sw_comprbest -i MEA6-Sven_Vermeulen-Research_Summary.pdf
Original file (MEA6-Sven_Vermeulen-Research_Summary.pdf) size 117763 bytes
The best compression was found with zopfli --deflate.
The compression delta obtained was 0.00982 within 0.19 seconds.
This file is now available as MEA6-Sven_Vermeulen-Research_Summary.pdf.deflate.

So in this case, the resulting file is hardly better compressed - the PDF itself is already compressed. Let's try it against the uncompressed PDF:

~$ pdftk MEA6-Sven_Vermeulen-Research_Summary.pdf output test.pdf uncompress
~$ sw_comprbest -i test.pdf
Original file (test.pdf) size 144670 bytes
The best compression was found with lrzip -z.
The compression delta obtained was 0.27739 within 0.18 seconds.
This file is now available as test.pdf.lrz.

This is somewhat better:

~$ ls -l MEA6-Sven_Vermeulen-Research_Summary.pdf* test.pdf*
-rw-r--r--. 1 swift swift 117763 Aug  7 14:32 MEA6-Sven_Vermeulen-Research_Summary.pdf
-rw-r--r--. 1 swift swift 116606 Aug  7 14:32 MEA6-Sven_Vermeulen-Research_Summary.pdf.deflate
-rw-r--r--. 1 swift swift 144670 Aug  7 14:34 test.pdf
-rw-r--r--. 1 swift swift 104540 Aug  7 14:35 test.pdf.lrz

The resulting file is 11,22% reduced from the original one.

August 12, 2015
Gentoo Package Repository now using Git (August 12, 2015, 00:00 UTC)

Good things come to those who wait: The main Gentoo package repository (also known as the Portage tree or by its historic name gentoo-x86) is now based on Git.


The Gentoo Git migration has arrived and is expected to be completed soon. As previously announced, the CVS freeze occurred on 8 August and Git commits for developers were opened soon after. As a last step, rsync mirrors are expected to have the updated changelogs again on or after 12 August. Read-only access to gentoo-x86 (and write to the other CVS repositories) was restored on 9 August following the freeze.


Work on migrating the repository from CVS to Git began in 2006 with a proof-of-concept migration project during the Summer of Code. Back then, migrating the repository took a week and using Git was considerably slower than using CVS. While plans to move were shelved for a while, things improved over the years that followed. Several features were implemented in Git, Portage, and other tools to meet the requirements of a migrated repository.

What changes?

The repository can be checked out from its new location and is available via our Git web interface.

For users of our package repository, nothing changes: Updates continue to be available via the established mechanisms (rsync, webrsync, snapshots). Options to fetch the package tree via Git are to be announced later.

The migration facilitates the process of new contributors getting involved as proxy maintainers and eventually Developers. Alternate places for users to submit pull requests, such as GitHub, can be expected in the future.

In regards to package signing, the migration will streamline how GPG keys are used. This will allow end-to-end signature tracking from the developer to the final repository, as outlined in GLEP 57 et seq.

While the last issues are being worked on, join us in thanking everyone involved in the project. As always, you can discuss this on our Forums or hit up @Gentoo.

August 07, 2015
Nathan Zachary a.k.a. nathanzachary (homepage, bugs)


My tech articles—especially Linux ones—are some of the most-viewed on The Z-Issue. If this one has helped you, please consider a small donation to The Parker Fund by using the top widget at the right. Thanks!

Recently, I’ve spent a huge amount of time working on Apache and PHP-FPM in order to allow for a threaded Apache MPM whilst still using PHP and virtual hosts (vhosts). As this article is going to be rather lengthy, I’m going to split it up into sections below. It’s my hope that after reading the article, you’ll be able to take advantage of Apache’s newer Event MPM and mod_proxy_fcgi in order to get the best performance out of your web server.

1. Definitions:

Before delving into the overarching problem, it might be best to have some definitions in place. If you’re new to the whole idea of web servers, programming languages used for web sites, and such, these definitions may help you become more acquainted with the problem that this article addresses. If you’re familiar with things like Apache, PHP, threading, preforking, and so on, feel free to skip this basic section.

  • Web server – on a physical server, the web server is an application that actually hands out (or serves) web pages to clients as they request them from their browser. Apache and nginx are two popular web servers in the Linux / UNIX world.
  • Apache MPM – a multi-processing module (or MPM) is the pluggable mechanism by which Apache binds to network ports on a server, processes incoming requests for web sites, and creates children to handle each request.
    • Prefork – the older Apache MPM that isn’t threaded, and deals with each request by creating a completely separate Apache process. It is needed for programming languages and libraries that don’t deal well with threading (like some PHP modules).
    • Event – a newer Apache MPM that is threaded, which allows Apache to handle more requests at a time by passing off some of the work to threads, so that the master process can work on other things.
  • Virtual host (vhost) – a method of deploying multiple websites on the same physical server via the same web server application. Basically, you could have example.com and example.org both on the same server, and even using the same IP address.
  • PHP – PHP is arguably one of the most common programming languages used for website creation. Many web content management systems (like WordPress, Joomla!, and MediaWiki) are written in PHP.
  • mod_php – An Apache module for processing PHP. With this module, the PHP interpreter is essentially embedded within the Apache application. It is typically used with the Prefork MPM.
  • mod_proxy_fcgi – An Apache module that allows for passing off operations to a FastCGI processor (PHP can be interpreted this way via PHP-FPM).
  • PHP-FPM – Stands for PHP FastCGI Process Manager, and is exactly what it sounds like: a manager for PHP that’s being interpreted via FastCGI. Unlike mod_php, this means that PHP is being interpreted by FastCGI, and not directly within the Apache application.
  • UNIX socket – a mechanism that allows two processes within the same UNIX/Linux operating system to communicate with one another.

Defining threads versus processes in detail is beyond the scope of this article, but basically, a thread is much smaller and less resource-intense than a process. There are many benefits to using threads instead of full processes for smaller tasks.

2. Introduction:

Okay, so now that we have a few operational definitions in place, let’s get to the underlying problem that this article addresses. When running one or more busy websites on a server, it’s desirable to reduce the amount of processing power and memory that is needed to serve those sites. Doing so will, consequently, allow the sites to be served more efficiently, rapidly, and to more simultaneous clients. One great way to do that is to switch from the Prefork MPM to the new Event MPM in Apache 2.4. However, that requires getting rid of mod_php and switching to a PHP-FPM backend proxied via mod_proxy_fcgi. All that sounds fine and dandy, except that there have been several problems with it in the past (such as .htaccess files not being honoured, effectively passing the processed PHP file back to Apache, and making the whole system work with virtual hosts [vhosts]). Later, I will show you a method that addresses these problems, but beforehand, it might help to see some diagrams that outline these two different MPMs and their respective connections to PHP process.

The de facto way to use PHP and Apache has been with the embedded PHP processor via mod_php:

Apache Prefork MPM with mod_php embedded
Apache with the Prefork MPM and embedded mod_php
Click to enlarge


This new method involves proxying the PHP processing to PHP-FPM:

Apache Event MPM to PHP-FPM via mod_proxy_fcgi
Apache with the Event MPM offloading to PHP-FPM via mod_proxy_fcgi
Click to enlarge

3. The Setup:

On the majority of my servers, I use Gentoo Linux, and as such, the exact file locations or methods for each task may be different on your server(s). Despite the distro-specific differences, though, the overall configuration should be mostly the same.

3a. OS and Linux kernel:

Before jumping into the package rebuilds needed to swap Apache to the Event MPM and to use PHP-FPM, a few OS/kernel-related items should be confirmed or modified: 1) epoll support, 2) the maximum number of open file descriptors, and 3) the number of backlogged sockets.

Epoll support
The Apache Event MPM and PHP-FPM can use several different event mechanisms, but on a Linux system, it is advisable to use epoll. Even though epoll has been the default since the 2.6 branch of the Linux kernel, you should still confirm that it is available on your system. That can be done with two commands (one for kernel support, and one for glibc support):

Kernel support for epoll (replace /usr/src/linux/.config with the actual location of your kernel config):
# grep -i epoll /usr/src/linux/.config

glibc support for epoll
# nm -D /lib/ | grep -i epoll
00000000000e7dc0 T epoll_create
00000000000e7df0 T epoll_create1
00000000000e7e20 T epoll_ctl
00000000000e7a40 T epoll_pwait
00000000000e7e50 T epoll_wait

Max open file descriptors
Another aspect you will need to tune based on your specific needs is the maximum number of open file descriptors. Basically, since there will be files opened for the Apache connections, the UNIX socket to PHP-FPM, and for the PHP-FPM processes themselves, it is a good idea to have a high maximum number of open files. You can check the current kernel-imposed limit by using the following command:

# cat /proc/sys/fs/file-max

Though that number is usually very high, you also need to check the limits based on user. For sake of ease, and seeing as servers running many vhosts may frequently add and modify user accounts, I generally set the max file descriptors pretty high for *all* users.

# grep nofile /etc/security/limits.conf
# - nofile - max number of open file descriptors
* soft nofile 16384
* hard nofile 16384

The specifics of modifying the limits.conf file are outside of the scope of this article, but the above changes will set the soft and hard limits to 16,384 open files for any user on the system.
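Once the new limits are in place (they apply at login, so you may need to log out and back in), you can confirm them with the shell’s ulimit builtin. This is just a quick sanity check, not part of the configuration itself:

```shell
# Show the soft and hard limits on open file descriptors
# for the current shell session.
ulimit -Sn
ulimit -Hn
```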

Backlogged socket connections
The last OS configuration that you might want to initially configure is the maximum number of backlogged socket connections. This number is important because we will be proxying Apache to PHP-FPM via UNIX sockets, and this value sets the maximum number of connections per socket. It should be noted that this is the maximum number per socket, so setting this kernel parameter to some outlandish value doesn’t really make a lot of sense. As you will see later in this tutorial, I create a socket for each virtual host, so unless you plan on having more than 1024 simultaneous connections to PHP-FPM per site, this value is sufficient. You can change it permanently in /etc/sysctl.conf:

# grep somaxconn /etc/sysctl.conf
net.core.somaxconn = 1024

and remember to run sysctl -p thereafter in order to make the changes active.
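If you want to confirm the running value at any point, you can read it straight out of /proc (the path below is standard on Linux):

```shell
# The live per-socket backlog limit; after `sysctl -p` with the
# line above, this should read 1024.
cat /proc/sys/net/core/somaxconn
```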

3b. Package rebuilds:

Now that you’ve checked (and possibly modified) some OS-related configurations, a few packages need to be recompiled in order to change from the Apache Prefork MPM to the Event MPM and to use PHP-FPM. Here’s a list of the changes that I needed to make:

  • In /etc/portage/make.conf
    • Change the MPM to read APACHE2_MPMS="event"
    • Ensure you have at least the following APACHE2_MODULES enabled: cgid proxy proxy_fcgi
    • Per the mod_cgid documentation you should enable mod_cgid and disable mod_cgi when using the Event MPM
  • In /etc/portage/package.use
    • Apache: www-servers/apache threads
    • PHP must include at least: dev-lang/php cgi fpm threads
      • Note that you do not need to have the ‘apache2’ USE flag enabled for PHP since you’re not using Apache’s PHP module
      • However, you will still need to have the MIME types and the DirectoryIndex directives set (see subsection 3d below)
    • Eselect for PHP must have: app-eselect/eselect-php fpm
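Put together, the package.use entries from the bullets above would look something like this (a sketch; Gentoo also accepts a package.use/ directory of snippet files instead of a single file):

```text
# /etc/portage/package.use
www-servers/apache threads
dev-lang/php cgi fpm threads
app-eselect/eselect-php fpm
```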

Now that those changes have been made, and you’ve recompiled the appropriate packages (Apache, and PHP), it’s time to start configuring the packages before reloading them. In Gentoo, these changes won’t take effect until you restart the running instances. For me, that meant that I could compile everything ahead of time, and then make the configuration changes listed in the remaining 3.x subsections before restarting. That procedure allowed for very little downtime!

3c. Apache:

Within the configurations for the main Apache application, there are several changes that need to be made due to switching to a threaded version with the Event MPM. The first three bullet points for httpd.conf listed below relate to the proxy modules, and the fourth bullet point relates to the directives specific to the Event MPM. For more information about the MPM directives, see Apache’s documentation.

  • In /etc/apache2/httpd.conf
    • The proxy and proxy_fcgi modules need to be loaded:
      LoadModule proxy_module modules/
      LoadModule proxy_fcgi_module modules/
    • Comment out LoadModule cgi_module modules/ so it does not get loaded
    • Replace that with the one for mod_cgid: LoadModule cgid_module modules/
    • Update the MPM portion for Event:
      • <IfModule event.c>
        StartServers 2
        ServerLimit 16
        MinSpareThreads 75
        MaxSpareThreads 250
        ThreadsPerChild 25
        MaxRequestWorkers 400
        MaxConnectionsPerChild 10000
        </IfModule>
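One relationship worth keeping in mind when tuning these directives: MaxRequestWorkers cannot usefully exceed ServerLimit multiplied by ThreadsPerChild, since that product caps the total number of worker threads. A quick shell sanity check using the example values above:

```shell
# With the example values: 16 server processes * 25 threads each
# gives the ceiling on simultaneous workers.
SERVER_LIMIT=16
THREADS_PER_CHILD=25
MAX_REQUEST_WORKERS=400
CEILING=$(( SERVER_LIMIT * THREADS_PER_CHILD ))
echo "ceiling=$CEILING"
[ "$MAX_REQUEST_WORKERS" -le "$CEILING" ] && echo "MaxRequestWorkers fits"
```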

You will also need to make sure that Apache loads certain modules at runtime. In Gentoo, that configuration is done in /etc/conf.d/apache2, and needs to contain at least the two modules listed below (note that your configuration will likely have other modules loaded as well):

  • In /etc/conf.d/apache2

3d. Apache PHP module:

The last change that needs to be made for Apache is to the configuration file for how it handles PHP processing (in Gentoo, this configuration is found in /etc/apache2/modules.d/70_mod_php5.conf). I would suggest making a backup of the existing file, and making changes to a copy so that it can be easily reverted, if necessary.

70_mod_php5.conf BEFORE changes
<IfDefine PHP5>
  <IfModule !mod_php5.c>
    LoadModule php5_module modules/
  </IfModule>

  <FilesMatch "\.(php|php5|phtml)$">
    SetHandler application/x-httpd-php
  </FilesMatch>
  <FilesMatch "\.phps$">
    SetHandler application/x-httpd-php-source
  </FilesMatch>

  DirectoryIndex index.php index.phtml
</IfDefine>

70_mod_php5.conf AFTER changes
<IfDefine PHP5>
  ## Define FilesMatch in each vhost
  <IfModule mod_mime.c>
    AddHandler application/x-httpd-php .php .php5 .phtml
    AddHandler application/x-httpd-php-source .phps
  </IfModule>

  DirectoryIndex index.php index.phtml
</IfDefine>

Basically, you’re getting rid of the LoadModule reference for mod_php, and all the FilesMatch directives. Now, if you’re not using virtual hosts, you can put the FilesMatch and Proxy directives from the vhosts section (3e) below in your main PHP module configuration. That being said, the point of this article is to set up everything for use with vhosts, and as such, the module should look like the example immediately above.

As mentioned in section 3b above, the reason you still need the 70_mod_php5.conf at all is so that you can define the MIME types for PHP and the DirectoryIndex. If you would rather not have this file present (since you’re not really using mod_php), you can just place these directives (the AddHandler and DirectoryIndex ones) in your main Apache configuration—your choice.

3e. vhosts:

For each vhost, I would recommend creating a user and corresponding group. How you manage local users and groups on your server is beyond the scope of this document. For this example, though, we’re going to have two sites (very creative, I know 😎 ). To keep it simple, the corresponding users will be ‘site1’ and ‘site2’, respectively. For each site, you will have a separate vhost configuration (and again, how you manage your vhosts is beyond the scope of this document). Within each vhost, the big change that you need to make is to add the FilesMatch and Proxy directives. Here’s the generic template for the vhost configuration additions:

Generic template for vhost configs
<FilesMatch "\.php$">
  SetHandler "proxy:unix:///var/run/php-fpm/$pool.sock|fcgi://$pool/"
</FilesMatch>

<Proxy fcgi://$pool/ enablereuse=on max=10>
</Proxy>

In that template, you will replace $pool with the name of each PHP-FPM pool, which will be explained in the next subsection (3f). In my opinion, it is easiest to keep the name of the pool the same as the user assigned to each site, but you’re free to change those naming conventions so that they make sense to you. Based on my site1/site2 example, the vhost configuration additions would be:

site1 vhost config
<FilesMatch "\.php$">
  SetHandler "proxy:unix:///var/run/php-fpm/site1.sock|fcgi://site1/"
</FilesMatch>

<Proxy fcgi://site1/ enablereuse=on max=10>
</Proxy>

site2 vhost config
<FilesMatch "\.php$">
  SetHandler "proxy:unix:///var/run/php-fpm/site2.sock|fcgi://site2/"
</FilesMatch>

<Proxy fcgi://site2/ enablereuse=on max=10>
</Proxy>
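For context, here is how the site1 additions might sit inside a complete vhost definition; the ServerName and DocumentRoot values are placeholders, not from the original setup:

```apache
<VirtualHost *:80>
    # Placeholder values; substitute your real domain and docroot.
    ServerName site1.example.com
    DocumentRoot /var/www/site1

    <FilesMatch "\.php$">
        SetHandler "proxy:unix:///var/run/php-fpm/site1.sock|fcgi://site1/"
    </FilesMatch>

    <Proxy fcgi://site1/ enablereuse=on max=10>
    </Proxy>
</VirtualHost>
```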

3f. PHP-FPM:

Since you re-compiled PHP with FPM support (in step 3b), you need to actually start the new process with /etc/init.d/php-fpm start and add it to the default runlevel so that it starts automatically when the server boots (note that this command varies based on your init system):

rc-update add php-fpm default (Gentoo with OpenRC)

Now it’s time to actually configure PHP-FPM via the php-fpm.conf file, which, in Gentoo, is located at /etc/php/fpm-php$VERSION/php-fpm.conf. There are two sections to this configuration file: 1) global directives, which apply to all PHP-FPM instances, and 2) pool directives, which can be changed for each pool (e.g. each vhost). For various non-mandatory options, please see the full list of PHP-FPM directives.

At the top of php-fpm.conf are the global directives, which are:
[global]
error_log = /var/log/php-fpm.log
events.mechanism = epoll
emergency_restart_threshold = 0

These directives should be fairly self-explanatory, but here’s a quick summary:

  • [global] – The intro block stating that the following directives are global and not pool-specific
  • error_log – The file to which PHP-FPM will log errors and other notices
  • events.mechanism – What should PHP-FPM use to process events (relates to the epoll portion in subsection 3a)
  • emergency_restart_threshold – How many PHP-FPM children must die improperly before it automatically restarts (0 means the threshold is disabled)

Thereafter are the pool directives, for which I follow this template, with one pool for each vhost:
[$SITE.$TLD]
listen = /var/run/php-fpm/$USER.sock
listen.owner = $USER
listen.group = apache
listen.mode = 0660
user = $USER
group = apache
pm = dynamic
pm.start_servers = 3
pm.max_children = 100
pm.min_spare_servers = 2
pm.max_spare_servers = 5
pm.max_requests = 10000
request_terminate_timeout = 300

It may look a bit intimidating at first, but only the $SITE, $TLD, and $USER variables change based on your vhosts and corresponding users. When the template is used for the site1/site2 example, the pools look like:

[site1]
listen = /var/run/php-fpm/site1.sock
listen.owner = site1
listen.group = apache
listen.mode = 0660
user = site1
group = apache
pm = dynamic
pm.start_servers = 3
pm.max_children = 100
pm.min_spare_servers = 2
pm.max_spare_servers = 5
pm.max_requests = 10000
request_terminate_timeout = 300

[site2]
listen = /var/run/php-fpm/site2.sock
listen.owner = site2
listen.group = apache
listen.mode = 0660
user = site2
group = apache
pm = dynamic
pm.start_servers = 3
pm.max_children = 100
pm.min_spare_servers = 2
pm.max_spare_servers = 5
pm.max_requests = 10000
request_terminate_timeout = 300

Since the UNIX sockets are created in /var/run/php-fpm/, you need to make sure that the directory exists and is owned by root. The listen.owner and listen.group directives will make the actual UNIX socket for the pool owned by $USER:apache, and listen.mode will make sure that the apache group can write to it (which is necessary).
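Creating that directory can be scripted; here is a minimal sketch. On a real server you would run it as root with RUNDIR=/var/run/php-fpm (the /tmp default below only exists so the snippet is harmless to try as a normal user):

```shell
# On a live server, set RUNDIR=/var/run/php-fpm and run as root so
# the directory ends up owned by root.
RUNDIR="${RUNDIR:-/tmp/php-fpm-demo}"
mkdir -p "$RUNDIR"
chmod 0755 "$RUNDIR"
ls -ld "$RUNDIR"
```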

The following directives are ones that you will want to change based on site traffic and server resources:

  • pm.start_servers – The number of PHP-FPM children that should be spawned automatically
  • pm.max_children – The maximum number of children allowed (connection limit)
  • pm.min_spare_servers – The minimum number of spare idle PHP-FPM servers to have available
  • pm.max_spare_servers – The maximum number of spare idle PHP-FPM servers to have available
  • pm.max_requests – Maximum number of requests each child should handle before re-spawning
  • request_terminate_timeout – Maximum amount of time to process a request (similar to max_execution_time in php.ini)

These directives should all be adjusted based on the needs of each site. For instance, a really busy site with many, many simultaneous PHP connections may need 3 servers each with 100 children, whilst a low-traffic site may only need 1 server with 10 children. The thing to remember is that those children are only active whilst actually processing PHP, which could be a very short amount of time (possibly measured in milliseconds). Conversely, setting the numbers too high for the server to handle (in terms of available memory and processor) will result in poor performance when sites are under load. As with anything, tuning the pool requirements will take time and analysis to get right.
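A starting point many admins use for pm.max_children is the memory budget for PHP divided by the average child size. Both numbers below are made-up placeholders; measure your own with something like ps -C php-fpm -o rss:

```shell
# Hypothetical figures: 2048 MB budgeted for PHP-FPM, ~40 MB
# resident per child; integer division gives a ballpark ceiling.
PHP_BUDGET_MB=2048
AVG_CHILD_MB=40
echo "suggested pm.max_children: $(( PHP_BUDGET_MB / AVG_CHILD_MB ))"
```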

Remember to reload (or restart) PHP-FPM after making any changes to its configuration (global or pool-based):

/etc/init.d/php-fpm reload

4. Verification:

After you have configured Apache, your vhosts, PHP, PHP-FPM, and the other components mentioned throughout this article, you will need to restart them all (just for sake of cleanness). To verify that things are as they should be, you can simply browse to one of the sites configured in your vhosts and make sure that it functions as intended. You can also verify some of the individual components with the following commands:

Apache supports threads, and is using the Event MPM
# apache2 -V | grep 'Server MPM\|threaded'
Server MPM:     event
  threaded:     yes (fixed thread count)

Apache / vhost syntax
# apache2ctl configtest
 * Checking apache2 configuration ...     [ ok ]

5. Troubleshooting:

So you followed the directions here, and it didn’t go flawlessly?! Hark! 😮 In all seriousness, there are several potential points of failure for setting up Apache to communicate with PHP-FPM via mod_proxy_fcgi, but fortunately, there are some methods for troubleshooting (one of the beauties of Linux and other UNIX derivatives is logging).

By default, mod_proxy and mod_proxy_fcgi will report errors to the log that you have specified via the ErrorLog directive, either in your overall Apache configuration, or within each vhost. Those logs serve as a good starting point for tracking down problems. For instance, before I had appropriately set the UNIX socket permission options in php-fpm.conf, I noticed these errors:

[Wed Jul 08 00:03:26.717538 2015] [proxy:error] [pid 2582:tid 140159555143424] (13)Permission denied: AH02454: FCGI: attempt to connect to Unix domain socket /var/run/php-fpm/site1.sock (*) failed
[Wed Jul 08 00:03:26.717548 2015] [proxy_fcgi:error] [pid 2582:tid 140159555143424] [client 52350] AH01079: failed to make connection to backend: httpd-UDS

The ‘Permission denied’ error indicated that Apache didn’t have permission to write to the UNIX socket. Setting the permissions accordingly—see the listen.owner and listen.group directives in subsection 3f—fixed the problem.

Say that the errors don’t provide detailed enough information, though. You can set the LogLevel directive (again, either in your overall Apache configuration, or per vhost) to substantially increase the verbosity—to debug or even trace levels. You can even change the log level just for certain modules, so that you don’t get bogged down with information that is irrelevant to your particular error. For instance:

LogLevel warn proxy:debug proxy_fcgi:debug

will set the overall LogLevel to “warn,” but the LogLevel for mod_proxy and mod_proxy_fcgi to “debug,” in order to make them more verbose.

6. Conclusion:

If you’ve stuck with me throughout this gigantic post, you’re either really interested in systems engineering and the web stack, or you may have some borderline masochistic tendencies. 😛 I hope that you have found the article helpful in setting up Apache to proxy PHP interpretation to PHP-FPM via mod_proxy_fcgi, and that you’re able to see the benefits of this type of offloading. Not only does this method of processing PHP files free up resources so that Apache can serve more simultaneous connections with fewer resources, but it also more closely adheres to the UNIX philosophy of having applications perform only one task, and do so in the best way possible.

If you have any questions, comments, concerns, or suggestions, please feel free to leave a comment.


August 04, 2015
Anthony Basile a.k.a. blueness (homepage, bugs)

About five years ago, I became interested in alternative C standard libraries like uClibc and musl and their application to more than just embedded systems.  Diving into the implementation details of those old familiar C functions can be tedious, but also challenging especially under constraints of size, performance, resource consumption, features and correctness.  Yet these details can make a big difference as you can see from this comparison of glibc, uClibc, dietlibc and musl.  I first encountered libc performance issues when I was doing number crunching work for my ph.d. in physics at Cornell  in the late 80’s.  I was using IBM RS6000’s (yes!) and had to jump into the assembly.  It was lots of fun and I’ve loved this sort of low level stuff ever since.

Over the past four years, I’ve been working towards producing stage3 tarballs for both uClibc and musl systems on various arches, and with help from generous contributors (thanks guys!), we now have a pretty good selection on the mirrors.  These stages are not strictly speaking embedded in that they do not make use of busybox to provide their base system.  Rather, they employ the same packages as our glibc stages and use coreutils, util-linux, net-tools and friends.  Except for small details here and there, they only differ from our regular stages in the libc they use.

If you read my last blog posting on this new release engineering tool I’m developing called GRS, you’ll know that I recently hit a milestone in this work.  I just released three hardened, fully featured XFCE4 desktop systems for amd64.  These systems are identical to each other except for their libc, again modulo a few details here and there.  I affectionately dubbed these Bluemoon for glibc, Lilblue for uClibc, and Bluedragon for musl.  (If you’re curious about the names, read their homepages.)  You can grab all three off my dev space, or off the mirrors under experimental/amd64/{musl,uclibc} if you’re looking for just Lilblue or Bluedragon — the glibc system is too boring to merit mirror space.  I’ve been maintaining Lilblue for a couple of years now, but with GRS, I can easily maintain all three and it’s nice to have them for comparison.

If you play with these systems, don’t expect to be blown away by some amazing differences.  They are there and they are noticeable, but they are also subtle.  For example, you’re not going to notice code correctness in, say, pthread_cancel() unless you’re coding some application and expect certain behavior but don’t get it because of some bad code in your libc.  Rather, the idea here is to push the limits of uClibc and musl to see what breaks and then fix it, at least on amd64 for now.  Each system includes about 875 packages in the release tarballs, and an extra 5000 or so binpkgs built using GRS.  This leads to lots of breakage which I can isolate and address.  Often the problem is in the package itself, but occasionally it’s the libc and that’s where the fun begins!  I’ve asked Patrick Lauer for some space where I can set up my GRS builds and serve out the binpkgs.  Hopefully he’ll be able to set me up with something.  I’ll also be able to make the build.log files available via the web for packages that fail, so that GRS will double as a poor man’s tinderbox.

In a future article I’ll discuss musl, but in the remainder of this post, I want to highlight some big ticket items we’ve hit in uClibc.  I’ve spent a lot of time building up machinery to maintain the stages and desktops, so now I want to focus my attention on fixing the libc problems.  The following laundry list is as much a TODO for me as it is for your entertainment.  I won’t blame you if you want to skip it.  The selection comes from Gentoo’s bugzilla and I have yet to compare it to upstream’s bugs since I’m sure there’s some overlap.

Currently there are thirteen uClibc stage3’s being supported:

  • stage3-{amd64,i686}-uclibc-{hardened,vanilla}
  • stage3-armv7a_{softfp,hardfp}-uclibc-{hardened,vanilla}
  • stage3-mipsel3-uclibc-{hardened,vanilla}
  • stage3-mips32r2-uclibc-{hardened,vanilla}
  • stage3-ppc-uclibc-vanilla

Here hardened and vanilla refer to the toolchain hardening as we do for regular glibc systems.  Some bugs are arch-specific, some are common.  Let’s look at each in turn.

* amd64 and i686 are the only stages considered stable and are distributed along side our regular stage3 releases in the mirrors.  However, back in May I hit a rather serious bug (#548950) in amd64 which is our only 64-bit arch.  The problem was in the implementation of pread64() and pwrite64() and was triggered by a change in the fsck code with e2fsprogs-1.42.12.  The bug led to data corruption of ext3/ext4 filesystems which is a very serious issue for a release advertised as stable.  The problem was that the wrong _syscall wrapper was being used for amd64.  If we don’t require 64-bit alignment, and you don’t on a 64-bit arch (see uClibc/libc/sysdeps/linux/x86_64/bits/uClibc_arch_features.h), then you need to use _syscall4, not _syscall6.

The issue was actually fixed by Mike Frysinger (vapier) in uClibc’s git HEAD but not in the 0.9.33 branch which is the basis of the stages.  Unfortunately, there hasn’t been a new release of uClibc in over three years, so backporting meant disentangling the fix from some new thread-cancel stuff and was annoying.

* The armv7a stages are close to being stable, but they are still being distributed in the mirrors under experimental.  The problem here is not uClibc specific, but due to hardened gcc-4.8 and it affects all our hardened arm stages.  With gcc-4.8, we’re turning on -fstack-check=yes by default in the specs and this breaks alloca().  The workaround for now is to use gcc-4.7, but we should turn off -fstack-check for arm until bug #518598 – (PR65958) is fixed.

* I hit this one weird bug when building the mips stages, bug #544756.  An illegal instruction is encountered when building any version of python using gcc-4.9 with -O1 optimization or above, yet it succeeds with -O0.  What I suspect happened here is some change in the optimization code for mips between gcc-4.8 and 4.9 introduced the problem.  I need to distill out some reduced code before I submit to gcc upstream.   For now I’ve p.masked >=gcc-4.9 in default/linux/uclibc/mips.  Since mips lacks stable keywords, this actually brings the mips stages in better line with the other stages that use gcc-4.8 (or 4.7 in the case of hardened arm).

* Unfortunately, ppc is plagued with bug #517160: PIE code is broken on ppc and causes a seg fault in plt_pic32.__uClibc_main().  Since PIE is an integral part of how we do hardening in Gentoo, there’s no hardened ppc-uclibc stage.  Luckily, there are no known issues with the vanilla ppc stage3.

Finally, there are five interesting bugs which are common to all arches.  These problems lie in the implementation of functions in uClibc and deviate from the expected behavior.  I’m particularly grateful to the people who found them by running the test suites for various packages.

* Bug 527954 – One of wget’s tests makes use of fnmatch(3) which intelligently matches file names or paths.  On uClibc, there is an unexpected failure in a test where it should return a non-match when fnmatch’s options contain FNM_PATHNAME and a matching slash is not present in both strings.  Apparently this is a duplicate of a really old bug (#181275).

* Bug 544118 – René noticed this problem in an e2fsprogs test.  The failure here is due to the fact that setbuf(3) is ineffective at changing the buffer size of stdout if it is run after some printf(3).  Output to stdout is buffered while output to stderr is not.  This particular test tries to preserve the order of output from a sequence of writes to stdout and stderr by setting the buffer size of stdout to zero.  But setbuf() only works on uClibc if it is invoked before any output to stdout.  As soon as there is even one printf(), all invocations of setbuf(stdout, …) are ineffective!

* Bug 543972 – This one came up in the gzip test suite.  One of the tests there checks to make sure that gzip properly fails if it runs out of disk space.  It uses /dev/full, which is a pseudo-device provided by the kernel that pretends to always be full.  The expectation is that fclose() should set errno = ENOSPC when closing /dev/full.  It does on glibc but it doesn’t in uClibc.  It actually happens when piping stdout to /dev/full, so the problem may even be in dup(2).

* Bug 543668 – There is some problem in uClibc’s dlopen()/dlclose() code.  I wrote about this in a previous blog post and also hit it with syslog-ng’s plugin system.  A seg fault occurs when unmapping a plugin from the memory space of a process during do_close() in uClibc/ldso/libdl/libdl.c.  My guess is that the problem lies in uClibc’s accounting of the mappings and when it tries to unmap an area of memory which is not mapped or was previously unmapped, it seg faults.

July 31, 2015
Anthony Basile a.k.a. blueness (homepage, bugs)

The other day I installed Ubuntu 15.04 on one of my boxes.  I just needed something where I could throw in a DVD, hit install and be done.  I didn’t care about customization or choice, I just needed a working Linux system from which I could do chroot work.  Thousands of people around the world install Ubuntu this way and when they’re done, they have a stock system like any other Ubuntu installation, all identical like frames in an Andy Warhol lithograph.  Replication as a form of art.

In contrast, when I install a Gentoo system, I enjoy the anxiety of choice.  Should I use syslog-ng, metalog, or skip a system logger altogether?  If I choose syslog-ng, then I have a choice of 14 USE flags for 2^14 possible configurations for just that package.  And that’s just one of some 850+ packages that are going to make up my desktop.  In contrast to Ubuntu where every installation is identical (whatever “idem” means in this context), the sheer space of possibilities makes no two Gentoo systems the same unless there is some concerted effort to make them so.  In fact, Gentoo doesn’t even have a notion of a “stock” system unless you count the stage3s which are really bare bones.  There is no “stock” Gentoo desktop.

With the work I am doing with uClibc and musl, I needed a release tool that would build identical desktops repeatedly and predictably where all the choices of packages and USE flags were laid out a priori in some specifications.  I considered catalyst stage4, but catalyst didn’t provide the flexibility I wanted.  I initially wrote some bash scripts to build an XFCE4 desktop from uClibc stage3 tarballs (what I dubbed “Lilblue Linux”), but this was very much ad hoc code and I needed something that could be generalized so I could do the same for a musl-based desktop, or indeed any Gentoo system I could dream up.

This led me to formulate the notion of what I call a “Gentoo Reference System” or GRS for short — maybe we could make stock Gentoo systems available.  The idea here is that one should be able to define some specs for a particular Gentoo system that will unambiguously define all the choices that go into building that system.  Then all instances built according to those particular GRS specs would be identical in much the same way that all Ubuntu systems are the same.  In a Warholian turn, the artistic choices in designing the system would be pushed back into the specs and become part of the automation.  You draw one frame of the lithograph and you magically have a million.

The idea of these systems being “references” was also important for my work because, with uClibc or musl, there’s a lot of package breakage — remember, you’re pushing up against actual implementations of C functions, and nearly everything in your system is written in C.  So, in the space of all possible Gentoo systems, I needed some reference points that worked.  I needed those magical combinations of flags and packages that would build and yield useful systems.  It was also important that these references be easily kept working over time since Gentoo systems evolve as the main tree, or overlays, are modified.  Since on some successive build something might break, I needed to quickly identify the delta and address it.  The metaphor that came up in my head from my physics background is that of phase space.  In the swirling mass of evolving dynamical systems, I pictured these “Gentoo Reference Systems” as markers etching out a well defined path over time.

Enough with the metaphors, how does GRS work?  There are two main utilities, grsrun and grsup.  The first is run on a build machine and generates the GRS release as well as any extra packages and updates.  These are delivered as binpkgs.  In contrast, grsup is run on an installed GRS instance and it’s used for package management.  Since we’re working in a world of identical systems, grsup prefers working with binpkgs that are downloaded from some build machine, but it can revert to building locally as well.

The GRS specs for some system are found on a branch of a git repository.  Currently the repo has four branches, each for one of the four GRS specs housed there.  grsrun is then directed to sync the remote repo locally, check out the branch of the GRS system we want to build, and begin reading a script file called build which directs grsrun on what steps to take.  The scripting language is very simple and contains only a handful of different directives.  After a stage tarball is unpacked, build can direct grsrun to do any of the following:

mount and umount – Do a bind mount of /dev/, /dev/pts/ and other directories that are required to get a chroot ready.

populate – Selectively copy files from the local repo to the chroot.  Any files can be copied in, so, for example, you can prepare a pristine home directory for some user with a pre-configured desktop.  Or you can add customized configuration files to /etc for services you plan to run.

runscript – This will run some bash or python script in the chroot.  The scripts are copied from the local repo to /tmp of the chroot and executed there.  These scripts can be like the ones that catalyst runs during stage1/2/3 but can also be scripts to add users and groups, to add services to runlevels, etc.  Think of anything you would do when growing a stage3 into the system you want, script it up, and GRS will automate it for you.

kernel – This looks for a kernel config file in the local repo, parses it for the version, builds it, and both bundles it as a package called linux-image-<version>.tar.xz for later distribution and installs it into the chroot.  grsup knows how to work with these linux-image-<version>.tar.xz files and can treat them like binpkgs.

tarit and hashit – These directives create a release tarball of the entire chroot and generate the digests.

pivot – If you built a chroot within a chroot, like catalyst does during stage1, then this pivots the inner chroot out so that further building can make use of it.

From an implementation point of view, the GRS suite is written in python and each of the above directives is backed by a simple python class.  It’s easy, for instance, to implement more directives this way.  E.g. if you want to build a bootable CD image, you can include a directive called isoit, write a python class for what’s required to construct the iso image and glue this new class into the grs module.
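As a purely illustrative sketch of that directive-per-class design (the class names, registry and function below are invented for illustration, not the actual grs module API), a minimal dispatcher could look like:

```python
class Directive:
    """Base class: one subclass per build-script directive."""
    name = None

    def execute(self, *args):
        raise NotImplementedError

class Populate(Directive):
    name = "populate"

    def execute(self, *args):
        # A real implementation would copy files from the local repo
        # into the chroot; here we just record the action.
        return "populate %s" % " ".join(args)

class TarIt(Directive):
    name = "tarit"

    def execute(self, *args):
        return "tarit"

# Registry mapping directive names to their backing classes.  Adding a
# new directive (e.g. isoit) is just another subclass plus an entry here.
REGISTRY = {cls.name: cls for cls in (Populate, TarIt)}

def run_build_script(lines):
    """Interpret each 'directive arg ...' line via its backing class."""
    results = []
    for line in lines:
        parts = line.split()
        if not parts:
            continue
        directive, args = parts[0], parts[1:]
        results.append(REGISTRY[directive]().execute(*args))
    return results
```

The point is only that each directive stays a small, self-contained class, so extending the build language means writing one class and registering it.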

If you’re familiar with catalyst, at this point you might be wondering what’s the difference?  Can’t you do all of this with catalyst?  There is a lot of overlap, but the emphasis is different.  For example, I wanted to be able to drop in a pre-configured desktop for a user.  How would I do that with catalyst?  I guess I could create an overlay with packages for some pre-built home directory, but that’s a perversion of what ebuilds are for — we should never be installing into /home.  Rather, with grsrun I can just populate the chroot with whatever files I like anywhere in the filesystem.  More importantly, I want to be able to control what USE flags are set and, in general, manage all of /etc/portage/.  catalyst does provide portage_configdir, which populates /etc/portage when building stages, but it’s pretty static.  Instead, grsup and two other utilities, install-worldconf and clean-worldconf, can dynamically manage files under /etc/portage/ according to a configuration file called world.conf.

Lapsing back into metaphor, I see catalyst as rigid and frozen whereas grsrun is loose and fluid.  You can use grsrun to build stage1/2/3 tarballs which are identical to those built with catalyst, and in fact I’ve done so for hardened amd64 multilib stages so I could compare.  But with grsrun you have too much freedom in writing the scripts and files that go into the GRS specs, and chances are you’ll get something wrong, whereas with catalyst the build is pretty regimented and you’re guaranteed to get uniformity across arches and profiles.  So while you can do the same things with each tool, it’s not recommended that you use grsrun to do catalyst stage builds — there’s too much freedom.  Whereas when building desktops or servers you might welcome that freedom.

Finally, let me close with how grsup works.  As mentioned above, the GRS specs for some system include a file called world.conf.  It’s in configparser format and it specifies files and their contents in the /etc/portage/ directory.  An example section in the file looks like:

package.use : app-crypt/gpgme:1 -common-lisp static-libs
package.env : app-crypt/gpgme:1 app-crypt_gpgme_1
env : LDFLAGS=-largp

This says, for package app-crypt/gpgme:1, drop a file called app-crypt_gpgme_1 in /etc/portage/package.use/ that contains the line “app-crypt/gpgme:1 -common-lisp static-libs”, drop another file by the same name in /etc/portage/package.env/ with line “app-crypt/gpgme:1 app-crypt_gpgme_1”, and finally drop a third file by the same name in /etc/portage/env/ with line “LDFLAGS=-largp”.  grsup is basically a wrapper to emerge which first populates /etc/portage/ according to the world.conf file, then emerges the requested pkg(s), preferring the use of binpkgs over building locally as stated above, and finally does a clean up on /etc/portage/.  install-worldconf and clean-worldconf isolate the populate and clean up steps so they can be used in scripts run by grsrun when building the release.  To be clear, you don’t have to use grsup to maintain a GRS system.  You can maintain it just like any other Gentoo system, but if you manage your own /etc/portage/, then you are no longer tracking the GRS specs.  grsup is meant to make sure you update, install or remove packages in a manner that keeps the local installation in compliance with the GRS specs for that system.
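To make that concrete, here is a rough sketch (not the real install-worldconf code; the section-header layout and file-naming convention are assumptions for illustration) of how such a world.conf section could be expanded into those three one-line files:

```python
import configparser
import os

# Assumed layout: one section per package atom, one key per
# /etc/portage/ subdirectory.  This mirrors the example in the text.
WORLD_CONF = """\
[app-crypt/gpgme:1]
package.use : app-crypt/gpgme:1 -common-lisp static-libs
package.env : app-crypt/gpgme:1 app-crypt_gpgme_1
env : LDFLAGS=-largp
"""

def install_worldconf(text, portage_root):
    """Expand a world.conf-style string into files under portage_root."""
    parser = configparser.ConfigParser(delimiters=(':',), interpolation=None)
    parser.read_string(text)
    for section in parser.sections():
        # Assumed naming convention: category/pkg:slot -> category_pkg_slot
        fname = section.replace('/', '_').replace(':', '_')
        for subdir, line in parser[section].items():
            d = os.path.join(portage_root, subdir)
            os.makedirs(d, exist_ok=True)
            with open(os.path.join(d, fname), 'w') as f:
                f.write(line + '\n')
```

Run against a scratch directory, this drops package.use/app-crypt_gpgme_1, package.env/app-crypt_gpgme_1 and env/app-crypt_gpgme_1 with exactly the lines quoted above.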

All this is pretty alpha stuff, so I’d appreciate comments on design and implementation before things begin to solidify.  I am using GRS to build three desktop systems which I’ll blog about next.  I’ve dubbed these systems Lilblue, which is a hardened amd64 XFCE4 desktop with uClibc as its standard libc, Bluedragon, which uses musl, and finally Bluemoon, which uses good old glibc.  (Lilblue is actually a few years old, but the latest release is the first built using GRS.)  All three desktops are identical with respect to the choice of packages and USE flags, and differ only in their libc’s so one can compare the three.  Lilblue and Bluedragon are on the mirrors, or you can get all three from my dev space.  I didn’t push out Bluemoon on the mirrors because a glibc based desktop is nothing special.  But since building with GRS is as simple as cloning a git branch and tweaking, and since the comparison is useful, why not?

The GRS home page is at

July 23, 2015
Johannes Huber a.k.a. johu (homepage, bugs)
Tasty calamares in Gentoo (July 23, 2015, 20:36 UTC)

First of all it’s nothing to eat. So what is it then? This is the introduction by upstream:

Calamares is an installer framework. By design it is very customizable, in order to satisfy a wide variety of needs and use cases. Calamares aims to be easy, usable, beautiful, pragmatic, inclusive and distribution-agnostic. Calamares includes an advanced partitioning feature, with support for both manual and automated partitioning operations. It is the first installer with an automated “Replace Partition” option, which makes it easy to reuse a partition over and over for distribution testing. Got a Linux distribution but no system installer? Grab Calamares, mix and match any number of Calamares modules (or write your own in Python or C++), throw together some branding, package it up and you are ready to ship!

I have just added the newest release version (1.1.2) to the tree, and a live version (9999) to my dev overlay.  The underlying technology stack is mainly Qt5, KDE Frameworks, Python3, YAML and systemd.  It has been picked up, and is of course being evaluated, by several Linux distributions.

You may be asking why I have added it to Gentoo, where we have OpenRC as the default init system?!  You are right, at the moment it is not very useful for Gentoo.  But for example Sabayon, as a downstream of us, will (maybe) use it for the next releases, so in the first place it is just a service for our downstreams.

The second reason: there is a discussion on the gentoo-dev mailing list at the moment about rebooting the Gentoo installer.  Instead of creating yet another installer implementation, we have two potential ways to pick it up, which are not mutually exclusive:

1. Write modules to make it work with sysvinit aka OpenRC
2. Solve Bug #482702 – Provide alternative stage3 tarballs using sys-apps/systemd

Have fun!

[2] johu dev overlay
[3] gentoo-dev ml – Rebooting the Installer Project
[4] Bug #482702 – Provide alternative stage3 tarballs using sys-apps/systemd

July 22, 2015
Rafael Goncalves Martins a.k.a. rafaelmartins (homepage, bugs)
Hello, World! (July 22, 2015, 06:45 UTC)

Hi all,

I'm starting a new blog!

Actually, it is almost the same blog, but powered by a new "blogging engine", and I don't want to spend time migrating the old posts, which are mostly outdated by now.

The old content is archived here, if you need it due to some crazy reason:

For Gentoo planet readers, everything should be working just fine. I created rewrite rules to keep the old atom feeds working.

I'll publish another blog post soon, talking about the "blogging engine" and my next plans.


July 20, 2015
Andreas K. Hüttel a.k.a. dilfridge (homepage, bugs)

Since updating to VMware Workstation 11 (from the Gentoo vmware overlay), I've experienced a lot of hangs of my KDE environment whenever a virtual machine was running. Basically my system became unusable, which is bad if your workflow depends on accessing both Linux and (gasp!) Windows 7 (as guest). I first suspected a dbus timeout (doing the "stopwatch test" for 25s waits), but it seems according to some reports that this might be caused by buggy behavior in kwin (4.11.21). Sadly I haven't been able to pinpoint a specific bug report.

Now, I'm not sure if the problem is really 100% fixed, but at least now the lags are much smaller, and here's how to do it (kudos to matthewls and vrenn):

  • Add to /etc/xorg.conf in the Device section
    Option "TripleBuffer" "True"
  • Create a file in /etc/profile.d with content
    (yes that starts with a double underscore).
  • Log out, stop your display manager, restart it.
I'll leave it as an exercise to the reader to figure out what these settings do. (Feel free to explain it in a comment. :) No guarantees of any kind. If this kills kittens you have been warned. Cheers.

July 18, 2015
Richard Freeman a.k.a. rich0 (homepage, bugs)
Running cron jobs as units automatically (July 18, 2015, 16:00 UTC)

I just added sys-process/systemd-cron to the Gentoo repository.  Until now I’ve been running it from my overlay and getting it into the tree was overdue.  I’ve found it to be an incredibly useful tool.

All it does is install a set of unit files and a crontab generator.  The unit files (best used by starting/enabling them) will run jobs from /etc/cron.* at the appropriate times.  The generator can parse /etc/crontab and create timer units for every line dynamically.

Note that the default Gentoo install runs the /etc/cron.* jobs from /etc/crontab, so if you aren’t careful you might end up running them twice.  The simplest solutions to this are to either remove those lines from /etc/crontab, or install systemd-cron with USE=etc-crontab-systemd, which will have the generator ignore /etc/crontab and instead look for /etc/crontab-systemd, where you can install jobs you’d like to run using systemd.

The generator works like you’d expect it to – if you edit the crontab file the units will automatically be created/destroyed dynamically.
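For intuition, the crontab-to-timer translation can be sketched as below (a toy converter, not systemd-cron's actual generator: it handles only plain numeric fields and '*', while the real one also supports ranges, lists, steps and special lines):

```python
# Map cron day-of-week digits to systemd calendar weekday names.
DOW = {'0': 'Sun', '1': 'Mon', '2': 'Tue', '3': 'Wed',
       '4': 'Thu', '5': 'Fri', '6': 'Sat', '7': 'Sun'}

def cron_to_oncalendar(line):
    """Turn 'min hour dom mon dow cmd...' into an OnCalendar= expression."""
    minute, hour, dom, month, dow = line.split()[:5]
    parts = []
    if dow != '*':
        parts.append(DOW[dow])

    def f(x):
        # Zero-pad numeric fields, keep '*' wildcards as-is.
        return x if x == '*' else '%02d' % int(x)

    parts.append('*-%s-%s' % (f(month), f(dom)))        # date part
    parts.append('%s:%s:00' % (f(hour), f(minute)))     # time part
    return ' '.join(parts)
```

So a crontab line like "30 3 * * 1 /usr/local/bin/backup" corresponds to a timer unit carrying OnCalendar=Mon *-*-* 03:30:00.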

One warning about timer units compared to cron jobs is that the jobs are run as services, which means that when the main process dies all its children will be killed.  If you have anything in /etc/cron.* which forks you’ll need to have the main script wait at the end.

On the topic of race conditions, each cron.* directory and each /etc/crontab line will create a separate unit.  Those units will all run in parallel (to the extent that one is still running when the next starts), but within a cron.* directory the scripts will run in series.  That may be a bit different from some cron implementations which may limit the number of simultaneous jobs globally.

All the usual timer unit logic applies.  stdout goes to the journal, systemctl list-timers shows what is scheduled, etc.

Filed under: gentoo, linux, systemd

July 16, 2015

Libav is an open source set of tools for audio and video processing.

After talking with Luca Barbato, who is both a Gentoo and Libav developer, I spent a bit of my time fuzzing libav, and in particular I fuzzed libavcodec through avplay.
I hit a crash and after I reported it to upstream, they confirmed the issue as a divide-by-zero.

The complete gdb output:

ago@willoughby $ gdb --args /usr/bin/avplay avplay.crash 
GNU gdb (Gentoo 7.7.1 p1) 7.7.1
Copyright (C) 2014 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later 
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.  Type "show copying"
and "show warranty" for details.
This GDB was configured as "x86_64-pc-linux-gnu".
Type "show configuration" for configuration details.
For bug reporting instructions, please see:
Find the GDB manual and other documentation resources online at:
For help, type "help".
Type "apropos word" to search for commands related to "word"...
Reading symbols from /usr/bin/avplay...Reading symbols from /usr/lib64/debug//usr/bin/avplay.debug...done.
(gdb) run
Starting program: /usr/bin/avplay avplay.crash
warning: Could not load shared library symbols for
Do you need "set solib-search-path" or "set sysroot"?
[Thread debugging using libthread_db enabled]
Using host libthread_db library "/lib64/".
avplay version 11.3, Copyright (c) 2003-2014 the Libav developers
  built on Jun 19 2015 09:50:59 with gcc 4.8.4 (Gentoo 4.8.4 p1.6, pie-0.6.1)
[New Thread 0x7fffec4c7700 (LWP 7016)]
[New Thread 0x7fffeb166700 (LWP 7017)]
INFO: AddressSanitizer ignores mlock/mlockall/munlock/munlockall
[New Thread 0x7fffe9e28700 (LWP 7018)]
[h263 @ 0x60480000f680] Format detected only with low score of 25, misdetection possible!
[h263 @ 0x60440001f980] Syntax-based Arithmetic Coding (SAC) not supported
[h263 @ 0x60440001f980] Reference Picture Selection not supported
[h263 @ 0x60440001f980] Independent Segment Decoding not supported
[h263 @ 0x60440001f980] header damaged

Program received signal SIGFPE, Arithmetic exception.
[Switching to Thread 0x7fffe9e28700 (LWP 7018)]
0x00007ffff21e3313 in ff_h263_decode_mba (s=s@entry=0x60720005a100) at /tmp/portage/media-video/libav-11.3/work/libav-11.3/libavcodec/ituh263dec.c:142
142     /tmp/portage/media-video/libav-11.3/work/libav-11.3/libavcodec/ituh263dec.c: No such file or directory.
(gdb) bt
#0  0x00007ffff21e3313 in ff_h263_decode_mba (s=s@entry=0x60720005a100) at /tmp/portage/media-video/libav-11.3/work/libav-11.3/libavcodec/ituh263dec.c:142
#1  0x00007ffff21f3c2d in ff_h263_decode_picture_header (s=0x60720005a100) at /tmp/portage/media-video/libav-11.3/work/libav-11.3/libavcodec/ituh263dec.c:1112
#2  0x00007ffff1ae16ed in ff_h263_decode_frame (avctx=0x60440001f980, data=0x60380002f480, got_frame=0x7fffe9e272f0, avpkt=) at /tmp/portage/media-video/libav-11.3/work/libav-11.3/libavcodec/h263dec.c:444
#3  0x00007ffff2cd963e in avcodec_decode_video2 (avctx=0x60440001f980, picture=0x60380002f480, got_picture_ptr=got_picture_ptr@entry=0x7fffe9e272f0, avpkt=avpkt@entry=0x7fffe9e273b0) at /tmp/portage/media-video/libav-11.3/work/libav-11.3/libavcodec/utils.c:1600
#4  0x00007ffff44d4fb4 in try_decode_frame (st=st@entry=0x60340002fb00, avpkt=avpkt@entry=0x601c00037b00, options=) at /tmp/portage/media-video/libav-11.3/work/libav-11.3/libavformat/utils.c:1910
#5  0x00007ffff44ebd89 in avformat_find_stream_info (ic=0x60480000f680, options=0x600a00009e80) at /tmp/portage/media-video/libav-11.3/work/libav-11.3/libavformat/utils.c:2276
#6  0x0000000000431834 in decode_thread (arg=0x7ffff7e0b800) at /tmp/portage/media-video/libav-11.3/work/libav-11.3/avplay.c:2268
#7  0x00007ffff0284b08 in ?? () from /usr/lib64/
#8  0x00007ffff02b4be9 in ?? () from /usr/lib64/
#9  0x00007ffff4e65aa8 in ?? () from /usr/lib/gcc/x86_64-pc-linux-gnu/4.8.4/
#10 0x00007ffff0062204 in start_thread () from /lib64/
#11 0x00007fffefda957d in clone () from /lib64/

Affected version:
11.3 (and maybe past versions)

Fixed version:
11.5 and 12.0

Commit fix:;a=commitdiff;h=0a49a62f998747cfa564d98d36a459fe70d3299b;hp=6f4cd33efb5a9ec75db1677d5f7846c60337129f

This bug was discovered by Agostino Sarubbo of Gentoo.


2015-06-21: bug discovered
2015-06-22: bug reported privately to upstream
2015-06-30: fix committed upstream
2015-07-14: CVE assigned
2015-07-16: advisory release

This bug was found with American Fuzzy Lop.
This bug does not affect ffmpeg.


July 14, 2015
siege: off-by-one in load_conf() (July 14, 2015, 19:04 UTC)

Siege is an http load testing and benchmarking utility.

During the test of a webserver, I hit a segmentation fault.  I recompiled siege with ASan and it clearly shows an off-by-one in load_conf().  The issue is reproducible without passing any arguments to the binary.
The complete output:

ago@willoughby ~ # siege
==488==ERROR: AddressSanitizer: heap-buffer-overflow on address 0x60200000d7f1 at pc 0x00000051ab64 bp 0x7ffcc3d19a70 sp 0x7ffcc3d19a68
READ of size 1 at 0x60200000d7f1 thread T0
#0 0x51ab63 in load_conf /var/tmp/portage/app-benchmarks/siege-3.1.0/work/siege-3.1.0/src/init.c:263:12
#1 0x515486 in init_config /var/tmp/portage/app-benchmarks/siege-3.1.0/work/siege-3.1.0/src/init.c:96:7
#2 0x5217b9 in main /var/tmp/portage/app-benchmarks/siege-3.1.0/work/siege-3.1.0/src/main.c:324:7
#3 0x7fb2b1b93aa4 in __libc_start_main /var/tmp/portage/sys-libs/glibc-2.20-r2/work/glibc-2.20/csu/libc-start.c:289
#4 0x439426 in _start (/usr/bin/siege+0x439426)

0x60200000d7f1 is located 0 bytes to the right of 1-byte region [0x60200000d7f0,0x60200000d7f1)
allocated by thread T0 here:
#0 0x4c03e2 in __interceptor_malloc /var/tmp/portage/sys-devel/llvm-3.6.1/work/llvm-3.6.1.src/projects/compiler-rt/lib/asan/
#1 0x7fb2b1bf31e9 in __strdup /var/tmp/portage/sys-libs/glibc-2.20-r2/work/glibc-2.20/string/strdup.c:42

SUMMARY: AddressSanitizer: heap-buffer-overflow /var/tmp/portage/app-benchmarks/siege-3.1.0/work/siege-3.1.0/src/init.c:263 load_conf
Shadow bytes around the buggy address:
0x0c047fff9aa0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
0x0c047fff9ab0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
0x0c047fff9ac0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
0x0c047fff9ad0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
0x0c047fff9ae0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
=>0x0c047fff9af0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa[01]fa
0x0c047fff9b00: fa fa 03 fa fa fa fd fd fa fa fd fa fa fa fd fd
0x0c047fff9b10: fa fa fd fa fa fa fd fa fa fa fd fa fa fa fd fd
0x0c047fff9b20: fa fa fd fd fa fa fd fa fa fa fd fa fa fa fd fa
0x0c047fff9b30: fa fa fd fa fa fa fd fd fa fa fd fa fa fa fd fa
0x0c047fff9b40: fa fa fd fa fa fa fd fa fa fa fd fa fa fa fd fd
Shadow byte legend (one shadow byte represents 8 application bytes):
Addressable: 00
Partially addressable: 01 02 03 04 05 06 07
Heap left redzone: fa
Heap right redzone: fb
Freed heap region: fd
Stack left redzone: f1
Stack mid redzone: f2
Stack right redzone: f3
Stack partial redzone: f4
Stack after return: f5
Stack use after scope: f8
Global redzone: f9
Global init order: f6
Poisoned by user: f7
Container overflow: fc
Array cookie: ac
Intra object redzone: bb
ASan internal: fe
Left alloca redzone: ca
Right alloca redzone: cb

Affected version:
3.1.0 (and maybe past versions).

Fixed version:
Not available.

Commit fix:
Not available.

This bug was discovered by Agostino Sarubbo of Gentoo.

Not really qualifiable; it is more a programming bug.

2015-06-09: bug discovered
2015-06-10: bug reported privately to upstream
2015-07-13: no upstream response
2015-07-14: advisory release


July 10, 2015
Johannes Huber a.k.a. johu (homepage, bugs)
Plasma 5 and kdbus testing (July 10, 2015, 22:03 UTC)

Thanks to Mike Pagano, who enabled kdbus support in Gentoo kernel sources almost 2 weeks ago, which gives us the choice to test it. As described in Mike's blog post, you will need to enable the use flags kdbus and experimental on sys-kernel/gentoo-sources and kdbus on sys-apps/systemd.

root # echo "sys-kernel/gentoo-sources kdbus experimental" >> /etc/portage/package.use/kdbus

If you are running >=sys-apps/systemd-221, kdbus is already enabled by default; otherwise you have to enable it.

root # echo "sys-apps/systemd kdbus" >> /etc/portage/package.use/kdbus

Any packages affected by the change need to be rebuilt.

root # emerge -avuND @world

Enable the kdbus option in the kernel:

General setup --->
<*> kdbus interprocess communication

Build the kernel, install it and reboot. Now we can check if kdbus is enabled properly. systemd should automatically mask dbus.service and start systemd-bus-proxyd.service instead (Thanks to eliasp for the info).

root # systemctl status dbus
● dbus.service
Loaded: masked (/dev/null)
Active: inactive (dead)

root # systemctl status systemd-bus-proxyd
● systemd-bus-proxyd.service - Legacy D-Bus Protocol Compatibility Daemon
Loaded: loaded (/usr/lib64/systemd/system/systemd-bus-proxyd.service; static; vendor preset: enabled)
Active: active (running) since Fr 2015-07-10 22:42:16 CEST; 16min ago
Main PID: 317 (systemd-bus-pro)
CGroup: /system.slice/systemd-bus-proxyd.service
└─317 /usr/lib/systemd/systemd-bus-proxyd --address=kernel:path=/sys/fs/kdbus/0-system/bus

Plasma 5 starts fine here using sddm as login manager. On Plasma 4 you may be interested in Bug #553460.

Looking forward to Plasma 5 getting user session support.

Have fun!

Nathan Zachary a.k.a. nathanzachary (homepage, bugs)


My tech articles—especially Linux ones—are some of the most-viewed on The Z-Issue. If this one has helped you, please consider a small donation to The Parker Fund by using the top widget at the right. Thanks!

For quite some time, I have tried to get links in Thunderbird to open automatically in Chrome or Chromium instead of defaulting to Firefox. Moreover, I have Chromium start in incognito mode by default, and I would like those links to do the same. This has been a problem for me since I don’t use a full desktop environment like KDE, GNOME, or even XFCE. As I’m really a minimalist, I only have my window manager (which is Openbox), and the applications that I use on a regular basis.

One thing I found, though, is that by using PCManFM as my file manager, I do have a few other related applications and utilities that help me customise my workspace and workflows. One such application is libfm-pref-apps, which allows for setting preferred applications. I found that I could do just what I wanted to do without mucking around with manually setting MIME types, writing custom hooks for Thunderbird, or any of that other mess.

Here’s how it was done:

  1. Execute /usr/bin/libfm-pref-apps from your terminal emulator of choice
  2. Under “Web Browser,” select “Customise” from the drop-down menu
  3. Select the “Custom Command Line” tab
  4. In the “Command line to execute” box, type /usr/bin/chromium --incognito --start-maximized %U
  5. In the “Application name” box, type “Chromium incognito” (or however else you would like to identify the application)

Voilà! After restarting Thunderbird, my links opened just like I wanted them to. The only modification that you might need to make is the “Command line to execute” portion. If you use the binary of Chrome instead of building the open-source Chromium browser, you would need to change it to the appropriate executable (and the path may be different for you, depending on your system and distribution). Also, in the command line that I have above, here are some notes about the switches used:

  • --incognito starts Chromium in incognito mode by default (that one should be obvious)
  • --start-maximized makes the browser window open in the full size of your screen
  • %U allows Chromium to accept a URL or list of URLs, and thus, opens the link that you clicked in Thunderbird

Under the hood, it seems like libfm-pref-apps is adding some associations in the ~/.config/mimeapps.list file. The relevant lines that I found were:

[Added Associations]
x-scheme-handler/http=userapp-chromium --incognito --start-maximized-8KZNYX.desktop;
x-scheme-handler/https=userapp-chromium --incognito --start-maximized-8KZNYX.desktop;

Hope this information helps you get your links to open in your browser of choice (and with the command-line arguments that you want)!


July 09, 2015
Alexys Jacob a.k.a. ultrabug (homepage, bugs)

In our previous attempt to upgrade our production cluster to 3.0, we had to roll back from the WiredTiger engine on primary servers.

Since then, we switched our whole cluster back to 3.0 MMAPv1, which has brought us better performance than 2.6 with no instability.

Production checklist

We decided to use this increase in performance to allow us some time to fulfil the entire production checklist from MongoDB, especially the migration to XFS. We’re slowly upgrading our servers kernels and resynchronising our data set after migrating from ext4 to XFS.

Ironically, the strong recommendation of XFS in the production checklist appeared 3 days after our failed attempt at WiredTiger… This is frustrating but gives some kind of hope.

I’ll keep on posting on our next steps and results.

Our hero WiredTiger Replica Set

While we were battling with our production cluster, we got a spontaneous major increase in the daily volumes from another platform which was running on a single Replica Set. This application is write intensive and very disk I/O bound. We were killing the disk I/O with almost a continuous 100% usage on the disk write queue.

Despite our frustration with WiredTiger so far, we decided to give it a chance considering that this time we were talking about a single Replica Set. We were very happy to see WiredTiger live up to its promises with an almost shocking serenity.

Disk I/O went down dramatically, almost as if nothing was happening any more. Compression did magic on our disk usage and our application went Roarrr !

July 06, 2015
Zack Medico a.k.a. zmedico (homepage, bugs)

I’ve created a utility called tardelta (ebuild available) that people using containers may be interested in. Here’s the README:

It is possible to optimize docker containers such that multiple containers are based off of a single copy of a common base image. If containers are constructed from tarballs, then it can be useful to create a delta tarball which contains the differences between a base image and a derived image. The delta tarball can then be layered on top of the base image using a Dockerfile like the following:

FROM base
ADD delta.tar.xz /

Many different types of containers can thus be derived from a common base image, while sharing a single copy of the base image. This saves disk space, and can also reduce memory consumption since it avoids having duplicate copies of base image data in the kernel’s buffer cache.
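The delta idea can be sketched with Python's tarfile module (an illustrative toy, not the actual tardelta code: it ignores deletions, ownership and timestamp noise, and compares members by content):

```python
import tarfile

def member_bytes(tf, member):
    """Return a member's content, or None for non-regular files."""
    f = tf.extractfile(member)
    return f.read() if f else None

def make_delta(base_path, derived_path, delta_path):
    """Write into delta_path every member of the derived tarball that is
    absent from, or differs from, the base tarball."""
    with tarfile.open(base_path) as base, \
         tarfile.open(derived_path) as derived, \
         tarfile.open(delta_path, 'w') as delta:
        base_members = {m.name: m for m in base.getmembers()}
        for m in derived.getmembers():
            old = base_members.get(m.name)
            if old is not None and member_bytes(base, old) == member_bytes(derived, m):
                continue  # unchanged: the layer inherits it from the base image
            delta.addfile(m, derived.extractfile(m))
```

Layering the resulting delta.tar on top of the base image with the Dockerfile above reconstructs the derived image while the base stays shared.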

July 03, 2015
Andreas K. Hüttel a.k.a. dilfridge (homepage, bugs)
KDEPIM without Akonadi (July 03, 2015, 14:16 UTC)

As you know, Gentoo is all about flexibility. You can run bleeding edge code (portage, our package manager, even provides you with installation from git master of KF5 and friends) or you can focus on stability and trusted code. This is why, for the last years, we've been offering our users KDEPIM 4.4 (the version where KMail e-mail storage was not integrated with Akonadi yet, also known as KMail1) as a drop-in replacement for the newer versions.
Recently the Nepomuk search framework has been replaced by Baloo, and after some discussion we decided that for the Nepomuk-related packages it's now time to go. Problem is, the old KDEPIM packages still depend on it via their Akonadi version. This is why - for those of our users who prefer to run KDEPIM 4.4 / KMail1 - we've decided to switch to Pali Rohár's kdepim-noakonadi fork (see also his 2013 blog post and the code). The packages are right now in the KDE overlay, but will move to the main tree after a few days of testing and be treated as an update of KDEPIM.
The fork is essentially KDEPIM 4.4 including some additional bugfixes from the KDE/4.4 git branch, with KAddressbook patched back to KDEPIM 4.3 state and references to Akonadi removed elsewhere. This is in some ways a functionality regression, since the integration of e.g. different calendar types is lost; however, in that version it never really worked perfectly anyway.

For now, you will still need the akonadi-server package, since kdepimlibs (outside kdepim and now at version 4.14.9) requires it to build, but you'll never need to start the Akonadi server. As a consequence, Nepomuk support can be disabled everywhere, and the Nepomuk core and client and Akonadi client packages can be removed by the package manager (--depclean, make sure to first globally disable the nepomuk useflag and rebuild accordingly).

You might ask "Why are you still doing this?"... well. I've been told Akonadi and Baloo are working very nicely, and again I've considered upgrading all my installations... but then on my work desktop, where I am using the newest and greatest KDE4PIM, bug 338658 pops up regularly and stops syncing of important folders. I just don't have the time to pointlessly dig deep into the Akonadi database every few days. So KMail1 it is, and I'll rather spend some time occasionally picking and backporting bugfixes.

June 26, 2015
Mike Pagano a.k.a. mpagano (homepage, bugs)
kdbus in gentoo-sources (June 26, 2015, 23:35 UTC)


Keeping with the theme of “Gentoo is about choice”, I’ve added the ability for users to include kdbus in their gentoo-sources kernel.  I wanted an easy way for gentoo users to test the patchset while maintaining the default installation of not having it at all.

In order to include the patchset on your gentoo-sources you’ll need the following:

1. A kernel version >= 4.1.0-r1

2. the ‘experimental’ use flag

3. the ‘kdbus’ use flag

I am not a systemd user, but from the ebuild it looks like if you build systemd with the ‘kdbus’ use flag it will use it.

Please send all kdbus bugs upstream by emailing the developers and including in the CC .

Read as much as you can about kdbus before you decide to build it into your kernel.  There have been security concerns mentioned (warranted or not), so following the upstream patch review would probably be prudent.

When a new version is released, wait a week before opening a bug.  Unless I am on vacation, I will most likely have it included before the week is out. Thanks!

NOTE: This is not some kind of Gentoo endorsement of kdbus.  Nor is it a Mike Pagano endorsement of kdbus.  This is no different than some of the other optional and experimental patches we carry.  I do all the genpatches work, which includes the patches, the ebuilds and the bugs; therefore, since I don’t mind the extra work of keeping this up to date, I can’t see any reason not to include it as an option.



June 25, 2015
Johannes Huber a.k.a. johu (homepage, bugs)
KDE Plasma 5.3.1 testing (June 25, 2015, 23:15 UTC)

After several months of packaging in the kde overlay and almost a month in tree, we have lifted the mask for KDE Plasma 5.3.1 today. If you want to test it out, here is some info on how to get it.

For easy transition we provide two new profiles, one for OpenRC and the other for systemd.

root # eselect profile list
[8] default/linux/amd64/13.0/desktop/plasma
[9] default/linux/amd64/13.0/desktop/plasma/systemd

The following example activates the Plasma systemd profile:

root # eselect profile set 9

On stable systems you need to unmask the qt5 use flag:

root # echo "-qt5" >> /etc/portage/profile/use.stable.mask

Any packages affected by the profile change need to be rebuilt:

root # emerge -avuND @world

For stable users, you also need to keyword the required packages. You can let portage handle it with the autokeyword feature, or just grab the keyword files for KDE Frameworks 5.11 and KDE Plasma 5.3.1 from the kde overlay.

Now just install it (this is full Plasma 5, the basic desktop would be kde-plasma/plasma-desktop):

root # emerge -av kde-plasma/plasma-meta

KDM is not supported for Plasma 5 anymore, so if you have installed it kill it with fire. Possible and tested login managers are SDDM and LightDM.

For detailed instructions read the full upgrade guide. Package bugs can be filed to and about the software to

Have fun,
the Gentoo KDE Team

June 23, 2015
Hanno Böck a.k.a. hanno (homepage, bugs)

tl;dr Most servers running a multi-user webhosting setup with Apache HTTPD probably have a security problem. Unless you're using Grsecurity there is no easy fix.

I am part of a small webhosting business that I have been running as a side project for quite a while. We offer customers user accounts on our servers running Gentoo Linux, and webspace with the typical Apache/PHP/MySQL combination. We recently became aware of a security problem regarding symlinks. I wanted to share this, because I was appalled by the fact that there was no obvious solution.

Apache has an option FollowSymLinks which basically does what it says. If a symlink in a webroot is accessed the webserver will follow it. In a multi-user setup this is a security problem. Here's why: If I know that another user on the same system is running a typical web application - let's say Wordpress - I can create a symlink to his config file (for Wordpress that's wp-config.php). I can't see this file with my own user account. But the webserver can see it, so I can access it with the browser over my own webpage. As I'm usually allowed to disable PHP I'm able to prevent the server from interpreting the file, so I can read the other user's database credentials. The webserver needs to be able to see all files, therefore this works. While PHP and CGI scripts usually run with user's rights (at least if the server is properly configured) the files are still read by the webserver. For this to work I need to guess the path and name of the file I want to read, but that's often trivial. In our case we have default paths in the form /home/[username]/websites/[hostname]/htdocs where webpages are located.

So the obvious solution one might think about is to disable the FollowSymLinks option and forbid users to set it themselves. However symlinks in web applications are pretty common and many will break if you do that. It's not feasible for a common webhosting server.
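For reference, the rejected server-wide approach would look roughly like this hypothetical vhost fragment (directory layout taken from the example above):

```apache
<Directory "/home/*/websites/*/htdocs">
    Options -FollowSymLinks
    # "Options" is deliberately absent from AllowOverride, so .htaccess
    # files cannot turn FollowSymLinks back on
    AllowOverride AuthConfig FileInfo Indexes Limit
</Directory>
```

As noted, though, this breaks the many web applications that rely on symlinks.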

Apache supports another option called SymLinksIfOwnerMatch. It's also pretty self-explanatory: it will only follow symlinks if they belong to the same user. That sounds like it solves our problem. However, there are two catches. First of all, the Apache documentation itself says that "this option should not be considered a security restriction", as it is still vulnerable to race conditions.

But even leaving the race condition aside, it doesn't really work. Web applications using symlinks will usually try to set FollowSymLinks in their .htaccess file; an example is Drupal, which by default ships with such an .htaccess file. If you forbid users to set FollowSymLinks, the option won't just be ignored - the whole webpage won't run and will return an error 500. What you could do is manually change the FollowSymLinks option in the .htaccess to SymLinksIfOwnerMatch. While this may be feasible in some cases, if you have a lot of users you don't want to explain to all of them that, in order to install some common web application, they have to manually edit some file they don't understand. (There's a bug report for Drupal asking to change FollowSymLinks to SymLinksIfOwnerMatch, but it's been ignored for several years.)
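The manual edit in question is a one-word change; for a Drupal-style .htaccess it would look like this (sketch):

```apache
# Before (as shipped by e.g. Drupal):
#   Options +FollowSymLinks
# After the manual edit:
Options +SymLinksIfOwnerMatch
```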

So using SymLinksIfOwnerMatch is neither secure nor really feasible. The documentation for cPanel discusses several possible solutions, but the recommended ones require proprietary modules. None of the proposed fixes work with a plain Apache setup, which I think is a pretty dismal situation: the most common web server has a severe security weakness in a very common configuration, and no usable solution for it.

The solution we chose is a feature of Grsecurity. Grsecurity is a Linux kernel patch that greatly enhances security, and we've been very happy with it in the past. There are a lot of reasons to use this patch; I'm regularly impressed by how often local root exploits simply don't work on a Grsecurity system.

Grsecurity has an option like SymlinksIfOwnerMatch (CONFIG_GRKERNSEC_SYMLINKOWN) that operates on the kernel level. You can define a certain user group (which in our case is the "apache" group) for which this option will be enabled. For us this was the best solution, as it required very little change.
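In kernel configuration terms this amounts to two options. The option name below comes from the post; the companion GID option and the numeric value are assumptions, so check your own kernel config and /etc/group:

```
CONFIG_GRKERNSEC_SYMLINKOWN=y
# numeric GID of the group the kernel-level check applies to
# (the "apache" group in our setup; 81 is only an example value)
CONFIG_GRKERNSEC_SYMLINKOWN_GID=81
```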

I haven't checked this, but I'm pretty sure that we were not alone with this problem. I'd guess that a lot of shared web hosting companies are vulnerable to this problem.

Here's the German blog post on our webpage, and here's the original blog post from an administrator at Uberspace (also German) that made us aware of this issue.

June 21, 2015
Andreas K. Hüttel a.k.a. dilfridge (homepage, bugs)
Perl 5.22 testers needed! (June 21, 2015, 08:14 UTC)

Gentoo users rejoice: for a few days already we have had Perl 5.22.0 packaged in the main tree. Since we don't know yet how much will break because of the update, it is masked for now. This means we need daring testers (preferably running ~arch systems; stable is also fine, but may need more work on your part to get things running) who unmask the new Perl, upgrade, and file bugs if needed!
Here's what you need in /etc/portage/package.unmask (and possibly package.accept_keywords) to get started (download); please always use the full block, since partial unmasking will lead to chaos. We're looking forward to your feedback!
# Perl 5.22.0 mask / unmask block

# end of the Perl 5.22.0 mask / unmask block
After the update, first run
emerge --depclean --ask
and afterwards
perl-cleaner --all
Ideally, perl-cleaner should not need to do anything. If you have depcleaned first and it still wants to rebuild something, that's a bug. Please file a bug report for the package that is getting rebuilt (but check our wiki page on known Perl 5.22 issues first to avoid duplicates).

June 19, 2015
Nathan Zachary a.k.a. nathanzachary (homepage, bugs)


My tech articles—especially Linux ones—are some of the most-viewed on The Z-Issue. If this one has helped you, please consider a small donation to The Parker Fund by using the top widget at the right. Thanks!

Recently, I wrote an article about amavisd not running with Postfix, and getting a “Connection refused to” error message that wasn’t easy to diagnose. Yesterday, I ran into another problem with amavisd refusing to start properly, and I wasn’t readily able to figure out why. By default, amavisd logs to your mail log, which for me is located at /var/log/mail.log, but could be different for you based on your syslogger preferences. The catch is that it will not log start-up errors there. So basically, you are seemingly left in the dark if you start amavisd and realise immediately thereafter that it isn’t running.

I decided to take a look at the init script for amavisd, and saw that there were some non-standard functions in it:

# grep 'extra_commands' /etc/init.d/amavisd
extra_commands="debug debug_sa"

These extra commands map to the following functions:

debug() {
	ebegin "Starting ${progname} in debug mode"
	"${prog}" debug
	eend $?
}

debug_sa() {
	ebegin "Starting ${progname} in debug-sa mode"
	"${prog}" debug-sa
	eend $?
}

Though these extra commands may be Gentoo-specific, they are pretty easy to replicate on other distributions by calling the binary directly. For instance, if you wanted the debug function, you would invoke the binary with ‘debug’ appended. On my system, that would be:

/usr/sbin/amavisd -c $LOCATION_OF_CONFIG_FILE debug

replacing the $LOCATION_OF_CONFIG_FILE with your actual config file location.

When I started amavisd in debug mode, the start-up problem that it was having became readily apparent:

# /etc/init.d/amavisd debug
* Starting amavisd-new in debug mode ...
Jun 18 12:48:21.948 /usr/sbin/amavisd[4327]: logging initialized, log level 5, syslog: amavis.mail
Jun 18 12:48:21.948 /usr/sbin/amavisd[4327]: starting. /usr/sbin/amavisd at amavisd-new-2.10.1 (20141025), Unicode aware, LANG="en_GB.UTF-8"

Jun 18 12:48:22.200 /usr/sbin/amavisd[4327]: Net::Server: 2015/06/18-12:48:22 Amavis (type Net::Server::PreForkSimple) starting! pid(4327)
Jun 18 12:48:22.200 /usr/sbin/amavisd[4327]: (!)Net::Server: 2015/06/18-12:48:22 Unresolveable host [::1]:10024 - could not load IO::Socket::INET6: Can't locate in @INC (you may need to install the Socket6 module) (@INC contains: lib /etc/perl /usr/local/lib64/perl5/5.20.2/x86_64-linux /usr/local/lib64/perl5/5.20.2 /usr/lib64/perl5/vendor_perl/5.20.2/x86_64-linux /usr/lib64/perl5/vendor_perl/5.20.2 /usr/local/lib64/perl5 /usr/lib64/perl5/vendor_perl/5.20.1/x86_64-linux /usr/lib64/perl5/vendor_perl/5.20.1 /usr/lib64/perl5/vendor_perl /usr/lib64/perl5/5.20.2/x86_64-linux /usr/lib64/perl5/5.20.2) at /usr/lib64/perl5/vendor_perl/5.20.1/Net/Server/ line 122.\n\n at line 82 in file /usr/lib64/perl5/vendor_perl/5.20.1/Net/Server/
Jun 18 12:48:22.200 /usr/sbin/amavisd[4327]: Net::Server: 2015/06/18-12:48:22 Server closing!

In that code block, the actual error indicates that it couldn’t find the Perl module IO::Socket::INET6. This problem was easily fixed in Gentoo with emerge -av dev-perl/IO-Socket-INET6, but could be rectified by installing the module from your distribution’s repositories, or by using CPAN. In my case, it was caused by my recent compilation and installation of a new kernel that, this time, included IPv6 support.

The point of my post, however, wasn’t about my particular problem with amavisd starting, but rather how one can debug start-up problems with the daemon. Hopefully, if you run into woes with amavisd logging, these debug options will help you track down the problem.


June 10, 2015
Sven Vermeulen a.k.a. swift (homepage, bugs)
Live SELinux userspace ebuilds (June 10, 2015, 18:07 UTC)

In between courses, I pushed out live ebuilds for the SELinux userspace applications: libselinux, policycoreutils, libsemanage, libsepol, sepolgen, checkpolicy and secilc. These live ebuilds (with Gentoo version 9999) pull in the current development code of the SELinux userspace, so that developers and contributors can already work with in-progress code developments and see how they behave on a Gentoo platform.

That being said, I do not recommend the live ebuilds for anyone except developers and contributors, and only in development zones (definitely not on production). One of the reasons is that these ebuilds do not apply the Gentoo-specific patches. I would also like to remove the Gentoo-specific manipulations that we do, such as small Makefile adjustments, but let's start with just ignoring the Gentoo patches.

Dropping the patches makes sure that we track upstream libraries and userspace closely, and allows developers to try and send out patches to the SELinux project to fix Gentoo related build problems. But as not all packages can be deployed successfully on a Gentoo system some patches need to be applied anyway. For this, users can drop the necessary patches inside /etc/portage/patches as all userspace ebuilds use the epatch_user method.
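As a sketch of what that looks like in practice (the patch file name is hypothetical, and a temporary directory stands in for /etc/portage here so the commands can be run anywhere):

```shell
# epatch_user picks up patches from /etc/portage/patches/<category>/<package>/
root=$(mktemp -d)   # stands in for /etc/portage in this illustration
mkdir -p "$root/patches/sys-libs/libselinux"
touch "$root/patches/sys-libs/libselinux/0001-local-build-fix.patch"
# On a real system you would then simply re-emerge the package:
#   emerge -1 sys-libs/libselinux
find "$root/patches" -name '*.patch'
```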

Finally, observant users will notice that "secilc" is also provided. This is a new package, which is probably going to have an official release with a new userspace release. It allows for building CIL-based SELinux policy code, and was one of the drivers for me to create the live ebuilds as I'm experimenting with the CIL constructions. So expect more on that later.

June 07, 2015
Sebastian Pipping a.k.a. sping (homepage, bugs) relaunched (June 07, 2015, 23:02 UTC)


Two months ago, the website redesign was announced. For a few hours now, is following. The similarities in look are not by coincidence: both designs were done by Alex Legler.

The new website is based on Jekyll, rather than GuideXML as before. Any bugs you find on the website can be reported on GitHub, where the website sources are hosted.

I would like to take the opportunity to thank both Manitu and SysEleven for providing the hardware running (and other Gentoo e.V. services) free of charge. Very kind!

Best, Sebastian

June 06, 2015
Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)
LibreSSL, OpenSSL, collisions and problems (June 06, 2015, 12:33 UTC)

Some time ago, on the gentoo-dev mailing list, there has been an interesting thread on the state of LibreSSL in Gentoo. In particular I repeated some of my previous concerns about ABI and API compatibility, especially when trying to keep both libraries on the same system.

While I hope that the problems I pointed out are well clear to the LibreSSL developers, I thought reiterating them again clearly in a blog post would give them a wider reach and thus hope that they can be addressed. Please feel free to reshare in response to people hand waving the idea that LibreSSL can be either a drop-in, or stand-aside replacement for OpenSSL.

Last year, when I first blogged about LibreSSL, I had to write a further clarification as my post was used to imply that you could just replace the OpenSSL binaries with LibreSSL and be done with it. This is not the case and I won't even go back there. What I'm concerned about this time is whether you can install the two in the same system, and somehow decide which one you want to use on a per-package basis.

Let's start with the first question: why would you want to do that? Everybody at this point knows that LibreSSL was forked from the OpenSSL code and started removing code that was deemed unnecessary or even dangerous – a very positive thing, given the amount of compatibility kludges around OpenSSL! – and as such it was a subset of the same interface as its parent, so there would seem to be no reason to want the two libraries on the same system.

But then again, LibreSSL was never meant to be considered a drop-in replacement, so its developers haven't cared as much about the evolution of OpenSSL, and just proceeded in their own direction; said direction included building a new library, libtls, that implements higher-level abstractions of the TLS protocol. This vaguely matches the way NSS (the Netscape-now-Mozilla TLS library) is designed, and I think it makes sense: it reduces the amount of repetition needed in multiple parts of the software stack to implement, for instance, HTTPS, reducing the chance of one of them making a stupid mistake.

Unfortunately, this library was originally tied firmly to LibreSSL and there was no way for it to be usable with OpenSSL — I think this has changed recently, as a "portable" build of libtls should be available. Ironically, this wouldn't have been a problem at all if it weren't for the fact that LibreSSL is not a superset of OpenSSL, as this is where the core of the issue lies.

By far, this is not the first time a problem like this happens in Open Source software communities: different people will want to implement the same concept in different ways. I like to describe this as software biodiversity and I find it generally a good thing. Having more people looking at the same concept from different angles can improve things substantially, especially in regard to finding safe implementations of network protocols.

But there is a problem when you apply parallel evolution to software: if you fork a project and then evolve it on your own agenda, but keep the same library names and a mostly compatible (thus conflicting) API/ABI, you're going to make people suffer, whether they are developers, consumers, packagers or users.

LibreSSL, libav, Ghostscript, ... there are plenty of examples. Since the features of the projects, their API and most definitely their ABIs are not the same, when you're building a project on top of any of these (or their originators), you'll end up at some point making a conscious decision on which one you want to rely on. Sometimes you can do that based only on your technical needs, but in most cases you end up with a compromise based on technical needs, licensing concerns and availability in the ecosystem.

These projects didn't change the names of their libraries; that way they can be used as drop-rebuild replacements for consumers that keep to the greatest common divisor of the interface, but it also means you can't easily install two of them on the same system. And since most distributions, with the exception of Gentoo, do not really provide users with a choice of multiple implementations, you end up with either a fractured ecosystem, or one that is very much non-diverse.

So if all distributions decide to standardize on one implementation, that's what developers will write for. And this is why OpenSSL is likely to stay the standard for a long while still. Of course in this case it's not as bad as the situation with libav/ffmpeg, as the base featureset is going to be more or less the same, and the APIs that have been dropped up to now, such as the entropy-gathering daemon interface, have been considered A Bad Idea™ for a while, so there are not going to be OpenSSL-only source projects in the future.

What becomes an issue here is that software is built against OpenSSL right now, and you can't really change this easily. I've been told before that this is not true, because OpenBSD switched, but there is a huge difference between all of the BSDs and your usual Linux distributions: the former have much more control on what they have to support.

In particular, the whole base system is released in a single scoop, and it generally includes all the binary packages you can possibly install. Very few third party software providers release binary packages for OpenBSD, and not many more do for NetBSD or FreeBSD. So as long as you either use the binaries provided by those projects or those built by you on the same system, switching the provider is fairly easy.

When you have to support third-party binaries, then you have a big problem, because a given binary may be built against one provider, but depend on a library that depends on the other. So unless you have full control of your system, with no binary packages at all, you're going to have to provide the most likely provider — which right now is OpenSSL, for good or bad.

Gentoo Linux is, once again, in a more favourable position than many others. As long as you have a full source stack, you can easily choose your provider without considering its popularity. I have built similar stacks before, and my servers deploy stacks similarly, although I have not tried using LibreSSL for any of them yet. But on the desktop it might be trickier, especially if you want to do things like playing Steam games.

But here's the harsh reality: even if you were to install the libraries in different directories, and you provided a USE flag to choose between the two, it is not going to be easy to apply the right constraints between final executables and libraries all the way into the tree.

I'm not sure I have an answer that balances the ability to just make old software use the new library against side-installation. I'm scared that the "solution" that will be found for this problem is bundling, and you can probably figure out that bundling software like OpenSSL or LibreSSL is a terrible idea, given how fast you need to update in response to a security vulnerability.

June 05, 2015
Nathan Zachary a.k.a. nathanzachary (homepage, bugs)


This article is one of the most-viewed on The Z-Issue, and is sometimes read thousands of times per day. If it has helped you, please consider a small donation to The Parker Fund by using the top widget at the right. Thanks!

So, for several weeks now, I’ve been struggling with disk naming and UDEV as they relate to RHEL or CentOS Kickstart automation via PXE. Basically, what used to be a relatively straightforward process of setting up partitions based on disk name (e.g. /dev/sda for the primary disk, /dev/sdb for the secondary disk, and so on) has become a little more complicated due to the order in which the kernel may identify disks under UDEV implementations. The originally-used naming conventions of /dev/sdX may no longer be consistent across reboots or across machines. This problem came to my attention when I was Kickstarting a bunch of hosts, and realised that several of them were indicating that there wasn’t enough space on the drives that I had referenced. After doing some digging, I realised that some servers were identifying KVM-based virtual media (such as that presented via a Cisco UCS CIMC interface), or other USB media, as primary disks instead of the actual primary SCSI-based disks (SATA or SAS drives).

Though I agree that identifying disks by their path, their UUID, or an otherwise more permanent name is preferred to the ambiguous /dev/sdX naming scheme, it did cause a problem, and one that didn’t have a readily-identifiable workaround for my situation. My first thought was to use UUID, or gather the disk ID from /dev/disk/by-id/. However, that wouldn’t work because it wouldn’t be the same across all servers, so automatic installations via PXE wouldn’t be feasible. Next, I looked at referencing disks by their PCI location, which can be found within /dev/disk/by-path/. Unfortunately, I found that the PCI location might vary between servers as well. For example, I found:

Server 1:
/dev/disk/by-path/pci-0000:82:00.0-scsi-0:2:0:0 -> primary SCSI disk
/dev/disk/by-path/pci-0000:82:00.0-scsi-0:2:1:0 -> secondary SCSI disk

Server 2:
/dev/disk/by-path/pci-0000:0b:00.0-scsi-0:2:0:0 -> primary SCSI disk
/dev/disk/by-path/pci-0000:0b:00.0-scsi-0:2:1:0 -> secondary SCSI disk

That being said, I did notice one commonality between the servers that I used for testing. The primary and secondary disks (which, by the way, are RAID arrays attached to the same controller) all ended with ‘scsi-0:2:0:0 ‘ for the primary disk, and ‘scsi-0:2:1:0 ‘ for the secondary disk. Thinking that I could possibly just specify using a wildcard, I tried:

part /boot --fstype ext2 --size=100 --ondisk=disk/by-path/*scsi-0:2:0:0

but alas, that caused Anaconda to error out stating that the specified drive did not exist. At this point, I thought that all hope was lost in terms of automation. Then, however, I had a flash of genius (they don’t happen all that often). I could probably gather the full path ID in a pre-installation script, and then get the FULL path ID into the Kickstart configuration file with an include. Here are the code snippets that I wrote:

%pre --interpreter=/bin/bash
## Get the full by-path ID of the primary and secondary disks
osdisk=$(ls -lh /dev/disk/by-path/ | grep 'scsi-0:2:0:0 ' | awk '{print $9}')
datadisk=$(ls -lh /dev/disk/by-path/ | grep 'scsi-0:2:1:0 ' | awk '{print $9}')

## Create a temporary file with the partition scheme to be included in the KS config
echo "# Partitioning scheme" > /tmp/partition_layout
echo "clearpart --all" >> /tmp/partition_layout
echo "zerombr" >> /tmp/partition_layout
echo "part /boot --fstype ext2 --size=100 --ondisk=disk/by-path/$osdisk" >> /tmp/partition_layout
echo "part swap --recommended --ondisk=disk/by-path/$osdisk" >> /tmp/partition_layout
echo "part pv.01 --size=100 --grow --ondisk=disk/by-path/$osdisk" >> /tmp/partition_layout
echo "part pv.02 --size=100 --grow --ondisk=disk/by-path/$datadisk" >> /tmp/partition_layout
echo "volgroup vg_os pv.01" >> /tmp/partition_layout
echo "volgroup vg_data pv.02" >> /tmp/partition_layout
echo "logvol / --vgname=vg_os --size=100 --name=lv_os --fstype=ext4 --grow" >> /tmp/partition_layout
echo "logvol /data --vgname=vg_data --size=100 --name=lv_data --fstype=ext4 --grow" >> /tmp/partition_layout
echo "# Bootloader location" >> /tmp/partition_layout
echo "bootloader --location=mbr --driveorder=/dev/disk/by-path/$osdisk,/dev/disk/by-path/$datadisk --append=\"rhgb\"" >> /tmp/partition_layout

Some notes about the pre-installation script: 1) yes, I know that it is a bit clumsy, but it functions despite the lack of elegance; 2) in lines 3 and 4, make sure that the grep statements include a final space. The final space (e.g. grep 'scsi-0:2:0:0 ' instead of just grep 'scsi-0:2:0:0') is important because the machine may already have partitions set up, in which case those partitions would be identified as '*scsi-0:2:0:0-part#', where # is the partition number.
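The same file can also be generated a bit more compactly with a heredoc; this sketch is functionally equivalent to the echo chain above (the disk IDs are passed in as arguments so the snippet can be exercised outside Anaconda):

```shell
# Sketch: generate the Kickstart partition include with a heredoc instead
# of repeated echo calls. Disk IDs are parameters, not hardcoded.
write_partition_layout() {
    local osdisk=$1 datadisk=$2 out=${3:-/tmp/partition_layout}
    cat > "$out" <<EOF
# Partitioning scheme
clearpart --all
zerombr
part /boot --fstype ext2 --size=100 --ondisk=disk/by-path/$osdisk
part swap --recommended --ondisk=disk/by-path/$osdisk
part pv.01 --size=100 --grow --ondisk=disk/by-path/$osdisk
part pv.02 --size=100 --grow --ondisk=disk/by-path/$datadisk
volgroup vg_os pv.01
volgroup vg_data pv.02
logvol / --vgname=vg_os --size=100 --name=lv_os --fstype=ext4 --grow
logvol /data --vgname=vg_data --size=100 --name=lv_data --fstype=ext4 --grow
# Bootloader location
bootloader --location=mbr --driveorder=/dev/disk/by-path/$osdisk,/dev/disk/by-path/$datadisk --append="rhgb"
EOF
}
# Example invocation, using the by-path IDs from "Server 1" above:
write_partition_layout "pci-0000:82:00.0-scsi-0:2:0:0" "pci-0000:82:00.0-scsi-0:2:1:0" /tmp/partition_layout
```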

So, this pre-installation script generates a file called /tmp/partition_layout that essentially has a normal partition scheme for a Kickstart configuration file, but references the primary and secondary SCSI disks (again, as RAID arrays attached to the same controller) by their full ‘by-path’ IDs. Then, the key is to include that file within the Kickstart configuration via:

# Partitioning scheme information gathered from pre-install script below
%include /tmp/partition_layout

I am happy to report that for the servers that I have tested thus far, this method works. Is it possible that there will be primary and secondary disks that are identified via different path IDs not caught by my pre-installation script? Yes, that is possible. It’s also possible that your situation will require a completely different pre-installation script to gather the unique identifier of your choice. That being said, I hope that this post will help you devise your method of reliably identifying disks so that you can create the partition scheme automatically via your Kickstart configuration file. That way, you can then deploy it to many servers via PXE (or any other implementation that you’re currently utilising for mass deployment).


Robin Johnson a.k.a. robbat2 (homepage, bugs)
gnupg-2.1 mutt (June 05, 2015, 17:25 UTC)

For the mutt users with GnuPG, depending on your configuration, you might notice that mutt's handling of GnuPG mail stopped working with GnuPG 2.1. There were a few specific cases that would have caused this, which I'll detail, but if you just want it to work again, put the below into your Muttrc, and make the tweak to gpg-agent.conf. The underlying cause for most of it is that secret key operations have moved to the agent, and many Mutt users used the agent-less mode, because Mutt handled the passphrase nicely on its own.

  • -u must now come BEFORE --cleansign
  • Add allow-loopback-pinentry to gpg-agent.conf, and restart the agent
  • The config below adds --pinentry-mode loopback before --passphrase-fd 0, so that GnuPG (and the agent) will still accept the passphrase from Mutt.
  • --verbose is optional; depending on what you're doing, you might find --no-verbose cleaner.
  • --trust-model always is a personal preference for my Mutt mail usage, because I do try to curate my keyring
set pgp_autosign = yes
set pgp_use_gpg_agent = no
set pgp_timeout = 600
set pgp_sign_as="(your key here)"
set pgp_ignore_subkeys = no

set pgp_decode_command="gpg %?p?--pinentry-mode loopback  --passphrase-fd 0? --verbose --no-auto-check-trustdb --batch --output - %f"
set pgp_verify_command="gpg --pinentry-mode loopback --verbose --batch --output - --no-auto-check-trustdb --verify %s %f"
set pgp_decrypt_command="gpg %?p?--pinentry-mode loopback --passphrase-fd 0? --verbose --batch --output - %f"
set pgp_sign_command="gpg %?p?--pinentry-mode loopback --passphrase-fd 0? --verbose --batch --output - --armor --textmode %?a?-u %a? --detach-sign %f"
set pgp_clearsign_command="gpg %?p?--pinentry-mode loopback --passphrase-fd 0? --verbose --batch --output - --armor --textmode %?a?-u %a? --detach-sign %f"
set pgp_encrypt_sign_command="pgpewrap gpg %?p?--pinentry-mode loopback --passphrase-fd 0? --verbose --batch --textmode --trust-model always --output - %?a?-u %a? --armor --encrypt --sign -- -r %r -- %f"
set pgp_encrypt_only_command="pgpewrap gpg %?p?--pinentry-mode loopback --passphrase-fd 0? --verbose --batch --trust-model always --output - --encrypt --textmode --armor -- -r %r -- %f"
set pgp_import_command="gpg %?p?--pinentry-mode loopback --passphrase-fd 0? --verbose --import -v %f"
set pgp_export_command="gpg %?p?--pinentry-mode loopback --passphrase-fd 0? --verbose --export --armor %r"
set pgp_verify_key_command="gpg %?p?--pinentry-mode loopback --passphrase-fd 0? --verbose --batch --fingerprint --check-sigs %r"
set pgp_list_pubring_command="gpg %?p?--pinentry-mode loopback --passphrase-fd 0? --verbose --batch --with-colons --list-keys %r"
set pgp_list_secring_command="gpg %?p?--pinentry-mode loopback --passphrase-fd 0? --verbose --batch --with-colons --list-secret-keys %r"
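For completeness, the gpg-agent.conf tweak from the list above is a single line (assuming the default ~/.gnupg location):

```
# ~/.gnupg/gpg-agent.conf
allow-loopback-pinentry
```

After adding it, restart the agent so the option takes effect.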

This entry was originally posted at Please comment there using OpenID.

May 26, 2015
Alexys Jacob a.k.a. ultrabug (homepage, bugs)

In my previous post regarding the migration of our production cluster to mongoDB 3.0 WiredTiger, we successfully upgraded all the secondaries of our replica-sets with decent performance and (almost, read on) no breakage.

Step 2 plan

The next step of our migration was to test our work load on WiredTiger primaries. After all, this is where the new engine would finally demonstrate all its capabilities.

  • We thus scheduled a step down from our 3.0 MMAPv1 primary servers so that our WiredTiger secondaries would take over.
  • Not migrating the primaries was a safety net in case something went wrong… And boy, did it go so wrong that we’re glad we played it safe that way!
  • We rolled back after 10 minutes of utter bitterness.

The failure

After all the wait and expectation, I can’t express our level of disappointment at work when we saw that the WiredTiger engine could not handle our work load. Our application immediately started to throw 50 to 250 WriteConflict errors per minute!

Turns out that we are affected by this bug and that, of course, we’re not the only ones. So far it seems to affect collections with:

  • heavy insert / update work loads
  • a unique index (or compound index)

The breakages

We also discovered that we’re affected by a weird new mongodump behaviour, where the dumped BSON file does not contain the number of documents that mongodump said it was exporting. This is clearly a new problem, because it happened right after all our secondaries switched to WiredTiger.

Since we have to ensure strong consistency of our exports, and the mongoDB guys don’t seem so keen on moving on the bug (which I surely can understand), there is a large possibility that we’ll have to roll back even the WiredTiger secondaries altogether.

Not to mention that since the 3.0 version, we have experienced some CPU overloads crashing the entire server on our MMAPv1 primaries, which we’re still trying to tackle before opening another JIRA bug…

Sad panda

Of course, any new major release such as 3.0 causes its headaches and brings its share of bugs. We were ready for this, hence the safety steps we took to ensure that we could roll back on any problem.

But as a long time advocate of mongoDB I must admit my frustration, even more so after the time it took to get this 3.0 out and all the expectations that came with it.

I hope I can share some better news on the next blog post.

May 23, 2015
Sebastian Pipping a.k.a. sping (homepage, bugs)


The Troisdorf Linux User Group (TroLUG for short) is holding a Gentoo workshop aimed at advanced users:

on Saturday, 01.08.2015
in Troisdorf near Köln/Bonn

More details, the exact address and the schedule can be found on the corresponding TroLUG page.




Alexys Jacob a.k.a. ultrabug (homepage, bugs)

Good news for gevent users blocked on python < 2.7.9 due to broken SSL support since python upstream dropped the private API _ssl.sslwrap that gevent was using.

This issue was starting to get old and problematic since GLSA 2015-0310, but I’m happy to say that almost 6 hours after the gevent-1.0.2 release, it is already available in portage!

We were also affected by this issue at work, so I’m glad that the tension this issue was causing between ops and devs will finally be over ;)

May 20, 2015
Nathan Zachary a.k.a. nathanzachary (homepage, bugs)


My tech articles—especially Linux ones—are some of the most-viewed on The Z-Issue. If this one has helped you, please consider a small donation to The Parker Fund by using the top widget at the right. Thanks!

Late last night, I decided to apply some needed updates to my personal mail server, which is running Gentoo Linux (OpenRC) with a mail stack of Postfix & Dovecot with AMaViS (filtering based on SpamAssassin, ClamAV, and Vipul’s Razor). After applying the updates, and restarting the necessary components of the mail stack, I ran my usual test of sending an email from one of my accounts to another one. It went through without a problem.

However, I realised that sending an email from one internal account to another isn’t a completely valid test, because I have amavisd configured not to scan anything coming from my trusted IPs and domains. I noticed several hundred mails in the queue when I ran postqueue -p, and they all had notices similar to:

status=deferred (delivery temporarily suspended:
connect to[]:10024: Connection refused)

That indicated to me that it wasn’t a problem with Postfix (and I knew it wasn’t a problem with Dovecot, because I could connect to my accounts via IMAP). Seeing as amavisd is running on localhost:10024, I figured that that is where the problem had to be. A lot of times, when there is a “connection refused” notification, it is because no service is listening on that port. You can test to see what ports are in a listening state and what processes, applications, or daemons are listening by running:

netstat -tupan | grep LISTEN

When I did that, I noticed that amavisd wasn’t listening on port 10024, which made me think that it wasn’t running at all. That’s when I ran into the strange part of the problem: the init script output:

# /etc/init.d/amavisd start
* WARNING: amavisd has already been started
# /etc/init.d/amavisd stop
The amavisd daemon is not running                [ !! ]
* ERROR: amavisd failed to start

So, apparently it is running and not running at the same time (sounds like a Linux version of Schrödinger’s cat to me)! It was obvious, though, that it wasn’t actually running (which could be verified with ‘ps -elf | grep -i amavis’). So, what to do? I tried manually removing the PID file, but that actually just made matters a bit worse. Ultimately, this combination is what fixed the problem for me:

/etc/init.d/amavisd zap
/etc/init.d/amavisd start

It seems that the SpamAssassin rules file had gone missing, and that was preventing amavisd from starting properly. Manually updating the rules file (with ‘sa-update’) regenerated it; then I zapped amavisd completely, and lastly restarted the daemon.
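As an aside, the "is anything listening on that port?" check can also be scripted. Here is a small illustrative Python sketch (not from the original post; the function name is mine):

```python
import socket

def is_listening(host, port, timeout=1.0):
    """Return True if something accepts TCP connections on host:port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        return s.connect_ex((host, port)) == 0

# amavisd normally listens on 127.0.0.1:10024; a False here matches the
# "Connection refused" deferrals seen in the Postfix queue.
print(is_listening("127.0.0.1", 10024))
```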

Hope that helps anyone running into the same problem.


EDIT: I have included a new post about debugging amavisd start-up problems.

May 13, 2015
Alexys Jacob a.k.a. ultrabug (homepage, bugs)
uWSGI, gevent and pymongo 3 threads mayhem (May 13, 2015, 14:56 UTC)

This is a quick heads-up post about a behaviour change when running a gevent based application using the new pymongo 3 driver under uWSGI and its gevent loop.

I was naturally curious about testing this brand new and major update of the python driver for mongoDB, so I just played it dumb: update the driver and give it a try on our existing code base.

The first thing I noticed instantly was that a vast majority of our applications were suddenly unable to reload gracefully and were force-killed by uWSGI after some time!

worker 1 (pid: 9839) is taking too much time to die...NO MERCY !!!

uWSGI’s gevent-wait-for-hub

All our applications must be able to be gracefully reloaded at any time. Some of them spawn quite a few greenlets on their own, so as an added measure to make sure we never lose any running greenlet, we use the gevent-wait-for-hub option, which is described as follows:

wait for gevent hub's death instead of the control greenlet

… which does not mean a lot, but is explained in a previous uWSGI changelog:

During shutdown only the greenlets spawned by uWSGI are taken in account,
and after all of them are destroyed the process will exit.

This is different from the old approach where the process wait for
ALL the currently available greenlets (and monkeypatched threads).

If you prefer the old behaviour just specify the option gevent-wait-for-hub

pymongo 3

Compared to the previous 2.x versions, one of the key aspects of the new pymongo 3 driver is its intensive use of threads to handle server discovery and connection pools.

Now we can relate this fact to the gevent-wait-for-hub behaviour explained above:

the process wait for ALL the currently available greenlets
(and monkeypatched threads)

This explains why our applications were hanging until the reload-mercy (force-kill) timeout of uWSGI hit the fan!


When using pymongo 3 with the gevent-wait-for-hub option, you have to keep in mind that all of pymongo’s threads (monkey-patched threads, that is) are considered active greenlets, and uWSGI will thus wait for their termination before recycling the worker!

Two options come to mind to handle this properly:

  1. stop using the gevent-wait-for-hub option and change your code to use a gevent pool group, to make sure that all of your important greenlets are taken care of when a graceful reload happens (this is how we do it today; the gevent-wait-for-hub option was just overprotective for us).
  2. modify your code to properly close all your pymongo connections on graceful reloads.
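A minimal sketch of option 1, assuming gevent is available (the names task and workers are illustrative, not from our code base): keep your own greenlets in a gevent.pool.Group and join that group from your graceful-shutdown hook, instead of relying on gevent-wait-for-hub.

```python
import gevent
from gevent.pool import Group

# Track only the application's own greenlets, so a graceful reload can
# wait for *our* work and ignore pymongo's monkey-patched monitor threads.
workers = Group()

def task(n):
    gevent.sleep(0)  # stand-in for real work
    return n * 2

greenlets = [workers.spawn(task, i) for i in range(3)]

# Call this from your graceful-shutdown hook:
workers.join(timeout=5)
print(sorted(g.value for g in greenlets))  # [0, 2, 4]
```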

Hope this will save some people the trouble of debugging this ;)

May 09, 2015
Andreas K. Hüttel a.k.a. dilfridge (homepage, bugs)
Reviving Gentoo VMware support (May 09, 2015, 23:41 UTC)

Sadly, over the last months the support for VMware Workstation and friends in Gentoo has dropped a lot. Why? Well, I was the only developer left who cared, and it's definitely not at the top of my Gentoo priorities list. To be honest, that has not really changed. However... let's try to harness the power of the community now.

I've pushed a mirror of the Gentoo vmware overlay to Github, see

If you have improvements, version bumps, ... - feel free to open pull requests. Everything related to VMware products is acceptable. I hope some more people will sign up over time and help with merging. Just be careful when using the overlay; it likely won't get the same level of review as ebuilds in the main tree.

May 07, 2015
Alexys Jacob a.k.a. ultrabug (homepage, bugs)

We’ve been running a nice mongoDB cluster in production for several years now in my company.

This cluster suits quite a wide range of use cases from very simple configuration collections to complex queried ones and real time analytics. This versatility has been the strong point of mongoDB for us since the start as it allows different teams to address their different problems using the same technology. We also run some dedicated replica sets for other purposes and network segmentation reasons.

We’ve waited a long time to see the latest 3.0 release features happening. The new WiredTiger storage engine hit the fan at the right time for us since we had reached the limits of our main production cluster and were considering alternatives.

So, as surprising as it may seem, this is the first part of our mongoDB architecture that we're upgrading to v3.0, as it has become a real necessity.

This post shares our first experience of an ongoing and carefully planned major upgrade of a production cluster; it does not claim to be a definitive migration guide.

Upgrade plan and hardware

The upgrade process is already well covered in the mongoDB documentation, but I will list the pre-migration base specs of every node of our cluster.

  • mongodb v2.6.8
  • RAID1 spinning HDD 15k rpm for the OS (Gentoo Linux)
  • RAID10 4x SSD for mongoDB files under LVM
  • 64 GB RAM

Our overall philosophy is to keep most of the configuration parameters at their default values to start with. We will start experimenting with them when we have sufficient metrics to compare against later.

Disk (re)partitioning considerations

The master-gets-all-the-writes architecture is still one of the main limitations of mongoDB, and this does not change with v3.0, so you obviously need to challenge your current disk layout to take advantage of the new WiredTiger engine.

mongoDB 2.6 MMAPv1

Considering our cluster's data size, we were forced to use our four SSDs in a RAID10, as it was the best compromise between preserving performance and providing sufficient data storage capacity.

We often reached our I/O limits, and moving the journal out of the RAID10 to the mostly idle OS RAID1 brought no significant improvement.

mongoDB 3.0 WiredTiger

The main consideration for us is the new feature allowing indexes to be stored in a separate directory. Anticipating the reduction in data storage consumption thanks to snappy compression, we decided to split our RAID10 into two dedicated RAID1 arrays.

Our test layout so far is:

  • RAID1 SSD for the data
  • RAID1 SSD for the indexes and journal
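For reference, a mongod.conf sketch matching this layout (the paths and mount points are illustrative assumptions, not our actual configuration; the YAML option names are from mongoDB 3.0):

```yaml
storage:
  engine: wiredTiger
  dbPath: /data/mongodb            # mounted on the data RAID1
  wiredTiger:
    engineConfig:
      # puts index files in an "index" subdirectory of dbPath, which can
      # then be a mount point on the second RAID1 (shared with the journal)
      directoryForIndexes: true
    collectionConfig:
      blockCompressor: snappy      # the default snappy compression
```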

Our first node migration

After migrating our mongos and config servers to 3.0, we picked our worst-performing secondary node to test the actual migration to WiredTiger. After all, we couldn’t do worse, right?

We are aware that the strong suit of WiredTiger really shows when writes are directed to it, and we will surely share our experience of that aspect later.

compression is bliss

To make this comparison accurate, we fully resynchronized this node before migrating to WiredTiger, so we could compare non-fragmented MMAPv1 disk usage with the compressed WiredTiger one.

While I can’t disclose the actual values, compression worked like a charm for us, with a gain ratio of 3.2 on disk usage (data + indexes), which is way beyond our expectations!

This is the DB Storage graph from MMS, showing a gain ratio of 4, surely because the indexes now live on a separate disk.






memory usage

As with the disk usage, the node had been running hot on MMAPv1 before the actual migration, so we can compare the memory allocation/consumption of both engines.

Here again, the memory management of WiredTiger and its cache shows great improvement. For now, we left the default setting, which has WiredTiger limit its cache to half the available memory of the system. We’ll experiment with this setting later on.







I’m still not sure of the actual cause, but the connection count is higher and steadier than before on this node.


First impressions

The node has been running smoothly for several hours now. We are getting acquainted with the new metrics and statistics from WiredTiger. The overall node and I/O load is better than before!

While all the above graphs show huge improvements, there is no major change from our applications' point of view. We didn’t expect any, since this is only one node in a whole cluster, and the main benefits will also come from the master node migrations.

I’ll continue to share our experience and progress about our mongoDB 3.0 upgrade.

April 29, 2015
Patrick Lauer a.k.a. bonsaikitten (homepage, bugs)
Code Hygiene (April 29, 2015, 03:03 UTC)

Some convenient commands, easily wrapped as Makefile targets, that make it very easy to keep code clean:

        scan-build clang foo.c -o foo

        indent -linux *.c

scan-build is llvm/clang's static analyzer and generates some decent warnings. Using clang to build (in addition to the 'default' gcc in my case) helps diversity and sometimes catches different errors.

indent makes code pretty, the 'linux' default settings are not exactly what I want, but close enough that I don't care to finetune yet.

Every commit should be properly indented and not cause more warnings to appear!

April 27, 2015
Sven Vermeulen a.k.a. swift (homepage, bugs)
Moving closer to 2.4 stabilization (April 27, 2015, 17:18 UTC)

The SELinux userspace project released version 2.4 in February this year, after release candidates had been tested for half a year. Since its release, we at the Gentoo Hardened project have been working hard to integrate it within Gentoo. This effort has been made a bit more difficult by the migration of the policy store from one location to another while at the same time switching to HLL- and CIL-based builds.

Lately, 2.4 itself has been pretty stable, and we're focusing on the proper migration from 2.3 to 2.4. The SELinux policy has been adjusted to allow the migration to work, and a few final fixes are being tested so that we can safely transition our stable users from 2.3 to 2.4. Hopefully we'll be able to stabilize the userspace this month or at the beginning of next month.

Sebastian Pipping a.k.a. sping (homepage, bugs)
Comment vulnerability in WordPress 4.2 (April 27, 2015, 13:21 UTC)

Hanno Böck tweeted about WordPress 4.2 Stored XSS rather recently. The short version is: if an attacker can comment on your blog, your blog is owned.

Since the latest release is affected and is the version I am using, I have been looking for a way to disable comments globally, at least until a fix has been released.
I’m surprised how difficult disabling comments globally is.

Option “Allow people to post comments on new articles” is filed under “Default article settings”, so it applies to new posts only. Let’s disable that.
There is a plug-in, Disable Comments, but since it claims not to alter the database (unless in persistent mode), I have a feeling that it may only remove commenting forms while leaving commenting open to hand-made GET/POST requests, so it may not be safe.

So without studying WordPress code in depth my impression is that I have two options:

  • a) restrict comments to registered users, deactivate registration (hoping that all existing users are friendly and that disabled registration is waterproof in 4.2) and/or
  • b) disable comments for future posts in the settings (in case I post again before an update) and for every single post from the past.

On database level, the former can be seen here:

mysql> SELECT option_name, option_value FROM wp_options
           WHERE option_name LIKE '%regist%';
+----------------------+--------------+
| option_name          | option_value |
+----------------------+--------------+
| users_can_register   | 0            |
| comment_registration | 1            |
+----------------------+--------------+
2 rows in set (0.01 sec)

For the latter, this is how to disable comments on all previous posts:

mysql> UPDATE wp_posts SET comment_status = 'closed';
Query OK, .... rows affected (.... sec)
Rows matched: ....  Changed: ....  Warnings: 0
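If the database also contains pages or attachments whose comment status should stay untouched, a narrower variant (my own suggestion, not from the post) restricts the update to blog posts and also closes pingbacks:

```sql
-- Close comments and pingbacks on published blog posts only,
-- leaving pages and attachments as they are.
UPDATE wp_posts
   SET comment_status = 'closed',
       ping_status = 'closed'
 WHERE post_type = 'post';
```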

If you have comments to share, please use e-mail this time. Upgraded to 4.2.1 now.

April 26, 2015
Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)
New devbox running (April 26, 2015, 16:31 UTC)

I announced in February that Excelsior, which ran the Tinderbox, was no longer at Hurricane Electric. I have also said I'll start working on a new generation Tinderbox, and to do that I need a new devbox, as the only three Gentoo systems I have at home are the laptops and my HTPC, not exactly hardware to run compilation on all the freaking time.

So after thinking of options, I decided that it was much cheaper to just rent a single dedicated server, rather than a full cabinet, and after asking around for options I settled for, because of price and recommendation from friends. Unfortunately they do not support Gentoo as an operating system, which makes a few things a bit more complicated. They do provide you with a rescue system, based on Ubuntu, which is enough to do the install, but not everything is easy that way either.

Luckily, most of the configuration (but not all) was stored in Puppet, so I only had to rename the hosts there, change the MAC addresses for the LAN and WAN interfaces (I use static naming of the interfaces as lan0 and wan0, which makes many other pieces of configuration much easier to deal with), change the IP addresses, and so on. Unfortunately, since I didn't originally set up that machine through Puppet, it did not carry all the information needed to replicate the system, so it required some iteration and fixing of the configuration. This also means that the next move is going to be easier.

The biggest problem has been setting up the MDRAID partitions correctly, because of GRUB2: if you didn't know, grub2 has an automagic dependency on mdadm — if you don't install it, grub2 won't be able to install itself on a RAID device, even though it can detect it; the maintainer refused to add a USE flag for it, so you have to know about it.

Given what can and cannot be autodetected by the kernel, I had to fight a little more than usual, then just gave up and rebuilt the two arrays (/boot and / — yes, laugh at me, but when I installed Excelsior it was the only way to get GRUB2 not to throw up) as metadata 0.90. The next problem was being able to tell what the boot-up errors were, as of course I have no physical access to the device.

The server I rented is a Dell server that comes with iDRAC for remote management (essentially Dell's own name for IPMI), and allows you to set up connections to it through your browser, which is pretty neat — they use a pool of temporary IP addresses and only authorize your own IP address to connect to them. On the other hand, they do not change the default certificates, which means you end up with the same untrustable Dell certificate every time.

From the iDRAC console you can't do much, but you can start up the remote, JavaWS-based console, which reminded me of something. Unfortunately, the JNLP file that you can download from iDRAC did not work on the Sun, Oracle, or IcedTea JREs, segfaulting (no kidding) with an X.509 error log as the last output — I seriously thought the problem was with the certificates, until I decided to dig deeper and found this set of entries in the JNLP file:

 <resources os="Windows" arch="x86">
   <nativelib href="https://idracip/software/avctKVMIOWin32.jar" download="eager"/>
   <nativelib href="https://idracip/software/avctVMAPI_DLLWin32.jar" download="eager"/>
 </resources>
 <resources os="Windows" arch="amd64">
   <nativelib href="https://idracip/software/avctKVMIOWin64.jar" download="eager"/>
   <nativelib href="https://idracip/software/avctVMAPI_DLLWin64.jar" download="eager"/>
 </resources>
 <resources os="Windows" arch="x86_64">
   <nativelib href="https://idracip/software/avctKVMIOWin64.jar" download="eager"/>
   <nativelib href="https://idracip/software/avctVMAPI_DLLWin64.jar" download="eager"/>
 </resources>
 <resources os="Linux" arch="x86">
   <nativelib href="https://idracip/software/avctKVMIOLinux32.jar" download="eager"/>
   <nativelib href="https://idracip/software/avctVMAPI_DLLLinux32.jar" download="eager"/>
 </resources>
 <resources os="Linux" arch="i386">
   <nativelib href="https://idracip/software/avctKVMIOLinux32.jar" download="eager"/>
   <nativelib href="https://idracip/software/avctVMAPI_DLLLinux32.jar" download="eager"/>
 </resources>
 <resources os="Linux" arch="i586">
   <nativelib href="https://idracip/software/avctKVMIOLinux32.jar" download="eager"/>
   <nativelib href="https://idracip/software/avctVMAPI_DLLLinux32.jar" download="eager"/>
 </resources>
 <resources os="Linux" arch="i686">
   <nativelib href="https://idracip/software/avctKVMIOLinux32.jar" download="eager"/>
   <nativelib href="https://idracip/software/avctVMAPI_DLLLinux32.jar" download="eager"/>
 </resources>
 <resources os="Linux" arch="amd64">
   <nativelib href="https://idracip/software/avctKVMIOLinux64.jar" download="eager"/>
   <nativelib href="https://idracip/software/avctVMAPI_DLLLinux64.jar" download="eager"/>
 </resources>
 <resources os="Linux" arch="x86_64">
   <nativelib href="https://idracip/software/avctKVMIOLinux64.jar" download="eager"/>
   <nativelib href="https://idracip/software/avctVMAPI_DLLLinux64.jar" download="eager"/>
 </resources>
 <resources os="Mac OS X" arch="x86_64">
   <nativelib href="https://idracip/software/avctKVMIOMac64.jar" download="eager"/>
   <nativelib href="https://idracip/software/avctVMAPI_DLLMac64.jar" download="eager"/>
 </resources>

Turns out if you remove everything but the Linux/x86_64 option, it does fetch the right jar and execute the right code without segfaulting. Mysteries of Java Web Start I guess.

So after finally getting the system to boot, the next step was setting up networking — as I said, I used Puppet to set up the addresses and everything, so I had working IPv4 at boot, but I had to fight a little longer to get IPv6 working. Indeed, IPv6 configuration on servers, virtual and dedicated alike, is very much an unsolved problem. Not because there is no solution, but because there are too many — essentially every single hosting provider I have ever used had a different way to set up IPv6 (including none at all in one case, where the only option was a tunnel), so it takes some fiddling to get it right.

To be honest, they have a better setup than OVH or Hetzner (the latter being very flaky), and a more self-service one than Hurricane, which was very flexible and easy to set up but required me to mail them whenever I wanted to make changes. Their documentation covers dibbler, as they rely on DHCPv6 with DUID for delegation — they give you a single /56 v6 net that you can then split into subnets and delegate independently.

What DHCPv6 in this configuration does not give you is routing — which kinda makes sense, as you can use RA (Router Advertisement) for that. Unfortunately, at first I could not get it to work. Since I use subnets for the containerized network, I had enabled IPv6 forwarding (through Puppet, of course). It turns out that Linux will ignore Router Advertisement packets when forwarding IPv6 unless you ask it nicely to — by also setting accept_ra=2. Yey!
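In sysctl.conf terms, the working combination looks roughly like this (wan0 is the uplink interface name used on this machine; adjust to your setup):

```
# /etc/sysctl.conf: enable IPv6 forwarding for the containers, but
# still accept router advertisements on the uplink
net.ipv6.conf.all.forwarding = 1
net.ipv6.conf.wan0.accept_ra = 2
```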

Again, this is the kind of problem where finding the information took much longer than it should have; Linux does not really tell you that it's ignoring RA packets, and it is far from obvious that setting one sysctl will disable another — unless you go and look for it.

Luckily this was the last problem I had; after that the server was set up fine, and I just had to finish configuring the domain's zone file, the reverse DNS, and the SPF records… yes, this is all the kind of trouble you go through if you run your own infrastructure instead of going fully cloud — which is why I don't consider self-hosting a general solution.

What remained was just bits and pieces. The first was me realizing that Puppet does not remove entries from /etc/fstab by default, so the default Gentoo /etc/fstab file still contained the entries for CD-ROM drives as well as /dev/fd0. I don't remember which was the last computer with a floppy disk drive that I used, let alone owned.

The other fun bit has been setting up the containers themselves — like the server itself, they are set up with Puppet. Since the old server ran a tinderbox, it was just easier for it to also host a proper rsync mirror, but I didn't want to repeat that here; and since I was unable to find a good mirror through mirrorselect (longer story), I configured Puppet to provide all the containers with as their sync server, which did not work. Turns out that our default mirror address does not have any IPv6 hosts on it — when I asked Robin about it, it seems we just don't have any IPv6-hosted mirror that can handle that traffic, which is sad.

So anyway, I now have a new devbox, and I'm trying to set up the rest of my repositories and access (I have not set up access to Gentoo's repositories yet, which is kind of the point here). Hopefully this will also lead to more technical blogging in the next few weeks, as I'm cutting down on the overwork to relax a bit.

April 25, 2015
Git changes & impact to Overlays hostnames (April 25, 2015, 00:00 UTC)

Changes to the Gentoo Git hosting setup may require URL changes in your checkouts: Repositories are now only available via for authenticated users and for read-only traffic.

As previously announced [1] [2], and previously in the discussion of merging Overlays with Gentoo’s primary SCM hosting (CVS+Git): the old overlays hostnames ( and have now been disabled, as has non-SSH traffic to This was a deliberate move to separate anonymous from authenticated Git traffic, and to ensure that anonymous Git traffic can continue to be scaled when we go ahead with switching away from CVS. Anonymous and authenticated Git are now served by separate systems, and no anonymous Git traffic is permitted to the authenticated Git server.

If you have anonymous Git checkouts from any of the affected hostnames, you should switch them to using one of these new URLs:

  • git://$REPO

If you have authenticated Git checkouts from the same hosts, you should switch them to this new URL:

  • git+ssh://$REPO

In either case, you can trivially update any existing checkout with:
git remote set-url origin git+ssh://$REPO
(be sure to adjust the path of the repository and the name of the remote as needed).

April 23, 2015
Denis Dupeyron a.k.a. calchan (homepage, bugs)

The answer is 46.

In a previous post I described how to patch QEMU to allow building binutils in a cross chroot. In it, I increased the maximum number of argument pages to 64, because I was just after a quick fix. Today I finally bisected that, and the result is that you need at least 46 for MAX_ARG_PAGES in order for binutils to build.

In bug 533882 it is discussed that LibreOffice requires an even larger number of pages. It is possible other packages also require such a large limit. Note that it may not be a good idea to increase the MAX_ARG_PAGES limit to an absurdly high number and leave it at that. A large amount of memory will be allocated in the target’s memory space and that may be a problem.
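For a sense of scale, assuming the typical 4 KiB page size, the bisected minimum translates to the following argument-space budget:

```python
PAGE_SIZE = 4096      # assuming 4 KiB pages, typical on x86
MAX_ARG_PAGES = 46    # the minimum found by the bisection above

arg_space = MAX_ARG_PAGES * PAGE_SIZE
print(arg_space, "bytes =", arg_space // 1024, "KiB")  # 188416 bytes = 184 KiB
```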

Hopefully QEMU switches to a dynamic limit someday, like the kernel did. In the meantime, my upcoming crossroot tool will offer a way to deal with that more easily.

April 21, 2015
Donnie Berkholz a.k.a. dberkholz (homepage, bugs)
How to give a great talk, the lazy way (April 21, 2015, 15:42 UTC)

Got a talk coming up? Want it to go well? Here are some starting points.

I give a lot of talks. Often I’m paid to give them, and I regularly get very high ratings or even awards. But every time I listen to people speaking in public for the first time, or maybe the first few times, I think of some very easy ways for them to vastly improve their talks.

Here, I wanted to share my top tips to make your life (and, selfishly, my life watching your talks) much better:

  1. Presenter mode is the greatest invention ever. Use it. If you ignore or forget everything else in this post, remember the rainbows and unicorns of presenter mode. This magical invention keeps the current slide showing on the projector while your laptop shows something different — the current slide, a small image of the next slide, and your slide notes. The last bit is the key. What I put on my notes is the main points of the current slide, followed by my transition to the next slide. Presentations look a lot more natural when you say the transition before you move to the next slide rather than after. More than anything else, presenter mode dramatically cut down on my prep time, because suddenly I no longer had to rehearse. I had seamless, invisible crib notes while I was up on stage.
  2. Plan your intro. Starting strong goes a long way, as it turns out that making a good first impression actually matters. It’s time very well spent to literally script your first few sentences. It helps you get the flow going and get comfortable, so you can really focus on what you’re saying instead of how nervous you are. Avoid jokes unless most of your friends think you’re funny almost all the time. (Hint: they don’t, and you aren’t.)
  3. No bullet points. Ever. (Unless you’re an expert, and you probably aren’t.) We’ve been trained by too many years of boring, sleep-inducing PowerPoint presentations that bullet points equal naptime. Remember presenter mode? Put the bullet points in the slide notes that only you see. If for some reason you think you’re the sole exception to this, at a minimum use visual advances/transitions. (And the only good transition is an instant appear. None of that fading crap.) That makes each point appear on-demand rather than all of them showing up at once.
  4. Avoid text-filled slides. When you put a bunch of text in slides, people inevitably read it. And they read it at a different pace than you’re reading it. Because you probably are reading it, which is incredibly boring to listen to. The two different paces mean they can’t really focus on either the words on the slide or the words coming out of your mouth, and your attendees consequently leave having learned less than either of those options alone would’ve left them with.
  5. Use lots of really large images. Each slide should be a single concept with very little text, and images are a good way to force yourself to do so. Unless there’s a very good reason, your images should be full-bleed. That means they go past the edges of the slide on all sides. My favorite place to find images is a Flickr advanced search for Creative Commons licenses. Google also has this capability within Search Tools. Sometimes images are code samples, and that’s fine as long as you remember to illustrate only one concept — highlight the important part.
  6. Look natural. Get out from behind the podium, so you don’t look like a statue or give the classic podium death-grip (one hand on each side). You’ll want to pick up a wireless slide advancer and make sure you have a wireless lavalier mic, so you can wander around the stage. Remember to work your way back regularly to check on your slide notes, unless you’re fortunate enough to have them on extra monitors around the stage. Talk to a few people in the audience beforehand, if possible, to get yourself comfortable and get a few anecdotes of why people are there and what their background is.
  7. Don’t go over time. You can go under, even a lot under, and that’s OK. One of the best talks I ever gave took 22 minutes of a 45-minute slot, and the rest filled up with Q&A. Nobody’s going to mind at all if you use up 30 minutes of that slot, but cutting into their bathroom or coffee break, on the other hand, is incredibly disrespectful to every attendee. This is what watches, and the timer in presenter mode, and clocks, are for. If you don’t have any of those, ask a friend or make a new friend in the front row.
  8. You’re the centerpiece. The slides are a prop. If people are always looking at the slides rather than you, chances are you’ve made a mistake. Remember, the focus should be on you, the speaker. If they’re only watching the slides, why didn’t you just post a link to Slideshare or Speakerdeck and call it a day?

I’ve given enough talks that I have a good feel on how long my slides will take, and I’m able to adjust on the fly. But if you aren’t sure of that, it might make sense to rehearse. I generally don’t rehearse, because after all, this is the lazy way.

If you can manage to do those 8 things, you’ve already come a long way. Good luck!

Tagged: communication, gentoo

April 16, 2015
Bernard Cafarelli a.k.a. voyageur (homepage, bugs)
Removing old NX packages from tree (April 16, 2015, 21:30 UTC)

I already sent the last rites announcement a few days ago, but here is a more detailed post on the upcoming removal of “old” NX packages. Long story short: migrate to X2Go if possible, or use the NX overlay (“best-effort” support provided).
2015/04/26 note: treecleaning done!

Affected packages

Basically, all NX clients and servers except x2go and nxplayer! Here is the complete list with some specific last rites reasons:

  • net-misc/nxclient, net-misc/nxnode, net-misc/nxserver-freeedition: the binary-only original NX client and server. Upstream has moved on to a closed-source technology, and this version bundles potentially vulnerable binary code. It also no longer works as well as before with current libraries (like Cairo).
  • net-misc/nxserver-freenx, net-misc/nxsadmin: the first open-source alternative server. It could be tricky to get working, and is not updated anymore (last upstream activity around 2009)
  • net-misc/nxcl, net-misc/qtnx: an open-source alternative client (last upstream activity around 2008)
  • net-misc/neatx: Google’s take on a NX server, sadly it never took off (last upstream activity around 2010)
  • app-admin/eselect-nxserver (an eselect module to switch active NX server, useless without these servers in tree)

Continue using these packages on Gentoo

These packages will be dropped from the main tree by the end of this month (2015/04), and then only available in the NX overlay. They will still be supported there in a “best effort” way (no guarantee how long some of these packages will work with current systems).

So, if one of these packages still works better for you, or you need to keep them around before migrating, just run:

# layman -a nx


While it is not a direct drop-in replacement, x2go is the most complete solution currently in the Gentoo tree (and my recommendation), with a lot of possible advanced features, active upstream development, … You can connect to net-misc/x2goserver with net-misc/x2goclient, net-misc/pyhoca-gui, or net-misc/pyhoca-cli.

If you want to try NoMachine’s (the company that created NX) new system, the client is available in Portage as net-misc/nxplayer. The server itself is not packaged yet; if you are interested in it, see bug #488334.

April 11, 2015
Sebastian Pipping a.k.a. sping (homepage, bugs)
Firefox: You may want to update to 37.0.1 (April 11, 2015, 19:23 UTC)

I was pointed to this Mozilla Security Advisory:

Certificate verification bypass through the HTTP/2 Alt-Svc header

While it doesn’t say whether all versions prior to 37.0.1 are affected, it does say that sending a certain server response header disabled warnings about invalid SSL certificates for that domain. Ooops.

I’m not sure how relevant HTTP/2 is by now.

Paweł Hajdan, Jr. a.k.a. phajdan.jr (homepage, bugs)

Slot conflicts can be annoying. It's worse when an attempt to fix them leads to an even bigger mess. I hope this post helps you with some cases - and that portage will keep getting smarter about resolving them automatically.

Read more »

Sebastian Pipping a.k.a. sping (homepage, bugs)

While is not really new, I learned about that site only recently.
And while I knew that browser self-identification would reduce my anonymity on the Internet, I didn’t expect this result:

Your browser fingerprint appears to be unique among the 5,198,585 tested so far.

Wow. Why? Let’s try one of the other browsers I use. “Appears to be unique” again (where Flash is enabled).

What’s so unique about my setup? Here is what the reports say about it:

Characteristic               | One in x browsers have this value
                             | Firefox      | Google Chrome | Chromium (added later)
User Agent                   | 2,030.70     | 472,599.36    | 16,576.56
HTTP_ACCEPT Headers          | 12.66        | 5,477.97      | 5,477.97
Browser Plugin Details       | 577,620.56   | 259,929.65    | 7,351.75
Time Zone                    | 6.51         | 6.51          | 6.51
Screen Size and Color Depth  | 13.72        | 13.72         | 13.72
System Fonts                 | 5,198,585.00 | 5,198,585.00  | 5.10 (Flash and Java disabled)
Are Cookies Enabled?         | 1.35         | 1.35          | 1.35
Limited supercookie test     | 1.83         | 1.83          | 1.83

User agent and browser plug-ins hurt, fonts alone kill me altogether. Ouch.
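As a side note, the “one in x browsers” figures translate directly into bits of identifying information (log2 of x); a quick sketch to illustrate, using the site's headline number as input:

```python
import math

def bits_of_info(one_in):
    """Convert a 'one in N browsers' figure into bits of identifying information."""
    return math.log2(one_in)

# A fingerprint unique among ~5.2 million tested browsers carries:
print(round(bits_of_info(5_198_585), 1))  # 22.3 (bits)
```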


  • It’s the very same when browsing in an incognito window. Re-reading what that feature is officially intended to do (keep you incognito to your own history), it stops being a surprise.
  • Chromium (with Flash/Java disabled) was tested as well; its values are the third column in the report above.

Thoughts on fixing this issue:

I’m not sure yet how I want to fix this myself. Faking popular values (in a popular combination, so it doesn't backfire) could work via a local proxy, a browser patch, or maybe a browser plug-in. Obtaining truly popular value combinations is another question. Fake values can also reduce the quality of the content I am presented with; e.g., I would probably not fake my screen resolution, or would make sure not to deviate by much.