
. Aaron W. Swenson
. Agostino Sarubbo
. Alec Warner
. Alex Alexander
. Alex Legler
. Alexey Shvetsov
. Alexis Ballier
. Alexys Jacob
. Amadeusz Żołnowski
. Andreas K. Hüttel
. Andreas Proschofsky
. Andrew Gaffney
. Anthony Basile
. Arun Raghavan
. Bernard Cafarelli
. Bjarke Istrup Pedersen
. Brent Baude
. Brian Harring
. Christian Ruppert
. Chí-Thanh Christopher Nguyễn
. Dane Smith
. Daniel Gryniewicz
. David Abbott
. Denis Dupeyron
. Detlev Casanova
. Diego E. Pettenò
. Domen Kožar
. Donnie Berkholz
. Doug Goldstein
. Eray Aslan
. Fabio Erculiani
. Gentoo Haskell Herd
. Gentoo News
. Gilles Dartiguelongue
. Greg KH
. Hanno Böck
. Hans de Graaff
. Ian Whyman
. Ioannis Aslanidis
. Jan Kundrát
. Jason Donenfeld
. Jeffrey Gardner
. Jeremy Olexa
. Joachim Bartosik
. Johannes Huber
. Jonathan Callen
. Jorge Manuel B. S. Vicetto
. Joseph Jezak
. Josh Saddler
. Kenneth Prugh
. Lance Albertson
. Liam McLoughlin
. LinuxCrazy Podcasts
. Luca Barbato
. Luis Francisco Araujo
. Mark Loeser
. Markos Chandras
. Mart Raudsepp
. Matt Turner
. Matthew Marlowe
. Matthew Thode
. Matti Bickel
. Michal Hrusecky
. Michał Górny
. Mike Doty
. Mike Gilbert
. Mike Pagano
. Mu Qiao
. Nathan Zachary
. Ned Ludd
. Nirbheek Chauhan
. Ole Markus With
. Olivier Crête
. Pacho Ramos
. Patrick Kursawe
. Patrick Lauer
. Patrick McLean
. Paul de Vrieze
. Pavlos Ratis
. Paweł Hajdan, Jr.
. Petteri Räty
. Piotr Jaroszyński
. Rafael Goncalves Martins
. Raúl Porcel
. Remi Cardona
. Richard Freeman
. Robin Johnson
. Ryan Hill
. Sean Amoss
. Sebastian Pipping
. Serkan Kaba
. Steev Klimaszewski
. Stratos Psomadakis
. Sune Kloppenborg Jeppesen
. Sven Vermeulen
. Sven Wegener
. Tim Sammut
. Tiziano Müller
. Tobias Heinlein
. Tobias Klausmann
. Tomáš Chvátal
. Torsten Veller
. Vikraman Choudhury
. Zack Medico
. Zhang Le

Last updated:
April 26, 2013, 23:04 UTC

Views expressed in the content published here do not necessarily represent the views of Gentoo Linux or the Gentoo Foundation.

Bugs? Comments? Suggestions? Contact us!

Powered by:
Planet Venus

Welcome to Gentoo Universe, an aggregation of weblog articles on all topics written by Gentoo developers. For a more refined aggregation of Gentoo-related topics only, you might be interested in Planet Gentoo.

April 26, 2013
Alexys Jacob a.k.a. ultrabug (homepage, bugs)
mongoDB and Pacemaker recent bumps (April 26, 2013, 14:23 UTC)

mongoDB 2.4.3

Yet another bugfix release; this new stable branch is surely one of the most quickly iterated I’ve ever seen. I guess we’ll wait a bit longer at work before migrating to 2.4.x.

pacemaker 1.1.10_rc1

This is the release of pacemaker we’ve been waiting for, fixing, among other things, the ACL problem which was introduced in 1.1.9. Andrew and others are working hard to get a proper 1.1.10 out soon, thanks guys.

Meanwhile, we (the Gentoo cluster herd) have been contacted by @Psi-Jack, who has offered his help to follow and keep some of our precious clustering packages up to date. I hope our work together will benefit everyone!

All of this is live in Portage, enjoy.

Sven Vermeulen a.k.a. swift (homepage, bugs)
New SELinux userspace release (April 26, 2013, 01:50 UTC)

A new release of the SELinux userspace utilities was recently announced. I have made the packages for Gentoo available and they should now be in the main tree (~arch of course). During the testing of the packages, however, I made the stupid mistake of running the tests on the wrong VM, one that didn’t contain the new packages. Result: no regressions (of course). My fault for not using in-ebuild tests properly, as I should. So you’ll probably see me blogging about in-ebuild testing soon ;-)

In any case, the regressions I did find (quite fast after I updated my main laptop with the packages as well) were a missing function in libselinux, a reference to a non-existing makefile when using “semanage permissive”, and the new sepolicy application requiring yum python bindings. At least, with the missing function (hopefully correctly) resolved, all tests I usually do (except for the permissive domains) are now running well again.

This only goes to show how important testing is. Of course, I reported the bugs on the mailing list of the userspace utilities as well. Hopefully they can look at them while I’m asleep so I can integrate fixes tomorrow more easily ;-)

April 25, 2013
Bernard Cafarelli a.k.a. voyageur (homepage, bugs)

Recently, I have been toying around with GateOne, a web-based SSH client/terminal emulator. However, installing it on my server proved to be a bit challenging: it requires tornado as a webserver and uses websockets, while I already have an Apache 2.2 instance running with a few sites on it (and my authentication system configured to my tastes).

So, I looked at how to configure a reverse proxy for GateOne, but websockets were not officially supported by Apache... until recently! Jim Jagielski added the proxy_wstunnel module in trunk a few weeks ago. From what I have seen on the mailing list, backporting it to 2.4 is easy (and was suggested as an official backport), but 2.2 required a few additional changes to the original patch (and current upstream trunk).

A few fixes later, I got a working patch (based on Apache 2.2.24), available here:

Recompile with this patch, and you will get a nice and shiny module file!

Now just load it (in /etc/apache2/httpd.conf in Gentoo):
<IfDefine PROXY>
LoadModule proxy_wstunnel_module modules/
</IfDefine>

and add a location pointing to your GateOne installation:

<Location /gateone/ws>
    ProxyPass wss://
    ProxyPassReverse wss://
</Location>

<Location /gateone>
    Order deny,allow
    Deny from all
    Allow from #your favorite rule
</Location>


Reload Apache, and you now have GateOne running behind your Apache server :) If it does not work, first check GateOne’s log and configuration, especially the "origins" variable.

For other websocket applications, Jim Jagielski comments here:

ProxyPass /whatever ws://websocket-srvr.example/com/

Basically, the new submodule adds the 'ws' and 'wss' scheme to the allowed protocols between the client and the backend, so you tell Apache that you'll be talking 'ws' with the backend (same as ajp://whatever sez that httpd will be talking ajp to the backend).

Sven Vermeulen a.k.a. swift (homepage, bugs)
Gentoo protip: using buildpkgonly (April 25, 2013, 01:50 UTC)

If you don’t want the majority of builds running in the background while you are busy on the system, but you also don’t want software automatically installed in the background when you are not behind your desk, then perhaps you can settle for using binary packages. I’m not saying you need to set up a build server and such, or do your updates first in a chroot.

No, what this tip is for is to use the --buildpkgonly parameter of emerge at night, building some of your software (often the larger packages) as binary packages only (storing them in ${PKGDIR}, which defaults to /usr/portage/packages). When you are then on your system, you can run the update with binary packages included:

# emerge -puDk world

To use --buildpkgonly, all packages that Portage wants to build must have all their dependencies met. If not, the build will not go through and you’re left with no binary packages at all. So what we do is create a script that looks at the set of packages that would be built, and then builds the binary packages one by one.

When run, the script will attempt to build binary packages for those that have no outstanding dependency requirements. Those builds create a binary package but are not merged on the system. When you later update your system, including binary packages, the packages that were built during the night are merged quickly, reducing the build load on your system while you are working on it.



LIST=$(mktemp)

emerge -puDN --color=n --columns --quiet=y world | awk '{print $2}' > ${LIST};

for PACKAGE in $(cat ${LIST}); do
  printf "Building binary package for ${PACKAGE}... "
  emerge -uN --quiet-build --quiet=y --buildpkgonly ${PACKAGE};
  if [[ $? -eq 0 ]]; then
    echo "ok";
  else
    echo "failed";
  fi
done
I ran this a couple of days ago when -uDN world showed 46 package updates (including a few hefty ones like chromium). After running this script, 35 of them had a binary package ready, so the -uDN world now only needed to build 11 packages, merging the remainder from binary packages.
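To have the script run overnight, a crontab entry works well. A minimal sketch, assuming you saved the script as /usr/local/sbin/emerge-prebuild.sh (the path and log location are my assumptions, not from the post):

```
# /etc/crontab entry: build binary packages every night at 03:00,
# logging the output for later review.
0 3 * * *  root  /usr/local/sbin/emerge-prebuild.sh >> /var/log/emerge-prebuild.log 2>&1
```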

April 24, 2013
Alexys Jacob a.k.a. ultrabug (homepage, bugs)
Hello Gentoo Planet (April 24, 2013, 08:51 UTC)

Hey Gentoo folks !

I finally followed a friend’s advice and stepped into the Gentoo Planet and Universe feeds. I hope my modest contributions will help and be of interest to some of you readers.

As you’ll see, I don’t talk only about Gentoo but also about photography and technology more generally. I also often post about the packages I maintain or have an interest in, to highlight their key features or bug fixes.

Sven Vermeulen a.k.a. swift (homepage, bugs)
Using strace to troubleshoot SELinux problems (April 24, 2013, 01:50 UTC)

When SELinux is playing tricks on you, you can just “allow” whatever it wants to do, but that is not always an option: sometimes there is no denial in sight because the problem lies within SELinux-aware applications (applications that might change their behavior based on what the policy says, or even based on whether SELinux is enabled or not). At other times, you get strange behavior whose cause isn’t directly visible. But mainly, if you want to make sure that allowing something is correct (and not just a corrective action), you need to be absolutely certain that what you want to allow is security-wise acceptable.

To debug such issues, I often use the strace command to debug the application at hand. To use strace, I toggle the allow_ptrace boolean (strace uses ptrace(), which, by default, isn’t allowed policy-wise) and then run the offending application through strace (or attach to the running process if it is a daemon). For instance, to debug a tmux issue we had with the policy not that long ago:

# setsebool allow_ptrace on
# strace -o strace.log -f -s 256 tmux

The resulting log file (strace.log) might seem daunting to look at at first. What you see are the system calls that the process is performing, together with their arguments but also the return code of each call. This is especially important as SELinux, if it denies something, often returns something like EACCES (Permission Denied).

7313  futex(0x349e016f080, FUTEX_WAKE_PRIVATE, 2147483647) = 0
7313  futex(0x5aad58fd84, FUTEX_WAKE_PRIVATE, 2147483647) = 0
7313  stat("/", {st_mode=S_IFDIR|0755, st_size=4096, ...}) = 0
7313  stat("/home", {st_mode=S_IFDIR|0755, st_size=4096, ...}) = 0
7313  stat("/home/swift", {st_mode=S_IFDIR|0755, st_size=12288, ...}) = 0
7313  stat("/home/swift/.pki", {st_mode=S_IFDIR|0700, st_size=4096, ...}) = 0
7313  stat("/home/swift/.pki/nssdb", {st_mode=S_IFDIR|0700, st_size=4096, ...}) = 0
7313  statfs("/home/swift/.pki/nssdb", 0x3c3cab6fa50) = -1 EACCES (Permission denied)

Most (if not all) of the system calls shown in a strace log are documented through man pages, so you can quickly find out that futex() is about fast user-space locking, stat() (man 2 stat to see the information about the system call instead of the application) is about getting file status, and statfs() is for getting file system statistics.

The most common permission issues you’ll find are file related:

7313  open("/proc/filesystems", O_RDONLY) = -1 EACCES (Permission denied)

In the above case, you notice that the application is trying to open the /proc/filesystems file read-only. In the SELinux logs, this might be displayed as follows:

audit.log:type=AVC msg=audit(1365794728.180:3192): avc:  denied  { read } for  
pid=860 comm="nacl_helper_boo" name="filesystems" dev="proc" ino=4026532034 
scontext=staff_u:staff_r:chromium_naclhelper_t tcontext=system_u:object_r:proc_t tclass=file

Now the case of tmux before was not an obvious one. In the end, I compared the strace outputs of two runs (one in enforcing and one in permissive mode) to find what the difference would be. This is the result:


(enforcing mode)
10905 fcntl(9, F_GETFL) = 0x8000 (flags O_RDONLY|O_LARGEFILE)

(permissive mode)
10905 fcntl(9, F_GETFL) = 0x8002 (flags O_RDWR|O_LARGEFILE)

Do you notice the difference? In enforcing mode, the flags on the file descriptor include O_RDONLY, whereas in permissive mode they include O_RDWR. This means that the file descriptor in enforcing mode is read-only whereas in permissive mode it is read-write. What we then do in the strace logs is see where this file descriptor (with id=9) comes from:

10905 dup(0)     = 9
10905 dup(1)     = 10
10905 dup(2)     = 11

As the man page says, dup() duplicates a file descriptor. And because, by convention, the first three file descriptors of an application correspond to standard input (0), standard output (1) and error output (2), we now know that the file descriptor with id=9 comes from the standard input file descriptor. Although this one should be read-only (it is the input that the application gets = reads), it seems that tmux might want to use it for writes as well. And that is what happens: tmux sends the file descriptor to the tmux server to check if it is a tty and then uses it to write to the screen.

Now what does that have to do with SELinux? It has to mean something, otherwise running in permissive mode would give the same result. After some investigation, we found out that using newrole to switch roles changes the flags of the standard input (as then provided by newrole) from O_RDWR to O_RDONLY (code snippet from newrole.c – look at the first call to open()):

/* Close the tty and reopen descriptors 0 through 2 */
if (ttyn) {
        if (close(fd) || close(0) || close(1) || close(2)) {
                fprintf(stderr, _("Could not close descriptors.\n"));
                goto err_close_pam;
        }
        fd = open(ttyn, O_RDONLY | O_NONBLOCK);
        if (fd != 0)
                goto err_close_pam;
        fcntl(fd, F_SETFL, fcntl(fd, F_GETFL, 0) & ~O_NONBLOCK);
        fd = open(ttyn, O_RDWR | O_NONBLOCK);
        if (fd != 1)
                goto err_close_pam;
        fcntl(fd, F_SETFL, fcntl(fd, F_GETFL, 0) & ~O_NONBLOCK);
        fd = open(ttyn, O_RDWR | O_NONBLOCK);
        if (fd != 2)
                goto err_close_pam;
        fcntl(fd, F_SETFL, fcntl(fd, F_GETFL, 0) & ~O_NONBLOCK);
}
Such obscure problems are much easier to detect and troubleshoot thanks to tools like strace.

April 23, 2013
Sven Vermeulen a.k.a. swift (homepage, bugs)
SLOT’ing the old swig-1 (April 23, 2013, 01:50 UTC)

The SWIG tool helps developers build interfaces/libraries that can be accessed from many languages other than the one the library is initially written in or for. The SELinux userland utility setools uses it to provide Python and Ruby interfaces even though the application itself is written in C. Sadly, the tool currently requires swig-1 for building its interfaces and uses constructs that do not seem to be compatible with swig-2 (same with the apse package, btw).

I first tried to patch setools to support swig-2, but eventually found regressions in the libapol library it provides so the patch didn’t work out (that is why some users mentioned that a previous setools version did build with swig – yes it did, but the result wasn’t correct). Recently, a post on Google Plus’ SELinux community showed me that I wasn’t wrong in this matter (it really does require swig-1 and doesn’t seem to be trivial to fix).

Hence, I had to fix the Gentoo build problem where one set of tools requires swig-1 and another swig-2; otherwise world updates and even building stages for SELinux systems would fail as Portage finds incompatible dependencies. One way to approach this is to use Gentoo’s support for SLOTs. When a package (ebuild) in Gentoo defines a SLOT, it tells the package manager that a different version of the same package may be installed alongside it, as long as the two have different SLOTs. In the case of swig, the idea is to give swig-1 a different slot than swig-2 (which uses SLOT="0") and make sure that both do not install the same files (otherwise you get file collisions).
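As a minimal sketch of what slotting plus the binary rename look like in an ebuild (this is illustrative only, not the actual swig ebuild in the tree):

```
# Hypothetical swig-1.3 ebuild fragment -- names and details are assumptions.
# SLOT="1" lets this version be installed alongside swig-2 (which keeps SLOT="0").
SLOT="1"

src_install() {
    default
    # Rename the binary so it does not collide with swig-2's /usr/bin/swig
    mv "${ED}"/usr/bin/swig "${ED}"/usr/bin/swig1.3 || die
}
```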

Luckily, swig places all of its files except for the swig binary itself in /usr/share/swig/<version>, so all I had left to do was to make sure the binary itself is renamed. I chose to use swig1.3 (similar as to how tools like ruby and python and for some packages even java is implemented on Gentoo). The result (through bug 466650) is now in the tree, as well as an adapted setools package that uses the new swig SLOT.

Thanks to Samuli Suominen for getting me on the (hopefully ;-) right track. I don’t know why I was afraid of doing this, it was much less complex than I thought (now let’s hope I didn’t break other things ;-)

April 22, 2013
Sven Vermeulen a.k.a. swift (homepage, bugs)
Mitigating DDoS attacks (April 22, 2013, 01:50 UTC)

Lately, DDoS attacks have been in the news more than I was hoping for. It seems that the botnets or other methods that are used to generate high-volume traffic to a legitimate service are becoming more and more easy to get and direct. At the time that I’m writing this post (a few days before its published though), the popular Reddit site is undergoing a DDoS attack which I hope will be finished (or mitigated) soon.

But what can a service do against DDoS attacks? After all, DDoS is like gasping for air if you can’t swim and are (almost) drowning: the air is the legitimate traffic, but the water is overwhelming and your mouth, pharynx and trachea just aren’t made to deal with this properly. And unlike specific Denial-of-Service attacks that use a vulnerability or malcrafted URL, you cannot just install some filter or upgrade a component to be safe again.

Methods for mitigating DDoS attacks (beyond increasing your bandwidth as that is very expensive and the botnets involved can go up to 130 Gbps, not a bandwidth you are probably willing to pay for if legitimate services on your site have enough with 10 Mbps) that come to mind are of all sorts of “classes”…

Configure your servers and services so that they stay alive under pressure. Look for the sweet spot where performance of the services is still stable, before higher load starts degrading it. If you have some experience with load testing, you know that throughput on a service initially goes up linearly with the load (first phase). Then it slows down (but still rises – phase 2) up to a point where, when you increase the load even further just a bit, the service degrades (and sometimes doesn’t even get back on its feet when you remove the additional load again – phase 3). You need to look for the spot where load and performance are stable (somewhere in the middle of the second phase) and configure your systems so that additional load is dropped. Yes, this means that the DDoS will be more effective, but it also means that your systems can easily get back on their feet when the attack has finished (and you get a more predictable load and consequences).

Investigate whether you can have a backup service with higher throughput (and reduced functionality). If the DDoS attack focuses on system resources rather than the network resources involved, such a “lighter” backup service can be used to still provide basic functionality (for instance a more static website); but even in the case of network resource consumption it has the advantage that the network load your servers generate while replying to the requests is lower.

Depending on the service you offer (and the financial means you have at your disposal), you can look at redirecting traffic to more specialized services. Companies like Prolexic have systems that “scrub” the DDoS traffic from all traffic and only send legitimate requests to your systems. There are several methods for redirecting load, but a common one is to change the DNS records for your service(s) to point to the addresses of those specialized services instead. The lower the TTL (Time To Live) of the records, the faster the redirect can take place. If you want to be able to handle an increase in load without specialized services, you might want to be able to redirect traffic to cloud services (where you host your service as well), which are generally capable of handling higher throughput than your own equipment (but this too comes at an additional cost).
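The TTL point above can be sketched in a BIND-style zone file (the names are hypothetical, not an endorsement of any provider):

```
; Zone file fragment: a short TTL (300 seconds) on the service record means
; a DNS-based redirect to a scrubbing provider takes effect within minutes
; rather than waiting out a day-long cache.
www    300    IN    CNAME    scrubbing.example-mitigation.net.
```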

Some people mention that you can switch IP addresses. This is true only if the DDoS attack targets IP addresses and not (DNS-resolved) URIs. You could set up additional IP addresses that are not registered in DNS (yet) and, during the attack, extend the service resolution towards the additional addresses as well. If you do not notice the DDoS attack spreading towards the new addresses, you can remove the old addresses from DNS. But again, this won’t work in general: not only do most DDoS attacks use DNS-resolved URIs, most of the time attackers are actively involved in the attack and will quickly notice if such a “failover” has occurred (and react against it).

Depending on your relationship with your provider or location service, you can ask if the edge routers (preferably those of the ISP) can have fallback source filtering rules available to quickly enable. Those fallback rules would then only allow traffic from networks that you know most (all?) of your customers and clients are at. This isn’t always possible, but if you have a service that targets mainly people within your country, have the filter only allow traffic from networks of that country. If the DDoS attack uses geographically spread resources, it might be that the number of bots inside those allowed networks are low enough that your service can continue.

Configure your firewalls (and ask your ISP to do the same) to drop traffic that is not expected. If the services in your architecture do not use external DNS, then you can drop incoming DNS response packets (a popular DDoS method uses spoofed source addresses towards open DNS resolvers; this is called a DNS reflection attack).
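As a sketch with iptables, and only under the assumption stated above (the hosts never perform external DNS lookups themselves, so no legitimate DNS replies are expected):

```
# Drop inbound DNS responses at the firewall so reflection traffic never
# reaches the network stack of the service hosts. Do NOT apply this if the
# hosts resolve external names themselves.
iptables -A INPUT -p udp --sport 53 -j DROP
```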

And finally, if you are not bound to a single data center, you might want to spread services across multiple locations. Although more difficult from a management point of view, a dispersed/distributed architecture allows other services to continue running while one is being attacked.

April 21, 2013

Those of you who don't live under a rock will have learned by now that AMD has published VDPAU code to use the Radeon UVD engine for accelerated video decode with the free/open source drivers.

In case you want to give it a try, mesa-9.2_pre20130404 has been added (under package.mask) to the Portage tree for your convenience. Additionally, you will need a patched kernel and new firmware.
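Since the ebuild sits under package.mask, you need to unmask it first. A sketch of the /etc/portage entries (the exact atom syntax here is my assumption; check the mask entry in the tree for the authoritative one):

```
# /etc/portage/package.unmask
=media-libs/mesa-9.2_pre20130404

# /etc/portage/package.accept_keywords (if you run a stable system)
=media-libs/mesa-9.2_pre20130404 ~amd64
```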


For kernel 3.9, grab the 10 patches from the dri-devel mailing list thread (recommended) [UPDATE]I put the patches into a tarball and attached to Gentoo bug 466042[/UPDATE]. For kernel 3.8 I have collected the necessary patches here, but be warned that kernel 3.8 is not officially supported. It works on my Radeon 6870, YMMV.


The firmware is part of radeon-ucode-20130402, but has not yet reached the linux-firmware tree. If you require other firmware from the linux-firmware package, remove the radeon files from the savedconfig file and build the package with USE="savedconfig" to allow installation together with radeon-ucode. [UPDATE]linux-firmware-20130421 now contains the UVD firmware, too.[/UPDATE]

The new firmware files are
radeon/RV710_uvd.bin: Radeon 4350-4670, 4770.
radeon/RV770_uvd.bin: Not useful at this time. Maybe later for 4200, 4730, 4830-4890.
radeon/CYPRESS_uvd.bin: Evergreen cards.
radeon/SUMO_uvd.bin: Northern Islands cards and Zacate/Llano APUs.
radeon/TAHITI_uvd.bin: Southern Islands cards and Trinity APUs.

Testing it

If your kernel is properly patched and finds the correct firmware, you will see this message at boot:
[drm] UVD initialized successfully.
If mesa was correctly built with VDPAU support, vdpauinfo will list the following codecs:
Decoder capabilities:

name level macbs width height
MPEG1 16 1048576 16384 16384
MPEG2_SIMPLE 16 1048576 16384 16384
MPEG2_MAIN 16 1048576 16384 16384
H264_BASELINE 16 9216 2048 1152
H264_MAIN 16 9216 2048 1152
H264_HIGH 16 9216 2048 1152
VC1_SIMPLE 16 9216 2048 1152
VC1_MAIN 16 9216 2048 1152
VC1_ADVANCED 16 9216 2048 1152
MPEG4_PART2_SP 16 9216 2048 1152
MPEG4_PART2_ASP 16 9216 2048 1152
If mplayer and its dependencies were correctly built with VDPAU support, running it with the "-vc ffh264vdpau," parameter will output something like the following when playing back an H.264 file:
VO: [vdpau] 1280x720 => 1280x720 H.264 VDPAU acceleration
To make mplayer use acceleration by default, uncomment the [vo.vdpau] section in /etc/mplayer/mplayer.conf

Gallium3D Head-up display

Another cool new feature is the Gallium3D HUD (link via Phoronix), which can be enabled with the GALLIUM_HUD environment variable. This supposedly works with all the Gallium drivers (i915g, radeon, nouveau, llvmpipe).

An example screenshot of Supertuxkart using GALLIUM_HUD="cpu0+cpu1+cpu2:100,cpu:100,fps;draw-calls,requested-VRAM+requested-GTT,pixels-rendered"

If you have any questions or problems setting up UVD on Gentoo, stop by #gentoo-desktop on freenode IRC.

Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)

This is a follow-up on my last post for autotools introduction. I’m trying to keep these posts bite sized both because it seems to work nicely, and because this way I can avoid leaving the posts rotting in the drafts set.

So after creating a simple autotools build system in the previous post, you might now want to know how to build a library — this is where the first part of complexity kicks in. The complexity is not, though, in using libtool, but in making a proper library. So the question is “do you really want to use libtool?”

Let’s start from a fundamental rule: if you’re not going to install a library, you don’t want to use libtool. Some projects that only ever deal with programs still use libtool because that way they can rely on .la files for static linking. My suggestion is (very simply) to avoid relying on them as much as you can. Doing it this way means that you no longer have to care about using libtool for non-library-providing projects.

But in the case you are building said library, using libtool is important. Even if the library is internal only, trying to build it without libtool is just going to be a big headache for the packager that looks into your project (trust me, I’ve seen such projects). Before entering the details of how to use libtool, though, let’s look into something else: what you need to think about in your library.

First of all, make sure you have a unique prefix for your public symbols, be they constants, variables or functions. You might also want one for symbols that you use within your library across different translation units — my suggestion in this example is going to be that symbols starting with foo_ are public, while symbols starting with foo__ are private to the library. You’ll soon see why this is important.

Reducing the amount of symbols that you expose is not only a good performance consideration, but it also means that you avoid the off-chance of symbol collisions, which are a big problem to debug. So do pay attention.

There is another thing that you should consider when building a shared library and that’s the way the library’s ABI is versioned but it’s a topic that, in and by itself, takes more time to discuss than I want to spend in this post. I’ll leave that up to my full guide.

Once you have these details sorted out, you should start by slightly changing the configure.ac file from the previous post so that it initializes libtool as well:

AC_INIT([myproject], [123], [], [])
AM_INIT_AUTOMAKE([foreign no-dist-gz dist-xz])
LT_INIT



Now it is possible to provide a few options to LT_INIT for instance to disable by default the generation of static archives. My personal recommendation is not to touch those options in most cases. Packagers will disable static linking when it makes sense, and if the user does not know much about static and dynamic linking, they are better off getting everything by default on a manual install.

On the Makefile.am side, the changes are very simple. Libraries built with libtool have a different class than programs and static archives, so you declare them as lib_LTLIBRARIES with a .la extension (at build time this is unavoidable). The only real difference between _LTLIBRARIES and _PROGRAMS is that the former gets its additional links from _LIBADD rather than _LDADD like the latter.

bin_PROGRAMS = fooutil1 fooutil2 fooutil3
lib_LTLIBRARIES = libfoo.la

libfoo_la_SOURCES = lib/foo1.c lib/foo2.c lib/foo3.c
libfoo_la_LIBADD = -lz
libfoo_la_LDFLAGS = -export-symbols-regex '^foo_[^_]'

fooutil1_LDADD = libfoo.la
fooutil2_LDADD = libfoo.la
fooutil3_LDADD = libfoo.la -ldl

pkginclude_HEADERS = lib/foo1.h lib/foo2.h lib/foo3.h

The _HEADERS variable is used to define which header files to install and where. In this case, it goes into ${prefix}/include/${PACKAGE}, as I declared it a pkginclude install.

The use of -export-symbols-regex (further documented in the guide) ensures that only the symbols that we want to have publicly available are exported, and does so in an easy way.

This is about it for now — one thing that I haven’t added in the previous post, but which I’ll expand in the next iteration or the one after, is that the only command you need to regenerate autotools is autoreconf -fis and that still applies after introducing libtool support.

Sven Vermeulen a.k.a. swift (homepage, bugs)

When working with a SELinux-enabled system, administrators will eventually need to make small updates to the existing policy. Instead of building their own full policy (always an option, but most likely not maintainable in the long term), they create one or more SELinux policy modules (most distributions use a modular approach to SELinux policy development), which are then loaded on their target systems.

In the past, users had to create their own policy module by creating (and maintaining) the necessary .te file(s), building the proper .pp files and loading them into the active policy store. In Gentoo, from policycoreutils-2.1.13-r11 onwards, a script is provided that hopefully makes this a bit more intuitive for regular users: selocal.

As the name implies, selocal aims to provide an interface for handling local policy updates that do not need to be packaged or distributed otherwise. It is a command-line application that you feed single policy rules, one at a time. Each rule can be accompanied by a single-line comment, making it easy to remember why you added the rule in the first place.

# selocal --help
Usage: selocal [<command>] [<options>]

Command can be one of:
  -l, --list            List the content of a SELinux module
  -a, --add             Add an entry to a SELinux module
  -d, --delete          Remove an entry from a SELinux module
  -M, --list-modules    List the modules currently known by selocal
  -u, --update-dep      Update the dependencies for the rules
  -b, --build           Build the SELinux module (.pp) file (requires privs)
  -L, --load            Load the SELinux module (.pp) file (requires privs)

Options can be one of:
  -m, --module          Module name to use (default: selocal)
  -c, --comment        Comment (with --add)

The option -a requires that a rule is given, like so:
  selocal -a "dbadm_role_change(staff_r)"
The option -d requires that a line number, as shown by the --list, is given, like so:
  selocal -d 12

Let’s say that you need to launch a small script you have written as a daemon, but you want it to run while you are still in the staff_t domain (it is a user-side daemon you use personally). As the regular staff_t isn’t allowed to have processes bind on generic ports/nodes, you need to enhance the SELinux policy a bit. With selocal, you can do so as follows:

# selocal --add "corenet_tcp_bind_generic_node(staff_t)" --comment "Launch local daemon"
# selocal --add "corenet_tcp_bind_generic_port(staff_t)" --comment "Launch local daemon"
# selocal --build --load
(some output on building the policy module)

When finished, the local policy is enhanced with the two mentioned rules. You can query which rules are currently stored in the policy:

# selocal --list
12: corenet_tcp_bind_generic_node(staff_t) # Launch local daemon
13: corenet_tcp_bind_generic_port(staff_t) # Launch local daemon

If you need to delete a rule, just pass the line number:

# selocal --delete 13

Having this tool around also makes it easier to test out changes suggested through bug reports. When I test such changes, I add the bug report ID as the comment so I can track which settings are still local and which ones have been pushed to our policy repository. Under the hood, selocal creates and maintains the necessary policy file in ~/.selocal and by default uses selocal as the policy module name.

I hope this tool helps users on their quest to use SELinux. Feedback and comments are always appreciated! It is a small bash script and might still have a few bugs in it, but I have been using it for a few months now, so most quirks should be handled.

April 20, 2013
Sven Vermeulen a.k.a. swift (homepage, bugs)
Transforming GuideXML to DocBook (April 20, 2013, 01:50 UTC)

I recently committed an XSL stylesheet that allows us to transform the GuideXML documents (both guides and handbooks) to DocBook. This isn’t part of a more elaborate move to try and push DocBook instead of GuideXML for the Gentoo Documentation though (I’d rather direct documentation development more to the Gentoo wiki instead once translations are allowed): instead, I use it to be able to generate our documentation in other formats (such as PDF but also ePub) when asked.

If you’re not experienced with XSL: XSL stands for Extensible Stylesheet Language and can be seen as a way of “programming” in XML. A stylesheet allows developers to transform one XML document into another format (either another XML document, or text-like output such as wiki syntax) while manipulating its contents. In the case of documentation, we try to keep as much structure in the document as possible, but another use could be to transform a large XML document with only a few interesting fields into a very small XML document (containing only those fields you need) for further processing.
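As a made-up illustration of that last use case (the element names here are invented; this is not part of the Gentoo stylesheet), a stylesheet that reduces a larger document to just its title fields could look like this:

```xml
<?xml version="1.0"?>
<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <!-- Reduce a large <library> document to a small summary
       containing only the book titles -->
  <xsl:template match="/library">
    <summary>
      <xsl:for-each select="book">
        <title><xsl:value-of select="title"/></title>
      </xsl:for-each>
    </summary>
  </xsl:template>
</xsl:stylesheet>
```

Running a matching document through this with xsltproc yields a small XML file containing only the titles.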

For now (and probably for the foreseeable future), the stylesheet is to be used in an offline mode (we are not going to provide auto-generated PDFs of all documents) as the process to convert a document from GuideXML to DocBook to XSL-FO to PDF is quite resource-intensive. But users that are interested can use the stylesheet as linked above to create their own PDFs of the documentation.

Assuming you have a checkout of the Gentoo documentation, this process can be done as follows (example for the AMD64 handbook):

$ xsltproc docbook.xsl /path/to/handbook-amd64.xml > /somewhere/handbook-amd64.docbook
$ cd /somewhere
$ xsltproc --output handbook-amd64.fo --stringparam paper.type A4 \
  /usr/share/sgml/docbook/xsl-stylesheets/fo/docbook.xsl handbook-amd64.docbook
$ fop handbook-amd64.fo handbook-amd64.pdf

The DocBook stylesheets are offered by the app-text/docbook-xsl-stylesheets package, whereas the fop command is provided by dev-java/fop.

I have an example output available (temporarily) at my dev space (amd64 handbook) but I’m not going to maintain this for long (so the link might not work in the near future).

April 19, 2013
Sven Vermeulen a.k.a. swift (homepage, bugs)

So in the past few posts I discussed how sysbench can be used to simulate some workloads, specific to a particular set of tasks. I used the benchmark application to look at the differences between the guest and host on my main laptop, and saw a major performance regression with the memory workload test. Let’s view this again, using parameters more optimized to view the regressions:

$ sysbench --test=memory --memory-total-size=32M --memory-block-size=64 run
Host:
  Operations performed: 524288 (2988653.44 ops/sec)
  32.00 MB transferred (182.41 MB/sec)
Guest:
  Operations performed: 524288 (24920.74 ops/sec)
  32.00 MB transferred (1.52 MB/sec)

$ sysbench --test=memory --memory-total-size=32M --memory-block-size=32M run
Host:
  Operations performed: 1 (  116.36 ops/sec)
  32.00 MB transferred (3723.36 MB/sec)
Guest:
  Operations performed: 1 (   89.27 ops/sec)
  32.00 MB transferred (2856.77 MB/sec)

From looking at the code (gotta love Gentoo for making this obvious ;-) we know that the memory workload, with a single thread, does something like the following:

total_bytes = 0
repeat until total_bytes >= memory-total-size:
  total_bytes += memory-block-size
  (start event timer)
  pointer = start-of(buffer)
  while pointer != end-of(buffer):
    write somevalue at pointer and advance pointer
  (stop event timer)

Given that the regression is most noticeable when the memory-block-size is very small, the part of the code whose execution count is much different between the two runs is the mutex locking, global memory increment and the start/stop of event timer.

In a second phase, we also saw that mutex locking itself is not impacted. In the above case, we have 524288 executions. However, if we run the mutex workload this number of times, we see that this hardly has any effect:

$ sysbench --test=mutex --mutex-num=1 --mutex-locks=524288 --mutex-loops=0 run
Host:      total time:        0.0275s
Guest:     total time:        0.0286s

The code for the mutex workload, knowing that we run with one thread, looks like this:

mutex_locks = 524288
(start event timer)
repeat:
  lock = get_mutex()
  release(lock)
  mutex_locks -= 1
until mutex_locks = 0
(stop event timer)

To check if the timer might be the culprit, let’s look for a benchmark that mainly does timer checks. The cpu workload can be used, when we tell sysbench that the prime to check is 3 (as its internal loop runs from 3 till the given number, and when the given number is 3 it skips the loop completely) and we ask for 524288 executions.

$ sysbench --test=cpu --cpu-max-prime=3 --max-requests=524288 run
Host:  total time:  0.1640s
Guest: total time: 21.0306s

Gotcha! Now, the event timer (again from looking at the code) contains two parts: getting the current time (using clock_gettime()) and logging the start/stop (which is done in memory structures). Let’s make a small test application that gets the current time (using the real-time clock as the sysbench application does) and see if we get similar results:

$ cat test.c
#include <stdio.h>
#include <time.h>

int main(int argc, char **argv, char **arge) {
  struct timespec tps;
  long int i = 524288;
  while (i-- > 0)
    clock_gettime(CLOCK_REALTIME, &tps);
  return 0;
}

$ gcc -lrt -o test test.c
$ time ./test
Host:  0m0.019s
Guest: 0m5.030s

So given that clock_gettime() is run twice per event in sysbench, we already have 10 seconds of overhead on the guest (and only 0,04s on the host). When such time-related functions are slow, it is wise to take a look at the clock source configured on the system. On Linux, you can check this by looking at /sys/devices/system/clocksource/*.

# cd /sys/devices/system/clocksource/clocksource0
# cat current_clocksource
kvm-clock
# cat available_clocksource
kvm-clock tsc hpet acpi_pm

Although kvm-clock is supposed to be the best clock source, let’s switch to the tsc clock:

# echo tsc > current_clocksource

If we rerun our test application, we get a much more appreciable result:

$ time ./test
Host:  0m0.019s
Guest: 0m0.024s

So, what does that mean for our previous benchmark results?

$ sysbench --test=cpu --cpu-max-prime=20000 run
Host:            35,3049 sec
Guest (before):  36,5582 sec
Guest (now):     35,6416 sec

$ sysbench --test=fileio --file-total-size=6G --file-test-mode=rndrw --max-time=300 --max-requests=0 --file-extra-flags=direct run
Host:            1,8424 MB/sec
Guest (before):  1,5591 MB/sec
Guest (now):     1,5912 MB/sec

$ sysbench --test=memory --memory-block-size=1M --memory-total-size=10G run
Host:            3959,78 MB/sec
Guest (before)   3079,29 MB/sec
Guest (now):     3821,89 MB/sec

$ sysbench --test=threads --num-threads=128 --max-time=10s run
Host:            9765 executions
Guest (before):   512 executions
Guest (now):      529 executions

So we notice that this small change has nice effects on some of the tests. The CPU benchmark improves from 3,55% overhead to 0,95%; fileio stays about the same (from 15,38% to 13,63%); memory improves from 22,24% overhead to 3,48% and threads remains about status quo (from 94,76% slower to 94,58%).
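These percentages follow from a simple relative-overhead calculation; here is a small sketch (not part of sysbench) that reproduces them, taking duration-based overhead as (guest - host) / host and throughput-based overhead as (host - guest) / host:

```shell
#!/bin/sh
# Relative overhead of the guest compared to the host, in percent.
# For durations (lower is better): (guest - host) / host.
time_overhead() { awk -v h="$1" -v g="$2" 'BEGIN { printf "%.2f%%\n", (g - h) / h * 100 }'; }
# For throughput or execution counts (higher is better): (host - guest) / host.
rate_overhead() { awk -v h="$1" -v g="$2" 'BEGIN { printf "%.2f%%\n", (h - g) / h * 100 }'; }

time_overhead 35.3049 36.5582   # cpu, before: 3.55%
time_overhead 35.3049 35.6416   # cpu, now: 0.95%
rate_overhead 3959.78 3079.29   # memory, before: 22.24%
rate_overhead 3959.78 3821.89   # memory, now: 3.48%
rate_overhead 9765 512          # threads, before: 94.76%
rate_overhead 9765 529          # threads, now: 94.58%
```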

That doesn’t mean that the VM is now suddenly faster or better than before – what we changed is how long a certain time measurement takes, a measurement which the benchmark software itself uses rigorously. This goes to show how important it is to

  1. fully understand how the benchmark software works and what it measures
  2. realize the importance of having access to the source code
  3. know that performance benchmarks give figures, but do not tell you how your users will experience the system

That’s it for the sysbench benchmark for now (the MySQL part will need to wait until a later stage).

In the previous post, I gave some feedback on the cpu and fileio workload tests that sysbench can handle. Next on the agenda are the memory, threads and mutex workloads.

When using the memory workload, sysbench will allocate a buffer (sized through the --memory-block-size parameter, defaulting to 1 kbyte) and each execution will read from or write to this memory (--memory-oper, defaults to write) in a random or sequential manner (--memory-access-mode, defaults to sequential).

$ sysbench --test=memory --memory-block-size=1M --memory-total-size=10G run
Host throughput, 1M:  3959,78 MB/sec
Guest throughput, 1M: 3079,29 MB/sec

The guest has a lower throughput (about 77% of the host), which is lower than what most online posts about KVM performance suggest. We’ll get back to that later. Let’s look at the default block size of 1k, meaning that the benchmark will do a lot more executions before it reaches the total memory size:

$ sysbench --test=memory --memory-total-size=1G run
Host throughput, 1k:  1702,59 MB/sec
Guest throughput, 1k:   23,67 MB/sec

This is a lot worse: the guest’s throughput is only 1,4% of the host’s! The qemu-kvm process on the host is also taking up a lot of CPU.

Now let’s take a look at the other workload, threads. In this particular workload, you identify the number of threads (--num-threads), the number of locks (--thread-locks) and the number of times a thread should run its ‘lock-yield..unlock’ workload (--thread-yields). The more locks you identify, the fewer threads will share the same lock (each thread is allocated a single lock during an execution, but every new execution will give it a new lock, so the threads do not always take the same lock).

Note that parts of this are also handled by the other tests: mutexes are used when a new operation (execution) for the thread is prepared. In the case of the memory-related workload above, the smaller the buffer size, the more frequent thread operations are needed. In the last run we did (with the bad performance), millions of operations were executed (although no yields were performed). Something similar can be simulated using a single lock, a single thread, a very high number of operations and no yields:

$ sysbench --test=threads --num-threads=1 --thread-yields=0 --max-requests=1000000 --thread-locks=1 run
Host runtime:    0,3267 s  (event:    0,2278)
Guest runtime:  40,7672 s  (event:   30,6084)

This means that the guest “throughput” problems from the memory identified above seem to be related to this rather than memory-specific regressions. To verify if the scheduler itself also shows regressions, we can run more threads concurrently. For instance, running 128 threads simultaneously, using the otherwise default settings, during 10 seconds:

$ sysbench --test=threads --num-threads=128 --max-time=10s run
Host:   9765 executions (events)
Guest:   512 executions (events)

Here we get only 5% throughput.

Let’s focus on the mutex again, as sysbench has an additional mutex workload test. In this workload, each thread runs a local fast loop (simple increments, --mutex-loops), after which it takes a random mutex (one of --mutex-num), locks it, increments a global variable and then releases the mutex again. This is repeated for the number of locks identified (--mutex-locks). If mutex operations were the cause of the performance issues above, this workload would show a major performance regression on my system.
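In the same pseudocode style as the earlier workloads, and going by the description above, a single-thread run roughly does the following:

```
(start event timer)
repeat mutex-locks times:
  loop mutex-loops times:
    local_variable++          (fast, lock-free loop)
  m = random mutex out of mutex-num mutexes
  lock(m)
  global_variable++
  unlock(m)
(stop event timer)
```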

Let’s run that workload with a single thread (default), no loops and a single mutex.

$ sysbench --test=mutex --mutex-num=1 --mutex-locks=50000000 --mutex-loops=1 run
Host (duration):   2600,57 ms
Guest (duration):  2571,44 ms

In this example, we see that the mutex operations on the guest run at almost the same speed (99%) as on the host, so pure mutex operations are not likely to be the cause of the performance regressions earlier on. So what does cause the performance problems? Well, that investigation will be for the third and last post in this series ;-)

April 18, 2013
Sven Vermeulen a.k.a. swift (homepage, bugs)
Another Gentoo Hardened month has passed (April 18, 2013, 21:36 UTC)

Another month has passed, so time to mention again what we have all been doing lately ;-)


Version 4.8 of GCC is available in the tree, but currently masked. The package contains a fix needed to build hardened-sources, and a fix for asan (the address sanitizer). asan support in GCC 4.8 might be seen as an improvement security-wise, but it is yet unclear whether it is an integral part of GCC or can be disabled with a configure flag. Apparently, asan "makes building gcc 4.8 crazy". Seeing that it comes from Google, and building Google Chromium is also crazy, I start seeing a pattern here.

Anyway, it turns out that PaX/grSec and asan do not get along yet (ASAN assumes/uses hardcoded userland address space size values, which breaks when UDEREF is set, as UDEREF shaves a bit off that size):

ERROR: AddressSanitizer failed to allocate 0x20000001000 (2199023259648) bytes at address 0x0ffffffff000

Given that this is hardcoded in the resulting binaries, it isn’t sufficient to change the size value from 47 bits to 46 bits, as hardened systems can very well boot one kernel with and another kernel without UDEREF, causing the binaries to fail on the other kernel. Instead, a proper method would be to dynamically check the size of the userland address space.

However, GCC 4.8 also brings along some nice enhancements and features. uclibc profiles work just fine with GCC 4.8, including armv7a and mips/mipsel. The latter is especially nice to hear, since mips used to require significant effort with previous GCCs.

Kernel and grSecurity/PaX

More recent kernels have now been stabilized to stay close to the grSecurity/PaX upstream developments. The most recent stable kernel now is hardened-sources-3.8.3. Others still available are hardened-sources versions 3.2.40-r1 and 2.6.32-r156.

The support for XATTR_PAX is still progressing, but a few issues have come up. One is that non-hardened systems are seeing warnings about pax-mark not being able to set the XATTR_PAX marking on tmpfs, since vanilla kernels do not have the patch to support the user.* extended attribute namespace for tmpfs. A second issue is that the install application, as provided by coreutils, does not copy extended attributes. This has an impact on ebuilds where pax markings are done before the install phase of a package. But only doing pax markings after the install phase isn’t sufficient either, since sometimes we need the binaries to be marked already for the test phase or even in the compile phase. So this is still something on the near horizon.

Most likely the necessary tools will be patched to include extended attributes on copy operations. However, we need to take care only to copy over those attributes that make sense: user.pax does, but security ones like security.evm and security.selinux shouldn’t, as those are either recomputed when needed or governed through policy. The idea is that USE="pax_kernel" will enable the above on coreutils.


The SELinux support in Gentoo has seen a fair share of updates on the userland utilities (like policycoreutils, setools, libselinux and such). Most of these have already made the stable tree or are close to being stabilized. The SELinux policy has also been updated a lot: most changes can be tracked through bugzilla, looking for the sec-policy r13 whiteboard. The changes can be applied to the system immediately if you use the live ebuilds (like selinux-base-9999), but I’m planning on releasing revision 13 of our policy set soon.

System Integrity

Fixes for some of the “early adopter” problems we’ve noticed on Gentoo Hardened have been integrated in the upstream repositories and are slowly progressing towards the main Linux kernel tree.


All hardened profiles have been moved to the 13.0 base. Some people frowned when they noticed that the uclibc profiles do not inherit from any architecture-related profile. This is however with reason: the architecture profiles are (amongst other things) focusing on the glibc specifics of the architecture. Since the profiles intended here are for uclibc, those changes are not needed (nor wanted). Hence, these are collapsed into a single profile.


For SELinux, the SELinux handbook now includes information about USE="unconfined" as well as the selinux_gentoo init script as provided by policycoreutils. Users who are already running with SELinux enabled can just look at the Change History to see which changes affect them.

A set of tutorials (which I’ve blogged about earlier as well) has been put online at the Gentoo Wiki. Next to the SELinux tutorials, an article pertaining to AIDE has been added as well, as it fits nicely within the principles/concepts of the System Integrity subproject.


If you don’t do it already, start following @GentooHardened ;-)

Andreas K. Hüttel a.k.a. dilfridge (homepage, bugs)

Bitrot is accumulating, and while we've tried to keep kdepim-4.4 running in Gentoo as long as possible, the time is slowly coming to say goodbye. In effect this is triggered by annoying problems like these:

There are probably many more such bugs around, where incompatibilities between kdepim-4.4 and kdepimlibs of more recent releases occur, or where other software updates have led to problems. Slowly it's getting painful, and definitely more painful than running a recent kdepim-4.10 (which has, in my opinion, improved quite a lot over the last major releases).
Please be prepared for the following steps:
  • end of April 2013, all kdepim-4.4 packages in the Gentoo portage tree will be package.masked 
  • end of May 2013, all kdepim-4.4 packages in the Gentoo portage tree will be removed
  • afterwards, we will finally be able to simplify the eclasses a lot by removing the special handling
We still have the kdepim-4.7 upgrade guide around, and it also applies to the upgrade from kdepim-4.4 to any later version. Feel free to improve it or suggest improvements.

R.I.P. kmail1.

Alexys Jacob a.k.a. ultrabug (homepage, bugs)
py3status v0.9 (April 18, 2013, 17:41 UTC)

First of all, py3status is on pypi! You can now install it with the simple and usual:

$ pip install py3status

This new version features my first pull request, from @Fandekasp, who kindly wrote a pomodoro module which helps adepts of this technique by having a counter on their bar. I also fixed a few glitches on module injection and some documentation.

Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)

I’ve been asked over on Twitter if I had an easy one-stop-shop tutorial for Autotools newbies… the answer was no, but I will try to make up for it by writing this post.

First of all, with the name autotools we refer to quite a few different tools. If you have a very simple program (not hellow-simple, but still simple), you definitely want to use at the very least two: autoconf and automake. While you could use the former without the latter, you really don’t want to. This means that you need two files: configure.ac and Makefile.am.

The first of the two files (configure.ac) is processed to produce a configure script which the user will be executing at build time. It is also the bane of most people because, if you look at one for a complex project, you’ll see lots of content (and logic) and next to no comments on what things do. Lots of it is cargo-culting and I’m afraid I cannot help but just show you a possible basic configure.ac file:

AC_INIT([myproject], [123], [], [])
AM_INIT_AUTOMAKE([foreign no-dist-gz dist-xz])

AC_PROG_CC

AC_CONFIG_FILES([Makefile])
AC_OUTPUT
Let me explain. The first two lines are used to initialize autoconf and automake respectively. The former is being told the name and version of the project, the place to report bugs, and a URL for the package to use in documentation. The latter is told that we’re not a GNU project (seriously, this is important — you wouldn’t believe how many tarballs I find with 0-sized files just because they are mandatory in the default GNU layout; even though I found at least one crazy package lately that wanted to have a 0-sized NEWS file), and that we want a .tar.xz tarball and not a .tar.gz one (which is the default).

After initializing the tools, you need to, at the very least, ask for a C compiler. You could have asked for a C++ compiler as well, but I’ll leave that as an exercise to the reader. Finally, you have to tell it to output Makefile (it’ll use Makefile.in, but we’ll create a Makefile.am instead soon).

To build a program, you then need to create a Makefile.am similar to this:

bin_PROGRAMS = hellow

dist_doc_DATA = README

Here we’re telling automake that we have a program called hellow (whose sources are by default hellow.c) which has to be installed in the binary directory, and a README file that has to be distributed in the tarball and installed as a documentation piece. Yes, this is really enough as a very basic Makefile.am.

If you were to have two programs, hellow and hellou, and a convenience library between the two you could do it this way:

bin_PROGRAMS = hellow hellou

hellow_SOURCES = src/hellow.c
hellow_LDADD = libhello.a

hellou_SOURCES = src/hellou.c
hellou_LDADD = libhello.a

noinst_LIBRARIES = libhello.a
libhello_a_SOURCES = lib/libhello.c lib/libhello.h

dist_doc_DATA = README

But then you’d have to add AC_PROG_RANLIB to the configure.ac calls. My suggestion is that if you want to link things statically and it’s just one or two files, just go for building them twice… it can actually make the build faster (one less serialization step) and with the new LTO options it should very well improve the optimization as well.
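For completeness, the “build it twice” alternative would, as a sketch, drop the convenience library and list the shared sources in both programs:

```make
bin_PROGRAMS = hellow hellou

hellow_SOURCES = src/hellow.c lib/libhello.c lib/libhello.h
hellou_SOURCES = src/hellou.c lib/libhello.c lib/libhello.h

dist_doc_DATA = README
```

No AC_PROG_RANLIB is needed in this variant, since no static archive is created.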

As you can see, this is really easy when done at the basic level… I’ll keep writing a few more posts with easy solutions, and probably next week I’ll integrate all of this into Autotools Mythbuster and update the ebook with an “easy how to” as an appendix.

Alexys Jacob a.k.a. ultrabug (homepage, bugs)
mongoDB v2.4.2 released (April 18, 2013, 10:53 UTC)

After the security-related bumps of the previous releases in the last few weeks, it was about time 10gen released a 2.4.x version fixing the following issues:

  • Fix for upgrading sharded clusters
  • TTL assertion on replica set secondaries
  • Several V8 memory leak and performance fixes
  • High volume connection crash

I guess everything listed above would have affected our cluster at work, so I’m glad we’ve been patient in following up on this release :) See the changelog for details.

Jeremy Olexa a.k.a. darkside (homepage, bugs)
I’ve been in Australia for two months (April 18, 2013, 08:05 UTC)

Well, the title says it. I’ve now been here for two months. I’m working at Skydive Maitland, which is 40 minutes from the coast and 2+ hours from Sydney. So far, I’ve broken even on my Australian travel/living expenses AND I’m skydiving 3-4 days a week; what could be better? I did 99 jumps in March; normally I do 400 per year. Australia is pretty nice: it is easy to live here and there is plenty to see, but it is hard to get places since the country is so big, and I need a few days’ break to go someplace.

How did I end up here? I knew I would go to Australia at some point during my trip since I would be passing by and it is a long way from home. (Sidenote: of all the travelers at hostels in Europe, about 40-50% that I met were Aussie.) In December, I bought my right to work in Australia by getting a working holiday visa. That required $270 and 10 minutes to fill out a form on the internet; overnight I had my approval. So, that was settled: I could now work for 12 months in Australia and show up there within a year. I knew I would be working in Australia because it is a rather expensive country to live/travel in. I thought about picking fruit in an orchard since they always hire backpackers, but skydiving sounded more fun in the end (of course!). So, in January, I emailed a few dropzones stating that I would be in Australia in the near future and looking for work. Crickets… I didn’t hear back from anyone. Fair enough, most businesses will have adequate staffing in the middle of the busy season. But one place did get back to me some weeks later. Then it took one Skype convo to come to a friendly agreement, and I was looking for flights after. Due to some insane price scheming, there was one flight in two days that was 1/2 the price of the others (thank you!). That sealed my decision, and I was off…

Looking onward: full-time instructor for March and April, then part-time in May and June so I can see more of Australia. I have a few road trips in the works; I just need my own vehicle to make that happen. Working on it. After Australia, I’m probably going to Japan or SE Asia like I planned.

Since my sister already asked: yes, I do see kangaroos nearly every day.

Alexys Jacob a.k.a. ultrabug (homepage, bugs)
San Francisco : streets (April 18, 2013, 06:02 UTC)


April 17, 2013
Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)
Bundling libraries for trouble (April 17, 2013, 12:01 UTC)

You might remember that I’ve been very opinionated against bundling libraries and, to a point, static linking of libraries for Gentoo. My reasons have been mostly geared toward security, but there have been a few more instances I wrote about of problems with bundled libraries and stability, for instance the moment when you get symbol collisions between a bundled library and a different version of said library used by one of the dependencies, like that one time in xine.

But there are other reasons why bundling is bad in most cases, especially for distributions, and it’s much worse than just statically linking everything. Unfortunately, while all the major distributions have, as far as I know, a policy against bundled (or even statically linked) libraries, there are very few people speaking against them outside of distribution circles.

One such rare gem came from Steve McIntyre a few weeks ago, and it actually makes two different topics I wrote about meet in a quite interesting way. Steve worked on finding which software packages make use of CPU-specific assembly code for performance-critical code, which would have to be ported for the new 64-bit ARM architecture (AArch64). And this has mostly reminded me of x32.

In many ways, there are so many problems in common between AArch64 and x32, and they mostly gear toward the fact that in both cases you have an architecture (or ABI) that is very similar to a known, well-understood architecture but is not identical. The biggest difference, apart from the implementations themselves, is in the way the two have been conceived: as I said before, Intel’s public documentation for the ABI’s inception noted explicitly the way that it was designed for closed systems, rather than open ones (the definition of open or closed system has nothing to do with open- or closed-source software, and has to be found more in the expectations of what the users will be able to add to the system). The recent stretching of x32 onto open system environments is, in my opinion, not really a positive thing, but if that’s what people want …

I think Steve’s report is worth a read for those who are interested to see what it takes to introduce a new architecture (or ABI). In particular, for those who maintained before that my complaining about x32 breaking assembly code all over the place was a moot point: people with a clue on how GCC works know that sometimes you cannot get away with its optimizations, and you actually need to handwrite code; at the same time, as Steve noted, sometimes the handwritten code is so bad that you should drop it and move back to plain compiled C.

There is also a visible amount of software where the handwritten assembly gets imported due to bundling and direct inclusion… this tends to be relatively common because handwritten assembly is usually tied to performance-critical code… which for many is the same code you bundle because a dynamic link is “not fast enough” — I disagree.

So anyway, give a read to Steve’s report, and then compare with some of the points made in my series of x32-related articles and tell me if I was completely wrong.

April 16, 2013
Alexys Jacob a.k.a. ultrabug (homepage, bugs)
San Francisco : chinatown (April 16, 2013, 20:41 UTC)


Jeremy Olexa a.k.a. darkside (homepage, bugs)
Sri Lanka in February (April 16, 2013, 06:16 UTC)

I wrote about how I ended up in Sri Lanka in my last post, here. I came down with a GI sickness during my second week, from a bad meal or water, and it spoiled the last week that I was there; but I had my own room, a bathroom, a good book, and a resort on the beach. Overall, the first week was fun: teaching English, living in a small village and being immersed in the culture while staying with a host family. Hats off to volunteers that can live there long term. I was craving “western culture” after a short time. I didn’t see as much as I wanted to, like the wild elephants, Buddhist temples or surf lessons. There will be other places or times to do that stuff though.

Sri Lanka pics

April 15, 2013
Paweł Hajdan, Jr. a.k.a. phajdan.jr (homepage, bugs)

It is now over a week since the announcement of Blink, a rendering engine for the Chromium project.

I hope it is useful to provide links to the best articles about it, those with good, technical content.

Thoughts on Blink from HTML5 Test is a good summary of the history of Chrome and WebKit, and puts this recent announcement in context. For even more context (nothing about Blink) you can read Paul Irish's excellent WebKit for Developers post.

Peter-Paul Koch (probably best known for has good articles about Blink: Blink and Blinkbait.

I also found it interesting to read Krzysztof Kowalczyk's Thoughts on Blink.

Highly recommended Google+ posts by Chromium developers:

If you're interested in the technical details or want to participate in the discussions, why not follow blink-dev, the mailing list of the project?

Gentoo at FOSSCOMM 2013 (April 15, 2013, 19:03 UTC)

What? FOSSCOMM 2013

Free and Open Source Software COMmunities Meeting (FOSSCOMM) 2013

When? 20th, April 2013 - 21st, April 2013

Where? Harokopio University, Athens, Greece


FOSSCOMM 2013 is almost here, and Gentoo will be there!

We will have a booth with Gentoo promo stuff: stickers, flyers, badges, live DVDs and much more! Whether you're a developer, a user, or simply curious, be sure to stop by. We are also going to represent Gentoo in a round table with other FOSS communities. See you there!

Pavlos Ratis contributed the draft for this announcement.

Rolling out systemd (April 15, 2013, 10:43 UTC)


We started to roll out systemd today.
But don’t panic! Your system will still boot with openrc and everything is expected to be working without troubles.
We are aiming to support both init systems, at least for some time (a long time, I believe), and having systemd replace udev (note: systemd is a superset of udev) is a good way to make systemd users happy in Sabayon land. From my testing, the slowest part of the boot is now the genkernel initramfs, in particular the modules autoload code, which, as you may expect, I’m going to try to improve.

Please note that we are not willing to accept systemd bugs yet, because we’re still fixing up service units and adding the missing ones, the live media scripts haven’t been migrated and the installer is not systemd aware. So, please be patient ;-)

Having said this, if you are brave enough to test systemd out, you're in luck: in Sabayon it's just two commands away, thanks to eselect-sysvinit and eselect-settingsd. And since I expect those brave people to know how to use eselect, I won't waste more time on them now.

April 14, 2013
Alexys Jacob a.k.a. ultrabug (homepage, bugs)
py3status v0.8 (April 14, 2013, 20:33 UTC)

I went on a coding frenzy to implement most of the stuff I was not happy with in py3status so far. Here comes py3status, code name: San Francisco (more photos to come).
San Francisco


I always had the habit of using tabs to indent my code. @Lujeni pointed out that this is not the PEP 8 recommended method and that we should start respecting more of it in the near future. Well, he's right, and I guess it was time to move on, so I switched to using spaces and corrected a lot of other coding style issues, which took my code from a pylint score of around -1/10 to around 9.5/10!

Threaded modules’ execution

This was the major thing I was not happy with: when a user-written module was executed for injection into the bar, the time it took to get its response would cause py3status to stop updating the bar. This means that if you had to make a database call to get something you needed displayed on the bar, and it took 10 seconds, py3status stopped refreshing for those 10 seconds! This behavior could delay the clock's ticking, for example.

I decided to solve this problem by offloading all of the modules' detection and execution to a thread. To be frank, this also helped rationalize the code. No more delays and cleaner handling is what you get: modules' output will append itself to the bar whenever it is ready, whatever the time it takes to compute!
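The idea can be sketched roughly like this (a toy illustration of the approach, not py3status' actual code; all the names here are made up):

```python
import queue
import threading
import time

# Each user module runs in its own thread and appends its result to a shared
# queue when done, so a slow module never blocks the bar's update loop.
results = queue.Queue()

def run_module(name, func):
    """Execute one user module and enqueue its output whenever it finishes."""
    results.put((name, func()))

def slow_module():
    time.sleep(0.2)  # stands in for a slow database call
    return "db: ok"

def clock_module():
    return "tick"

threads = [
    threading.Thread(target=run_module, args=("slow", slow_module)),
    threading.Thread(target=run_module, args=("clock", clock_module)),
]
for t in threads:
    t.start()

# The fast module's result is available immediately; the slow one appends
# itself later, whatever the time it takes to execute.
first = results.get(timeout=1)
second = results.get(timeout=1)
print(first[0], second[0])
```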


It was about time the examples shipped with py3status also worked with Python 3.

Andreas K. Hüttel a.k.a. dilfridge (homepage, bugs)

For a long time, I've been extraordinarily happy with both NVIDIA graphics hardware and the vendor-supplied binary drivers. Functionality, stability, speed. However, things are changing and I'm frustrated. Let me tell you why.

Part of my job is teaching and giving presentations. I have a trusty thinkpad with a VGA output which can in principle supply just about every projector with a decent signal. Most of these projectors do not display the native 1920x1200 resolution of the built-in display. This means that if you configure the second display to clone the first, you will end up seeing only part of the screen. In the past, I solved this by using nvidia-settings to set the display to a lower resolution supported by the projector (nvidia-settings told me which ones I could use) and then letting it clone things. Not so elegant, but everything worked fine, and this amount of fiddling is still something that can be done at the front of a seminar room while someone is introducing you and the audience gets impatient.

Now consider my surprise when, suddenly after a driver upgrade, the built-in display was completely glued to the native resolution. Only setting possible: 1920x1200. The first time I saw that I was completely clueless about what to do; starting the talk took a bit longer than expected. A simple but completely crazy solution exists: disable the built-in display and only enable the projector output. Then your X session is displayed there and resized accordingly. You'll have to look at the silver screen while talking, but that's not such a problem. A bigger pain actually is that you may have to leave the podium in a hurry and then have no video output at all...

Now, googling. Obviously a lot of other people have the same problem. Hacks like this one just don't work; I ended up with nice random screen distortions. Here's a thread on the NVIDIA devtalk forum from which I can quote: "The way it works now is more "correct" than the old behavior, but what the user sees is that the old way worked and the new does not." It seems that NVIDIA now expects each application to handle any mode switching internally. My use case does not even exist from their point of view. Here's another thread, and in general users are not happy about it.

Finally, I found this link where the following reply is given: "The driver supports all of the scaling features that older drivers did, it's just that nvidia-settings hasn't yet been updated to make it easy to configure those scaling modes from the GUI." Just great.

Gentlemen, this is a serious annoyance. Please fix it. Soon. Not everyone is willing to read up on xrandr command line options and fiddle with ViewPortIn, ViewPortOut, MetaModes and other technical stuff. Especially while the audience is waiting.
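For the record, the kind of fiddling the driver now expects looks something like the following (an untested sketch based on my reading of the driver documentation; the output names DVI-I-1 and VGA-0 and the exact scaling syntax are assumptions, so check the README for your driver version):

```
# Hypothetical xorg.conf.d snippet: clone the projector output while the
# 1920x1200 panel scales a 1024x768 desktop via ViewPortIn.
Section "Screen"
    Identifier "Screen0"
    Option "MetaModes" "DVI-I-1: 1920x1200 { ViewPortIn=1024x768 }, VGA-0: 1024x768"
EndSection
```

Which is exactly the kind of thing nobody wants to type while the audience is waiting.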

April 13, 2013
Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)
So it starts, my time in Ireland (April 13, 2013, 19:58 UTC)

As of today it makes a full week that I have survived my move to Dublin. Word's out on who my new employer is (but as usual, since this blog is personal and should not be tied to my employer, I'm not even going to name it), and I started the introductory courses. One thing I can be sure of: I will be eating healthily and compatibly with my taste. Thankfully, chicken, especially spicy chicken, seems to be available everywhere in Ireland, yay!

I have spent almost all my life in Venice, never staying away from it for long periods of time, with the exception of last year, most of which, as you probably know, I spent in Los Angeles. 2012 was a funny year like that: I had never partied for the new year, but on 31st December 2011 I was at a friend's place with friends, after which some of us ended up leaving at around 3am... and for the first time in my life I slept on a friend's couch. Then it was time for my first ever week-long vacation, with the same group of friends, in the Venetian Alps.

With this premise, it's obvious that Dublin looks a bit alien to me. It helps that I've spent a few weeks in London over the past years, so I was already used to at least a few customs that are shared between the British and the Irish (they probably don't like to be reminded that they share some customs with the British, but there it is). Still, it's definitely more similar to Italy than Los Angeles.

Funny episode of the day: I went to Boots and, after searching the aisles for a while, asked one of the workers whether they kept hydrogen peroxide, which I used almost daily both in Italy and the US as a disinfectant (I cut or scrape very easily). After being looked at in a very strange way, I was informed that it can no longer be sold in Ireland... I'd guess it has something to do with its use in the London bombings of '05. Luckily they didn't call the police.

I have to confess, though, that I like the restaurants in the touristy, commercial areas better than those in the upscale, modern new districts. I love Nando's for instance, which is nowhere near Irish, but I love its spiciness (and this time around I could buy the freaking salt!). Most pubs also have very good chicken.

I still don't have a permanent place, though. I need to look for one soonish, I suppose, but the job introduction took priority for the moment. Although, if the guests in the next apartment are going to throw another party at 4.30am, I might decide to find something sooner rather than later.

Andreas K. Hüttel a.k.a. dilfridge (homepage, bugs)

GnuPG is an excellent tool for encryption and signing. However, while breaking encryption or forging signatures of large key size is likely somewhere between painful and impossible even for agencies on a significant budget, all of this is only ever as safe as your private key. Let's insert the obvious semi-relevant xkcd reference here; someone hacking your computer, installing a keylogger and grabbing the key file is far more likely. While there are no preventive measures that work against all conceivable attacks, you can at least make things as hard as possible. Be smart, use a smartcard. You'll get a number of additional bonuses along the way. I'm writing up my personal experiences here as a kind of guide. I am also picking a compromise between ultra-security and convenience, so please do not complain if you find guides on the web on how to do things "better".

The smart cards

Obviously, you will need one or more OpenPGP-compatible smart cards and a reader device. I ordered my cards from kernel concepts, since that shop is referenced in the GnuPG smartcard howto. These are the cards developed by g10code, which is Werner Koch's company (he is the principal author of GnuPG). The website says "2048bit RSA capable", the text printed on the card says "3072bit RSA capable", but at least the currently sold cards support 4096bit RSA keys just fine. (You will need at least app-crypt/gnupg-2.0.19-r2 for encryption keys bigger than 3072bit; see this link and this portage commit.)

The readers

While the GnuPG smartcard howto provides a list of supported reader devices, that list (and indeed the whole document) is a bit stale. The best source of information that I found was the page on the Debian Wiki; Yutaka Niibe, who edits that page regularly, is also one of the code contributors to the smartcard part of GnuPG. In general there are two types of readers, those with a stand-alone pinpad and those without. The extra pinpad ensures that, for normal operations like signing and encryption, the PIN for unlocking the keys never enters the computer itself, so without tampering with the reader hardware it is pretty hard to sniff. I bought an SCM SPG532 reader, one of the first devices ever supported by GnuPG; however, it's not produced anymore and you may have to resort to newer models soon.

Drivers and software

Now you'll want to activate the USE flags "smartcard" and maybe "pkcs11", and rebuild app-crypt/gnupg. Afterwards, you may want to log out and back in again, since you may need the gpg-agent from the new build.
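Concretely, the flag change follows the usual Portage conventions (a sketch; adjust the flag set to your needs):

```
# /etc/portage/package.use
app-crypt/gnupg smartcard pkcs11
```

followed by a one-shot rebuild: emerge -1 app-crypt/gnupg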
Several different standards for card reader access exist. One of them is the USB standard for integrated circuit card interface devices, CCID for short; the driver for that one is built directly into GnuPG, and the SCM SPG532 is such a device. Another set of drivers is provided by sys-apps/pcsc-lite; GnuPG will use those if the built-in support fails, but they require a daemon to be running (pcscd; just add it to the default runlevel and start it). The page on the Debian Wiki also lists the required drivers.
These drivers do not need much (or any) configuration and should in principle work out of the box. Testing is easy: plug in the reader, insert a card, and issue the command
gpg --card-status
If it works, you should see a message with (among other things) the manufacturer and serial number of your card. Otherwise, you'll just get an uninformative error. The first thing to check (especially for CCID) is whether the device permissions are OK; just repeat the above test as root. If you can see your card now, you know you have permission trouble.
Fiddling with the device file permissions was a serious pain, since all the online docs are hopelessly outdated. Please forget about the files linked in the GnuPG smartcard howto. (One cannot be found anymore; the other does not work on its own and tries to do things in unnecessarily complicated ways.) At some point I just gave up on things like user groups and told udev to hardwire the device to my user account, creating the following file as /etc/udev/rules.d/gnupg-ccid.rules:
ACTION=="add", SUBSYSTEM=="usb", ENV{PRODUCT}=="4e6/e003/*", OWNER:="huettel", MODE:="600"
ACTION=="add", SUBSYSTEM=="usb", ENV{PRODUCT}=="4e6/5115/*", OWNER:="huettel", MODE:="600"
With similar settings it should in principle be possible to solve all permission problems. (You will want to change the USB IDs and the OWNER for your needs.) Then, a quick
udevadm control --reload-rules
followed by unplugging and re-plugging the reader, and you should be able to check the contents of your card.
If you still have problems, check the following: to access the cards, GnuPG starts a background process, the smart card daemon (scdaemon). scdaemon tends to hang every now and then after a card is removed. Just kill it (it needs SIGKILL):
killall -9 scdaemon
and try accessing the card again afterwards; the daemon is restarted by GnuPG automatically. A lot of improvements to smart card handling are scheduled for gnupg-2.0.20; I hope this will be fixed as well.
Here's how a successful card status command looks on a blank card:
huettel@pinacolada ~ $ gpg --card-status
Application ID ...: D276000124010200000500000AFA0000
Version ..........: 2.0
Manufacturer .....: ZeitControl
Serial number ....: 00000AFA
Name of cardholder: [not set]
Language prefs ...: de
Sex ..............: unspecified
URL of public key : [not set]
Login data .......: [not set]
Signature PIN ....: forced
Key attributes ...: 2048R 2048R 2048R
Max. PIN lengths .: 32 32 32
PIN retry counter : 3 0 3
Signature counter : 0
Signature key ....: [none]
Encryption key....: [none]
Authentication key: [none]
General key info..: [none]
huettel@pinacolada ~ $

That's it for now, part 2 will be about setting up the basic card data and gnupg functions, then we'll eventually proceed to ssh and pam...

April 11, 2013
Arun Raghavan a.k.a. ford_prefect (homepage, bugs)
PulseAudio in GSoC 2013 (April 11, 2013, 11:34 UTC)

That's right: PulseAudio will be participating in the Google Summer of Code again this year! We had a great set of students and projects last year, and you've already seen some of their work in the last release.

There are some more details on how to get involved on the mailing list. We’re looking forward to having another set of smart and enthusiastic new contributors this year!

p.s.: Mentors and students from organisations (GStreamer and BlueZ, for example), do feel free to get in touch with us if you have ideas for projects related to PulseAudio that overlap with those other projects.

April 10, 2013
Patrick Lauer a.k.a. bonsaikitten (homepage, bugs)
GCC 4.8 - building everything? (April 10, 2013, 13:49 UTC)

The last few days I've spent a few hundred CPU-hours building things with gcc 4.8. So far, alphabetically up to app-office/, it's been really boring.
The number of failing packages is definitely lower than with 4.6 or 4.7. And most of the current troubles are unrelated; see, for example, the whole info page generation madness.
At the current rate of filing and fixing bugs we should be able to unleash this new version on the masses really soon - maybe in about a month? (Or am I just too optimistic?)

Denis Dupeyron a.k.a. calchan (homepage, bugs)
Forking ebuilds (April 10, 2013, 00:14 UTC)

Here’s a response to an email thread I sent recently. This was on a private alias but I’m not exposing the context or quoting anybody, so I’m not leaking anything but my own opinion which has no reason to be secret.

GLEP39 explicitly states that projects can be competing. I don’t see how you can exclude competing ebuilds from that since nothing prevents anybody from starting a project dedicated to maintaining an ebuild.

So, if you want to prevent devs from pushing competing ebuilds to the tree you have to change GLEP 39 first. No arguing or “hey all, hear my opinion” emails on whatever list will be able to change that.

Some are against forking ebuilds and object to the duplication of effort and the lack of manpower. I will bluntly declare those people shortsighted. Territoriality is exactly what prevents us from getting more manpower. Say I'm interested in improving package X, but developer A who maintains it is an ass and won't yield on anything. At best I'll just fork it in an overlay (with all the issues that having a package in an overlay entails, i.e. no QA, it'll die pretty quickly, etc…); at worst I'm moving to Arch, or Exherbo, or elsewhere… What have we gained by not duplicating effort? We have gained negative manpower.

As long as forked ebuilds can cohabit peacefully in the tree, using, say, a virtual (note: I'm not talking about the devs here but about the packages), we should see them as progress. Gentoo is about choice. Let consumers, i.e. users and devs depending on the ebuild in various ways, have that choice. They'll quickly make it known which one is best, at which point the failing ebuild will just die by itself. Let me say it again: Gentoo is about choice.

If it ever happened that devs of forked ebuilds could not cohabit peacefully on our lists or channels, then I would consider that a deliberate intention of not cooperating. As with any deliberate transgression of our rules if I were devrel lead right now I would simply retire all involved developers on the spot without warning. Note the use of the word “deliberate” here. It is important we allow devs to make mistakes, even encourage it. But we are adults. If one of us knowingly chooses to not play by the rules he or she should not be allowed to play. “Do not be an ass” is one of those rules. We’ve been there before with great success and it looks like we are going to have to go there again soon.

There you have it. You can start sending me your hate mail in 3… 2… 1…

April 09, 2013
Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)
So there, I'm in Ireland (April 09, 2013, 21:50 UTC)

Just wanted to let everybody know that I'm in Ireland; I landed at Dublin Airport on Saturday and have been roaming around the city for a few days now. Time seems to be running faster than usual, so I haven't had much time to work on Gentoo stuff.

My current plan is to work, by the end of the week, on a testing VM, as there's an LVM2 bug that I owe Enrico a fix for, and possibly on the Autotools Mythbuster guide as well; there's work to do there.

But today, I’m a bit too tired to keep going, it’s 11pm… I’ll doze off!

April 08, 2013
What’s cookin’ on the BBQ (April 08, 2013, 16:27 UTC)

While Spring has yet to come here, the rainy days are giving me some time to think about the future of Sabayon and summarize what’s been done during the last months.


As far as I can see, donations are going surprisingly well. The foundation now has enough money (see the donations campaign) to guarantee 24/7 operations, new hardware purchases and travel expenses for several months. Of course, the more the better (paranoia mode on), but I cannot really complain, given that it's our sole source of funds. Here is a list of the stuff we've been able to buy during the last year (including prices; we're in the EU, and prices in the US are much lower, sigh):

  • one Odroid X2 (for Sabayon on ARM experiments) – 131€
  • one PandaBoard ES (for Sabayon on ARM experiments) – 160€
  • two 2TB Seagate Barracuda HDDs (one for Joost’s experiments, one for the Entropy tinderbox) – 185€
  • two 480GB Vertex3 OCZ SSDs for the Entropy tinderbox (running together with the Samsung 830 SSDs in a LVM setup) – 900€
  • one Asus PIKE 2008 SAS controller for the Entropy tinderbox – 300€
  • other 16GB of DDR3 for the Entropy tinderbox (now running with 64G) – 128€
  • @ maintenance (33€/mo for 1 year) – 396€
  • my personal FOSDEM 2013 travel expenses – 155€

Plus, travel expenses to data centers whenever there is a problem that cannot be fixed remotely. That’s more or less from 40€ to 60€ each depending on the physical distance.
As you may understand, this is just a part of the "costs", because the time donated by individual developers is not accounted for there, and I believe it's much more important than a piece of silicon.

monthly releases, entropy

Besides the money part, I spent the past months on Sabayon 11 (of course) and on advancing the automation agenda for 2013. Ideally, I would like to have stable releases automatically produced and tested monthly, and eventually pushed to mirrors. This required me to migrate to a different bittorrent tracker, one that scrapes a directory containing .torrents and publishes them automatically; you can see the outcome for yourself. Furthermore, a first, not yet advertised, set of monthly ISO images is available on our mirrors in the iso/monthly/ sub-directory. You can read more about them here. This may (eheh) indicate that the next Sabayon release will be versioned something like 13.05, who knows…
On the Entropy side, not much has changed besides the usual set of bug fixes, little improvements and the migration to an .ini-like repository configuration file syntax for both the Entropy Server and Client modules; see here. You may have started realizing that all the good things I do are communicated through the devel mailing list.

leh systemd

I spent a week working on a Sabayon systemd system to see how it works and performs compared to openrc. Long story short, I am about to arrange some ideas on making the systemd migration come true at some point in the (near) future. Joost and I are experimenting with a private Entropy repository (thus chroot) that's been migrated from openrc to systemd. While I don't want to start yet another flamewar about openrc vs systemd, I do believe in science, facts and benchmarks. Even though I don't really like the vertical architecture of systemd, I am starting to appreciate its features and, most importantly, its performance. The first thing I would like to sort out is being able to switch between systemd and openrc at runtime; this may involve the creation of an eselect module (trivial) and patching some ebuilds. I think that's the best thing to do if we really want to design and deploy a migration path for current openrc users (I would like to remind people that Gentoo is about choice, after all). If you're a Gentoo developer who hasn't been bugged by me yet and you're interested, feel free to drop a line to lxnay@g.o (expand the domain, duh!).

April 07, 2013
Michal Hrusecky a.k.a. miska (homepage, bugs)
FOSDEM 2013 & etc-update (April 07, 2013, 16:00 UTC)



I started writing this post after FOSDEM but never actually managed to finish it. As I plan to blog about something again "soon", I wanted to get this one out first. So let's start with FOSDEM: it is an awesome event, and every open source hacker is there unless they have some really strong reason not to come (like being dead, in prison or locked down in psychiatric care). I was there together with a bunch of openSUSE/SUSE folks. It was a lot of fun, and we even managed to get some work done during the event. So how was it?


We had a lot of fun on the way there already. You know, every year we rent a bus just for ourselves and go from Nuremberg to Brussels and back all together. And we talk and drink and talk and drink some more… So although it sounds crazy (an 8 hour drive), it's not as bad as it sounds.


What the hack is etc-update, and what does it have to do with me, openSUSE or FOSDEM? Isn't it a Gentoo tool? Yes, it is. It is a Gentoo tool (actually part of Portage, the Gentoo package manager) that is used in Gentoo to merge updates to configuration files. When you install a package, Portage is not going to overwrite the configuration files that you have spent days and nights tuning. It will create a new file with the new upstream configuration, and it is up to you to merge them. But you know, rpm does the same thing. In almost all cases rpm is not going to overwrite your configuration file, but will install the new one as config_file.rpmnew. And it is up to you to merge the changes. But that's no fun: searching for all the files, comparing them manually and choosing what to merge and how. And here comes etc-update to the rescue ;-)
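The scanning step the tool automates can be sketched in a few lines (a toy illustration using a throwaway directory, not the real tool's code):

```python
import pathlib
import tempfile

# Toy illustration of etc-update's scanning phase: find pending *.rpmnew
# files and pair each one with the configuration file it should merge into.
tmp = pathlib.Path(tempfile.mkdtemp())
(tmp / "ntp.conf").write_text("old ntp config\n")
(tmp / "ntp.conf.rpmnew").write_text("new ntp config\n")

pending = []
for new in sorted(tmp.glob("*.rpmnew")):
    orig = new.with_name(new.name[: -len(".rpmnew")])
    pending.append((new.name, orig.name))
    print(f"{new.name} needs merging into {orig.name}")
```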

How does it work? Simple. You need to install it (I will speak about that later) and run it. It's a command line tool and it doesn't need any special parameters. All you need to do is run etc-update as root (to actually be able to do something with these files). And the result?

# etc-update 
Scanning Configuration files...
The following is the list of files which need updating, each
configuration file is followed by a list of possible replacement files.
1) /etc/camsource.conf (1)
2) /etc/ntp.conf (1)
Please select a file to edit by entering the corresponding number.
              (don't use -3, -5, -7 or -9 if you're unsure what to do)
              (-1 to exit) (-3 to auto merge all files)
                           (-5 to auto-merge AND not use 'mv -i')
                           (-7 to discard all updates)
                           (-9 to discard all updates AND not use 'rm -i'):

What I usually do is select the configuration files I care about, review the changes and merge them somehow, and later just use -5 for everything else. It looks really simple, doesn't it? And in fact it is!

Somebody visiting our openSUSE booth at FOSDEM asked how to merge updates of configuration files. When I learned about that from Richard, we talked a little about how easy it would be to do something like that, and later, during one of the less interesting talks, I took this Gentoo tool, patched it to work on rpm distributions, and packaged it. Now it is in Factory, and it will be part of openSUSE 13.1 ;-) If you want to try it, you can get it either from my home project – home:-miska-:arm (even for x86 ;-) ) – or from the utilities repository.

Hope you will like it and that it will make many sysadmins happy ;-)

April 06, 2013
Alexys Jacob a.k.a. ultrabug (homepage, bugs)
py3status v0.7 (April 06, 2013, 15:30 UTC)

Some cool bugfixes have landed since v0.5, and py3status passed the 20 GitHub stars mark; I hope people are enjoying it.


  • clear the user class cache when receiving SIGUSR1
  • specify default folder for user defined classes
  • fix time transformation thx to @Lujeni
  • add Pingdom checks latency example module
  • fix issue #2 reported by @Detegr which caused the clock to drift on some use cases

April 04, 2013
Aaron W. Swenson a.k.a. titanofold (homepage, bugs)

If you’re using dev-db/postgresql-server, update now.

CVE-2013-1899 <dev-db/postgresql-server-{9.2.4,9.1.9,9.0.13}
A connection request containing a database name that begins
with "-" may be crafted to damage or destroy files within a server's data directory.

CVE-2013-1900 <dev-db/postgresql-server-{9.2.4,9.1.9,9.0.13,8.4.17}
Random numbers generated by contrib/pgcrypto functions may be easy for another
database user to guess

CVE-2013-1901 <dev-db/postgresql-server-{9.2.4,9.1.9}
An unprivileged user can run commands that could interfere with in-progress backups.

April 03, 2013
Matthew Thode a.k.a. prometheanfire (homepage, bugs)


  1. Keep in mind that ZFS on Linux is supported upstream, for differing values of support.
  2. I do not care much about hibernation; normal suspend works.
  3. This is for a laptop/desktop, so I chose multilib.
  4. If you patch the kernel to add ZFS support directly, you cannot share the binary; the CDDL and GPL2 are not compatible in that way.


Make sure your installation media supports ZFS on Linux and can install whatever bootloader is required (UEFI needs media that supports it as well). I uploaded an iso that works for me at this link. Live DVDs newer than 12.1 should also have support, but the previous link has the stable version of zfsonlinux. If you need to install the bootloader via UEFI, you can use one of the latest Fedora CDs, though the Gentoo media should be getting support 'soon'. You can install your system normally up until the formatting begins.


I will be assuming the following.

  1. /boot on /dev/sda1
  2. cryptroot on /dev/sda2
  3. swap inside cryptroot, or no swap at all

When using GPT for partitioning, create the first partition at 1M, just to make sure you are on a sector boundary. Most newer drives are 4K advanced-format drives; because of this you need ashift=12 (some/most newer SSDs need ashift=13). Setting compression to lz4 will make your pool incompatible with upstream (Oracle) ZFS; if you want to stay compatible, just set compression=on.
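The ashift value is simply the base-2 logarithm of the physical sector size, which a quick calculation confirms:

```python
import math

# ashift = log2(sector size in bytes):
# 512-byte legacy sectors -> 9, 4K advanced-format -> 12, 8K SSD pages -> 13.
def ashift_for(sector_size_bytes):
    return int(math.log2(sector_size_bytes))

print(ashift_for(512), ashift_for(4096), ashift_for(8192))
```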

General Setup

#setup encrypted partition
cryptsetup luksFormat -s 512 -c aes-xts-plain64 -h sha512 /dev/sda2
cryptsetup luksOpen /dev/sda2 cryptroot

#setup ZFS
zpool create -f -o ashift=12 -o cachefile=/tmp/zpool.cache -O normalization=formD -m none -R /mnt/gentoo rpool /dev/mapper/cryptroot
zfs create -o mountpoint=none -o compression=lz4 rpool/ROOT
zfs create -o mountpoint=/ rpool/ROOT/rootfs
zfs create -o mountpoint=/opt rpool/ROOT/rootfs/OPT
zfs create -o mountpoint=/usr rpool/ROOT/rootfs/USR
zfs create -o mountpoint=/var rpool/ROOT/rootfs/VAR
zfs create -o mountpoint=none rpool/GENTOO
zfs create -o mountpoint=/usr/portage rpool/GENTOO/portage
zfs create -o mountpoint=/usr/portage/distfiles -o compression=off rpool/GENTOO/distfiles
zfs create -o mountpoint=/usr/portage/packages -o compression=off rpool/GENTOO/packages
zfs create -o mountpoint=/home rpool/HOME
zfs create -o mountpoint=/root rpool/HOME/root

cd /mnt/gentoo

#Download the latest stage3 and extract it.
tar -xf /mnt/gentoo/stage3-amd64-hardened-*.tar.bz2 -C /mnt/gentoo

#get the latest portage tree
emerge --sync

#copy the zfs cache from the live system to the chroot
mkdir -p /mnt/gentoo/etc/zfs
cp /tmp/zpool.cache /mnt/gentoo/etc/zfs/zpool.cache

Kernel Config

If you are compiling the modules into the kernel statically, then keep these things in mind.

  • When configuring the kernel, make sure that CONFIG_SPL and CONFIG_ZFS are set to 'Y'.
  • Portage will want to install sys-kernel/spl when emerge sys-fs/zfs is run because of dependencies. Also, sys-kernel/spl is still necessary to make the sys-fs/zfs configure script happy.
  • You do not need to run or install module-rebuild.
  • There have been some updates to the kernel/userspace ioctl since 0.6.0-rc9 was tagged.
    • An issue occurs if newer userland utilities are used with older kernel modules.

Install as normal up until the kernel install.

echo "=sys-kernel/genkernel-3.4.40 ~amd64       #needed for zfs and encryption support" >> /etc/portage/package.accept_keywords
emerge sys-kernel/genkernel
emerge sys-kernel/gentoo-sources                #or hardened-sources

#patch the kernel

#If you want to build the modules into the kernel directly, you will need to patch the kernel directly.  Otherwise, skip the patch commands.
env EXTRA_ECONF='--enable-linux-builtin' ebuild /usr/portage/sys-kernel/spl/spl-0.6.1.ebuild clean configure
(cd /var/tmp/portage/sys-kernel/spl-0.6.1/work/spl-0.6.1 && ./copy-builtin /usr/src/linux)
env EXTRA_ECONF='--with-spl=/usr/src/linux --enable-linux-builtin' ebuild /usr/portage/sys-fs/zfs-kmod/zfs-kmod-0.6.1.ebuild clean configure
(cd /var/tmp/portage/sys-fs/zfs-kmod-0.6.1/work/zfs-zfs-0.6.1/ && ./copy-builtin /usr/src/linux)
mkdir -p /etc/portage/profile
echo 'sys-fs/zfs -kernel-builtin' >> /etc/portage/profile/package.use.mask
echo 'sys-fs/zfs kernel-builtin' >> /etc/portage/package.use

#finish configuring, building and installing the kernel making sure to enable dm-crypt support

#if not building zfs into the kernel, install module-rebuild
emerge module-rebuild

#install SPL and ZFS stuff; zfs pulls in spl automatically
mkdir -p /etc/portage/profile                                                   
echo 'sys-fs/zfs -kernel-builtin' >> /etc/portage/profile/package.use.mask      
echo 'sys-fs/zfs kernel-builtin' >> /etc/portage/package.use                    
emerge sys-fs/zfs

# Add zfs to the correct runlevels
rc-update add zfs boot
rc-update add zfs-shutdown shutdown

#initrd creation, add '--callback="module-rebuild rebuild"' to the options if not building the modules into the kernel
genkernel --luks --zfs --disklabel initramfs

Finish installing as normal. Your kernel line should look like this, and you should also have the initrd defined.

#kernel line for grub2, libzfs support is not needed in grub2 because you are not mounting the filesystem directly.
linux  /kernel-3.5.0-gentoo real_root=ZFS=rpool/ROOT/rootfs crypt_root=/dev/sda2 dozfs=force ro
initrd /initramfs-genkernel-x86_64-3.5.0

In /etc/fstab, make sure BOOT, ROOT and SWAP lines are commented out and finish the install.
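
For reference, the commented-out part of /etc/fstab would look something like this (a sketch only: the device names and filesystems are examples, adjust them to your own layout):

```
# With root on ZFS inside the LUKS device, the usual BOOT, ROOT and
# SWAP lines stay commented out; the initramfs handles them instead.
#/dev/sda1   /boot   ext2   noauto,noatime   1 2
#/dev/sda2   /       ext4   noatime          0 1
#/dev/sda3   none    swap   sw               0 0
```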

You should now have a working encrypted zfs install.

April 02, 2013
Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)
The WebP experiment (April 02, 2013, 17:58 UTC)

You might have noticed over the last few days that my blog underwent some surgery, and in particular that even now, on some browsers, the home page does not really look all that good. In particular, I’ve removed all but one of the background images and replaced them with CSS3 linear gradients. Users browsing the site with the latest version of Chrome, or with Firefox, will have no problem and will see a “shinier” and faster website; others will see something “flatter”. I’m debating whether I want to provide them with a better-looking fallback or not; for now, not.

But this was also a plan B — the original plan I had in mind was to leverage HTTP content negotiation to provide WebP variants of the images of the website. This was a win-win situation because, ludicrous as it was when WebP was announced, it turns out that with its dual-mode, lossy and lossless, it can in one case or the other outperform both PNG and JPEG without a substantial loss of quality. In particular, lossless behaves like a charm with “art” images, such as the CC logos, or my diagrams, while lossy works great for logos, like the Autotools Mythbuster one you see on the sidebar, or the (previous) gradient images you’d see on backgrounds.

So my obvious instinct was to set up content negotiation — I’ve used it before for multiple-language websites, and I expected it to work for multiple types as well, as it’s designed to… but after setting it all up, it turns out that most modern web browsers still do not support WebP at all… and they don’t handle content negotiation as intended. For this to work we need either of two options.

The first, and best, option would be for browsers to only Accept the image formats they support, or at least to prefer them — this is what Opera for Android does: Accept: text/html, application/xml;q=0.9, application/xhtml+xml, multipart/mixed, image/png, image/webp, image/jpeg, image/gif, image/x-xbitmap, */*;q=0.1 but that seems to be the only browser doing it properly. In particular, in this listing you’ll see that it supports PNG, WebP, JPEG, GIF and bitmap — and then it accepts whatever else with a lower preference. If WebP was not in the list, even if it had a higher preference on the server, it would not be sent to the client. Unfortunately, this is not going to work, as most browsers send Accept: */* without explicitly providing the list of supported image formats. This includes Safari, Chrome, and MSIE.
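
To illustrate the mechanism, here is a minimal toy sketch of the server-side logic (my own illustration, not what Apache's mod_negotiation actually implements): pick the best image format a client listed, and note how a bare */* gives the server nothing to work with.

```shell
#!/bin/sh
# Toy content-negotiation: pick the best image format the client listed
# in its Accept header. A client sending only "*/*" tells us nothing,
# so we have to fall back to a "classic" format such as PNG.
pick_format() {
    case "$1" in
        *image/webp*) echo webp ;;   # preferred when the client names it
        *image/png*)  echo png ;;
        *image/jpeg*) echo jpeg ;;
        *)            echo png ;;    # bare "*/*": safe fallback
    esac
}

pick_format 'text/html, image/png, image/webp, */*;q=0.1'  # prints "webp"
pick_format '*/*'                                          # prints "png"
```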

Point of interest: Firefox does explicitly prefer one image format over the others: PNG.

The other alternative is for the server to default to the “classic” image formats (PNG, JPEG, GIF) and to expect browsers that support WebP to prioritize it over the other image formats. Again this is not the case; as shown above, Opera lists it but does not prioritize it, and again, Firefox prioritizes PNG over anything else, and makes no special exception for WebP.

Issues are open at Chrome and Mozilla to improve the support but they haven’t reached mainstream yet. Google’s own suggested solution is to use mod_pagespeed instead — but this module – which I already named in passing in my post about unfriendly projects – is doing something else. It’s on-the-fly changing the content that is provided, based on the reported User-Agent.

Given that I’ve spent some time on user agents, I would say I have the experience to say that this is a huge Pandora’s box. If I have trouble with some low-development browsers reporting themselves as Chrome to fake their way into sites that check the user agent field in JavaScript, you can guess how many of those are going to actually support the features that PageSpeed thinks they support.

I’m going to go back to PageSpeed in another post, for now I’ll stop to say that WebP has the numbers to become the next generation format out there, but unless browser developers, as well as web app developers start to get their act straight, we’re going to have hacks over hacks over hacks for the years to come… Currently, my blog is using a CSS3 feature with the standardized syntax — not all browsers understand it, and they’ll see a flat website without gradients; I don’t care and I won’t start adding workarounds for that just because (although I might use SCSS which will fix it for Safari)… new browsers will fix the problem, so just upgrade, or use a sane browser.

I’m a content publisher, whether I like it or not. This blog is relatively well followed, and I write quite a lot in it. While my hosting provider does not give me grief for my bandwidth usage, optimizing it is something I’m always keen on, especially since I have been Slashdotted once before. This is one of the reasons why my ModSecurity Ruleset validates and filters crawlers as much as spammers.

Blogs’ feeds, be they RSS or Atom (this blog only supports the latter), are a very neat way to optimize bandwidth: they get you the content of the articles without styles, scripts or images. But they can also be quite big. The average feed for my blog’s articles is 100KiB, which is a fairly big page if you consider that feed readers are supposed to keep pinging the blog to check for new items. Luckily for everybody, the authors of HTTP did consider this problem, and solved it with two main features: conditional requests and compressed responses.

Okay there’s a sense of déjà-vu in all of this, because I already complained about software not using the features even when it’s designed to monitor web pages constantly.

By using conditional requests, even if you poke my blog every fifteen minutes, you won’t use more than 10KiB an hour, if no new article has been posted. By using compressed responses, instead of a 100KiB response you’ll just have to download 33KiB. With Google Reader, things were even better: instead of 113 requests for the feed, a single request was made by the FeedFetcher, and that was it.
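
The saving from compressed responses is easy to reproduce locally; here is a rough sketch using a synthetic, repetitive feed-like file (the ratio on a real feed will differ, but XML compresses very well):

```shell
#!/bin/sh
# Build a repetitive XML file (feeds are full of repeated markup,
# so they compress well) and compare raw vs gzipped sizes.
feed=$(mktemp)
yes '<entry><title>post</title><content>hello world</content></entry>' \
    | head -n 2000 > "$feed"
orig=$(wc -c < "$feed")
gz=$(gzip -c "$feed" | wc -c)
echo "original: ${orig} bytes, gzipped: ${gz} bytes"
rm -f "$feed"
```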

But now Google Reader is no more (almost). What happens now? Well, of the 113 subscribers, a few will most likely not re-subscribe to my blog at all. Others have migrated to NewsBlur (35 subscribers), the rest seem to have installed their own feed reader or aggregator, including tt-rss, owncloud, and so on. This was obvious looking at the statistics from either AWStats or Munin, both showing a higher volume of requests and delivered content compared to last month.

I’ve then decided to look into improving the bandwidth a bit more than before, among other things by providing WebP alternatives for the images, but that does not really work as intended — I have enough material for a rant post or two, so I won’t discuss it now. But while doing so I found out something else.

One of the changes I made while hoping to use WebP was to serve the image files from a different domain, which meant that the access log for the blog, while still not perfect, is decidedly cleaner than before. From there I noticed that a new feed reader had started requesting my blog’s feed every half an hour. Without compression. In full, every time. That’s just shy of 5MiB of traffic per day, but that’s not the worst part. The worst part is that those 5MiB are for a single reader, as the requests come from a commercial, proprietary feed reader webapp.

And this is not the only one! Gwene also does the same, even though I sent a pull request to get it to use compressed responses, which hasn’t had a single reply. Even Yandex’s new product has the same issue.

While 5MiB/day is not too much taken on its own, my blog’s traffic averages 50-60 MiB/day, so it’s basically 10% of the traffic for less than 1% of the users, just because they do not follow best practices when writing web software. I’ve now added these crawlers to the list of stealth robots, which means they will receive a “406 Not Acceptable” unless they finally implement at least compressed-response support (which is the easy part in all this).
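
The per-reader numbers above come out of the access logs; a sketch of the kind of one-liner that gets you there, assuming the common "combined" log format (the log content here is a tiny fake, point it at your real log instead):

```shell
#!/bin/sh
# Sum response bytes per user-agent from a combined-format access log.
log=$(mktemp)
cat > "$log" <<'EOF'
1.2.3.4 - - [01/Apr/2013:00:00:00 +0000] "GET /feed HTTP/1.1" 200 102400 "-" "BadReader/1.0"
1.2.3.4 - - [01/Apr/2013:00:30:00 +0000] "GET /feed HTTP/1.1" 200 102400 "-" "BadReader/1.0"
5.6.7.8 - - [01/Apr/2013:00:15:00 +0000] "GET /feed HTTP/1.1" 304 0 "-" "GoodReader/2.0"
EOF
# Fields split on '"': $2 is the request, $3 holds "status bytes", $6 is the UA.
result=$(awk -F'"' '{ split($3, f, " "); bytes[$6] += f[2] }
                    END { for (ua in bytes) print bytes[ua], ua }' "$log" | sort -rn)
echo "$result"
rm -f "$log"
```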

This has an unfortunate implication for users of those services who were reading me, as they won’t get any new updates. If I were a commercial entity, I couldn’t afford this at all. The big problem, to me, is that with Google Reader going away, I expect more and more of these kinds of issues to crop up repeatedly. Even NewsBlur, which is now my feed reader of choice, hasn’t fixed its crawlers yet, which I commented upon before — the code is open-source but I don’t want to deal with Python just yet.

Seriously, why are there so many people who expect to be able to deal with web software and yet have no idea how the web works at all? And I wonder if somebody expected this kind of fallout from the simple shut down of a relatively minor service like Google Reader.

March 31, 2013
David Abbott a.k.a. dabbott (homepage, bugs)
udev-200 interface names (March 31, 2013, 00:59 UTC)

Just updated to udev-200 and figured it was time to read the news item and deal with the Predictable Network Interface Names. I only have one network card and connect with a static ip address. It looked to me like more trouble to keep net.eth0 than to just go with the flow, paddle downstream and not fight it, so here is what I did.

First I read the news item :) then found out what my new name would be.

eselect news read
udevadm test-builtin net_id /sys/class/net/eth0 2> /dev/null

That returned enp0s25 ...

Next remove the old symlink and create the new one.

cd /etc/init.d/
rm net.eth0
ln -s net.lo net.enp0s25

I removed all the files from /etc/udev/rules.d/

Next set up /etc/conf.d/net for my static address.

# Static
routes_enp0s25="default via"

That was it, rebooted, held my breath, and everything seems just fine, YES!

enp0s25: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet  netmask  broadcast
        inet6 fe80::21c:c0ff:fe91:5798  prefixlen 64  scopeid 0x20<link>
        ether 00:1c:c0:91:57:98  txqueuelen 1000  (Ethernet)
        RX packets 3604  bytes 1310220 (1.2 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 2229  bytes 406258 (396.7 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
        device interrupt 20  memory 0xd3400000-d3420000  
lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 16436
        inet  netmask
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 0  (Local Loopback)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

I had to edit /etc/vnstat.conf and change eth0 to enp0s25. I use vnstat with conky.

rm /var/lib/vnstat/*
vnstat -u -i enp0s25

March 30, 2013
Alexys Jacob a.k.a. ultrabug (homepage, bugs)
Tokyo, mont Fuji (March 30, 2013, 15:28 UTC)

Photos taken from the top of the Bunkyo-ku city hall; there were a lot of Japanese people there, as the sunset revealed Mount Fuji between the towers. A magical moment!

Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)

The article’s title is a play on the phrase “don’t open that door”, and makes more sense in Italian as we use the same word for ‘door’ and ‘port’…

So you left your hero (me) working on setting up a Raspberry Pi with at least a partial base of cross-compilation. The whole thing worked to a decent extent, but it wasn’t really as feasible as I hoped. Too many things, including Python, cannot cross-compile without further tricks, and the time it takes to figure out how to cross-compile them tends to be more than the time needed to just wait for them to build on the board itself. I guess this is why there is so little interest in getting cross-compilation supported.

But after getting a decent root, or stage4 as you may prefer to call it, I needed a kernel to boot the device. This wasn’t easy; there is no official configuration file published — what they tell you, if you want to build a new custom kernel, is to zcat /proc/config.gz from Raspbian. I didn’t want to use Raspbian, so I looked further. The next step was to check out the defconfig files that the kernel repository includes — a few different ones exist.

You’d expect them to be actually thought out to enable exactly what the RaspberryPi provides, and nothing more or less. Some leeway can be expected for things like network options, but at least the “cutdown” version should not include all of IrDA, Amateur Radio, Wireless, Bluetooth, USB network, PPP, … After disabling a bunch of options, since the system I need to run will have very few devices connected – in particular, only the Davis Vantage Pro station, maybe a printer – I built the kernel and copied it over the SD card. It booted, it crashed. Kernel panicked right away, due to a pointer dereference.

After some rebuild-copy-test cycles I was able to find out what the problem was. It’s a problem that is not unique to the RPi, actually, as I found the same trace from an OMAP3 user reporting it somewhere else. The trick was disabling the (default-enabled) in-kernel debugger – which I couldn’t access anyway, as I don’t have a USB keyboard at hand right now – so that it would print the full trace of the error. That pointed at the l4_init function, which is the initialization of the Lightning 4 gameport controller — an old style, MIDI game port.

My hunch is that this expansion card is an old-style ISA card, since it does not rely on PCI structures to probe for the device — I cannot confirm it because googling for “lightning 4” only comes up with images of iPad and accessories. What it does is simply poke at the 0x201 address, and the moment it does, you get a bad dereference from the kernel exactly at that address. I’ve sent a (broken, unfortunately) patch to the LKML to see if there is an easy way to solve this.

To be honest and clear, if you just take a defconfig and build it exactly as-is, you won’t be hitting that problem. The problem happens to me because in this kernel, like in almost every other one I build, I do one particular thing: I disable modules, so that I get a single, statically built kernel. This in turn means that all the drivers are initialized when you start the kernel, and the moment the L4 driver is started, it crashes the kernel. Possibly it’s not the only one.

This is most likely not strictly limited to the RaspberryPi but it doesn’t help that there is no working minimal configuration – mine is, by the way, available here – and I’m pretty sure there are other similar situations even when the arch is x86… I guess it’s just a matter of reporting them when you encounter them.

Flattr for comments (March 30, 2013, 08:27 UTC)

You probably know already that my blog is using Flattr for micro-donation, both to the blog as a whole and to the single articles posted here. For those who don’t know, Flattr is a microdonation platform that splits a monthly budget into equal parts to share with your content creators of choice.

I’ve been using, and musing about, Flattr for a while, and sometimes I ranted a little about how things have been moving in their camp. One of the biggest problems with the service is its relatively scarce adoption. I’ve got a ton of “pending flattrs”, as described on their blog, mostly for Twitter and Flickr users.

Driving up adoption of the service is key for it to be useful for both content creators and consumers: the former can only get something out of the system if their content is liked by enough people, and the latter will only care about adding money to the system if they find great content to donate to. Or if they use Socialvest to get the money while they spend it somewhere else.

So last night I did my part in trying to increase the usefulness of Flattr: I added it to the comments of my blog. If you do leave a comment and fill the email field, that email will be used, hashed, to create a new “thing” on Flattr, whether you’re already registered or not — if you’re not registered, the things will be kept pending until you register and associate the email address. This is not much different from what I’ve been doing already with gravatar, which uses the same method (the hashed email address).
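
The identifier is straightforward to compute yourself; a quick sketch (the address is of course just an example, and I'm assuming the usual Gravatar-style lowercasing of the address before hashing):

```shell
#!/bin/sh
# Lowercase the email address, then MD5-hash it: the same scheme
# Gravatar uses, and what the data-flattr-owner field carries.
printf '%s' 'Commenter@Example.com' \
    | tr '[:upper:]' '[:lower:]' \
    | md5sum | awk '{print $1}'
```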

Even though the parameters needed to integrate Flattr for comments are described in the partnership interface, there doesn’t seem to be a need to be registered as a partner – indeed you can see in the pages’ sources that there is no revenue key present – and assuming you are already loading the Flattr script for your articles’ buttons, all you have to add is the following code to the comment template (for Typo; other languages and engines will differ slightly, of course!):

<% if comment.email != "" -%>
  <div class="comment_flattr right">
    <a class="FlattrButton" style="display:none;"
       title="Comment on <%= comment.article.title %>"
       data-flattr-tags="text, comment"
       data-flattr-owner="email:<%= Digest::MD5.hexdigest(comment.email) %>"
       href="<%= comment.article.permalink_url %>#comment-<%= comment.id %>">
    </a>
  </div>
<% end -%>

So if I’m not making money with the partner site idea, why am I bothering with adding these extra buttons? Well, I often had people help me out a lot in comments, pointing out obvious mistakes I made or things I missed… and I’d like to be able to easily thank the commenters when they help me out… and now I can. Also, since this requires a valid email field, I hope for more people to fill it in, so that I can contact them if I want to ask or tell them something in private (sometimes I wished to contact people who didn’t really leave an easy way to contact them).

At any rate, I encourage you all to read the comments on the posts, and Flattr those you find important, interesting or useful. Think of it like a +1 or a “Like”. And of course, if you’re not subscribed to Flattr, do so! You never know who might like what you posted!

March 29, 2013
Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)
Predictable persistently (non-)mnemonic names (March 29, 2013, 20:09 UTC)

This is part two of a series of articles looking into the new udev “predictable” names. Part one is here and talks about the path-based names.

As Steve also asked on the comments from last post, isn’t it possible to just use the MAC address of an interface to point at it? Sure it’s possible! You just need to enable the mac-based name generator. But what does that mean? It means that your new interface names will be enx0026b9d7bf1f and wlx0023148f1cc8 — do you see yourself typing them?

Myself, I’m not going to type them. My favourite suggestion for solving the issue is to rely on rules similar to the previous persistent naming, but not re-using the eth prefix, to avoid collisions (which will no longer be resolved by future versions of udev). I instead use the names wan0 and lan0 (and so on) when the two interfaces sit straddling a private and a public network. How do I achieve that? Simple:

SUBSYSTEM=="net", ACTION=="add", ATTR{address}=="00:17:31:c6:4a:ca", NAME="lan0"
SUBSYSTEM=="net", ACTION=="add", ATTR{address}=="00:07:e9:12:07:36", NAME="wan0"

Yes, these simple rules do all the work you need if you just want to make sure not to mix up the two interfaces by mistake. If your server or vserver only has one interface, and you want it to be wan0 no matter what its MAC address is (easier to clone, for instance), then you can go for

SUBSYSTEM=="net", ACTION=="add", ATTR{address}=="*", NAME="wan0"

As long as you only have a single network interface, that will work just fine. For those who use Puppet, I also published a module that you can use to create the file, and ensure that the other methods to achieve “sticky” names are not present.
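
To find the MAC addresses to put in those rules, a quick sketch like this will do (nothing udev-specific, it just reads sysfs):

```shell
#!/bin/sh
# Print every network interface together with its MAC address,
# ready to be copied into the ATTR{address} match of a udev rule.
for dev in /sys/class/net/*; do
    [ -r "$dev/address" ] && printf '%-12s %s\n' "${dev##*/}" "$(cat "$dev/address")"
done
```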

My reasoning for actually using this kind of names is relatively simple: the rare places where I do need to specify the interface name are usually ACLs, the firewall, and so on. In these, the most important part to me is knowing whether the interface is public or not, so the wan/lan distinction is the most useful. I don’t intend to try to remember whether enp5s24k1f345totheright4nextothebaker is the public or the private interface.

Speaking about which, one of the things that appears obvious even from Lennart’s comment to the previous post is that there is no real assurance that the names are set in stone — he says that an udev upgrade won’t change them, but I guess most people would be sceptical, remembering the track record that udev and systemd have had over the past few months alone. In this situation my personal, informed opinion is that all this work on “predictable” names is a huge waste of time for almost everybody.

If you do care about stable interface names, you most definitely expect them to be more meaningful than 10-digits strings of paths or mac addresses, so you almost certainly want to go through with custom naming, so that at least you attach some sense into the names themselves.

On the other hand, if you do not care about interface names themselves, for instance because instead of running commands or scripts you just use NetworkManager… well, what the heck are you doing playing around with paths? If it doesn’t bother you that the interface for a USB device changes considerably between one port and another, how can it matter to you whether it’s called wwan0 or wwan123? And if the name of the interface does not matter to you, why are you spending useless time trying to get these “predictable” names working?

All in all, I think this is just a nice but useless trick that will only cause more headaches than it can possibly solve. Bah humbug!

Pacho Ramos a.k.a. pacho (homepage, bugs)
Gnome 3.8 released (March 29, 2013, 17:08 UTC)

Gnome 3.8 has been released, and is already available in the main tree, hard-masked, for adventurous people willing to help with getting it fixed for stable "soon" ;)

Thanks for your help!

March 25, 2013
Alexys Jacob a.k.a. ultrabug (homepage, bugs)
mongoDB v2.4.1 and pymongo 2.5 released (March 25, 2013, 12:01 UTC)

10gen released a critical update for mongoDB 2.4.0 which affected queries on secondaries; you should upgrade asap. The python mongo driver followed the 2.4.x releases and got bumped to 2.5 this week-end. I am pleased to announce that I took the chance to add kerberos authentication support to both ebuilds while bumping them.

Notable additions in pymongo 2.5 include:
  • GSSAPI (Kerberos) authentication
  • SSL certificate validation with hostname matching
  • Delegated and role based authentication

March 19, 2013
Donnie Berkholz a.k.a. dberkholz (homepage, bugs)
Opportunities for Gentoo (March 19, 2013, 15:36 UTC)

When I’ve wanted to play in some new areas lately, it’s been a real frustration because Gentoo hasn’t had a complete set of packages ready in any of them. I feel like these are some opportunities for Gentoo to be awesome and gain access to new sets of users (or at least avoid chasing away existing users who want better tools):

  • Data science. Package Hadoop. Package streaming options like Storm. How about related tools like Flume? RabbitMQ is in Gentoo, though. I’ve heard anecdotally that a well-optimized Hadoop-on-Gentoo installation showed double-digit performance increases over the usual Hadoop distributions (i.e., not Linux distributions, but companies specializing in providing Hadoop solutions). Just heard from Tim Harder (radhermit) that he’s got some packages in progress for a lot of this, which is great news.
  • DevOps. This is an area where Gentoo historically did pretty well, in part because our own infrastructure team and the group at the Open Source Lab have run tools like CFEngine and Puppet. But we’re lagging behind the times. We don’t have Jenkins or Travis. Seriously? Although we’ve got Vagrant packaged, for example, we don’t have Veewee. We could be integrating the creation of Vagrant boxes into our release-engineering process.
  • Relatedly: Monitoring. Look for some of the increasingly popular open-source tools today, things like Graphite, StatsD, Logstash, Lumberjack, ElasticSearch, Kibana, Sensu, Tasseo, Descartes, Riemann. None of those are there.
  • Cloud. Public cloud and on-premise IaaS/PaaS. How about IaaS: OpenStack, CloudStack, Eucalyptus, or OpenNebula? Not there, although some work is happening for OpenStack according to Matthew Thode (prometheanfire). How about a PaaS like Cloud Foundry or OpenShift? Nope. None of the Netflix open-source tools are there. On the public side, things are a bit better — we’ve got lots of AWS tools packaged, even stretching to things like Boto. We could be integrating the creation of AWS images into our release engineering to ensure AWS users always have a recent, official Gentoo image.
  • NoSQL. We’ve got a pretty decent set here with some holes. We’ve got Redis, Mongo, and CouchDB, not to mention Memcached, but how about graph databases like Neo4j, or other key-value stores like Riak, Cassandra, or Voldemort?
  • Android development. Gentoo is perfect as a development environment. We should be pushing it hard for mobile development, especially Android given its Linux base. There’s a couple of halfhearted wiki pages but that does not an effort make. If the SDKs and related packages are there, the docs need to be there too.

Where does Gentoo shine? As a platform for developers, as a platform for flexibility, as a platform to eke every last drop of performance out of a system. All of the above use cases are relevant to at least one of those areas.

I’m writing this post because I would love it if anyone else who wants to help Gentoo be more awesome would chip in with packaging in these specific areas. Let me know!

Update: Michael Stahnke suggested I point to some resources on Gentoo packaging, for anyone interested, so take a look at the Gentoo Development Guide. The Developer Handbook contains some further details on policy as well as info on how to get commit access by becoming a Gentoo developer.

Tagged: development, gentoo, greatness