

Last updated:
April 24, 2013, 23:04 UTC

Views expressed in the content published here do not necessarily represent the views of Gentoo Linux or the Gentoo Foundation.


Powered by:
Planet Venus

Welcome to Gentoo Universe, an aggregation of weblog articles on all topics written by Gentoo developers. For a more refined aggregation of Gentoo-related topics only, you might be interested in Planet Gentoo.

April 24, 2013
Sven Vermeulen a.k.a. swift (homepage, bugs)
Using strace to troubleshoot SELinux problems (April 24, 2013, 01:50 UTC)

When SELinux is playing tricks on you, you can just “allow” whatever it wants to do, but that is not always an option: sometimes there is no denial in sight because the problem lies within a SELinux-aware application (an application that might change its behavior based on what the policy says, or even on whether SELinux is enabled at all). At other times you get strange behavior whose cause isn’t directly visible. But above all, if you want to make sure that allowing something is correct (and not just a corrective action), you need to be certain that what you are about to allow is acceptable security-wise.

To debug such issues, I often reach for the strace command. To use strace, I toggle the allow_ptrace boolean (strace uses ptrace(), which by default isn’t allowed policy-wise) and then run the offending application through strace (or attach to the running process if it is a daemon). For instance, to debug a tmux issue we had with the policy not that long ago:

# setsebool allow_ptrace on
# strace -o strace.log -f -s 256 tmux

The resulting log file (strace.log) might seem daunting at first. What you see are the system calls that the process performs, together with their arguments and the return code of each call. The return code is especially important, as SELinux, when it denies something, often returns an error like EACCES (Permission denied).

7313  futex(0x349e016f080, FUTEX_WAKE_PRIVATE, 2147483647) = 0
7313  futex(0x5aad58fd84, FUTEX_WAKE_PRIVATE, 2147483647) = 0
7313  stat("/", {st_mode=S_IFDIR|0755, st_size=4096, ...}) = 0
7313  stat("/home", {st_mode=S_IFDIR|0755, st_size=4096, ...}) = 0
7313  stat("/home/swift", {st_mode=S_IFDIR|0755, st_size=12288, ...}) = 0
7313  stat("/home/swift/.pki", {st_mode=S_IFDIR|0700, st_size=4096, ...}) = 0
7313  stat("/home/swift/.pki/nssdb", {st_mode=S_IFDIR|0700, st_size=4096, ...}) = 0
7313  statfs("/home/swift/.pki/nssdb", 0x3c3cab6fa50) = -1 EACCES (Permission denied)

Most (if not all) of the system calls shown in a strace log are documented in manpages, so you can quickly find out that futex() is about fast user-space locking, stat() (man 2 stat to read about the system call instead of the stat command) is about getting file status, and statfs() is for getting file system statistics.
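To zoom in on failures like the statfs() call above, you can grep the log for calls that returned an error. A self-contained sample (the two-line strace.log below is fabricated for illustration):

```shell
# Fabricated two-line excerpt standing in for a real strace.log.
cat > strace.log <<'EOF'
7313  stat("/home/swift/.pki/nssdb", {st_mode=S_IFDIR|0700, st_size=4096, ...}) = 0
7313  statfs("/home/swift/.pki/nssdb", 0x3c3cab6fa50) = -1 EACCES (Permission denied)
EOF
# Show only the system calls that failed with an errno.
grep -- '= -1 E' strace.log
```

Only the failing statfs() line is printed, which makes large logs much easier to scan.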

The most common permission issues you’ll find are file related:

7313  open("/proc/filesystems", O_RDONLY) = -1 EACCES (Permission denied)

In the above case, you notice that the application is trying to open the /proc/filesystems file read-only. In the SELinux logs, this might be displayed as follows:

audit.log:type=AVC msg=audit(1365794728.180:3192): avc:  denied  { read } for  
pid=860 comm="nacl_helper_boo" name="filesystems" dev="proc" ino=4026532034 
scontext=staff_u:staff_r:chromium_naclhelper_t tcontext=system_u:object_r:proc_t tclass=file
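When reading such a denial, three fields matter most for deciding on a policy rule: the source context, the target context and the object class. A quick way to pull them out of an AVC line (here stored in a variable for illustration):

```shell
# The AVC denial from above, stored in a variable for dissection.
avc='type=AVC msg=audit(1365794728.180:3192): avc: denied { read } for pid=860 comm="nacl_helper_boo" name="filesystems" dev="proc" ino=4026532034 scontext=staff_u:staff_r:chromium_naclhelper_t tcontext=system_u:object_r:proc_t tclass=file'
echo "$avc" | grep -o 'scontext=[^ ]*'   # who was acting
echo "$avc" | grep -o 'tcontext=[^ ]*'   # on what
echo "$avc" | grep -o 'tclass=[^ ]*'     # object class
```
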

Now, the tmux case mentioned before was not an obvious one. In the end, I compared the strace outputs of two runs (one in enforcing mode and one in permissive mode) to find the difference. This is the result:


Enforcing:
10905 fcntl(9, F_GETFL) = 0x8000 (flags O_RDONLY|O_LARGEFILE)

Permissive:
10905 fcntl(9, F_GETFL) = 0x8002 (flags O_RDWR|O_LARGEFILE)

Notice the difference? In enforcing mode the file descriptor has the O_RDONLY flag, whereas in permissive mode it has O_RDWR. This means that the file descriptor is read-only in enforcing mode but read-write in permissive mode. The next step is to look in the strace logs for where this file descriptor (with id 9) comes from:

10905 dup(0)     = 9
10905 dup(1)     = 10
10905 dup(2)     = 11

As the man page says, dup() duplicates a file descriptor. And because, by convention, the first three file descriptors of an application correspond to standard input (0), standard output (1) and standard error (2), we now know that the file descriptor with id 9 is a duplicate of standard input. Although this one should be read-only (it is the input the application reads), it seems that tmux wants to use it for writes as well. And that is what happens: tmux sends the file descriptor to the tmux server to check if it is a tty and then uses it to write to the screen.
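You can reproduce the same dup pattern from a shell; this is a sketch assuming a Linux /proc filesystem, with fd number 9 chosen to match the trace:

```shell
# Duplicate standard input (fd 0) onto fd 9, as the dup(0) = 9 call above does,
# then verify through /proc that fd 9 is now open in this shell.
exec 9<&0
ls /proc/$$/fd | grep -c '^9$'   # prints 1: fd 9 is open
```
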

Now what does that have to do with SELinux? It has to mean something, otherwise running in permissive mode would give the same result. After some investigation, we found out that using newrole to switch roles changes the flags of the standard input (as then provided by newrole) from O_RDWR to O_RDONLY (code snippet from newrole.c – look at the first call to open()):

/* Close the tty and reopen descriptors 0 through 2 */
if (ttyn) {
        if (close(fd) || close(0) || close(1) || close(2)) {
                fprintf(stderr, _("Could not close descriptors.\n"));
                goto err_close_pam;
        }
        fd = open(ttyn, O_RDONLY | O_NONBLOCK);
        if (fd != 0)
                goto err_close_pam;
        fcntl(fd, F_SETFL, fcntl(fd, F_GETFL, 0) & ~O_NONBLOCK);
        fd = open(ttyn, O_RDWR | O_NONBLOCK);
        if (fd != 1)
                goto err_close_pam;
        fcntl(fd, F_SETFL, fcntl(fd, F_GETFL, 0) & ~O_NONBLOCK);
        fd = open(ttyn, O_RDWR | O_NONBLOCK);
        if (fd != 2)
                goto err_close_pam;
        fcntl(fd, F_SETFL, fcntl(fd, F_GETFL, 0) & ~O_NONBLOCK);

Such obscure problems are much easier to detect and troubleshoot thanks to tools like strace.

April 23, 2013
Sven Vermeulen a.k.a. swift (homepage, bugs)
SLOT’ing the old swig-1 (April 23, 2013, 01:50 UTC)

The SWIG tool helps developers build interfaces/libraries that can be accessed from many languages other than the one the library was originally written in. The SELinux userland utility setools uses it to provide Python and Ruby interfaces even though the application itself is written in C. Sadly, setools currently requires swig-1 to build those interfaces and uses constructs that do not seem to be compatible with swig-2 (the same holds for the apse package, by the way).

I first tried to patch setools to support swig-2, but eventually found regressions in the libapol library it provides so the patch didn’t work out (that is why some users mentioned that a previous setools version did build with swig – yes it did, but the result wasn’t correct). Recently, a post on Google Plus’ SELinux community showed me that I wasn’t wrong in this matter (it really does require swig-1 and doesn’t seem to be trivial to fix).

Hence, I had to fix the Gentoo build problem where one set of tools requires swig-1 and another swig-2. Otherwise, world updates and even building stages for SELinux systems would fail, as Portage finds incompatible dependencies. One way to approach this is to use Gentoo’s support for SLOTs. When a package (ebuild) in Gentoo defines a SLOT, it tells the package manager that a different version of the same package may be installed alongside it, as long as the two versions have different SLOTs. In the case of swig, the idea is to give swig-1 a different slot than swig-2 (which uses SLOT="0") and make sure that both do not install the same files (otherwise you get file collisions).

Luckily, swig places all of its files except for the swig binary itself in /usr/share/swig/&lt;version&gt;, so all that was left to do was to rename the binary itself. I chose swig1.3 (similar to how tools like ruby and python, and for some packages even java, are handled on Gentoo). The result (through bug 466650) is now in the tree, as well as an adapted setools package that uses the new swig SLOT.
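In ebuild terms the mechanism boils down to something like the fragment below. This is an illustrative sketch only, not the actual in-tree ebuild:

```shell
# Sketch, not the real ebuild: swig-2 keeps SLOT="0", swig-1 gets its own slot.
SLOT="1.3"

src_install() {
        default
        # /usr/share/swig/<version> is already versioned per release; only the
        # swig binary collides, so rename it to swig1.3.
        mv "${ED}"/usr/bin/swig "${ED}"/usr/bin/swig1.3 || die
}
```

With both slots installed, packages that need the old API can depend on the slotted atom (e.g. dev-lang/swig:1.3) while everything else keeps using swig-2.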

Thanks to Samuli Suominen for getting me on the (hopefully ;-) right track. I don’t know why I was afraid of doing this, it was much less complex than I thought (now let’s hope I didn’t break other things ;-)

April 22, 2013
Sven Vermeulen a.k.a. swift (homepage, bugs)
Mitigating DDoS attacks (April 22, 2013, 01:50 UTC)

Lately, DDoS attacks have been in the news more than I was hoping for. It seems that the botnets and other methods used to generate high-volume traffic against a legitimate service are becoming easier and easier to obtain and direct. At the time I’m writing this post (a few days before it is published, though), the popular Reddit site is undergoing a DDoS attack, which I hope will be finished (or mitigated) soon.

But what can a service do against DDoS attacks? After all, a DDoS is like gasping for air when you can’t swim and are (almost) drowning: the air is the legitimate traffic, but the water is overwhelming, and your mouth, pharynx and trachea just aren’t made to deal with it. And unlike specific Denial-of-Service attacks that use a vulnerability or a malformed URL, you cannot just install some filter or upgrade a component to be safe again.

Methods for mitigating DDoS attacks (beyond increasing your bandwidth, which is very expensive: the botnets involved can generate up to 130 Gbps, not a bandwidth you are probably willing to pay for if the legitimate services on your site get by with 10 Mbps) come in all sorts of “classes”…

Configure your servers and services so that they stay alive under pressure. Look for the sweet spot where performance is still stable, just before higher load starts causing degradation. If you have some experience with load testing, you know that throughput on a service initially rises linearly with the load (first phase). Then it slows down (but still rises – phase 2) up to a point where increasing the load just a bit further degrades the service (which sometimes doesn’t even get back on its feet when you remove the additional load again – phase 3). You need to look for the spot where load and performance are stable (somewhere in the middle of the second phase) and configure your systems so that additional load is dropped. Yes, this means that the DDoS will be more effective, but it also means that your systems can easily get back on their feet when the attack is over (and you get a more predictable load and consequences).

Investigate whether you can have a backup service with higher throughput (and reduced functionality). If the DDoS attack focuses on system resources rather than network resources, such a “lighter” backup service can still provide basic functionality (for instance a more static website); even in the case of network resource consumption it has the advantage that the traffic your servers generate while replying to requests is lower.

Depending on the service you offer (and the financial means at your disposal), you can look at redirecting traffic to more specialized services. Companies like Prolexic have systems that “scrub” the DDoS traffic from all traffic and only send legitimate requests on to your systems. There are several methods for redirecting load, but a common one is to change the DNS records for your service(s) to point to the addresses of those specialized services instead. The lower the TTL (Time To Live) of the records, the faster the redirect takes effect. If you want to handle an increase in load without specialized services, you might want to be able to redirect traffic to cloud services (where you host your service as well), which are generally capable of handling higher throughput than your own equipment (but this too comes at an additional cost).

Some people mention that you can switch IP address. This is true only if the DDoS attack targets IP addresses and not (DNS-resolved) URIs. You could set up additional IP addresses that are not registered in DNS (yet) and, during the attack, have the service name resolve to the additional addresses as well. If you do not see the DDoS load spread towards the new addresses, you can remove the old addresses from DNS. But again, this won’t work in general: not only do most DDoS attacks use DNS-resolved URIs, most of the time attackers are actively involved in the attack and will quickly notice such a “failover” (and react against it).

Depending on your relationship with your provider or hosting location, you can ask whether the edge routers (preferably those of the ISP) can have fallback source-filtering rules ready to enable quickly. Those fallback rules would then only allow traffic from networks where you know most (all?) of your customers and clients are. This isn’t always possible, but if you have a service that mainly targets people within your country, have the filter only allow traffic from networks of that country. If the DDoS attack uses geographically spread resources, the number of bots inside the allowed networks may be low enough that your service can continue.

Configure your firewalls (and ask your ISP to do the same) to drop traffic that is not expected. If the services in your architecture do not use external DNS, you can drop incoming DNS response packets (a popular DDoS method uses spoofed addresses towards open DNS resolvers; this is called a DNS reflection attack).
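As an illustration (my sketch, not from the post), in iptables-restore syntax such a filter could look like:

```
*filter
# Drop UDP packets coming from source port 53: DNS responses we never asked for.
-A INPUT -p udp --sport 53 -j DROP
COMMIT
```

Only apply something like this when, as noted above, nothing on the host relies on external DNS resolution.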

And finally, if you are not bound to a single data center, you might want to spread services across multiple locations. Although more difficult from a management point of view, a dispersed/distributed architecture allows other services to continue running while one is being attacked.

April 21, 2013

Those of you who don't live under a rock will have learned by now that AMD has published VDPAU code to use the Radeon UVD engine for accelerated video decode with the free/open source drivers.

In case you want to give it a try, mesa-9.2_pre20130404 has been added (under package.mask) to the portage tree for your convenience. Additionally you will need a patched kernel and new firmware.


For kernel 3.9, grab the 10 patches from the dri-devel mailing list thread (recommended) [UPDATE]I put the patches into a tarball and attached to Gentoo bug 466042[/UPDATE]. For kernel 3.8 I have collected the necessary patches here, but be warned that kernel 3.8 is not officially supported. It works on my Radeon 6870, YMMV.


The firmware is part of radeon-ucode-20130402, but has not yet reached the linux-firmware tree. If you require other firmware from the linux-firmware package, remove the radeon files from the savedconfig file and build the package with USE="savedconfig" to allow installation together with radeon-ucode. [UPDATE]linux-firmware-20130421 now contains the UVD firmware, too.[/UPDATE]

The new firmware files are
radeon/RV710_uvd.bin: Radeon 4350-4670, 4770.
radeon/RV770_uvd.bin: Not useful at this time. Maybe later for 4200, 4730, 4830-4890.
radeon/CYPRESS_uvd.bin: Evergreen cards.
radeon/SUMO_uvd.bin: Northern Islands cards and Zacate/Llano APUs.
radeon/TAHITI_uvd.bin: Southern Islands cards and Trinity APUs.

Testing it

If your kernel is properly patched and finds the correct firmware, you will see this message at boot:
[drm] UVD initialized successfully.
If mesa was correctly built with VDPAU support, vdpauinfo will list the following codecs:
Decoder capabilities:

name level macbs width height
MPEG1 16 1048576 16384 16384
MPEG2_SIMPLE 16 1048576 16384 16384
MPEG2_MAIN 16 1048576 16384 16384
H264_BASELINE 16 9216 2048 1152
H264_MAIN 16 9216 2048 1152
H264_HIGH 16 9216 2048 1152
VC1_SIMPLE 16 9216 2048 1152
VC1_MAIN 16 9216 2048 1152
VC1_ADVANCED 16 9216 2048 1152
MPEG4_PART2_SP 16 9216 2048 1152
MPEG4_PART2_ASP 16 9216 2048 1152
If mplayer and its dependencies were correctly built with VDPAU support, running it with the "-vc ffh264vdpau," parameter will output something like the following when playing back an H.264 file:
VO: [vdpau] 1280x720 => 1280x720 H.264 VDPAU acceleration
To make mplayer use acceleration by default, uncomment the [vo.vdpau] section in /etc/mplayer/mplayer.conf

Gallium3D Head-up display

Another cool new feature is the Gallium3D HUD (link via Phoronix), which can be enabled with the GALLIUM_HUD environment variable. This supposedly works with all the Gallium drivers (i915g, radeon, nouveau, llvmpipe).

An example screenshot of Supertuxkart using GALLIUM_HUD="cpu0+cpu1+cpu2:100,cpu:100,fps;draw-calls,requested-VRAM+requested-GTT,pixels-rendered"

If you have any questions or problems setting up UVD on Gentoo, stop by #gentoo-desktop on freenode IRC.

Sven Vermeulen a.k.a. swift (homepage, bugs)

When working with a SELinux-enabled system, administrators will eventually need to make small updates to the existing policy. Instead of building their own full policy (always an option, but most likely not maintainable in the long term), they create one or more SELinux policy modules (most distributions use a modular approach to SELinux policy development), which are then loaded on the target systems.

In the past, users had to create their own policy module by creating (and maintaining) the necessary .te file(s), building the proper .pp files and loading them into the active policy store. In Gentoo, from policycoreutils-2.1.13-r11 onwards, a script is provided that hopefully makes this a bit more intuitive for regular users: selocal.

As the name implies, selocal aims to provide an interface for handling local policy updates that do not need to be packaged or distributed otherwise. It is a command-line application that you feed policy rules one at a time. Each rule can be accompanied by a single-line comment, making it easy to remember why the rule was added in the first place.

# selocal --help
Usage: selocal [<command>] [<options>]

Command can be one of:
  -l, --list            List the content of a SELinux module
  -a, --add             Add an entry to a SELinux module
  -d, --delete          Remove an entry from a SELinux module
  -M, --list-modules    List the modules currently known by selocal
  -u, --update-dep      Update the dependencies for the rules
  -b, --build           Build the SELinux module (.pp) file (requires privs)
  -L, --load            Load the SELinux module (.pp) file (requires privs)

Options can be one of:
  -m, --module          Module name to use (default: selocal)
  -c, --comment        Comment (with --add)

The option -a requires that a rule is given, like so:
  selocal -a "dbadm_role_change(staff_r)"
The option -d requires that a line number, as shown by the --list, is given, like so:
  selocal -d 12

Let’s say that you need to launch a small script you have written as a daemon, but you want it to run while you are still in the staff_t domain (it is a user-side daemon you use personally). As the regular staff_t domain isn’t allowed to have processes bind to generic ports/nodes, you need to enhance the SELinux policy a bit. With selocal, you can do so as follows:

# selocal --add "corenet_tcp_bind_generic_node(staff_t)" --comment "Launch local daemon"
# selocal --add "corenet_tcp_bind_generic_port(staff_t)" --comment "Launch local daemon"
# selocal --build --load
(some output on building the policy module)

When finished, the local policy is enhanced with the two mentioned rules. You can query which rules are currently stored in the policy:

# selocal --list
12: corenet_tcp_bind_generic_node(staff_t) # Launch local daemon
13: corenet_tcp_bind_generic_port(staff_t) # Launch local daemon

If you need to delete a rule, just pass the line number:

# selocal --delete 13

Having this tool around also makes it easier to test changes suggested in bug reports. When I test such changes, I add the bug report ID as the comment so I can track which settings are still local and which ones have been pushed to our policy repository. Under the hood, selocal creates and maintains the necessary policy file in ~/.selocal and by default uses the selocal policy module name.

I hope this tool helps users in their quest to use SELinux. Feedback and comments are always appreciated! It is a small bash script and might still have a few bugs, but I have been using it for a few months now, so most quirks should be handled.

April 20, 2013
Sven Vermeulen a.k.a. swift (homepage, bugs)
Transforming GuideXML to DocBook (April 20, 2013, 01:50 UTC)

I recently committed an XSL stylesheet that allows us to transform GuideXML documents (both guides and handbooks) to DocBook. This isn’t part of some more elaborate move to push DocBook instead of GuideXML for the Gentoo documentation (I’d rather direct documentation development to the Gentoo wiki once translations are allowed there): I use it to be able to generate our documentation in other formats (such as PDF, but also ePub) when asked.

If you’re not experienced with XSL: XSL stands for Extensible Stylesheet Language and can be seen as a way of “programming” in XML. A stylesheet allows developers to transform one XML document into another format (either another XML document, or text-like output such as wiki syntax) while manipulating its contents. In the case of documentation, we try to keep as much structure in the document as possible, but another use could be to transform a large XML document of which only a few fields are interesting into a very small XML document (containing only the fields you need) for further processing.

For now (and probably for the foreseeable future), the stylesheet is to be used in an offline mode (we are not going to provide auto-generated PDFs of all documents), as the process of converting a document from GuideXML to DocBook to XSL-FO to PDF is quite resource-intensive. But interested users can use the stylesheet linked above to create their own PDFs of the documentation.

Assuming you have a checkout of the Gentoo documentation, this process can be done as follows (example for the AMD64 handbook):

$ xsltproc docbook.xsl /path/to/handbook-amd64.xml > /somewhere/handbook-amd64.docbook
$ cd /somewhere
$ xsltproc --output handbook-amd64.fo --stringparam paper.type A4 \
  /usr/share/sgml/docbook/xsl-stylesheets/fo/docbook.xsl handbook-amd64.docbook
$ fop handbook-amd64.fo handbook-amd64.pdf

The docbook stylesheets are offered by the app-text/docbook-xsl-stylesheets package whereas the fop command is provided by dev-java/fop.

I have an example output available (temporarily) at my dev space (amd64 handbook) but I’m not going to maintain this for long (so the link might not work in the near future).

April 19, 2013
Sven Vermeulen a.k.a. swift (homepage, bugs)

So in the past few posts I discussed how sysbench can be used to simulate some workloads, specific to a particular set of tasks. I used the benchmark application to look at the differences between the guest and the host on my main laptop, and saw a major performance regression with the memory workload test. Let’s view this again, using parameters better suited to expose the regression:

$ sysbench --test=memory --memory-total-size=32M --memory-block-size=64 run
Host:
  Operations performed: 524288 (2988653.44 ops/sec)
  32.00 MB transferred (182.41 MB/sec)
Guest:
  Operations performed: 524288 (24920.74 ops/sec)
  32.00 MB transferred (1.52 MB/sec)

$ sysbench --test=memory --memory-total-size=32M --memory-block-size=32M run
Host:
  Operations performed: 1 (  116.36 ops/sec)
  32.00 MB transferred (3723.36 MB/sec)
Guest:
  Operations performed: 1 (   89.27 ops/sec)
  32.00 MB transferred (2856.77 MB/sec)

From looking at the code (gotta love Gentoo for making this obvious ;-) we know that the memory workload, with a single thread, does something like the following:

total_bytes = 0;
repeat until total_bytes >= memory-total-size:
  total_bytes += memory-block-size
  (start event timer)
  pointer -> start-of(buffer)
  while pointer != end-of(buffer)
    write somevalue at pointer
    advance pointer
  (stop event timer)

Given that the regression is most noticeable when the memory-block-size is very small, the parts of the code whose execution count differs most between the two runs are the mutex locking, the global counter increment and the start/stop of the event timer.

In a second phase, we also saw that mutex locking itself is not impacted. In the above case, we have 524288 executions. However, if we run the mutex workload this number of times, we see that this hardly has any effect:

$ sysbench --test=mutex --mutex-num=1 --mutex-locks=524288 --mutex-loops=0 run
Host:      total time:        0.0275s
Guest:     total time:        0.0286s

The code for the mutex workload, knowing that we run with one thread, looks like this:

mutex_locks = 524288
(start event timer)
repeat
  lock = get_mutex()
  lock; increment global counter; unlock
  mutex_locks -= 1
until mutex_locks = 0
(stop event timer)

To check if the timer might be the culprit, let’s look for a benchmark that mainly does timer checks. The cpu workload can be used, when we tell sysbench that the prime to check is 3 (as its internal loop runs from 3 till the given number, and when the given number is 3 it skips the loop completely) and we ask for 524288 executions.

$ sysbench --test=cpu --cpu-max-prime=3 --max-requests=524288 run
Host:  total time:  0.1640s
Guest: total time: 21.0306s

Gotcha! Now, the event timer (again from looking at the code) contains two parts: getting the current time (using clock_gettime()) and logging the start/stop (which is done in memory structures). Let’s make a small test application that gets the current time (using the real-time clock as the sysbench application does) and see if we get similar results:

$ cat test.c
#include <stdio.h>
#include <time.h>

int main(int argc, char **argv, char **arge) {
  struct timespec tps;
  long int i = 524288;
  while (i-- > 0)
    clock_gettime(CLOCK_REALTIME, &tps);
  return 0;
}

$ gcc -lrt -o test test.c
$ time ./test
Host:  0m0.019s
Guest: 0m5.030s

So given that clock_gettime() is run twice per event in sysbench, we already have 10 seconds of overhead on the guest (and only 0,04s on the host). When such time-related functions are slow, it is wise to take a look at the clock source configured on the system. On Linux, you can check this by looking at /sys/devices/system/clocksource/*.
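A quick back-of-the-envelope check of the per-call cost, using the numbers measured above (5.030 s for 524288 calls in the guest):

```shell
# Per-call cost of clock_gettime() in the guest: total time / number of calls.
awk 'BEGIN { printf "%.1f microseconds per call\n", 5.030 / 524288 * 1e6 }'
# prints "9.6 microseconds per call"
```

Almost 10 microseconds for a single clock read is enormous compared to the sub-40-nanosecond cost on the host.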

# cd /sys/devices/system/clocksource/clocksource0
# cat current_clocksource
kvm-clock
# cat available_clocksource
kvm-clock tsc hpet acpi_pm

Although kvm-clock is supposed to be the best clock source, let’s switch to the tsc clock:

# echo tsc > current_clocksource

If we rerun our test application, we get a much better result:

$ time ./test
Host:  0m0.019s
Guest: 0m0.024s

So, what does that mean for our previous benchmark results?

$ sysbench --test=cpu --cpu-max-prime=20000 run
Host:            35,3049 sec
Guest (before):  36,5582 sec
Guest (now):     35,6416 sec

$ sysbench --test=fileio --file-total-size=6G --file-test-mode=rndrw --max-time=300 --max-requests=0 --file-extra-flags=direct run
Host:            1,8424 MB/sec
Guest (before):  1,5591 MB/sec
Guest (now):     1,5912 MB/sec

$ sysbench --test=memory --memory-block-size=1M --memory-total-size=10G run
Host:            3959,78 MB/sec
Guest (before)   3079,29 MB/sec
Guest (now):     3821,89 MB/sec

$ sysbench --test=threads --num-threads=128 --max-time=10s run
Host:            9765 executions
Guest (before):   512 executions
Guest (now):      529 executions

So we notice that this small change has a nice effect on some of the tests. The CPU benchmark improves from 3,55% overhead to 0,95%; fileio stays about the same (from 15,38% to 13,63%); memory improves from 22,24% overhead to 3,48%; and threads also remains about the same (from 94,76% slower to 94,58%).
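The overhead percentages quoted here are simply (guest − host) / host × 100; for example, for the CPU benchmark:

```shell
# Overhead of the guest relative to the host, before and after the clocksource change.
awk 'BEGIN {
    printf "cpu before: %.2f%%\n", (36.5582 - 35.3049) / 35.3049 * 100
    printf "cpu now:    %.2f%%\n", (35.6416 - 35.3049) / 35.3049 * 100
}'
# prints "cpu before: 3.55%" and "cpu now: 0.95%"
```
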

That doesn’t mean that the VM is suddenly faster or better than before – what we changed is how long a certain time measurement takes, a measurement which the benchmark software itself uses heavily. This goes to show how important it is to:

  1. understand fully how the benchmark software works and measures
  2. not underestimate the importance of having access to the source code
  3. know that performance benchmarks give figures, but do not tell you how your users will experience the system

That’s it for the sysbench benchmark for now (the MySQL part will need to wait until a later stage).

In the previous post, I gave some feedback on the cpu and fileio workload tests that sysbench can handle. Next on the agenda are the memory, threads and mutex workloads.

When using the memory workload, sysbench will allocate a buffer (sized through the --memory-block-size parameter, default 1 KiB) and each execution will read from or write to this memory (--memory-oper, default write) in a random or sequential manner (--memory-access-mode, default sequential).

$ sysbench --test=memory --memory-block-size=1M --memory-total-size=10G run
Host throughput, 1M:  3959,78 MB/sec
Guest throughput, 1M: 3079,29 MB/sec

The guest has a lower throughput (about 77% of the host), which is lower than what most online posts on KVM performance report. We’ll get back to that later. Let’s look at the default block size of 1k (meaning the benchmark will do many more executions before it has processed the total memory size):

$ sysbench --test=memory --memory-total-size=1G run
Host throughput, 1k:  1702,59 MB/sec
Guest throughput, 1k:   23,67 MB/sec

This is a lot worse: the guest’s throughput is only 1,4% of the host’s! The qemu-kvm process on the host is also taking up a lot of CPU.

Now let’s take a look at the next workload: threads. In this workload, you specify the number of threads (--num-threads), the number of locks (--thread-locks) and the number of times a thread should run its ‘lock-yield..unlock’ workload (--thread-yields). The more locks you specify, the fewer threads will share the same lock (each thread is allocated a single lock during an execution, but every new execution gives it a new lock, so the threads do not always take the same one).

Note that parts of this are also handled by the other tests: mutexes are used when a new operation (execution) for the thread is prepared. In the case of the memory-related workload above, the smaller the buffer size, the more frequent thread operations are needed. In the last run we did (with the bad performance), millions of operations were executed (although no yields were performed). Something similar can be simulated using a single lock, a single thread, a very high number of operations and no yields:

$ sysbench --test=threads --num-threads=1 --thread-yields=0 --max-requests=1000000 --thread-locks=1 run
Host runtime:    0,3267 s  (event:    0,2278)
Guest runtime:  40,7672 s  (event:   30,6084)

This means that the guest “throughput” problems identified in the memory test above seem to be related to this rather than to memory-specific regressions. To verify whether the scheduler itself also shows regressions, we can run more threads concurrently. For instance, running 128 threads simultaneously, using otherwise default settings, for 10 seconds:

$ sysbench --test=threads --num-threads=128 --max-time=10s run
Host:   9765 executions (events)
Guest:   512 executions (events)

Here we get only 5% throughput.

Let’s focus on the mutex again, as sysbench has an additional mutex workload test. The workload has each thread running a local fast loop (simple increments, --mutex-loops) after which it takes a random mutex (one of --mutex-num), locks it, increments a global variable and then releases the mutex again. This is repeated for the number of locks identified (--mutex-locks). If mutex operations were the cause of the performance issues above, we would expect them to show up as a major performance regression on my system.
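In pseudo-Python, the per-thread body of the mutex workload looks roughly like this (again a sketch of the workload’s shape, not sysbench’s C code; the constants map onto the sysbench options):

```python
import random
import threading

MUTEX_NUM = 1         # --mutex-num: size of the mutex pool
MUTEX_LOCKS = 10000   # --mutex-locks: lock/increment/unlock rounds per thread
MUTEX_LOOPS = 100     # --mutex-loops: local fast loop, run without any lock

mutexes = [threading.Lock() for _ in range(MUTEX_NUM)]
global_counter = 0

def worker():
    global global_counter
    for _ in range(MUTEX_LOCKS):
        local = 0
        for _ in range(MUTEX_LOOPS):  # burn some time outside the lock
            local += 1
        with random.choice(mutexes):  # take a random mutex, lock it,
            global_counter += 1       # increment the shared variable,
        # and release the mutex again when the 'with' block ends

workers = [threading.Thread(target=worker) for _ in range(2)]
for t in workers:
    t.start()
for t in workers:
    t.join()
```

Unlike the threads test, no yielding happens while the lock is held, so this isolates the raw cost of the lock/unlock operations themselves.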

Let’s run that workload with a single thread (default), no loops and a single mutex.

$ sysbench --test=mutex --mutex-num=1 --mutex-locks=50000000 --mutex-loops=1 run
Host (duration):   2600,57 ms
Guest (duration):  2571,44 ms

In this example, we see that the mutex operations run at almost the same speed (99% of the host), so pure mutex operations are not likely to be the cause of the performance regressions earlier on. So what does give the performance problems? Well, that investigation will be for the third and last post in this series ;-)

April 18, 2013
Sven Vermeulen a.k.a. swift (homepage, bugs)
Another Gentoo Hardened month has passed (April 18, 2013, 21:36 UTC)

Another month has passed, so time to mention again what we have all been doing lately ;-)


Version 4.8 of GCC is available in the tree, but currently masked. The package contains a fix needed to build hardened-sources, and a fix for asan (the address sanitizer). asan support in GCC 4.8 might be seen as an improvement security-wise, but it is as yet unclear whether it is an integral part of GCC or can be disabled with a configure flag. Apparently, asan “makes building gcc 4.8 crazy”. Seeing that it comes from Google, and building Google Chromium is also crazy, I start seeing a pattern here.

Anyway, it turns out that PaX/grSec and asan do not get along yet (ASAN assumes/uses hardcoded userland address space size values, which breaks when UDEREF is set, as UDEREF takes a bit away from that size):

ERROR: AddressSanitizer failed to allocate 0x20000001000 (2199023259648) bytes at address 0x0ffffffff000

Given that this is hardcoded in the resulting binaries, it isn’t sufficient to change the size value from 47 bits to 46 bits as hardened systems can very well boot a kernel with and another kernel without UDEREF, causing the binaries to fail on the other kernel. Instead, a proper method would be to dynamically check the size of a userland address.

However, GCC 4.8 also brings along some nice enhancements and features. uclibc profiles work just fine with GCC 4.8, including armv7a and mips/mipsel. The latter is especially nice to hear, since mips used to require significant effort with previous GCCs.

Kernel and grSecurity/PaX

More recent kernels have now been stabilized to stay close to the grSecurity/PaX upstream developments. The most recent stable kernel now is hardened-sources-3.8.3. Others still available are hardened-sources versions 3.2.40-r1 and 2.6.32-r156.

The support for XATTR_PAX is still progressing, but a few issues have come up. One is that non-hardened systems are seeing warnings about pax-mark not being able to set the XATTR_PAX on tmpfs, since vanilla kernels do not have the patch to support user.* extended attribute namespaces for tmpfs. A second issue is that the install application, as provided by coreutils, does not copy extended attributes. This has an impact on ebuilds where pax markings are done before the install phase of a package. But only doing pax markings after the install phase isn’t sufficient either, since sometimes we need the binaries to be marked already for the test phase or even the compile phase. So this is still something on the near horizon.

Most likely the necessary tools will be patched to include extended attributes on copy operations. However, we need to take care to only copy over those attributes that make sense: user.pax does, but security ones like security.evm and security.selinux shouldn’t, as those are either recomputed when needed, or governed through policy. The idea is that USE="pax_kernel" will enable the above on coreutils.
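The filtering rule itself is simple; as a sketch (my own illustration of the selection logic, not the actual coreutils patch):

```python
def xattrs_to_copy(names):
    """Given the extended-attribute names found on a source file, return
    the ones a copy/install tool should carry over: user.* attributes such
    as user.pax make sense, while security.* ones (security.evm,
    security.selinux, ...) are recomputed when needed or governed through
    policy, so they must not be copied."""
    return [n for n in names if n.startswith("user.")]
```

A real implementation would then read each selected attribute from the source file and set it on the destination (os.getxattr/os.setxattr on Linux).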


The SELinux support in Gentoo has seen a fair share of updates in the userland utilities (like policycoreutils, setools, libselinux and such). Most of these have already made the stable tree or are close to being bumped to stable. The SELinux policy has also been updated a lot: most changes can be tracked through bugzilla, looking for the sec-policy r13 whiteboard. The changes can be applied to the system immediately if you use the live ebuilds (like selinux-base-9999), but I’m planning on releasing revision 13 of our policy set soon.

System Integrity

Fixes for some of the “early adopter” problems we’ve noticed on Gentoo Hardened have been integrated in the upstream repositories and are slowly progressing towards the main Linux kernel tree.


All hardened profiles have been moved to the 13.0 base. Some people frowned when they noticed that the uclibc profiles do not inherit from any architecture-related profile. This is however deliberate: the architecture profiles are (amongst other things) focusing on the glibc specifics of the architecture. Since the profile intended here is for uclibc, those changes are not needed (nor wanted). Hence, these are collapsed into a single profile.


For SELinux, the SELinux handbook now includes information about USE="unconfined" as well as the selinux_gentoo init script as provided by policycoreutils. Users who are already running with SELinux enabled can just look at the Change History to see which changes affect them.

A set of tutorials (which I’ve blogged about earlier as well) has been put online at the Gentoo Wiki. Besides the SELinux tutorials, an article pertaining to AIDE has been added as well, as it fits nicely within the principles/concepts of the System Integrity subproject.


If you don’t do it already, start following @GentooHardened ;-)

Andreas K. Hüttel a.k.a. dilfridge (homepage, bugs)

Bitrot is accumulating, and while we've tried to keep kdpim-4.4 running in Gentoo as long as possible, the time is slowly coming to say goodbye. In effect this is triggered by annoying problems like these:

There are probably many more such bugs around, where incompatibilities between kdepim-4.4 and kdepimlibs of more recent releases occur or other software updates have led to problems. Slowly it's getting painful, and definitely more painful than running a recent kdepim-4.10 (which has in my opinion improved quite a lot over the last major releases).
Please be prepared for the following steps:
  • end of April 2013, all kdepim-4.4 packages in the Gentoo portage tree will be package.masked
  • end of May 2013, all kdepim-4.4 packages in the Gentoo portage tree will be removed
  • afterwards, we will finally be able to simplify the eclasses a lot by removing the special handling
We still have the kdepim-4.7 upgrade guide around, and it also applies to the upgrade from kdepim-4.4 to any later version. Feel free to improve it or suggest improvements.

R.I.P. kmail1.

Sven Vermeulen a.k.a. swift (homepage, bugs)

Being busy with virtualization and additional security measures, I frequently come in contact with people asking me what the performance impact is. Now, you won’t find the performance impact of SELinux here, as I have no guests nor hosts that run without SELinux. But I did want to find out what one can do to compare system (and later application) performance, so I decided to take a look at the various benchmark utilities available. In this first post, I’ll take a look at sysbench (using 0.4.12, released in March 2009, despite what the looks of the site alone would suggest) to compare the performance of my KVM guest versus host.

The obligatory system information: the host is a HP Pavilion dv7 3160eb with an Intel Core i5-430M processor (dual-core with 2 threads per core). Frequency scaling is disabled – the CPU is fixed at 2.13 GHz. The system has 4Gb of memory (DDR3), the internal hard disks are configured as a software RAID1 with LVM on top (except for the file system that hosts the virtual guest images, which is a plain software RAID1). The guests run with the boot options given below, meaning 1.5Gb of memory and 2 virtual CPUs of the KVM64 type. The CFLAGS for both are given below as well, together with the expanded set given by gcc ${CFLAGS} -E -v - 2>&1 | grep cc1.

/usr/bin/qemu-kvm -monitor stdio -nographic -gdb tcp::1301 \
  -vnc \
  -net nic,model=virtio,macaddr=00:11:22:33:44:b3,vlan=0 \
  -net vde,vlan=0 \
  -drive file=/srv/virt/gentoo/test/pg1.img,if=virtio,cache=none \
  -k nl-be -m 1536 -cpu kvm64 -smp 2

# For host
CFLAGS="-march=core2 -O2 -pipe"
#CFLAGS="-D_FORTIFY_SOURCE=2 -fno-strict-overflow -march=core2 \
         -fPIE -O2 -fstack-protector-all"
# For guest
CFLAGS="-march=x86-64 -O2 -pipe"
#CFLAGS="-fno-strict-overflow -march=x86-64 -fPIE -O2 \

I am aware that the CFLAGS between the two are not the same (duh), and I know as well that the expansion given above isn’t entirely accurate. But still, it gives some idea on the differences.

Now before I go on to the results, please keep in mind that I am not a performance expert, not even a performance experienced or even performance wanna-be experienced person: the more I learn about the inner workings of an operating system such as Linux, the more complex it becomes. And when you throw in additional layers such as virtualization, I’m almost completely lost. In my day-job, some people think they can “prove” the inefficiency of a hypervisor by counting from 1 to 100’000 and adding the numbers, and then take a look at how long this takes. I think this is short-sighted, as this puts load on a system that does not simulate reality. If you really want to do performance measures for particular workloads, you need to run those workloads and not some small script you hacked up. That is why I tend to focus on applications that use workload simulations for infrastructural performance measurements (like HammerDB for performance testing databases). But for this blog post series, I’m first going to start with basic operations and later posts will go into more detail for particular workloads (such as database performance measurements).

Oh, and BTW, when I display figures with a comma (“,”), that comma means decimal (so “1,00” = “1”).
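If you want to feed the figures from these posts into a script of your own, converting the decimal comma is a one-liner (my own helper, nothing sysbench-specific):

```python
def to_float(figure):
    # figures in this series use a decimal comma: "3959,78" means 3959.78
    return float(figure.replace(",", "."))
```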

The figures below are numbers that can be interpreted in many ways, and can be used to prove almost anything. I’ll sometimes give my interpretation of them, but don’t expect to learn much from it – there are probably much better guides out there for this. The posts are more of a way to describe how sysbench works and what you should take into account when doing performance benchmarks.

So the testing is done using sysbench, which is capable of running CPU, I/O, memory, threading, mutex and MySQL tests. The first run of it that I did was a single-thread run for CPU performance testing.

$ sysbench --test=cpu --cpu-max-prime=20000 run

This test verifies prime numbers by dividing the number with sequentially increasing numbers and verifying that the remainder (modulo calculation) is zero. If it is, then the number is not prime and the calculation goes on to the next number; otherwise, if none have a remainder of 0, then the number is prime. The maximum number that it divides by is calculated by taking the integer part of the square root of the number (so for 17, this is 4). This algorithm is very simple, so you should also take into account that during the compilation of the benchmark, the compiler might already have optimized some of it.
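The algorithm described above boils down to this (a Python sketch for clarity; sysbench implements it in C, so the actual per-iteration cost differs):

```python
import math

def is_prime(n):
    # trial division up to the integer part of sqrt(n), as described above:
    # for 17 the largest divisor tried is 4
    for d in range(2, int(math.sqrt(n)) + 1):
        if n % d == 0:
            return False  # remainder zero: not prime, move on to the next
    return n >= 2         # no divisor found: the number is prime

# --cpu-max-prime=20000 roughly corresponds to checking every number up to
# 20000 this way
```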

Let’s look at the numbers.

Run     Host      Guest     Host      Guest
        (total)   (total)   (event)   (event)
1.1     35,4331   37,0528   35,4312   36,8917
1.2     35,1482   36,1951   35,1462   36,0405
1.3     35,3334   36,4266   35,3314   36,2640
avg     35,3049   36,5582   35,3029   36,3987
med     35,3334   36,4266   35,3314   36,2640

On average (I did three runs on each system), the guest took 3,55% more time to finish the test than the host (total). If we look at the pure calculation time (so without the remaining overhead of the inner workings – the event time), the guest took 3,10% more time. The median run (the one that was neither the fastest nor the slowest of the three) has the guest taking 3,09% more time (total) and 2,64% more time (event).
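The percentages quoted can be reproduced from the averaged figures with a couple of lines (plain Python, with the decimal comma already converted to a point):

```python
# averaged single-thread figures from the table above
host_total, guest_total = 35.3049, 36.5582
host_event, guest_event = 35.3029, 36.3987

def overhead_pct(host, guest):
    # how much more time the guest needed, as a percentage of the host time
    return round((guest / host - 1) * 100, 2)

total_overhead = overhead_pct(host_total, guest_total)  # 3.55
event_overhead = overhead_pct(host_event, guest_event)  # 3.1
```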

Let’s look at the two-thread results.

Run     Host      Guest     Host      Guest
        (total)   (total)   (event)   (event)
1.1     17,5185   18,0905   35,0296   36,0217
1.2     17,8084   18,1070   35,6131   36,0518
1.3     18,0683   18,0921   36,1322   36,0194
avg     17,7984   18,0965   35,5916   36,0310
med     17,8084   18,0921   35,6131   36,0194

With these figures, we notice that the guest’s average total run time takes 1,67% more time to complete, and the event time only 1,23% more. I was personally expecting that the guest would show a higher percentage than previously (gut feeling – never trust it when dealing with complex matters) but was happy to see that the difference wasn’t higher. I’m not going to start analyzing this in more detail and will just go to the next test: fileio.

In the case of fileio testing, I assume that the hypervisor will take up more overhead, but keep in mind that you also need to consider the environmental factors: LVM or not, RAID1 or not, mount options, etc. Since I am comparing guest versus host here, I should look for a somewhat comparable setup. Hence, I will look at the performance of the host (software raid, LVM, ext4 file system with data=ordered) and the guest (image on software raid, ext4 file system with data=ordered and barrier=0, and LVM in the guest).

Furthermore, running a sysbench fileio test calls for a file that is much larger than the available RAM. I’m going to run the tests on a 6Gb file size, but enable O_DIRECT so that some caches (the page cache) are not used. This can be done using --file-extra-flags=direct.

As with all I/O-related benchmarks, you need to define which kind of load you want to test with. Are the I/Os sequential (like reading or writing a large file completely) or random? For databases, you are most likely interested in random reads (data, for selects) and sequential writes (into transaction logs). A file server usually sees random reads/writes. In the test below, I’ll use a combined random read/write.

$ sysbench --test=fileio --file-total-size=6G prepare
$ sysbench --test=fileio --file-total-size=6G --file-test-mode=rndrw --max-time=300 --max-requests=0 --file-extra-flags=direct run
$ sysbench --test=fileio --file-total-size=6G cleanup

In the output, the throughput seems to be most important:

Operations performed:  4348 Read, 2898 Write, 9216 Other = 16462 Total
Read 67.938Mb  Written 45.281Mb  Total transferred 113.22Mb  (1.8869Mb/sec)

In the above case, the throughput is 1,8869 MB/sec. So let’s look at the (averaged) results:

Host:  1,8424 MB/sec
Guest: 1,5591 MB/sec

The above figures (which are an average of 3 runs) tell us that the guest has a throughput of about 84,75% of the host (so we take about a 15% performance hit on random read/write I/O). Now I used sysbench here for some I/O comparison of guest versus host, but other usages apply as well. For instance, let’s look at the impact of data=ordered versus data=journal (taken on the host):

6G, data=ordered, barrier=1: 1,8435 MB/sec
6G, data=ordered, barrier=0: 2,1328 MB/sec
6G, data=journal, barrier=1: 599,85 KB/sec
6G, data=journal, barrier=0: 767,93 KB/sec

From the figures, we can see that the data=journal option slows the throughput down to about 30% of the original figure (a 70% decrease!). Also, disabling barriers has a positive impact on performance, giving about a 15% throughput gain. This is also why some people report performance improvements when switching to LVM, as – as far as I can tell (but finding a good source on this is difficult) – LVM by default disables barriers (but does honor the barrier=1 mount option if you provide it).

That’s about it for now – the next post will be about the memory and threads tests within sysbench.

Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)

I’ve been asked over on Twitter if I had any particular tutorial for an easy one-stop-shop tutorial for Autotools newbies… the answer was no, but I will try to make up for it by writing this post.

First of all, with the name autotools, we include quite a few different tools. If you have a very simple program (not hellow-simple, but still simple), you definitely want to use at the very least two: autoconf and automake. While you could use the former without the latter, you really don’t want to. This means that you need two files: configure.ac and Makefile.am.

The first of the two files (configure.ac) is processed to produce a configure script which the user will be executing at build time. It is also the bane of most people because, if you look at one for a complex project, you’ll see lots of content (and logic) and next to no comments on what things do. Lots of it is cargo-culting and I’m afraid I cannot help but just show you a possible basic configure.ac file:

AC_INIT([myproject], [123], [], [])
AM_INIT_AUTOMAKE([foreign no-dist-gz dist-xz])

AC_PROG_CC

AC_OUTPUT([Makefile])
Let me explain. The first two lines are used to initialize autoconf and automake respectively. The former is told the name and version of the project, the place to report bugs, and a URL for the package to use in documentation. The latter is told that we’re not a GNU project (seriously, this is important — you wouldn’t believe how many tarballs I find with 0-sized files just because they are mandatory in the default GNU layout; even though I found at least one crazy package lately that wanted to have a 0-sized NEWS file), and that we want a .tar.xz tarball and not a .tar.gz one (which is the default).

After initializing the tools, you need to, at the very least, ask for a C compiler. You could have asked for a C++ compiler as well, but I’ll leave that as an exercise to the reader. Finally, you have to tell it to output Makefile (it’ll use Makefile.in, but we’ll create a Makefile.am instead soon).

To build a program, you then need to create a Makefile.am similar to this:

bin_PROGRAMS = hellow

dist_doc_DATA = README

Here we’re telling automake that we have a program called hellow (whose source is, by default, hellow.c) which has to be installed in the binary directory, and a README file that has to be distributed in the tarball and installed as a documentation piece. Yes, this is really enough as a very basic Makefile.am.

If you were to have two programs, hellow and hellou, and a convenience library between the two you could do it this way:

bin_PROGRAMS = hellow hellou

hellow_SOURCES = src/hellow.c
hellow_LDADD = libhello.a

hellou_SOURCES = src/hellou.c
hellou_LDADD = libhello.a

noinst_LIBRARIES = libhello.a
libhello_a_SOURCES = lib/libhello.c lib/libhello.h

dist_doc_DATA = README

But then you’d have to add AC_PROG_RANLIB to the configure.ac calls. My suggestion is that if you want to link things statically and it’s just one or two files, just go for building them twice… it can actually make the build faster (one less serialization step) and with the new LTO options it should improve the optimization as well.

As you can see, this is really easy when done at this basic level… I’ll keep writing a few more posts with easy solutions, and probably next week I’ll integrate all of this in Autotools Mythbuster and update the ebook with an “easy how to” as an appendix.

Jeremy Olexa a.k.a. darkside (homepage, bugs)
I’ve been in Australia for two months (April 18, 2013, 08:05 UTC)

Well, the title says it. I’ve now been here for two months. I’m working at Skydive Maitland, which is 40 minutes from the coast and 2+ hours from Sydney. So far, I’ve broken even on my Australian travel/living expenses AND I’m skydiving 3-4 days a week; what could be better? I did 99 jumps in March, while normally I do 400 per year. Australia is pretty nice, it is easy to live here and there is plenty to see, but it is hard to get places since the country is so big and I need a few days’ break to go someplace.

How did I end up here? I knew I would go to Australia at some point during my trip since I would be passing by and it is a long way from home. (Sidenote: of all the travelers at hostels in Europe, about 40-50% that I met were Aussies.) In December, I bought my right to work in Australia by getting a working holiday visa. That required $270 and 10 minutes to fill out a form on the internet; overnight I had my approval. So, that was settled: I could now work for 12 months in Australia and show up there within a year. I knew I would be working in Australia because it is a rather expensive country to live/travel in. I thought about picking fruit in an orchard since they always hire backpackers, but skydiving sounded more fun in the end (of course!). So, in January, I emailed a few dropzones stating that I would be in Australia in the near future and looking for work. Crickets… I didn’t hear back from anyone. Fair enough, most businesses will have adequate staffing in the middle of the busy season. But, one place did get back to me some weeks later. Then, it took one Skype convo to come to a friendly agreement and I was looking for flights after. Due to some insane price scheming, there was one flight in two days that was 1/2 the price of the others (thank you That sealed my decision, and I was off…

Looking onward: full time instructor for March and April, then part time in May and June so I can see more of Australia. I have a few road trips in the works, I just need my own vehicle to make that happen. Working on it. After Australia, I’m probably going to Japan or SE Asia like I planned.

Since my sister already asked: yes, I do see kangaroos nearly every day.

April 17, 2013
Sven Vermeulen a.k.a. swift (homepage, bugs)
Simple drawing for I/O positioning (April 17, 2013, 23:00 UTC)

Instead of repeatedly trying to create an overview of the various layers involved with I/O operations within Linux on whatever white-board is in the vicinity, I decided to draw one up that I can then update as I learn more from this fascinating world. The drawing’s smaller blocks within the layers are meant to give some guidance to what is handled where, so they are definitely not complete.

So for those interested (or those that know more of it than I ever will and are prepared to help me out):


I hope it isn’t too far from the truth.

Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)
Bundling libraries for trouble (April 17, 2013, 12:01 UTC)

You might remember that I’ve been very opinionated against bundling libraries and, to a point, against static linking of libraries for Gentoo. My reasons have been mostly geared toward security, but there have been a few more instances I wrote about where bundled libraries cause problems with stability, for instance the moment when you get symbol collisions between a bundled library and a different version of said library used by one of the dependencies, like that one time in xine.

But there are other reasons why bundling is bad in most cases, especially for distributions, and it’s much worse than just statically linking everything. Unfortunately, while all the major distributions have, as far as I know, a policy against bundled (or even statically linked) libraries, there are very few people speaking against them outside of distribution circles.

One such rare gem came out of Steve McIntyre a few weeks ago, and it actually makes two different topics I wrote about meet in a quite interesting way. Steve worked on finding which software packages make use of CPU-specific assembly code for performance-critical code, which would have to be ported for the new 64-bit ARM architecture (Aarch64). And this has mostly reminded me of x32.

In many ways, there are so many problems in common between Aarch64 and x32, and they mostly gear toward the fact that in both cases you have an architecture (or ABI) that is very similar to a known, well-understood architecture but is not identical. The biggest difference, apart from the implementations themselves, is in the way the two have been conceived: as I said before, Intel’s public documentation for the ABI’s inception noted explicitly that it was designed for closed systems, rather than open ones (the definition of open or closed system has nothing to do with open- or closed-source software, and has to be found more in the expectations of what the users will be able to add to the system). The recent stretching of x32 onto open system environments is, in my opinion, not really a positive thing, but if that’s what people want …

I think Steve’s report is worth a read for those who are interested to see what it takes to introduce a new architecture (or ABI). It is also worth it for those who maintained before that my complaining about x32 breaking assembly code all over the place was a moot point — people with a clue on how GCC works know that sometimes you cannot get away with its optimizations, and you actually need to handwrite code; at the same time, as Steve noted, sometimes the handwritten code is so bad that you should drop it and move back to plain compiled C.

There is also a visible amount of software where the handwritten assembly gets imported due to bundling and direct inclusion… this tends to be relatively common because handwritten assembly is usually tied to performance-critical code… which for many is the same code they bundle because a dynamic link is “not fast enough” — I disagree.

So anyway, give a read to Steve’s report, and then compare with some of the points made in my series of x32-related articles and tell me if I was completely wrong.

April 16, 2013
Jeremy Olexa a.k.a. darkside (homepage, bugs)
Sri Lanka in February (April 16, 2013, 06:16 UTC)

I wrote about how I ended up in Sri Lanka in my last post, here. I came down with a GI sickness during my second week, from a bad meal or water, and it spoiled the last week that I was there; but I had my own room, a bathroom, a good book, and a resort on the beach. Overall, the first week was fun: teaching English, living in a small village and being immersed in the culture while staying with a host family. Hats off to volunteers that can live there long term. I was craving “western culture” after a short time. I didn’t see as much as I wanted to, like the wild elephants, Buddhist temples or surf lessons. There will be other places or times to do that stuff though.

Sri Lanka pics

April 15, 2013
Paweł Hajdan, Jr. a.k.a. phajdan.jr (homepage, bugs)

It is now over a week since the announcement of Blink, a new rendering engine for the Chromium project.

I hope it is useful to provide links to the best articles about it, the ones with good, technical content.

Thoughts on Blink from HTML5 Test is a good summary of the history of Chrome and WebKit, and it puts this recent announcement in context. For even more context (nothing about Blink), you can read Paul Irish's excellent WebKit for Developers post.

Peter-Paul Koch (probably best known for has good articles about Blink: Blink and Blinkbait.

I also found it interesting to read Krzysztof Kowalczyk's Thoughts on Blink.

Highly recommended Google+ posts by Chromium developers:

If you're interested in the technical details or want to participate in the discussions, why not follow blink-dev, the mailing list of the project?

Gentoo at FOSSCOMM 2013 (April 15, 2013, 19:03 UTC)

What? FOSSCOMM 2013

Free and Open Source Software COMmunities Meeting (FOSSCOMM) 2013

When? 20th, April 2013 - 21st, April 2013

Where? Harokopio University, Athens, Greece


FOSSCOMM 2013 is almost here, and Gentoo will be there!

We will have a booth with Gentoo promo stuff, stickers, flyers, badges, live DVDs and much more! Whether you're a developer, a user, or simply curious, be sure to stop by. We are also going to represent Gentoo in a round table with other FOSS communities. See you there!

Pavlos Ratis contributed the draft for this announcement.

Rolling out systemd (April 15, 2013, 10:43 UTC)


We started to roll out systemd today.
But don’t panic! Your system will still boot with openrc and everything is expected to be working without troubles.
We are aiming to support both init systems, at least for some time (a long time, I believe), and having systemd replace udev (note: systemd is a superset of udev) is a good way to make systemd users happy in Sabayon land. From my testing, the slowest part of the boot is now the genkernel initramfs, in particular the modules autoload code which, as you may expect, I’m going to try to improve.

Please note that we are not willing to accept systemd bugs yet, because we’re still fixing up service units and adding the missing ones, the live media scripts haven’t been migrated and the installer is not systemd aware. So, please be patient ;-)

Having said this, if you are brave enough to test systemd out, you’re lucky: in Sabayon, it’s just two commands away, thanks to eselect-sysvinit and eselect-settingsd. And since I expect those brave people to know how to use eselect, I won’t waste more time on them now.

April 14, 2013
Andreas K. Hüttel a.k.a. dilfridge (homepage, bugs)

For a long time, I've been extraordinarily happy with both NVIDIA graphics hardware and the vendor-supplied binary drivers. Functionality, stability, speed. However, things are changing and I'm frustrated. Let me tell you why.

Part of my job is to do teaching and presentations. I have a trusty thinkpad with a VGA output which can in principle supply about every projector with a decent signal. Most of these projectors do not display the native 1920x1200 resolution of the built-in display. This means that if you configure the second display to clone the first, you will end up seeing only part of the screen. In the past, I solved this by using nvidia-settings, setting the display to a lower resolution supported by the projector (nvidia-settings told me which ones I could use) and then letting it clone things. Not so elegant, but everything worked fine - and this amount of fiddling is still something that can be done in the front of a seminar room while someone is introducing you and the audience gets impatient.

Now consider my surprise when, suddenly after a driver upgrade, the built-in display was completely glued to the native resolution. The only setting possible: 1920x1200. The first time I saw that, I was completely clueless about what to do; starting the talk took a bit longer than expected. A simple but completely crazy workaround exists: disable the built-in display and only enable the projector output. Then your X session is displayed there and resized accordingly. You'll have to look at the silver screen while talking, but that's not such a problem. A bigger pain actually is that you may have to leave the podium in a hurry and then have no video output at all...

Now, googling. Obviously a lot of other people have the same problem as well. Hacks like this one just don't work; I've ended up with nice random screen distortions. Here's a thread on the nvidia devtalk forum from which I can quote, "The way it works now is more "correct" than the old behavior, but what the user sees is that the old way worked and the new does not." It seems nVidia now expects each application to handle any mode switching internally. My use case does not even exist from their point of view. Here's another thread, and in general users are not happy about it.

Finally, I found this link where the following reply is given: "The driver supports all of the scaling features that older drivers did, it's just that nvidia-settings hasn't yet been updated to make it easy to configure those scaling modes from the GUI." Just great.

Gentlemen, this is a serious annoyance. Please fix it. Soon. Not everyone is willing to read up on xrandr command line options and fiddle with ViewPortIn, ViewPortOut, MetaModes and other technical stuff. Especially while the audience is waiting.

April 13, 2013
Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)
So it starts, my time in Ireland (April 13, 2013, 19:58 UTC)

Today makes a full week since I survived my move to Dublin. Word’s out on who my new employer is (but as usual, since this blog is personal and should not be tied to my employer, I’m not even going to name it), and I started the introductory courses. One thing I can be sure of: I will be eating healthily and compatibly with my taste — thankfully, chicken, especially spicy chicken, seems to be available everywhere in Ireland, yai!

I have spent almost all my life in Venice, never staying away from it for long periods of time, with the exception of last year, most of which, as you probably know, I spent in Los Angeles — 2012 was a funny year like that: I had never partied for the new year before, but on 31st December 2011 I was at a friend’s place with friends, after which some of us ended up leaving at around 3am… for the first time in my life I ended up sleeping on a friend’s couch. Then it was time for my first ever week-long vacation, with the same group of friends in the Venetian Alps.

With this premise, it’s obvious that Dublin looks a bit alien to me. It helps that I’ve spent a few weeks over the past years in London, so I was already used to at least a few customs that the British and the Irish share — they probably don’t like to be reminded that they share some customs with the British, but there it goes. But it’s definitely more similar to Italy than Los Angeles is.

Funny episode of the day: I went to Boots and, after searching the aisles for a while, asked one of the workers if they kept hydrogen peroxide, which I used almost daily both in Italy and the US as a disinfectant – I cut or scrape very easily – and after being looked at in a very strange way I was informed that it is no longer possible to sell it in Ireland… I’d guess it has something to do with its use in the London bombings of ‘05. Luckily they didn’t call the police.

I have to confess though that I like the restaurants in the touristy, commercial areas better than those in the upscale modern new districts — I love Nando’s for instance, which is nowhere near Irish, but I love its spiciness (and this time around I could buy the freaking salt!). And most pubs have very good chicken too.

I still don’t have a permanent place though. I need to look into one soonish, I suppose, but the job introduction took priority for the moment. Then again, if the guests in the next apartment throw another party at 4.30am, I might decide to find something sooner rather than later.

Andreas K. Hüttel a.k.a. dilfridge (homepage, bugs)

GnuPG is an excellent tool for encryption and signing; however, while breaking encryption or forging signatures of large key size is likely somewhere between painful and impossible even for agencies on a significant budget, all this is only ever as safe as your private key. Let's insert the obvious semi-relevant xkcd reference here, but someone hacking your computer, installing a keylogger and grabbing the key file is more likely. While there are no preventive measures that work against all conceivable attacks, you can at least make things as hard as possible. Be smart, use a smartcard. You'll get a number of additional bonuses along the way. I'm writing up my personal experiences here, as a kind of guide. Also, I am picking a compromise between ultra-security and convenience. Please do not complain if you find guides on the web on how to do things "better".

The smart cards

Obviously, you will need one or more OpenPGP-compatible smart cards and a reader device. I ordered my cards from kernel concepts since that shop is referred to in the GnuPG smartcard howto. These are the cards developed by g10code, which is Werner Koch's company (he is the principal author of GnuPG). The website says "2048bit RSA capable", the text printed on the card says "3072bit RSA capable", but at least the currently sold cards support 4096bit RSA keys just fine. (You will need at least app-crypt/gnupg-2.0.19-r2 for encryption keys bigger than 3072bit, see this link and this portage commit.)

The readers

While the GnuPG smartcard howto provides a list of supported reader devices, that list (and indeed the whole document) is a bit stale. The best source of information that I found was the page on the Debian Wiki; Yutaka Niibe, who edits that page regularly, is also one of the contributors to the smartcard code in GnuPG. In general there are two types of readers, those with a stand-alone pinpad and those without. The extra pinpad ensures that, for normal operations like signing and encryption, the PIN for unlocking the keys never enters the computer itself, so without tampering with the reader hardware it is pretty hard to sniff. I bought a SCM SPG532 reader, one of the first devices ever supported by GnuPG; however, it's not produced anymore and you may have to resort to newer models soon.

Drivers and software

Now, you'll want to activate the USE flag "smartcard" and maybe "pkcs11", and rebuild app-crypt/gnupg. Afterwards, you may want to log out and back in again, since you may need the gpg-agent from the new emerge.
Several different standards for card reader access exist. One in particular is the USB standard for integrated circuit card interface devices, CCID for short; the driver for that one is built directly into GnuPG, and the SCM SPG532 is such a device. Another set of drivers is provided by sys-apps/pcsc-lite; that will be used by GnuPG if the built-in support fails, but it requires a daemon to be running (pcscd; just add it to the default runlevel and start it). The page on the Debian Wiki also lists the required drivers.
These drivers need little (or no) configuration and should in principle work out of the box. Testing is easy: plug in the reader, insert a card, and issue the command
gpg --card-status
If it works, you should see a message with (among other things) the manufacturer and serial number of your card. Otherwise, you'll just get an uninformative error. The first thing to check then (especially for CCID) is whether the device permissions are OK; just repeat the above test as root. If you can now see your card, you know you have permission trouble.
Fiddling with the device file permissions was a serious pain, since all the online docs are hopelessly outdated. Please forget about the files linked in the GnuPG smartcard howto. (One cannot be found anymore, the other does not work on its own and tries to do things in unnecessarily complicated ways.) At some point I just gave up on things like user groups and told udev to hardwire the device to my user account: I created the following file as /etc/udev/rules.d/gnupg-ccid.rules:
ACTION=="add", SUBSYSTEM=="usb", ENV{PRODUCT}=="4e6/e003/*", OWNER:="huettel", MODE:="600"
ACTION=="add", SUBSYSTEM=="usb", ENV{PRODUCT}=="4e6/5115/*", OWNER:="huettel", MODE:="600"
With similar settings it should in principle be possible to solve all the permission problems. (You will want to change the USB IDs and the OWNER for your needs.) Then, a quick
udevadm control --reload-rules
followed by unplugging and re-plugging the reader. Now you should be able to check the contents of your card.
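If hardwiring the device to a single account feels too blunt, a group-based variant of the same rules should also work; this is an untested sketch (the "usb" group name is my example; create the group and add your user to it):

```
ACTION=="add", SUBSYSTEM=="usb", ENV{PRODUCT}=="4e6/e003/*", GROUP:="usb", MODE:="660"
ACTION=="add", SUBSYSTEM=="usb", ENV{PRODUCT}=="4e6/5115/*", GROUP:="usb", MODE:="660"
```

The same udevadm reload and re-plug dance applies after editing the rules.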
If you still have problems, check the following: for accessing the cards, GnuPG starts a background process, the smart card daemon (scdaemon). scdaemon tends to hang every now and then after a card is removed. Just kill it (you need SIGKILL):
killall -9 scdaemon
and try accessing the card again afterwards; the daemon is re-started by GnuPG automatically. A lot of improvements in smart card handling are scheduled for gnupg-2.0.20; I hope this will be fixed as well.
Here's what a successful card-status command looks like on a blank card:
huettel@pinacolada ~ $ gpg --card-status
Application ID ...: D276000124010200000500000AFA0000
Version ..........: 2.0
Manufacturer .....: ZeitControl
Serial number ....: 00000AFA
Name of cardholder: [not set]
Language prefs ...: de
Sex ..............: unspecified
URL of public key : [not set]
Login data .......: [not set]
Signature PIN ....: forced
Key attributes ...: 2048R 2048R 2048R
Max. PIN lengths .: 32 32 32
PIN retry counter : 3 0 3
Signature counter : 0
Signature key ....: [none]
Encryption key....: [none]
Authentication key: [none]
General key info..: [none]
huettel@pinacolada ~ $

That's it for now, part 2 will be about setting up the basic card data and gnupg functions, then we'll eventually proceed to ssh and pam...

April 11, 2013
Arun Raghavan a.k.a. ford_prefect (homepage, bugs)
PulseAudio in GSoC 2013 (April 11, 2013, 11:34 UTC)

That’s right — PulseAudio will be participating in the Google Summer of Code again this year! We had a great set of students and projects last year, and you’ve already seen some of their work in the last release.

There are some more details on how to get involved on the mailing list. We’re looking forward to having another set of smart and enthusiastic new contributors this year!

p.s.: Mentors and students from organisations (GStreamer and BlueZ, for example), do feel free to get in touch with us if you have ideas for projects related to PulseAudio that overlap with those other projects.

April 10, 2013
Patrick Lauer a.k.a. bonsaikitten (homepage, bugs)
GCC 4.8 - building everything? (April 10, 2013, 13:49 UTC)

The last few days I've spent a few hundred CPU-hours building things with gcc 4.8. So far, alphabetically up to app-office/, it's been really boring.
The number of failing packages is definitely lower than with 4.6 or 4.7, and most of the current troubles are unrelated - for example the whole info page generation madness.
At the current rate of filing and fixing bugs we should be able to unleash this new version on the masses really soon - maybe in about a month? (Or am I just too optimistic?)

Denis Dupeyron a.k.a. calchan (homepage, bugs)
Forking ebuilds (April 10, 2013, 00:14 UTC)

Here’s a response to an email thread I sent recently. This was on a private alias but I’m not exposing the context or quoting anybody, so I’m not leaking anything but my own opinion which has no reason to be secret.

GLEP 39 explicitly states that projects can be competing. I don’t see how you can exclude competing ebuilds from that, since nothing prevents anybody from starting a project dedicated to maintaining an ebuild.

So, if you want to prevent devs from pushing competing ebuilds to the tree you have to change GLEP 39 first. No arguing or “hey all, hear my opinion” emails on whatever list will be able to change that.

Some are against forking ebuilds, objecting that it duplicates effort and that we lack manpower. I will bluntly declare those people shortsighted. Territoriality is exactly what prevents us from getting more manpower. I’m interested in improving package X, but developer A who maintains it is an ass and won’t yield on anything. At best I’ll just fork it in an overlay (with all the issues that having a package in an overlay entails, i.e. no QA, it’ll die pretty quickly, etc…), at worst I’m moving to Arch, or Exherbo, or elsewhere… What have we gained by not duplicating effort? We have gained negative manpower.

As long as forked ebuilds can cohabit peacefully in the tree, using, say, a virtual (note: not talking about the devs here but about the packages), we should see them as progress. Gentoo is about choice. Let consumers, i.e. users and devs depending on the ebuild in various ways, have that choice. They’ll quickly make it known which one is best, at which point the failing ebuild will just die by itself. Let me say it again: Gentoo is about choice.

If it ever happened that devs of forked ebuilds could not cohabit peacefully on our lists or channels, then I would consider that a deliberate intention of not cooperating. As with any deliberate transgression of our rules if I were devrel lead right now I would simply retire all involved developers on the spot without warning. Note the use of the word “deliberate” here. It is important we allow devs to make mistakes, even encourage it. But we are adults. If one of us knowingly chooses to not play by the rules he or she should not be allowed to play. “Do not be an ass” is one of those rules. We’ve been there before with great success and it looks like we are going to have to go there again soon.

There you have it. You can start sending me your hate mail in 3… 2… 1…

April 09, 2013
Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)
So there, I'm in Ireland (April 09, 2013, 21:50 UTC)

Just wanted to let everybody know that I’m in Ireland: I landed at Dublin Airport on Saturday, and have been roaming around the city for a few days now. Time seems to be running faster than usual, so I haven’t had much time to work on Gentoo stuff.

My current plan is to work, by the end of the week, on a testing VM, as there’s an LVM2 bug that I owe Enrico a fix for, and possibly on the Autotools Mythbuster guide as well; there’s work to do there.

But today, I’m a bit too tired to keep going, it’s 11pm… I’ll doze off!

April 08, 2013
Fabio Erculiani a.k.a. lxnay (homepage, bugs)
What’s cookin’ on the BBQ (April 08, 2013, 16:27 UTC)

While Spring has yet to come here, the rainy days are giving me some time to think about the future of Sabayon and summarize what’s been done during the last months.


As far as I can see, donations are going surprisingly well. The foundation now has enough money (see the campaign) to guarantee 24/7 operations, new hardware purchases and travel expenses for several months. Of course, the more the better (paranoia mode on), but I cannot really complain, given that it’s our sole source of funds. Here is a list of stuff we’ve been able to buy during the last year (including prices; we’re in the EU, prices in the US are much lower, sigh):

  • one Odroid X2 (for Sabayon on ARM experiments) – 131€
  • one PandaBoard ES (for Sabayon on ARM experiments) – 160€
  • two 2TB Seagate Barracuda HDDs (one for Joost’s experiments, one for the Entropy tinderbox) – 185€
  • two 480GB Vertex3 OCZ SSDs for the Entropy tinderbox (running together with the Samsung 830 SSDs in a LVM setup) – 900€
  • one Asus PIKE 2008 SAS controller for the Entropy tinderbox – 300€
  • another 16GB of DDR3 for the Entropy tinderbox (now running with 64GB) – 128€
  • @ maintenance (33€/mo for 1 year) – 396€
  • my personal FOSDEM 2013 travel expenses – 155€

Plus, travel expenses to data centers whenever there is a problem that cannot be fixed remotely. That’s more or less from 40€ to 60€ each depending on the physical distance.
As you may understand, this is just a part of the “costs”, because the time donated by individual developers is not accounted there, and I believe that it’s much more important than a piece of silicon.

monthly releases, entropy

Besides the money part, I spent the past months on Sabayon 11 (of course) and on advancing the automation agenda for 2013. Ideally, I would like to have stable releases automatically produced and tested monthly, and eventually pushed to mirrors. This required me to migrate to a different bittorrent tracker, one that scrapes a directory containing .torrents and publishes them automatically: you can see the outcome for yourself. Furthermore, a first, yet not advertised, set of monthly ISO images is available on our mirrors in the iso/monthly/ sub-directory. You can read more about them here. This may (eheh) indicate that the next Sabayon release will be versioned something like 13.05, who knows…
On the Entropy side, not much has changed, besides the usual set of bug fixes, little improvements and the migration to an .ini-like syntax for the repository configuration files of both the Entropy Server and Client modules, see here. You may start realizing that all the good things I do are communicated through the devel mailing list.

leh systemd

I spent a week working on a Sabayon systemd system to see how it works and performs compared to openrc. Long story short, I am about to arrange some ideas on making the systemd migration come true at some point in the (near) future. Joost and I are experimenting with a private Entropy repository (thus chroot) that’s been migrated from openrc to systemd. While I don’t want to start yet another flamewar about openrc vs systemd, I do believe in science, facts and benchmarks. Even though I don’t really like the vertical architecture of systemd, I am starting to appreciate its features and, most importantly, its performance. The first thing I would like to sort out is being able to switch between systemd and openrc at runtime; this may involve the creation of an eselect module (trivial) and patching some ebuilds. I think that’s the best thing to do if we really want to design and deploy a migration path for current openrc users (I would like to remind people that Gentoo is about choice, after all). If you’re a Gentoo developer that hasn’t been bugged by me yet, feel free to drop a line to lxnay@g.o (expand the domain, duh!) if you’re interested.

April 07, 2013
Michal Hrusecky a.k.a. miska (homepage, bugs)
FOSDEM 2013 & etc-update (April 07, 2013, 16:00 UTC)



I started writing this post after FOSDEM, but never actually managed to finish it. But as I plan to blog about something again “soon”, I wanted to get this one out first. So let’s start with FOSDEM – it is an awesome event, and every open source hacker is there unless he has some really huge reason not to come (like being dead, in prison or locked down in psychiatric care). I was there together with a bunch of openSUSE/SUSE folks. It was a lot of fun and we even managed to get some work done during the event. So how was it?


We had a lot of fun on the way there already. You know, every year we rent a bus just for us and we go from Nuremberg to Brussels and back all together. And we talk and drink and talk and drink some more… So although it sounds crazy – an 8-hour drive – it’s not as bad as it sounds.


What the hack is etc-update and what does it have to do with me, openSUSE or FOSDEM? Isn’t it a Gentoo tool? Yes, it is. It is a Gentoo tool (actually part of portage, the Gentoo package manager) that is used in Gentoo to merge updates to configuration files. When you install a package, portage is not going to overwrite the configuration files that you have spent days and nights tuning. It will create a new file with the new upstream configuration, and it is up to you to merge them. But you know, rpm does the same thing. In almost all cases rpm is not going to overwrite your configuration file, but will install the new one as config_file.rpmnew. And it is up to you to merge the changes. But that’s not fun: searching for all the files, comparing them manually, and choosing what to merge and how. And here comes etc-update to the rescue ;-)
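The manual hunt it saves you from can be sketched in one line of shell (searching /etc is an example; rpm can also leave .rpmsave files behind):

```shell
# List pending configuration updates that rpm left next to the live files
find /etc -name '*.rpmnew' -o -name '*.rpmsave'
```

etc-update essentially walks this kind of list for you and offers to diff and merge each hit.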

How does it work? Simple. You need to install it (I’ll speak about that later) and run it. It’s a command line tool and it doesn’t need any special parameters. All you need to do is run etc-update as root (to actually be able to do something with these files). And the result?

# etc-update 
Scanning Configuration files...
The following is the list of files which need updating, each
configuration file is followed by a list of possible replacement files.
1) /etc/camsource.conf (1)
2) /etc/ntp.conf (1)
Please select a file to edit by entering the corresponding number.
              (don't use -3, -5, -7 or -9 if you're unsure what to do)
              (-1 to exit) (-3 to auto merge all files)
                           (-5 to auto-merge AND not use 'mv -i')
                           (-7 to discard all updates)
                           (-9 to discard all updates AND not use 'rm -i'):

What I usually do is that I select configuration files I do care about, review changes and merge them somehow and later just use -5 for everything else. It looks really simple, doesn’t it? And in fact it is!

Somebody visiting our openSUSE booth at FOSDEM asked how to merge updates of configuration files. When I learned that from Richard, we talked a little bit about how easy it would be to do something like that, and later, during one of the less interesting talks, I took this Gentoo tool, patched it to work on rpm distributions, and packaged it. Now it is in Factory and it will be part of openSUSE 13.1 ;-) If you want to try it, you can get it either from my home project – home:-miska-:arm (even for x86 ;-) ) – or from the utilities repository.

Hope you will like it and that it will make many sysadmins happy ;-)

April 04, 2013
Aaron W. Swenson a.k.a. titanofold (homepage, bugs)

If you’re using dev-db/postgresql-server, update now.

CVE-2013-1899 <dev-db/postgresql-server-{9.2.4,9.1.9,9.0.13}
A connection request containing a database name that begins
with "-" may be crafted to damage or destroy files within a server's data directory.

CVE-2013-1900 <dev-db/postgresql-server-{9.2.4,9.1.9,9.0.13,8.4.17}
Random numbers generated by contrib/pgcrypto functions may be easy for another
database user to guess

CVE-2013-1901 <dev-db/postgresql-server-{9.2.4,9.1.9}
An unprivileged user can run commands that could interfere with in-progress backups.

April 03, 2013
Matthew Thode a.k.a. prometheanfire (homepage, bugs)


  1. Keep in mind that ZFS on Linux is supported upstream, for differing values of support
  2. I do not care much for hibernate, normal suspending works.
  3. This is for a laptop/desktop, so I choose multilib.
  4. If you patch the kernel to add in ZFS support directly, you cannot share the binary; the CDDL and GPL-2 are not compatible in that way.


Make sure your installation media supports zfs on linux and installing whatever bootloader is required (uefi needs media that supports it as well). I uploaded an iso that works for me at this link. Live DVDs newer than 12.1 should also have support, but the previous link has the stable version of zfsonlinux. If you need to install the bootloader via uefi, you can use one of the latest Fedora CDs, though the gentoo media should be getting support 'soon'. You can install your system normally up until the formatting begins.


I will be assuming the following.

  1. /boot on /dev/sda1
  2. cryptroot on /dev/sda2
  3. swap inside cryptroot OR not used.

When using GPT for partitioning, create the first partition at 1M, just to make sure you are on a sector boundary. Most newer drives are 4k advanced-format drives; because of this you need ashift=12 (some/most newer SSDs need ashift=13). Setting compression to lz4 will make your system incompatible with upstream (Oracle) zfs; if you want to stay compatible, just set compression=on instead.
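As a sanity check on those numbers: ashift is the base-2 logarithm of the sector size the pool will use, which you can verify with plain shell arithmetic:

```shell
# ashift is log2 of the sector size:
# 9 -> 512-byte legacy disks, 12 -> 4k advanced-format drives, 13 -> 8k SSDs
for ashift in 9 12 13; do
  echo "ashift=$ashift gives $((1 << ashift)) byte sectors"
done
```

An ashift that is too small cannot be changed after pool creation, so it pays to check your drive's physical sector size before running zpool create.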

General Setup

#setup encrypted partition
cryptsetup luksFormat -l 512 -c aes-xts-plain64 -h sha512 /dev/sda2
cryptsetup luksOpen /dev/sda2 cryptroot

#setup ZFS
zpool create -f -o ashift=12 -o cachefile=/tmp/zpool.cache -O normalization=formD -m none -R /mnt/gentoo rpool /dev/mapper/cryptroot
zfs create -o mountpoint=none -o compression=lz4 rpool/ROOT
zfs create -o mountpoint=/ rpool/ROOT/rootfs
zfs create -o mountpoint=/opt rpool/ROOT/rootfs/OPT
zfs create -o mountpoint=/usr rpool/ROOT/rootfs/USR
zfs create -o mountpoint=/var rpool/ROOT/rootfs/VAR
zfs create -o mountpoint=none rpool/GENTOO
zfs create -o mountpoint=/usr/portage rpool/GENTOO/portage
zfs create -o mountpoint=/usr/portage/distfiles -o compression=off rpool/GENTOO/distfiles
zfs create -o mountpoint=/usr/portage/packages -o compression=off rpool/GENTOO/packages
zfs create -o mountpoint=/home rpool/HOME
zfs create -o mountpoint=/root rpool/HOME/root

cd /mnt/gentoo

#Download the latest stage3 and extract it.
tar -xf /mnt/gentoo/stage3-amd64-hardened-*.tar.bz2 -C /mnt/gentoo

#get the latest portage tree
emerge --sync

#copy the zfs cache from the live system to the chroot
mkdir -p /mnt/gentoo/etc/zfs
cp /tmp/zpool.cache /mnt/gentoo/etc/zfs/zpool.cache

Kernel Config

If you are compiling the modules into the kernel statically, then keep these things in mind.

  • When configuring the kernel, make sure that CONFIG_SPL and CONFIG_ZFS are set to 'Y'.
  • Portage will want to install sys-kernel/spl when emerge sys-fs/zfs is run because of dependencies. Also, sys-kernel/spl is still necessary to make the sys-fs/zfs configure script happy.
  • You do not need to run or install module-rebuild.
  • There have been some updates to the kernel/userspace ioctl since 0.6.0-rc9 was tagged.
    • An issue occurs if newer userland utilities are used with older kernel modules.

Install as normal up until the kernel install.

echo "=sys-kernel/genkernel-3.4.40 ~amd64       #needed for zfs and encryption support" >> /etc/portage/package.accept_keywords
emerge sys-kernel/genkernel
emerge sys-kernel/gentoo-sources                #or hardened-sources

#patch the kernel

#If you want to build the modules into the kernel directly, you will need to patch the kernel directly.  Otherwise, skip the patch commands.
env EXTRA_ECONF='--enable-linux-builtin' ebuild /usr/portage/sys-kernel/spl/spl-0.6.1.ebuild clean configure
(cd /var/tmp/portage/sys-kernel/spl-0.6.1/work/spl-0.6.1 && ./copy-builtin /usr/src/linux)
env EXTRA_ECONF='--with-spl=/usr/src/linux --enable-linux-builtin' ebuild /usr/portage/sys-fs/zfs-kmod/zfs-kmod-0.6.1.ebuild clean configure
(cd /var/tmp/portage/sys-fs/zfs-kmod-0.6.1/work/zfs-zfs-0.6.1/ && ./copy-builtin /usr/src/linux)
mkdir -p /etc/portage/profile
echo 'sys-fs/zfs -kernel-builtin' >> /etc/portage/profile/package.use.mask
echo 'sys-fs/zfs kernel-builtin' >> /etc/portage/package.use

#finish configuring, building and installing the kernel making sure to enable dm-crypt support

#if not building zfs into the kernel, install module-rebuild
emerge module-rebuild

#install SPL and ZFS stuff zfs pulls in spl automatically
mkdir -p /etc/portage/profile                                                   
echo 'sys-fs/zfs -kernel-builtin' >> /etc/portage/profile/package.use.mask      
echo 'sys-fs/zfs kernel-builtin' >> /etc/portage/package.use                    
emerge sys-fs/zfs

# Add zfs to the correct runlevels
rc-update add zfs boot
rc-update add zfs-shutdown shutdown

#initrd creation, add '--callback="module-rebuild rebuild"' to the options if not building the modules into the kernel
genkernel --luks --zfs --disklabel initramfs

Finish installing as normal. Your kernel line should look like this, and you should also have the initrd defined.

#kernel line for grub2, libzfs support is not needed in grub2 because you are not mounting the filesystem directly.
linux  /kernel-3.5.0-gentoo real_root=ZFS=rpool/ROOT/rootfs crypt_root=/dev/sda2 dozfs=force ro
initrd /initramfs-genkernel-x86_64-3.5.0

In /etc/fstab, make sure BOOT, ROOT and SWAP lines are commented out and finish the install.

You should now have a working encrypted zfs install.

April 02, 2013
Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)
The WebP experiment (April 02, 2013, 17:58 UTC)

You might have noticed over the last few days that my blog underwent some surgery, and in particular that even now, on some browsers, the home page does not really look all that good. In particular, I’ve removed all but one of the background images and replaced them with CSS3 linear gradients. Users browsing the site with the latest version of Chrome, or with Firefox, will have no problem and will see a “shinier” and faster website; others will see something “flatter”. I’m debating whether I want to provide them with a better-looking fallback or not; for now, not.
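For reference, the standardized CSS3 syntax in question is a single declaration; the colors here are made up, not the blog’s actual palette:

```css
/* one standard declaration, no -moz-/-webkit- prefixed duplicates */
background-image: linear-gradient(to bottom, #ffffff, #dddddd);
```

Browsers that don’t understand the unprefixed form simply ignore the declaration, which is exactly the “flatter” fallback described above.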

But this was also a plan B — the original plan I had in mind was to leverage HTTP content negotiation to provide WebP variants of the images of the website. This was a win-win situation because, ludicrous as it was when WebP was announced, it turns out that with its dual-mode, lossy and lossless, it can in one case or the other outperform both PNG and JPEG without a substantial loss of quality. In particular, lossless behaves like a charm with “art” images, such as the CC logos, or my diagrams, while lossy works great for logos, like the Autotools Mythbuster one you see on the sidebar, or the (previous) gradient images you’d see on backgrounds.

So my obvious instinct was to set up content negotiation — I’ve used it before for multiple-language websites, I expected it to work for multiple times as well, as it’s designed to… but after setting all up, it turns out that most modern web browsers still do not support WebP at all… and they don’t handle content negotiation as intended. For this to work we need either of two options.

The first, and best, option would be for browsers to only Accept the image formats they support, or at least prefer them — this is what Opera for Android does: Accept: text/html, application/xml;q=0.9, application/xhtml+xml, multipart/mixed, image/png, image/webp, image/jpeg, image/gif, image/x-xbitmap, */*;q=0.1 — but that seems to be the only browser doing it properly. In particular, in this listing you’ll see that it supports PNG, WebP, JPEG, GIF and bitmap — and then it accepts whatever else with a lower preference. If WebP were not in the list, even if it had a higher preference on the server, it would not be sent to the client. Unfortunately, this is not going to work, as most browsers send Accept: */* without explicitly providing the list of supported image formats. This includes Safari, Chrome, and MSIE.
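Because of that, the workaround most people fall back on is sniffing the Accept header server-side for the few clients that do advertise image/webp; a hypothetical Apache mod_rewrite sketch (the file layout, with pre-generated .webp siblings next to the originals, is my assumption):

```apache
RewriteEngine On
# client explicitly advertises WebP support...
RewriteCond %{HTTP_ACCEPT} image/webp
# ...and a pre-generated .webp sibling exists on disk
RewriteCond %{DOCUMENT_ROOT}/$1.webp -f
RewriteRule ^(.+)\.(?:png|jpe?g)$ /$1.webp [T=image/webp,L]
```

Of course this only helps with browsers that send the header, which, as noted, is exactly the problem.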

Point of interest: Firefox does explicitly prefer one image format over the others: PNG.

The other alternative is for the server to default to the “classic” image formats (PNG, JPEG, GIF) and expect browsers that support WebP to prioritize it over the other image formats. Again, this is not the case: as shown above, Opera lists it but does not prioritize it, and Firefox prioritizes PNG over anything else and makes no special exception for WebP.

Issues are open at both Chrome and Mozilla to improve the support, but the fixes haven’t reached mainstream yet. Google’s own suggested solution is to use mod_pagespeed instead — but this module – which I already named in passing in my post about unfriendly projects – is doing something else: it changes the provided content on the fly, based on the reported User-Agent.

Given that I’ve spent some time on user agents, I would say I have the experience to call this a huge Pandora’s vase. If I have trouble with some low-development browsers reporting themselves as Chrome to fake their way into sites that check the user agent field in JavaScript, you can guess how many of those are going to actually support the features that PageSpeed thinks they support.

I’m going to come back to PageSpeed in another post; for now I’ll just say that WebP has the numbers to become the next generation format out there, but unless browser developers, as well as web app developers, start to get their act together, we’re going to have hacks over hacks over hacks for years to come… Currently, my blog is using a CSS3 feature with the standardized syntax — not all browsers understand it, and they’ll see a flat website without gradients; I don’t care and I won’t start adding workarounds for that just because (although I might use SCSS, which will fix it for Safari)… new browsers will fix the problem, so just upgrade, or use a sane browser.

I’m a content publisher, whether I like it or not. This blog is relatively well followed, and I write quite a lot in it. While my hosting provider does not give me grief for my bandwidth usage, optimizing it is something I’m always keen on, especially since I have been Slashdotted once before. This is one of the reasons why my ModSecurity Ruleset validates and filters crawlers as much as spammers.

Blogs’ feeds, be they RSS or Atom (this blog only supports the latter), are a very neat way to optimize bandwidth: they get you the content of the articles without styles, scripts or images. But they can also be quite big. The average feed for my blog’s articles is 100KiB, which is a fairly big page if you consider that feed readers are supposed to keep polling the blog to check for new items. Luckily for everybody, the authors of HTTP did consider this problem, and solved it with two main features: conditional requests and compressed responses.

Okay, there’s a sense of déjà-vu in all of this, because I already complained about software not using these features even when it’s designed to monitor web pages constantly.

By using conditional requests, even if you poke my blog every fifteen minutes, you won’t use more than 10KiB an hour, if no new article has been posted. By using compressed responses, instead of a 100KiB response you’ll just have to download 33KiB. With Google Reader, things were even better: instead of 113 requests for the feed, a single request was made by the FeedFetcher, and that was it.
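The compression figure is easy to reproduce: a feed is repetitive XML, and gzip shrinks that kind of content to a fraction of its size. A quick sketch with made-up entries:

```shell
#!/bin/sh
# Rough demonstration of why compressed responses matter for feeds:
# the entries below are fabricated, but like any Atom feed they are
# repetitive XML, which gzip compresses very well.
feed=$(for i in $(seq 1 50); do
    printf '<entry><title>Post %s</title><content>Lorem ipsum dolor sit amet</content></entry>\n' "$i"
done)

orig=$(printf '%s' "$feed" | wc -c | tr -d ' ')
gz=$(printf '%s' "$feed" | gzip -9 | wc -c | tr -d ' ')
echo "original: $orig bytes, gzipped: $gz bytes"
```

On a real feed the ratio is close to the 100KiB-to-33KiB I see on my own Atom feed; on this toy input it is even more dramatic, since the entries are near-identical.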

But now Google Reader is no more (almost). What happens now? Well, of the 113 subscribers, a few will most likely not re-subscribe to my blog at all. Others have migrated to NewsBlur (35 subscribers); the rest seem to have installed their own feed reader or aggregator, including tt-rss, ownCloud, and so on. This was obvious looking at the statistics from both AWStats and Munin, which show a higher volume of requests and delivered content compared to last month.

I’ve then decided to look into improving the bandwidth a bit more than before, among other things by providing WebP alternatives for images, but that does not really work as intended — I have enough material for a rant post or two, so I won’t discuss it now. But while doing so I found out something else.

One of the changes I made while hoping to use WebP was to serve the image files from a different domain, which means that the access log for the blog, while still not perfect, is decidedly cleaner than before. From there I noticed that a new feed reader started requesting my blog’s feed every half an hour. Without compression. In full, every time. That’s just shy of 5MiB of traffic per day, but that’s not the worst part. The worst part is that said 5MiB are for a single reader, as the requests come from a commercial, proprietary feed reader webapp.

And this is not the only one! Gwene also does the same, even though I sent a pull request to get it to use compressed responses, which hasn’t had a single reply. Even Yandex’s new product has the same issue.

While 5MiB/day is not too much taken singularly, my blog’s traffic averages 50-60 MiB/day, so that’s basically 10% of the traffic for less than 1% of the users, just because they do not follow best practices when writing web software. I’ve now added these crawlers to the list of stealth robots, which means they will receive a “406 Not Acceptable” unless they finally implement at least compressed response support (which is the easy part in all this).

This has an unfortunate implication for the users of those services who were reading me, who won’t get any new updates. If I were a commercial entity, I couldn’t afford this at all. The big problem, to me, is that with Google Reader going away, I expect more and more of these kinds of issues to crop up. Even NewsBlur, which is now my feed reader of choice, hasn’t fixed their crawlers yet, which I commented upon before — the code is open source, but I don’t want to deal with Python just yet.

Seriously, why are there so many people who expect to be able to deal with web software and yet have no idea how the web works at all? And I wonder if somebody expected this kind of fallout from the simple shut down of a relatively minor service like Google Reader.

March 31, 2013
David Abbott a.k.a. dabbott (homepage, bugs)
udev-200 interface names (March 31, 2013, 00:59 UTC)

Just updated to udev-200 and figured it was time to read the news item and deal with the Predictable Network Interface Names. I only have one network card and connect with a static IP address. It looked to me like more trouble to keep net.eth0 than to just go with the flow, paddle downstream and not fight it, so here is what I did.

First I read the news item :) then found out what my new name would be.

eselect news read
udevadm test-builtin net_id /sys/class/net/eth0 2> /dev/null

That returned enp0s25 ...

Next remove the old symlink and create the new one.

cd /etc/init.d/
rm net.eth0
ln -s net.lo net.enp0s25

I removed all the files from /etc/udev/rules.d/

Next set up /etc/conf.d/net for my static address.

# Static
routes_enp0s25="default via"

That was it, rebooted, held my breath, and everything seems just fine, YES!

enp0s25: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet  netmask  broadcast
        inet6 fe80::21c:c0ff:fe91:5798  prefixlen 64  scopeid 0x20<link>
        ether 00:1c:c0:91:57:98  txqueuelen 1000  (Ethernet)
        RX packets 3604  bytes 1310220 (1.2 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 2229  bytes 406258 (396.7 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
        device interrupt 20  memory 0xd3400000-d3420000  
lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 16436
        inet  netmask
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 0  (Local Loopback)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

I had to edit /etc/vnstat.conf and change eth0 to enp0s25. I use vnstat with conky.

rm /var/lib/vnstat/*
vnstat -u -i enp0s25

March 30, 2013
Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)

The article’s title is a play on the phrase “don’t open that door”, and makes more sense in Italian as we use the same word for ‘door’ and ‘port’…

So you left your hero (me) working on setting up a Raspberry Pi with at least a partial base of cross-compilation. The whole thing worked to a decent extent, but it wasn’t really as feasible as I hoped. Too many things, including Python, cannot cross-compile without further tricks, and the time it takes to figure out how to cross-compile them tends to be more than that needed to just wait for them to build on the board itself. I guess this is why there is so little interest in getting cross-compilation supported.

But after getting a decent root, or stage4 as you prefer to call it, I needed to get a kernel to boot the device. This wasn’t easy; there is no official configuration file published — what they tell you, if you want to build a new custom kernel, is to zcat /proc/config.gz from Raspbian. I didn’t want to use Raspbian, so I looked further. The next step is to check out the defconfig settings that the kernel repository includes; a few different ones exist.

You’d expect them to be actually thought out to enable exactly what the RaspberryPi provides, and nothing more or less. Some leeway can be expected for things like network options, but at least the “cutdown” version should not include all of IrDA, Amateur Radio, Wireless, Bluetooth, USB network, PPP, … After disabling a bunch of options, since the system I need to run will have very few devices connected – in particular, only the Davis Vantage Pro station, maybe a printer – I built the kernel and copied it over the SD card. It booted, it crashed. Kernel panicked right away, due to a pointer dereference.

After some rebuild-copy-test cycles I was able to find out what the problem was. It’s a problem that is not unique to the RPi, actually, as I found the same trace from an OMAP3 user reporting it somewhere else. The trick was disabling the (default-enabled) in-kernel debugger – which I couldn’t access anyway, as I don’t have a USB keyboard at hand right now – so that it would print the full trace of the error. That pointed at the l4_init function, which is the initialization of the Lightning 4 gameport controller — an old-style MIDI game port.

My hunch is that this expansion card is an old-style ISA card, since it does not rely on PCI structures to probe for the device — I cannot confirm it because googling for “lightning 4” only comes up with images of iPads and accessories. What it does is simply poke at the 0x201 address, and the moment it does, you get a bad dereference from the kernel exactly at that address. I’ve sent a (broken, unfortunately) patch to the LKML to see if there is an easy way to solve this.

To be honest and clear, if you just take a defconfig and build it exactly as-is, you won’t hit that problem. The problem happens to me because in this kernel, like in almost every other one I build, I do one particular thing: I disable modules, so that I get a single, statically built kernel. This in turn means that all the drivers are initialized when you start the kernel, and the moment the L4 driver is started, it crashes the kernel. Possibly it’s not the only one.

This is most likely not strictly limited to the RaspberryPi but it doesn’t help that there is no working minimal configuration – mine is, by the way, available here – and I’m pretty sure there are other similar situations even when the arch is x86… I guess it’s just a matter of reporting them when you encounter them.

Flattr for comments (March 30, 2013, 08:27 UTC)

You probably know already that my blog is using Flattr for micro-donation, both to the blog as a whole and to the single articles posted here. For those who don’t know, Flattr is a microdonation platform that splits a monthly budget into equal parts to share with your content creators of choice.

I’ve been using, and musing about, Flattr for a while, and sometimes I ranted a little about how things have been moving in their camp. One of the biggest problems with the service is its relatively scarce adoption. I’ve got a ton of “pending flattrs” as described on their blog, mostly for Twitter and Flickr users.

Ramping up adoption of the service is key for it to be useful for both content creators and consumers: the former can only get something out of the system if their content is liked by enough people, and the latter will only care about adding money to the system if they find great content to donate to. Or if they use Socialvest to get the money while they spend it somewhere else.

So last night I did my part in trying to increase the usefulness of Flattr: I added it to the comments of my blog. If you leave a comment and fill in the email field, that email will be used, hashed, to create a new “thing” on Flattr, whether you’re already registered or not — if you’re not registered, the thing will be kept pending until you register and associate the email address. This is not much different from what I’ve been doing already with gravatar, which uses the same method (the hashed email address).
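The hashing is the same trick gravatar uses: the email address, normalized, run through MD5. A sketch with an obviously fake address:

```shell
#!/bin/sh
# Gravatar-style email hashing: trim whitespace, lowercase, MD5.
# The address is a made-up example, not a real commenter's.
email='  '
hash=$(printf '%s' "$email" | tr -d ' ' | tr 'A-Z' 'a-z' | md5sum | cut -d' ' -f1)
echo "$hash"
```

The resulting 32-character hex digest is what gets sent to Flattr (or gravatar) instead of the address itself, so the email never appears in the page source.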

Even though the parameters needed to integrate Flattr for comments are described in the partnership interface, there doesn’t seem to be a need to be registered as a partner – indeed you can see in the pages’ sources that there is no revenue key present – and assuming you are already loading the Flattr script for your articles’ buttons, all you have to add is the following code to the comment template (for Typo; other languages and engines will differ slightly, of course!):

<% if != "" -%>
  <div class="comment_flattr right">
    <a class="FlattrButton" style="display:none;"
       title="Comment on <%= comment.article.title %>"
       data-flattr-tags="text, comment"
       data-flattr-owner="email:<%= Digest::MD5.hexdigest( %>"
       href="<%= comment.article.permalink_url %>#comment-<%= %>"></a>
  </div>
<% end -%>

So if I’m not making money with the partner site idea, why am I bothering with adding these extra buttons? Well, I often had people help me out a lot in comments, pointing out obvious mistakes I made or things I missed… and I’d like to be able to easily thank the commenters when they help me out… and now I can. Also, since this requires a valid email field, I hope for more people to fill it in, so that I can contact them if I want to ask or tell them something in private (sometimes I wished to contact people who didn’t really leave an easy way to contact them).

At any rate, I encourage you all to read the comments on the posts, and Flattr those you find important, interesting or useful. Think of it like a +1 or a “Like”. And of course, if you’re not subscribed with Flattr, do so! You’ll never know what other people could like, that you posted!

March 29, 2013
Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)
Predictable persistently (non-)mnemonic names (March 29, 2013, 20:09 UTC)

This is part two of a series of articles looking into the new udev “predictable” names. Part one is here and talks about the path-based names.

As Steve also asked in the comments on the last post, isn’t it possible to just use the MAC address of an interface to point at it? Sure it’s possible! You just need to enable the MAC-based name generator. But what does that mean? It means that your new interface names will be enx0026b9d7bf1f and wlx0023148f1cc8 — do you see yourself typing them?

Myself, I’m not going to type them. My favourite suggestion to solve the issue is to rely on rules similar to the previous persistent naming, but not re-using the eth prefix, to avoid collisions (which will no longer be resolved by future versions of udev). I instead use the names wan0 and lan0 (and so on) when the two interfaces sit straddling a private and a public network. How do I achieve that? Simple:

SUBSYSTEM=="net", ACTION=="add", ATTR{address}=="00:17:31:c6:4a:ca", NAME="lan0"
SUBSYSTEM=="net", ACTION=="add", ATTR{address}=="00:07:e9:12:07:36", NAME="wan0"

Yes, these simple rules do all the work you need if you just want to make sure not to mix up the two interfaces by mistake. If your server or vserver only has one interface, and you want to have it as wan0 no matter what its MAC address is (easier to clone, for instance), then you can go for

SUBSYSTEM=="net", ACTION=="add", ATTR{address}=="*", NAME="wan0"

As long as you only have a single network interface, that will work just fine. For those who use Puppet, I also published a module that you can use to create the file, and ensure that the other methods to achieve “sticky” names are not present.
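If you manage more than a couple of machines, you can also generate such a rules file instead of writing it by hand. A small sketch — the helper and the file name are mine, not part of any udev tooling:

```shell
#!/bin/sh
# Emit a udev rule line pinning a given MAC address to a given name.
# In real use the output would go to something like
# /etc/udev/rules.d/70-my-net-names.rules (file name is my own choice);
# /tmp is used here only for illustration.
mac_rule() {
    printf 'SUBSYSTEM=="net", ACTION=="add", ATTR{address}=="%s", NAME="%s"\n' "$1" "$2"
}

mac_rule '00:17:31:c6:4a:ca' lan0 >  /tmp/70-net-names.rules
mac_rule '00:07:e9:12:07:36' wan0 >> /tmp/70-net-names.rules
cat /tmp/70-net-names.rules
```

The generated lines match the hand-written rules above exactly, so the approach scales to as many interfaces as you care to pin.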

My reasoning for actually using this kind of name is relatively simple: the rare places where I do need to specify the interface name are usually ACLs, the firewall, and so on. In these, the most important thing for me is knowing whether the interface is public or not, so the wan/lan distinction is the most useful. I don’t intend to try to remember whether enp5s24k1f345totheright4nextothebaker is the public or the private interface.

Speaking of which, one of the things that appears obvious even from Lennart’s comment on the previous post is that there is no real assurance that the names are set in stone — he says that a udev upgrade won’t change them, but I guess most people would be sceptical, remembering the track record that udev and systemd have had over the past few months alone. In this situation my personal, informed opinion is that all this work on “predictable” names is a huge waste of time for almost everybody.

If you do care about stable interface names, you most definitely expect them to be more meaningful than 10-digits strings of paths or mac addresses, so you almost certainly want to go through with custom naming, so that at least you attach some sense into the names themselves.

On the other hand, if you do not care about interface names themselves, for instance because instead of running commands or scripts you just use NetworkManager… well, what the heck are you doing playing around with paths? If it doesn’t bother you that the interface for a USB device changes considerably between one port and another, how can it matter to you whether it’s called wwan0 or wwan123? And if the name of the interface does not matter to you, why are you spending useless time trying to get these “predictable” names working?

All in all, I think this is just a nice but useless trick, one that will only cause more headaches than it can possibly solve. Bah, humbug!

Pacho Ramos a.k.a. pacho (homepage, bugs)
Gnome 3.8 released (March 29, 2013, 17:08 UTC)

Gnome 3.8 has been released, and is already available in the main tree, hardmasked, for adventurous people willing to help get it fixed for stable "soon" ;)

Thanks for your help!

Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)
Predictably non-persistent names (March 29, 2013, 10:51 UTC)

This is going to be fun. The Gentoo “udev team”, in the person of Samuli – who seems to suffer from 0-day bump syndrome – decided to now enable by default the new predictable names feature that is supposed to make things so much nicer in Linux land where, especially for people coming from FreeBSD, things have been pretty much messed up. This replaces the old “persistent” names, which were often too fragile to work, as they did in-place renaming of interfaces and would too often cause conflicts at boot time, since swapping two devices’ names is not an atomic operation, for obvious reasons.

So what’s this predictable name thing all about? Well, it’s mostly a merge of the previous persistent naming system and the BIOS label naming project, which RedHat had already been developing for a few years so that the names of interfaces for server hardware in the operating system match the documentation of said server: you can be sure that if you’re connecting the port marked with “1” on the chassis, out of four on the motherboard, it will bring up eth2.

But why were those two technologies needed? Let’s start by explaining how (more or less) the kernel naming scheme works: unlike the BSD systems, where interfaces are named after the kernel driver (en0, dc0, etc.), the Linux kernel uses generic names, mostly eth, wlan and wwan, plus maybe a couple more for tunnels and so on. This causes the first problem: if you have multiple devices of the same class (ethernet, wlan, wwan) coming from different drivers, the order of the interfaces may very well vary between reboots, either because of changes in the kernel, if the drivers are built in, or simply because of locking and the execution order of module loading (which is much more common for binary distributions).

The reason why changes in the kernel can change the order is that the order in which drivers are initialized has changed before and might change again in the future. A driver could also decide to change the order with which its devices are initialized (PCI tree scanning order, PCI ID order, MAC address order, …) and so on, causing it to change the order of interfaces even for the same driver. More about this later.

But here my first doubt arises: how common is it for people to have more than one interface of the same class, from vendors different enough to use different drivers? Well, it depends on the class of device; on a laptop you’d have to search hard for a model with more than one Ethernet or wireless interface, unless you add an ExpressCard or PCMCIA expansion card (and even those are not that common). On a desktop, I’ve seen a few very recent motherboards with more than one network port, and I have yet to see one with different chips for the two. Servers, though, are a different story.

Indeed, it’s not that uncommon to have multiple on-board and expansion card ports on a server. For instance you could use the two onboard ports as public and private interfaces for the host… and then add a 4-port card to split between virtual machines. In this situation, having a persistent naming of the interfaces is indeed something you would be glad of. How can you tell which one of eth{0..5} is your onboard port #2, otherwise? This would be problem number two.

Another situation in which having a persistent naming of interfaces is almost a requirement is if you’re setting up a router: you definitely don’t want to switch the LAN and WAN interface names around, especially where the firewall is involved.

This background is why the persistent-net rules were devised quite a few years ago for udev. Unfortunately almost everybody got at least one nasty experience with them. Sometimes the in-place rename would fail, and you’d end up with the temporary names at the end of boot. In a few cases the name was not persistent at all: if the kernel driver for the device would change, or change name at least, the rules wouldn’t match and your eth0 would become eth1 (this was the case when Intel split the e1000 and e1000e drivers, but it’s definitely more common with wireless drivers, especially if they move from staging to main).

So the old persistent net rules were flawed. What about the new predictable rules? Well, not only do they combine the BIOS naming scheme (which is actually awesome when it works — SuperMicro servers such as Excelsior do not expose the label; my Dell laptop only exposes a label for the Ethernet port, but not for either the wireless adapter or the 3G one), but they have two “fallbacks” that are supposed to be used when the labels fail: one based on the MAC address of the interface, and the other based on the “path” — which for most PCI, PCI-E, onboard and ExpressCard ports is basically the PCI address; for USB… we’ll see in a moment.

So let’s see, from my laptop:

# lspci | grep 'Network controller'
03:00.0 Network controller: Intel Corporation Centrino Advanced-N 6200 (rev 35)
# ifconfig | grep wlp3
wlp3s0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500

Why “wlp3s0”? It’s the Wireless adapter (wl) PCI (p) card at bus 3, slot 0 (s0): 03:00.0. Matches lspci properly. But let’s see the WWAN interface on the same laptop:

# ifconfig -a | grep ww
wwp0s29u1u6i6: flags=4098<BROADCAST,MULTICAST>  mtu 1500

Much longer name! What’s going on, then? Let’s see: it’s reporting the card at bus 0, slot 29 (0x1d) — lspci uses hexadecimal numbers for the addresses:

# lspci | grep '00:1d'
00:1d.0 USB controller: Intel Corporation 5 Series/3400 Series Chipset USB2 Enhanced Host Controller (rev 05)

Okay, so it’s a USB device, even though the physical form factor is a mini-PCIe card. It’s common. Does it match lsusb?

# lsusb | grep Broadband
Bus 002 Device 004: ID 413c:8184 Dell Computer Corp. F3607gw v2 Mobile Broadband Module

Note that the Bus/Device specification is not in there, which is good: the device number increases every time you plug something into the port, so it’s not persistent across reboots at all. What it uses is the path to the device through the USB ports, which is a tad more complex, but basically means it matches /sys/bus/usb/devices/2-1.6:1.6/ (I don’t pretend to know how the thing works exactly, but it describes which physical port the device is connected to).

In my laptop’s case, the situation is actually quite nice: I cannot move either the WLAN or WWAN device to a different slot, so the name assigned by the slot is persistent as well as predictable. But what if you’re on a desktop with an add-on WLAN card? What happens if you decide to change your video card for a more powerful one that occupies the space of two slots, one of which happens to be where your WLAN card sits? You move the card, reboot, and… you just changed the interface name! If you’ve been using NetworkManager, you’ll just have to reconfigure the network, I suppose.
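To make the slot-to-name mapping concrete, here is a toy reconstruction of the PCI-based scheme: prefix (wl for wireless, en for Ethernet), then p plus the bus number and s plus the slot number, both taken from the PCI address and converted from hex to decimal. This is my own approximation for illustration, not udev’s actual code:

```shell
#!/bin/sh
# Toy reconstruction of udev's PCI-slot-based interface naming;
# an approximation for illustration, not the real implementation.
pci_name() {
    prefix=$1
    addr=$2                          # PCI address, e.g. 03:00.0
    bus=${addr%%:*}                  # hex bus  ("03")
    slot=${addr#*:}; slot=${slot%%.*}  # hex slot ("00")
    printf '%sp%ds%d\n' "$prefix" "$(( 0x$bus ))" "$(( 0x$slot ))"
}

pci_name wl 03:00.0   # the Centrino 6200 from the lspci output above
```

The same arithmetic also explains the enp0s25 from the udev-200 post above: Ethernet (en) on PCI bus 0, slot 0x19, which is 25 in decimal.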

Let’s take a different example. My laptop, with its integrated WWAN card, is a rare example; most people I know use USB “keys”, as the providers give them away for free, at least in Italy. I happen to have one as well, so let me try to plug it in one of the ports of my laptop:

# lsusb | grep modem
Bus 002 Device 014: ID 12d1:1436 Huawei Technologies Co., Ltd. E173 3G Modem (modem-mode)
# ifconfig -a | grep ww
wwp0s29u1u2i1: flags=4098<BROADCAST,MULTICAST>  mtu 1500
wwp0s29u1u6i6: flags=4098<BROADCAST,MULTICAST>  mtu 1500

Okay, great: this is a different USB device, connected to the same USB controller as the onboard one, but at different ports. Neat. Now, what if I had all my usual ports busy, and decided to connect it to the USB3 add-on ExpressCard I have on the laptop?

# lsusb | grep modem
Bus 003 Device 004: ID 12d1:1436 Huawei Technologies Co., Ltd. E173 3G Modem (modem-mode)
# ifconfig -a | grep ww
wwp0s29u1u6i6: flags=4098<BROADCAST,MULTICAST>  mtu 1500
wws1u1i1: flags=4098<BROADCAST,MULTICAST>  mtu 1500

What’s this? Well, the USB3 controller provides slot information, so udev magically uses that to rename the interface, so it avoids using the otherwise longer wwp6s0u1i1 name (the USB3 controller is on the PCI bus 6).

Let’s go back to the on-board ports:

# lsusb | grep modem
Bus 002 Device 016: ID 12d1:1436 Huawei Technologies Co., Ltd. E173 3G Modem (modem-mode)
# ifconfig -a | grep ww
wwp0s29u1u3i1: flags=4098<BROADCAST,MULTICAST>  mtu 1500
wwp0s29u1u6i6: flags=4098<BROADCAST,MULTICAST>  mtu 1500

Seems the same, but it’s not. Now it’s u3, not u2. Why? I used a different port on the laptop. And the interface name changed. Yes, any port change will produce a different interface name, predictably. But what happens if the kernel decides to change the way the ports are enumerated? What happens if the USB 2 driver is buggy, is supposed to provide slot information, and they fix it? You got it: even in these cases, the interface names change.

I’m not saying that the kernel naming scheme is perfect. But if you’re expected to always have just an Ethernet port, a WLAN card and a WWAN USB stick, with it you’ll be sure to have eth0, wlan0 and wwan0, as long as the drivers are not completely broken as they are now (like a WLAN card appearing as eth1), and as long as you don’t muck with the interface names in userspace.

Next up, I’ll talk about the MAC addresses based naming and my personal preference when setting up servers and routers. Have fun in the mean time figuring out what your interface names will be.

March 19, 2013
Donnie Berkholz a.k.a. dberkholz (homepage, bugs)
Opportunities for Gentoo (March 19, 2013, 15:36 UTC)

When I’ve wanted to play in some new areas lately, it’s been a real frustration because Gentoo hasn’t had a complete set of packages ready in any of them. I feel like these are some opportunities for Gentoo to be awesome and gain access to new sets of users (or at least avoid chasing away existing users who want better tools):

  • Data science. Package Hadoop. Package streaming options like Storm. How about related tools like Flume? RabbitMQ is in Gentoo, though. I’ve heard anecdotally that a well-optimized Hadoop-on-Gentoo installation showed double-digit performance increases over the usual Hadoop distributions (i.e., not Linux distributions, but companies specializing in providing Hadoop solutions). Just heard from Tim Harder (radhermit) that he’s got some packages in progress for a lot of this, which is great news.
  • DevOps. This is an area where Gentoo historically did pretty well, in part because our own infrastructure team and the group at the Open Source Lab have run tools like CFEngine and Puppet. But we’re lagging behind the times. We don’t have Jenkins or Travis. Seriously? Although we’ve got Vagrant packaged, for example, we don’t have Veewee. We could be integrating the creation of Vagrant boxes into our release-engineering process.
  • Relatedly: Monitoring. Look at some of the increasingly popular open-source tools of today, things like Graphite, StatsD, Logstash, Lumberjack, ElasticSearch, Kibana, Sensu, Tasseo, Descartes, Riemann. None of those are there.
  • Cloud. Public cloud and on-premise IaaS/PaaS. How about IaaS: OpenStack, CloudStack, Eucalyptus, or OpenNebula? Not there, although some work is happening for OpenStack according to Matthew Thode (prometheanfire). How about a PaaS like Cloud Foundry or OpenShift? Nope. None of the Netflix open-source tools are there. On the public side, things are a bit better — we’ve got lots of AWS tools packaged, even stretching to things like Boto. We could be integrating the creation of AWS images into our release engineering to ensure AWS users always have a recent, official Gentoo image.
  • NoSQL. We’ve got a pretty decent set here, with some holes. We’ve got Redis, Mongo, and CouchDB, not to mention Memcached, but how about graph databases like Neo4j, or other key-value stores like Riak, Cassandra, or Voldemort?
  • Android development. Gentoo is perfect as a development environment. We should be pushing it hard for mobile development, especially Android given its Linux base. There’s a couple of halfhearted wiki pages but that does not an effort make. If the SDKs and related packages are there, the docs need to be there too.

Where does Gentoo shine? As a platform for developers, as a platform for flexibility, as a platform to eke every last drop of performance out of a system. All of the above use cases are relevant to at least one of those areas.

I’m writing this post because I would love it if anyone else who wants to help Gentoo be more awesome would chip in with packaging in these specific areas. Let me know!

Update: Michael Stahnke suggested I point to some resources on Gentoo packaging, for anyone interested, so take a look at the Gentoo Development Guide. The Developer Handbook contains some further details on policy as well as info on how to get commit access by becoming a Gentoo developer.

Tagged: development, gentoo, greatness

Josh Saddler a.k.a. nightmorph (homepage, bugs)
fonts (March 19, 2013, 10:18 UTC)

i think i’ve sorted out some of my desktop font issues, and created a few more in the process.

for a long time, i’ve had to deal with occasionally jagged, hard-to-read fonts when viewing webpages, because i ran my xfce desktop without any font antialiasing.

i’ve always hated the way modern desktop environments try to “fool” my eyes with antialiasing and subpixel hinting to convince me that a group of square pixels can be smoothed into round shapes. turning off antialiasing tends to make the rounder fonts, especially serif fonts, look pretty bad at large sizes, as seen here:

display issues

my preferred font for the desktop and the web is verdana, which looks pretty good without antialiasing. but most websites use other fonts, so rather than force one size of verdana everywhere (which causes flow/layout issues), i turned on antialiasing for my entire desktop, including my preferred browser, and started disabling antialiasing where needed.

before and after font settings:

before/after settings

i tried the infinality patchset for freetype, but unfortunately none of the eselect configurations produced the crisply rounded antialiased text the patches are known for. i rebuilt freetype without the patchset, and went into /etc/fonts to do some XML hacking.

while eselect-fontconfig offers painless management of existing presets, the only way to customize one’s setup is to get into nitty-gritty text editing, and font configs are in XML format. this is what i ended up with:

$ cat ~/.fonts.conf

<?xml version="1.0"?>
<!DOCTYPE fontconfig SYSTEM "fonts.dtd">
<fontconfig>
  <match target="font">
    <edit name="antialias" mode="assign"><bool>false</bool></edit>
  </match>
  <match target="font">
    <test name="size" qual="any" compare="more"><double>11</double></test>
    <edit name="antialias" mode="assign"><bool>true</bool></edit>
  </match>
  <match target="font">
    <test name="pixelsize" qual="any" compare="more"><double>16</double></test>
    <edit name="antialias" mode="assign"><bool>true</bool></edit>
  </match>
  <match target="pattern">
    <test qual="any" name="family"><string>Helvetica</string></test>
    <edit name="antialias" mode="assign"><bool>true</bool></edit>
  </match>
</fontconfig>
let’s step through the rules:

first, all antialiasing is disabled. then, any requested font size over 11, or anything that would display more than 16 pixels high, is antialiased. finally, since the common helvetica font really needs to be antialiased at all sizes, a rule turns that on. in theory, that is — firefox and xfce both seem to be ignoring this. unless antialiasing really is enabled at the smallest sizes with no visible effect, since there are only so many pixel spaces available at that scale to “fake” rounded corners.
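the cascade of rules is easier to follow as code. here's a toy model in python of the four rules above (purely an illustration of the intended logic; fontconfig itself doesn't evaluate its config like this, and the function name and thresholds are taken from the description, not from any real API):

```python
def antialias(family, size, pixelsize):
    """decide whether a font gets antialiased under the rules above."""
    aa = False                  # rule 1: disable antialiasing everywhere
    if size > 11:               # rule 2: requested point size over 11
        aa = True
    if pixelsize > 16:          # rule 3: rendered taller than 16 pixels
        aa = True
    if family == "Helvetica":   # rule 4: helvetica at all sizes
        aa = True
    return aa

# size 10 verdana stays crisp; helvetica is always smoothed
print(antialias("Verdana", 10, 13), antialias("Helvetica", 8, 10))  # False True
```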

a test webpage shows the antialiasing effect on different fonts and sizes:

desktop and browser fonts

besides the helvetica issue, there are a few xfce font display problems. xfce is known for mostly ignoring the “modern” xorg font config files, and each app in the desktop environment follows its own aliasing and hinting rules. gvim’s monospace font is occasionally antialiased, resulting in hard-to-read code. the terminal, which uses the exact same font and size, is not antialiased, since it has its own control for text display.

the rest of the gtk+ apps in the above screenshot are size 10 verdana, so they have no antialiasing, being under the “size 11” rule. firefox doesn’t always obey the system’s font smoothing and hinting settings, even with the proper options set in about:config. unlike user stylesheets, there’s no way to enforce desktop settings with something like !important CSS code. i haven’t found any pattern in what firefox ignores or respects.

also, i haven’t found a workable fontconfig rule that enables antialiasing only for specific fonts at certain sizes. i’m not sure it’s even possible to set such a rule, despite putting together well-formed XML to do just that.

* * *

to sum up: font management on linux can be needlessly complicated, even if you don’t have special vision needs. my environment is overall a bit better, but i’m not ready to move entirely to antialiased text, not until it’s less blurry. i need crispy, sharp text.

fonts on my android phone’s screen look pretty good despite the antialiasing used everywhere, but the thing’s pixel density is so much higher than laptop and desktop LCDs that the display server doesn’t need to resort to complicated smoothing/hinting techniques to achieve that look.

as a general resource, the arch linux wiki page has very useful information on font configuration. there are some great ideas in there, even if they don’t all work on my system. the gentoo linux wiki page on fontconfig is more basic; i didn’t use anything from it.

March 16, 2013
Gentoo Haskell Herd a.k.a. haskell (homepage, bugs)
a haskell dev survey (March 16, 2013, 20:58 UTC)

Ladies and gentlemen!

If you happen to be involved in using/developing haskell-powered software you might like to answer our poll on that matter.

Thanks in advance!

Aaron W. Swenson a.k.a. titanofold (homepage, bugs)
PostgreSQL 8.3 Has Reached End of Life (March 16, 2013, 13:48 UTC)

Today I’ll be masking PostgreSQL 8.3 for removal. If you haven’t already, you should move to a more recent version of PostgreSQL.

March 11, 2013
Michal Hrusecky a.k.a. miska (homepage, bugs)
openSUSE 12.3 Release party in Nürnberg (March 11, 2013, 16:35 UTC)

Everybody probably already knows that openSUSE 12.3 is going to be released this Wednesday. I’m currently at the SUSE offices in Nuremberg, helping to polish the last bits and pieces for the upcoming release. But more importantly, as with every release, we need to celebrate it! And this time, thanks to lucky circumstances, I’ll be here for the Nuremberg release party!

The Nuremberg release party will take place on the same day as the release at Artefakt, in Nuremberg’s city centre, starting at 19:00 (local time, of course). It’s an open event, so everybody is welcome.

You can meet plenty of fellow Geekos there, and there will be some food and also openSUSE beer available (some charges may apply). Most of the openSUSE Team at SUSE (the former Boosters and Jos) will be there, and we hope to meet every openSUSE enthusiast, supporter, or user from Nuremberg.

There will be a demo computer running 12.3 and hopefully even a public Google Hangout for people who want to join us remotely – follow the +openSUSE G+ page to see whether we manage it ;-)

So see you in great numbers on Wednesday at Artefakt!

PS: If you expected an announcement for the Prague release party from me, don’t worry, I haven’t forgotten about it. We are planning it; expect an announcement soon and the party itself in a few weeks ;-)

March 09, 2013
David Abbott a.k.a. dabbott (homepage, bugs)
Open links with urxvt stopped working (March 09, 2013, 00:55 UTC)

INCOMPATIBLE CHANGE: renamed urlLauncher resource to url-launcher

so .Xdefaults becomes:

URxvt.perl-ext-common: default,matcher
URxvt.url-launcher: /usr/bin/firefox
URxvt.matcher.button: 1

March 08, 2013
Tomáš Chvátal a.k.a. scarabeus (homepage, bugs)
Prague Installfest results (March 08, 2013, 13:37 UTC)

Last weekend (2.–3.3.2013) we had a lovely conference here in Prague. People could attend quite a few very cool talks and even play an OpenArena tournament :-) Anyway, that isn’t so interesting for Gentoo users. The cool part for us is the Gentoo track that I tried to assemble there, which I will try to describe here.

Setup of the venue

This was an easy task, as I borrowed a computer room in the dormitory basement which was large enough to hold around 30 students. I just carried in my laptop and checked that the projector worked. I made sure the chairs were not falling apart and replaced the broken ones. I verified that the wifi worked (which it did not, but the admins got it working just in time). And lastly I brought some drinks over from the main track so we would not dry out.

The classroom was in a slightly different area than the main track, so I put up some arrows for people to find the place. But when people started arriving and calling me to ask where the hell the place was, I figured out something was wrong. The signage was then adjusted, but it still shows that we should either not split off from the main track, or ensure there are HUGE and clear arrows pointing in the direction where people can find us.


During the day there were only three talks: two held by me, and one that was not on the plan, done by Theo.

Hardened talk

I was supposed to start this talk at 10:00, but given the issue with the arrows people showed up around 10:20, so I had to cut back some information and live examples.
Anyway, I hope it was an interesting hardened overview, and at least Petr Krcmar wrote lots of stuff down, so maybe we will see some articles about it in the Czech media (something like “How I failed to install hardened Gentoo” :P).

Gentoo global stuff

This was more a discussion about features than a talk. The users pointed out what they would like to see happening in Gentoo and what their largest issues have been lately.

Among the issues, people pointed out the broken udev update which rendered some boxes non-bootable (yes, there was a message, but those are quite easy to overlook; I forgot to act on it on one machine myself). Some suggestions were for genkernel to trigger a rebuild of the kernel right away in the post stage for users with the required options enabled. This sounds like quite a nice idea: since you are using genkernel, you probably want your kernel automatically adjusted and updated for the cases where apps require additional options. As I am not familiar with the genkernel internals, I told the users to open a bug about this.

The second big thing we talked about was binary packages. The idea was to have some tinderbox producing generic binary packages for the most common USE flag variants. You could then specify -K and portage would use the binary form, or compile locally if no binary was provided. Most of the work here would need to happen on the portage side, because we would have to somehow handle multiple versions of the same package with different enabled USE flags.

Infra talk

Theo did an awesome job explaining how infra uses puppet and what services and servers we have. This was an on-demand talk which the people on-site wanted.

Hacking — aka stuff that we somehow did

Martin “plusky” Pluskal (SU) went over our prehistoric bugs from 2k5 and 2k6 and created a list of CANTFIX ones which are no longer applicable, or which are new package requests with a dead upstream. I still have to close them or give him editbugz privs (that sounds more likely, as I am lazy as hell; or better yet, make him a developer :P).
Ondrej Sukup (ACR), attending over hangout, worked on python-r1 porting, and I committed his work to cvs.
Cyril “metan” Hrubis (SU) worked in crossdev on some magic avr bug I don’t want to hear much about, but he seems optimistic that he might finish the work in the near future.
David Heidelberger first worked on fixing bugs with his laptop and then helped Martin with the bug wrangling.
Jan “yac” Matejka (SU) finished his quizzes, so he got his shiny bug and is now in the lovely hands of our recruiters to become our newest addition to the team.
Michal “miska” Hrusecky (SU) worked on updating the osc tools to match the latest we have in the openSUSE buildservice, and he plans to commit them soonish to cvs.
Pavel “pavlix” Simerda (RH), who is the guy responsible for the latest networkmanager bugs, expressed his intention to become a dev, and I agreed with him.
Tampakrap (SU) worked on breaking one laptop with a fresh install of Gentoo, which I then picked up and finished with some nice KDE love :-)
Amy Winston helped me a lot with the setup of the venue and also kept me and Theo busy breaking her laptop, which I hope she is still happily using and does not want to kill us over. Other than that, she focused on our sweet bugzie and wrangling. She seems unwilling to finish her quizzes to become a full developer, so we will have to work hard on that in the future :-)
And lastly, I (SU) helped users with issues they had on their local machines and explained how to avoid them, how to report directly to bugzie with the relevant information, and so on.

In case you wonder SU = SUSE ; RH = RedHat; ACR = Armed forces CR.

For future events we have to keep in mind that we need to set these up better, and to have small, prepared buglists rather than wide-ranging ones, where people spend more time picking the ideal task than actually working on one :-)


The lunch and the afterparty took place in a nice pub nearby which had decent food and plenty of beer, so everyone was happy. The only problem was that it took some waiting to get the food, as suddenly there were 40 people in the pub (I still think this could’ve been prepared for somehow, so that they had a limited subset of dishes available really fast and you could choose between waiting a bit or picking something quick).

During the night one of the Gentoo attendees got quite drunk and had to be delivered home by the other organizers, as I had to leave a bit early (being up from 5 am is not something I fancy).
The big problem was figuring out where to take him, because he was not able to talk and his ID listed a residence in a different city. So next time you go to a linux event where you don’t know many people, put a paper with your address in your pocket. It is super convenient, and we won’t have to bother your parents at 1 am to find out what to do with their “sweet” child.


I would like to say huge thanks to all the attendees for making the event possible, and also to apologize for anything I forgot to mention here.

March 07, 2013
Patrick Lauer a.k.a. bonsaikitten (homepage, bugs)
Having fun with integer factorization (March 07, 2013, 01:45 UTC)

Given the input

 # yafu "factor(10941738641570527421809707322040357612003732945449205990913842131476349984288934784717997257891267332497625752899781833797076537244027146743531593354333897)" -threads 4 -v -noecm
then, if one is patient enough, yafu eventually produces this output:
sqrtTime: 1163
NFS elapsed time = 3765830.4643 seconds.
pretesting / nfs ratio was 0.00
Total factoring time = 3765830.6384 seconds

***factors found***

PRP78 = 106603488380168454820927220360012878679207958575989291522270608237193062808643
PRP78 = 102639592829741105772054196573991675900716567808038066803341933521790711307779
What does that mean?
The input number is conveniently chosen from the RSA challenge numbers and was the "world record" until 2003. Advances in algorithms, compilers and hardware have made it possible for me to re-do that record attempt in about a month of walltime on a single machine (4-core AMD64).
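The result is easy to double-check with plain integer arithmetic: multiplying the two reported PRP78 primes must give back the 155-digit (512-bit) input. A quick sanity check in Python, not part of the original run:

```python
# sanity-check the factorization with Python's arbitrary-precision ints;
# all three numbers are copied verbatim from the yafu output above
n = 10941738641570527421809707322040357612003732945449205990913842131476349984288934784717997257891267332497625752899781833797076537244027146743531593354333897
p = 106603488380168454820927220360012878679207958575989291522270608237193062808643
q = 102639592829741105772054196573991675900716567808038066803341933521790711307779

assert p * q == n          # the factors really do multiply back to the input
print(n.bit_length())      # 512, i.e. a 512-bit RSA modulus
```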

Want to try yourself?
emerge yafu
that's the "easiest" tool to manage. The dependencies are a bit fiddly, but it works well for up to ~512bit, maybe a bit more. It depends on msieve, which is quite impressive, and gmp-ecm, which I find even more intriguing.

If you feel like more of a challenge:
emerge cado-nfs
This tool even supports multi-machine setups out of the box using ssh, but it's slightly intimidating and might not be obvious to figure out. Also, for a "small" input in the 120-decimal-digit range it was about 25% slower than yafu - but it's still impressive what these tools can do.

February 28, 2013
Jan Kundrát a.k.a. jkt (homepage, bugs)

There are a lot of people who are very careful never to delete a single line from an e-mail they are replying to, always quoting the complete history. There are also a lot of people who believe that eyeballing such long, useless texts wastes time. One of the fancy features introduced in this release of Trojitá, a fast Qt IMAP e-mail client, is automatic quote collapsing. I won't show you an example of an annoying mail for obvious reasons :), but this feature is useful even for e-mails which employ a reasonable quoting strategy. It looks like this in action:

When you click on the ... symbols, the first level expands to reveal the following:

When everything is expanded, the end result looks like this:

This concept is extremely effective especially when communicating with a top-posting community.

We had quite some internal discussion about how to implement this feature. For those not familiar with Trojitá's architecture, we use a properly restricted QtWebKit instance for e-mail rendering. The restrictions which are active include click-wrapped loading of remote content for privacy (so that a spammer cannot know whether you have read their message), no plugins, no HTML5 local storage, and also no JavaScript. With JavaScript, it would be easy to do nice, click-controlled interactive collapsing of nested citations. However, enabling JavaScript might have quite some security implications (or maybe "only" keeping your CPU busy and draining your battery by a malicious third party). We could have enabled JavaScript for plaintext contents only, but that would not be as elegant as the solution we chose in the end.

Starting with Qt 4.8, WebKit ships with support for the :checked CSS3 pseudoclass. Using this feature, it's possible to change the style based on whether an HTML checkbox is checked or not. In theory, that's everything one might possibly need, but there's a small catch -- the usual way of showing/hiding contents based on the state of a checkbox hits a WebKit bug (quick summary: it's tough to have it working without the ~ sibling selector unless you use it in one particular way). Long story short, I now know more about CSS3 than I thought I would ever want to know, and it works (unless you're on Qt5 already, where it assert-fails and crashes WebKit).

Speaking of WebKit, the way we use it in Trojitá is a bit unusual. The QWebView class contains full support for scrolling, so it is not necessary to put it inside a QScrollArea. However, when working with e-mails, one has to account for messages containing multiple body parts which have to be shown separately (again, for both practical and security reasons). In addition, the e-mail header, which is typically implemented as a custom QWidget for flexibility, is usually intended to combine with the message bodies into a single entity to be scrolled together. With WebKit this is doable (after some size-hint magic, and I really mean magic -- thanks to Thomas Lübking of KWin fame for patches), but there's a catch -- internal methods like findText, which normally scroll the contents of the web page to the matching place, no longer work when the whole web view is embedded in a QScrollArea. I've dived into the source code of WebKit, and the interesting thing is that there is code for exactly this case, but it is only implemented in Apple's version of WebKit. The source code even says that Apple needed this for its own -- an interesting coincidence, I guess.

Compared with the last release, Trojitá has also gained support for "smart replying". It will now detect that a message comes from a mailing list, and Ctrl+R will by default reply to the list. Thomas has added support for saving drafts, so you should no longer lose your work when you accidentally kill Trojitá. There's also been the traditional round of bug fixes and compatibility improvements. It is entertaining to see that Trojitá is apparently triggering certain code paths in various IMAP server implementations, proprietary and free software alike, for the first time.

The work on support for multiple IMAP accounts is getting closer to being ready for prime time. It isn't present in the current release, though -- the GUI integration in particular needs some polishing before it hits the masses.

I'm happy to observe that Trojitá is getting features which are missing from other popular e-mail clients. I'm especially fond of my pet contribution, the quote collapsing. Does your favorite e-mail application offer a similar feature?

In the coming weeks, I'd like to focus on getting the multiaccounts branch merged into master, adding better integration with the address book (Trojitá can already offer tab completion with data coming from Mutt's abook) and general GUI improvements. It would also be great to make it possible to let Trojitá act as a handler for the mailto: URLs so that it gets invoked when you click on an e-mail address in your favorite web browser, for example.

And finally, to maybe lure a reader or two into trying Trojitá, here's a short quote from a happy user who came to our IRC channel a few days ago:

17:16 < Sir_Herrbatka> i had no idea that it's possible for mail client to be THAT fast
One cannot help but be happy when reading this. Thanks!

If you're on Linux, you can get the latest version of Trojitá from the OBS or the usual place.


Greg KH a.k.a. gregkh (homepage, bugs)
Linux 3.8 is NOT a longterm kernel (February 28, 2013, 00:15 UTC)

I said this last week on Google+ when I was at a conference and needed to get it out there quickly, but as I keep getting emails and other queries about this, I might as well make it "official" here, for no other reason than that it provides a single place for me to point people at.

Anyway, I would like to announce that the 3.8 Linux kernel series is NOT going to be a longterm stable kernel release. I will NOT be maintaining it for a long time, and in fact will stop maintaining it right after the 3.9 kernel is released.

The 3.0 and 3.4 kernel releases are both longterm, and both are going to be maintained by me for at least 2 years. If I were to pick 3.8 right now, that would mean I would be maintaining 3 longterm kernels, plus whatever "normal" stable kernels are happening at that time. That is something I cannot do without losing even more hair than I currently have. Attempting it would be insane.

Hopefully this puts to rest all of the rumors.

February 17, 2013
Sebastian Pipping a.k.a. sping (homepage, bugs)
LightZone in Gentoo betagarden (February 17, 2013, 19:08 UTC)

If you are running Gentoo, heard about the release of the LightZone source code and got curious to see it for yourself:

sudo layman -a betagarden
sudo emerge -av media-gfx/LightZone

What you get is LightZone built 100% from sources, with no shipped .jar files included.

One word of warning: the software has not seen much testing in this form yet. So if your pictures mean a lot to you, make backups first. Better safe than sorry.

February 15, 2013
LinuxCrazy Podcasts a.k.a. linuxcrazy (homepage, bugs)
Podcast 97 Interview with WilliamH (February 15, 2013, 00:46 UTC)

Interview with WilliamH, Gentoo Linux Developer


Gentoo Accessibility