
. Aaron W. Swenson
. Agostino Sarubbo
. Alec Warner
. Alex Alexander
. Alex Legler
. Alexey Shvetsov
. Alexis Ballier
. Alexys Jacob
. Amadeusz Żołnowski
. Andreas K. Hüttel
. Andreas Proschofsky
. Andrew Gaffney
. Anthony Basile
. Arun Raghavan
. Bernard Cafarelli
. Bjarke Istrup Pedersen
. Brent Baude
. Brian Harring
. Christian Ruppert
. Chí-Thanh Christopher Nguyễn
. Dane Smith
. Daniel Gryniewicz
. David Abbott
. Denis Dupeyron
. Detlev Casanova
. Diego E. Pettenò
. Domen Kožar
. Donnie Berkholz
. Doug Goldstein
. Eray Aslan
. Fabio Erculiani
. Gentoo Haskell Herd
. Gentoo News
. Gilles Dartiguelongue
. Greg KH
. Hanno Böck
. Hans de Graaff
. Ian Whyman
. Ioannis Aslanidis
. Jan Kundrát
. Jason Donenfeld
. Jeffrey Gardner
. Jeremy Olexa
. Joachim Bartosik
. Johannes Huber
. Jonathan Callen
. Jorge Manuel B. S. Vicetto
. Joseph Jezak
. Josh Saddler
. Kenneth Prugh
. Lance Albertson
. Liam McLoughlin
. LinuxCrazy Podcasts
. Luca Barbato
. Luis Francisco Araujo
. Mark Loeser
. Markos Chandras
. Mart Raudsepp
. Matt Turner
. Matthew Marlowe
. Matthew Thode
. Matti Bickel
. Michal Hrusecky
. Michał Górny
. Mike Doty
. Mike Gilbert
. Mike Pagano
. Mu Qiao
. Nathan Zachary
. Ned Ludd
. Nirbheek Chauhan
. Ole Markus With
. Olivier Crête
. Pacho Ramos
. Patrick Kursawe
. Patrick Lauer
. Patrick McLean
. Paul de Vrieze
. Pavlos Ratis
. Paweł Hajdan, Jr.
. Petteri Räty
. Piotr Jaroszyński
. Rafael Goncalves Martins
. Raúl Porcel
. Remi Cardona
. Richard Freeman
. Robin Johnson
. Ryan Hill
. Sean Amoss
. Sebastian Pipping
. Serkan Kaba
. Steev Klimaszewski
. Stratos Psomadakis
. Sune Kloppenborg Jeppesen
. Sven Vermeulen
. Sven Wegener
. Tim Sammut
. Tiziano Müller
. Tobias Heinlein
. Tobias Klausmann
. Tomáš Chvátal
. Torsten Veller
. Vikraman Choudhury
. Zack Medico
. Zhang Le

Last updated:
May 06, 2013, 23:04 UTC

Views expressed in the content published here do not necessarily represent the views of Gentoo Linux or the Gentoo Foundation.


Welcome to Gentoo Universe, an aggregation of weblog articles on all topics written by Gentoo developers. For a more refined aggregation of Gentoo-related topics only, you might be interested in Planet Gentoo.

May 06, 2013
Sven Vermeulen a.k.a. swift (homepage, bugs)
Overview of Linux capabilities, part 3 (May 06, 2013, 01:50 UTC)

In previous posts I talked about capabilities and gave an introduction to how this powerful security feature within Linux can be used (and also exploited). I also covered a few capabilities, so let’s wrap this up with the remainder of them.

CAP_AUDIT_CONTROL: Enable and disable kernel auditing; change auditing filter rules; retrieve auditing status and filtering rules
CAP_AUDIT_WRITE: Write records to the kernel auditing log
CAP_BLOCK_SUSPEND: Employ features that can block system suspend
CAP_MAC_OVERRIDE: Override Mandatory Access Control (implemented for the SMACK LSM)
CAP_MAC_ADMIN: Allow MAC configuration or state changes (implemented for the SMACK LSM)
CAP_NET_ADMIN: Perform various network-related operations:

  • interface configuration
  • administration of IP firewall, masquerading and accounting
  • modify routing tables
  • bind to any address for transparent proxying
  • set type-of-service (TOS)
  • clear driver statistics
  • set promiscuous mode
  • enable multicasting
  • use setsockopt() for privileged socket operations
CAP_NET_BIND_SERVICE: Bind a socket to Internet domain privileged ports (port numbers less than 1024)
CAP_NET_RAW: Use RAW and PACKET sockets, and bind to any address for transparent proxying
CAP_SETPCAP: Allow the process to add any capability from the calling thread’s bounding set to its inheritable set, drop capabilities from the bounding set (using prctl()) and make changes to the securebits flags
CAP_SYS_ADMIN: A very powerful capability, which includes:

  • Running quota control, mount, swap management, set hostname, …
  • Perform VM86_REQUEST_IRQ vm86 command
  • Perform IPC_SET and IPC_RMID operations on arbitrary System V IPC objects
  • Perform operations on trusted.* and security.* extended attributes
  • Use lookup_dcookie

and many, many more. man capabilities gives a good overview of them.

CAP_SYS_BOOT: Use reboot() and kexec_load()
CAP_SYS_CHROOT: Use chroot()
CAP_SYS_MODULE: Load and unload kernel modules
CAP_SYS_RESOURCE: Another capability with many consequences, including:

  • Use reserved space on ext2 file systems
  • Make ioctl() calls controlling ext3 journaling
  • Override disk quota limits
  • Increase resource limits
  • Override RLIMIT_NPROC resource limits

and many more.

CAP_SYS_TIME: Set system clock and real-time hardware clock
CAP_SYS_TTY_CONFIG: Use vhangup() and employ various privileged ioctl() operations on virtual terminals
CAP_SYSLOG: Perform privileged syslog() operations and view kernel addresses exposed through /proc and other interfaces (if kptr_restrict is set)
CAP_WAKE_ALARM: Trigger something that will wake up the system

Now when you look through the manual page of the capabilities, you’ll notice it talks about securebits as well. This is an additional set of flags that govern how capabilities are used, inherited etc. System administrators don’t set these flags – they are governed by the applications themselves (when creating threads, forking, etc.) These flags are set on a per-thread level, and govern the following behavior:

SECBIT_KEEP_CAPS: Allow a thread with UID 0 to retain its capabilities when it switches its UIDs to a nonzero (non-root) value. By default, this flag is not set, and even if it is set, it is cleared on an execve call, reducing the likelihood that capabilities are “leaked”.
SECBIT_NO_SETUID_FIXUP: When set, the kernel will not adjust the capability sets when the thread’s effective and file system UIDs are switched between zero (root) and non-zero values.
SECBIT_NOROOT: If set, the kernel does not grant capabilities when a setuid-root program is executed, or when a process with an effective or real UID of 0 (root) calls execve.

Manipulating these bits requires the CAP_SETPCAP capability. Except for the SECBIT_KEEP_CAPS security bit, the others are preserved on an execve() call, and all bits are inherited by child processes (such as when fork() is used).

As a user or admin, you can also see capability-related information through the /proc file system:

 # grep ^Cap /proc/$$/status
CapInh: 0000000000000000
CapPrm: 0000001fffffffff
CapEff: 0000001fffffffff
CapBnd: 0000001fffffffff

$ grep ^Cap /proc/$$/status
CapInh: 0000000000000000
CapPrm: 0000000000000000
CapEff: 0000000000000000
CapBnd: 0000001fffffffff

The capabilities listed therein are bitmasks of the various capabilities. The mask 1FFFFFFFFF holds 37 set bits, matching the 37 capabilities currently known (again, see include/uapi/linux/capability.h in the kernel sources for the value of each capability). The pscap tool can again be used to get information about the enabled capabilities of running processes in a more human-readable format. But another tool provided by sys-libs/libcap is interesting to look at as well: capsh. It offers many capability-related features, including decoding the status fields:

$ capsh --decode=0000001fffffffff

Besides decoding, capsh can also launch a shell with reduced capabilities, which makes it a good utility for locking down chroot jails even further.

May 05, 2013

For a long time I have found it a pain that, every time I keyword, I receive a repoman failure (mostly dependency.bad) that does not concern the arch I am changing.
So, after checking the repoman manual, I realized that --ignore-arches looks wrong for my case, and I decided to request a new feature: --include-arches.
This feature, as explained in the bug, checks only the arches that you pass as argument, and should be used only when you are keywording or stabilizing.

Some examples/usage:

First, it saves time; the following example runs repoman full in the kdelibs directory:
$ time repoman full > /dev/null 2>&1
real 0m12.434s

$ time repoman full --include-arches "amd64" > /dev/null 2>&1
real 0m3.880s

Second, kdelibs suffers from a dependency.bad on amd64-fbsd, so:
$ repoman full
RepoMan scours the neighborhood...
>>> Creating Manifest for /home/ago/gentoo-x86/kde-base/kdelibs
dependency.bad 2
kde-base/kdelibs/kdelibs-4.10.2.ebuild: PDEPEND: ~amd64-fbsd(default/bsd/fbsd/amd64/9.0) ['>=kde-base/nepomuk-widgets-4.10.2:4[aqua=]']

$ repoman full --include-arches "amd64"
RepoMan scours the neighborhood...
>>> Creating Manifest for /home/ago/gentoo-x86/kde-base/kdelibs

Now when I keyword packages, I can check only the specific arches and skip the useless checks, which in this case are just a waste of time.
Thanks to Zac for the work on it.

Sven Vermeulen a.k.a. swift (homepage, bugs)
Overview of Linux capabilities, part 2 (May 05, 2013, 01:50 UTC)

As I’ve described capabilities (at a very high level) and talked a bit about how to work with them, I started with a small overview of file-related capabilities. So next up are process-related capabilities (note: this isn’t official terminology, more a categorization of my own).

CAP_IPC_LOCK: Allow the process to lock memory
CAP_IPC_OWNER: Bypass the permission checks for operations on System V IPC objects (similar to CAP_DAC_OVERRIDE for files)
CAP_KILL: Bypass permission checks for sending signals
CAP_SETUID: Allow the process to make arbitrary manipulations of process UIDs and create forged UIDs when passing socket credentials via UNIX domain sockets
CAP_SETGID: The same, but for GIDs
CAP_SYS_NICE: This capability governs several permissions/abilities, namely allowing the process to

  • change the nice value of itself and other processes
  • set real-time scheduling priorities for itself, and set scheduling policies and priorities for arbitrary processes
  • set the CPU affinity for arbitrary processes
  • apply migrate_pages to arbitrary processes and allow processes to be migrated to arbitrary nodes
  • apply move_pages to arbitrary processes
  • use the MPOL_MF_MOVE_ALL flag with mbind() and move_pages()

The abilities related to page moving, migration and nodes are of importance for NUMA systems, not something most workstations have or need.

CAP_SYS_PACCT: Use acct() to enable or disable system resource accounting for the process
CAP_SYS_PTRACE: Allow the process to trace arbitrary processes using ptrace(), apply get_robust_list() against arbitrary processes and inspect processes using kcmp()
CAP_SYS_RAWIO: Allow the process to perform I/O port operations, access /proc/kcore and employ the FIBMAP ioctl() operation

Capabilities such as CAP_KILL and CAP_SETUID are very important to govern correctly, but this post would be rather dull (given that the definitions of the above capabilities can be found in the manual page) if I didn’t talk a bit more about how this works in practice. Take a look at the following C application code:

#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/capability.h>
#include <sys/prctl.h>
#include <sys/types.h>
#include <unistd.h>

int main(int argc, char ** argv) {
  printf("cap_setuid and cap_setgid: %d\n", prctl(PR_CAPBSET_READ, CAP_SETUID|CAP_SETGID, 0, 0, 0));
  printf(" %s\n", cap_to_text(cap_get_file(argv[0]), NULL));
  printf(" %s\n", cap_to_text(cap_get_proc(), NULL));
  setresuid(0, 0, 0);
  /* errno is reported whether or not the call succeeded */
  printf("setresuid() failed: %s\n", strerror(errno));
  execve("/bin/sh", NULL, NULL);
  return 0;
}

At first sight, it looks like an application to get root privileges (setresuid()) and then spawn a shell. If that application were given CAP_SETUID and CAP_SETGID effectively, it would allow anyone who executed it to automatically get a root shell, wouldn’t it?

$ gcc -o test test.c -lcap
# setcap cap_setuid,cap_setgid+ep test
$ ./test
cap_setuid and cap_setgid: 1
 = cap_setgid,cap_setuid+ep
setresuid() failed: Operation not permitted

So what happened? After all, the two capabilities are set with the +ep flags given. Then why aren’t these capabilities enabled? Well, this binary was stored on a file system that is mounted with the nosuid option. As a result, the file capabilities are not honored and the application didn’t work. If I move the file to another file system that doesn’t have the nosuid option:

$ /usr/local/bin/test
cap_setuid and cap_setgid: 1
 = cap_setgid,cap_setuid+ep
 = cap_setgid,cap_setuid+ep
setresuid() failed: Operation not permitted

So the capabilities now do get enabled, so why does this still fail? This now is due to SELinux:

type=AVC msg=audit(1367393377.342:4778): avc:  denied  { setuid } for  pid=21418 comm="test" capability=7  scontext=staff_u:staff_r:staff_t tcontext=staff_u:staff_r:staff_t tclass=capability

And if you enable grsecurity’s TPE (Trusted Path Execution), we can’t even start the binary to begin with:

$ ./test
-bash: ./test: Permission denied
$ /lib/ /home/test/test
/home/test/test: error while loading shared libraries: /home/test/test: failed to map segment from shared object: Permission denied

# dmesg
[ 5579.567842] grsec: From denied untrusted exec (due to not being in trusted group and file in non-root-owned directory) of /home/test/test by /home/test/test[bash:4221] uid/euid:1002/1002 gid/egid:100/100, parent /bin/bash[bash:4195] uid/euid:1002/1002 gid/egid:100/100

When all these “security obstacles” are not enabled, then the call succeeds:

$ /usr/local/bin/test
cap_setuid and cap_setgid: 1
 = cap_setgid,cap_setuid+ep
 = cap_setgid,cap_setuid+ep
setresuid() failed: Success
root@hpl tmp # 

This again shows how important it is to regularly review capability-enabled files on the file system, as this is a major security issue that cannot be detected by only looking for setuid binaries. It also shows that securing a system is not limited to one or a few settings: one always has to take the entire setup into consideration, hardening the system so it becomes more difficult for malicious users to abuse it.

# filecap -a
file                 capabilities
/usr/local/bin/test     setgid, setuid

May 04, 2013
Sven Vermeulen a.k.a. swift (homepage, bugs)
Overview of Linux capabilities, part 1 (May 04, 2013, 01:50 UTC)

In the previous posts, I talked about capabilities and how they can be used to allow processes to run in a privileged fashion without granting them full root access to the system. An example given was how capabilities can be leveraged to run ping without granting it setuid root rights. But what are the various capabilities that Linux is, well, capable of?

There are many, and as time goes by, more capabilities are added to the set. The last capability added to the main Linux kernel tree was the CAP_BLOCK_SUSPEND in the 3.5 series. An overview of all capabilities can be seen with man capabilities or by looking at the Linux kernel source code, include/uapi/linux/capability.h. But because you are all lazy, and because it is a good exercise for myself, I’ll go through many of them in this and the next few posts.

For now, let’s look at file related capabilities. As a reminder, if you want to know which SELinux domains are “granted” a particular capability, you can look this up using sesearch. The capability is either in the capability or capability2 class, and is named after the capability itself, without the CAP_ prefix:

$ sesearch -c capability -p chown -A
CAP_CHOWN: Allow making changes to file UIDs and GIDs.
CAP_DAC_OVERRIDE: Bypass file read, write and execute permission checks. I came across a reddit post about this capability not that long ago.
CAP_DAC_READ_SEARCH: Bypass file read permission and directory read/search permission checks.
CAP_FOWNER: This capability governs 5 abilities in one:

  • Bypass permission checks on operations that normally require the file system UID of the process to match the UID of the file (unless already granted through CAP_DAC_READ_SEARCH and/or CAP_DAC_OVERRIDE)
  • Allow to set extended file attributes
  • Allow to set access control lists
  • Ignore directory sticky bit on file deletion
  • Allow specifying O_NOATIME for files in open() and fcntl() calls
CAP_FSETID: Do not clear the setuid/setgid permission bits when a file is modified
CAP_LEASE: Allow establishing leases on files
CAP_LINUX_IMMUTABLE: Allow setting the FS_APPEND_FL and FS_IMMUTABLE_FL inode flags
CAP_MKNOD: Allow creating special files with mknod()
CAP_SETFCAP: Allow setting file capabilities (what I did with the anotherping binary in the previous post)

When working with SELinux (especially when writing policies), you’ll find that the CAP_DAC_READ_SEARCH and CAP_DAC_OVERRIDE capabilities come up often. This is the case when applications are written to run as root yet want to scan through, read or even execute non-root owned files. Without SELinux, because these run as root, this is all granted. However, when you start confining those applications, it becomes apparent that they require this capability. Another example is when you run user applications as root, like when trying to play a movie or music file with mplayer when this file is owned by a regular user:

type=AVC msg=audit(1367145131.860:18785): avc:  denied  { dac_read_search } for
pid=8153 comm="mplayer" capability=2  scontext=staff_u:sysadm_r:mplayer_t
tcontext=staff_u:sysadm_r:mplayer_t tclass=capability

type=AVC msg=audit(1367145131.860:18785): avc:  denied  { dac_override } for
pid=8153 comm="mplayer" capability=1  scontext=staff_u:sysadm_r:mplayer_t
tcontext=staff_u:sysadm_r:mplayer_t tclass=capability

Notice the time stamp: both checks are triggered at the same time. What happens is that the Linux security hooks first check for DAC_READ_SEARCH (the “lesser” grants of the two) and then for DAC_OVERRIDE (which contains DAC_READ_SEARCH and more). In both cases, the check failed in the above example.

The CAP_LEASE capability is one that I had not heard about before (actually, I had not heard of getting “file leases” on Linux either). A file lease allows the lease holder (which requires this capability) to be notified when another process tries to open or truncate the file. When that happens, the call itself is blocked and the lease holder is notified (usually using SIGIO) about the access. It is not really meant to lock a file (since, if the lease holder doesn’t properly release it, the lease is forcefully “broken” and the other process can continue its work) but rather to give the holder a chance to properly close the file descriptor, flush caches, etc.

BTW, on my system, only 5 SELinux domains hold the lease capability.

There are 37 capabilities known by the Linux kernel at this time. The above list has 9 file related ones. So perhaps next I can talk about process capabilities.

May 03, 2013
Sebastian Pipping a.k.a. sping (homepage, bugs)
May 3rd = Day Against DRM (May 03, 2013, 14:23 UTC)

Learn more at the Day Against DRM website.

Nirbheek Chauhan a.k.a. nirbheek (homepage, bugs)

Almost a year ago, I worked with Pooja on transliterating a Hindi poem to Bharati Braille for a Type installation at Amar Jyoti School, an institute for the visually-impaired in Delhi. You can read more about that on her blog post about it. While working on that, we were surprised to discover that there were no free (or open source) tools to do the conversion! All we could find were expensive proprietary software, or horribly wrong websites. We had to sit down and manually transliterate each character while keeping in mind the idiosyncrasies of the conversion.

Now, like all programmers who love what they do, I have an urge to reduce the amount of drudgery and repetitive work in my life with automation ;). In addition, we both felt that a free tool to do such a transliteration would be useful for those who work in this field. And so, we decided to work on a website to convert from Devanagari (Hindi & Marathi) to Bharati Braille.

Now, after tons of research and design/coding work, we are proud to announce the first release of our Devanagari to Bharati Braille converter! You can read more about the converter here, and download the source code on Github.

If you know anyone who might find this useful, please tell them about it!

Sven Vermeulen a.k.a. swift (homepage, bugs)
Restricting and granting capabilities (May 03, 2013, 01:50 UTC)

As capabilities are a way of running processes with some privileges without needing to grant them full root access, it is important to know that they exist, whether you are a system administrator, an auditor, or in another security-related role. That a process runs as a non-root user is no longer sufficient to assume that it holds no rights to mess up the system or read files it shouldn’t be able to read.

The grsecurity kernel patch set, which is applied to the Gentoo hardened kernel sources, contains for instance CONFIG_GRKERNSEC_CHROOT_CAPS which, as per its documentation, “restricts the capabilities on all root processes within a chroot jail to stop module insertion, raw i/o, system and net admin tasks, rebooting the system, modifying immutable files, modifying IPC owned by another, and changing the system time.” But other implementations might even use capabilities to restrict the users. Consider LXC (Linux Containers). When a container is started, CAP_SYS_BOOT (the ability to shutdown/reboot the system/container) is removed so that users cannot abuse this privilege.

You can also grant capabilities to users selectively, using pam_cap (the capabilities Pluggable Authentication Module). For instance, to allow some users to ping, instead of granting the cap_net_raw capability immediately (+ep), we can assign the capability to some users through PAM, and have the ping binary inherit and use this capability instead (+p). That doesn’t mean that the capability is in effect, but rather that it sits in a sort of permitted set. Applications that are granted a certain capability this way can use it if the user is allowed to have it, and won’t otherwise.

# setcap cap_net_raw+p anotherping

# vim /etc/pam.d/system-login
... add in something like
auth     required     pam_cap.so

# vim /etc/security/capability.conf
... add in something like
cap_net_raw           user1

The logic used with capabilities can be described as follows (it is not as difficult as it looks):

        pI' = pI
  (***) pP' = fP | (fI & pI)
        pE' = pP' & fE          [NB. fE is 0 or ~0]

  I=Inheritable, P=Permitted, E=Effective // p=process, f=file
  ' indicates post-exec().

So, for instance, the second line reads “The permitted set of capabilities of the newly executed process is set to the permitted set of capabilities of its executable file, together with the result of the AND operation between the inheritable capabilities of the file and the inheritable capabilities of the parent process.”

As an admin, you might want to keep an eye out for binaries that have particular capabilities set. With filecap you can list which capabilities are in the effective set of files found on the file system (for instance, +ep).

# filecap 
file                 capabilities
/bin/anotherping     net_raw

Similarly, with pscap you can see the capabilities set on running processes.

# pscap -a
ppid  pid   name        command           capabilities
6148  6152  root        bash              full

It might be wise to take this up in the daily audit reports.

May 02, 2013
Sven Vermeulen a.k.a. swift (homepage, bugs)
Capabilities, a short intro (May 02, 2013, 01:50 UTC)

Capabilities. You probably have heard of them already, but when you start developing SELinux policies, you’ll notice that you come in closer contact with them than before. This is because SELinux, when applications want to do something “root-like”, checks the capability of that application. Without SELinux, this requires either the binary to have the proper capability set, or the application to run as root. With SELinux, the capability also needs to be granted to the SELinux context (the domain in which the application runs).

But forget about SELinux for now, and let’s focus on capabilities. Capabilities in Linux are flags that tell the kernel what the application is allowed to do, but unlike file access, capabilities for an application are system-wide: there is no “target” to which they apply. Think of them as “abilities” of an application. See for yourself through man capabilities. If you have no additional security mechanism in place, the Linux root user has all capabilities assigned to it. You can remove capabilities from the root user if you want to, but generally, capabilities are used to grant applications that tiny bit more privilege, without needing to grant them root rights.

Consider the ping utility. It is marked setuid root on some distributions, because the utility requires the (cap)ability to send raw packets. This capability is known as CAP_NET_RAW. However, thanks to capabilities, you can now mark the ping application with this capability and drop the setuid from the file. As a result, the application does not run with full root privileges anymore, but with the restricted privileges of the user plus one capability, namely the CAP_NET_RAW.

Let’s take this ping example to the next level: copy the binary (possibly relabel it as ping_exec_t if you run with SELinux), make sure it does not hold the setuid and try it out:

# cp ping anotherping
# chcon -t ping_exec_t anotherping

Now as a regular user:

$ ping -c 1
PING ( 56(84) bytes of data.
64 bytes from icmp_seq=1 ttl=64 time=0.057 ms

$ anotherping -c 1
ping: icmp open socket: Operation not permitted

Let’s assign the binary with the CAP_NET_RAW capability flag:

# setcap cap_net_raw+ep anotherping

And tadaa:

$ anotherping -c 1
PING ( 56(84) bytes of data.
64 bytes from icmp_seq=1 ttl=64 time=0.054 ms

What setcap did was place an extended attribute on the file, which is a binary representation of the capabilities assigned to the application. The additional information (+ep) means that the capability is permitted and effective.

So much for the primer; I’ll talk about the various capabilities in a later post.

May 01, 2013
Donnie Berkholz a.k.a. dberkholz (homepage, bugs)

If you’re a university student, time is running out! You could get paid to hack on Gentoo or other open-source software this summer, but you’ve gotta act now. The deadline to apply for the Google Summer of Code is this Friday.

If this sounds like your dream come true, you can find some Gentoo project ideas here and Gentoo’s GSoC homepage here. For non-Gentoo projects, you can scan through the GSoC website to find the details.

Tagged: gentoo, gsoc

Sven Vermeulen a.k.a. swift (homepage, bugs)
SELinux mount options (May 01, 2013, 01:50 UTC)

When you read through the Gentoo Hardened SELinux handbook, you’ll notice that we sometimes update /etc/fstab with some SELinux-specific settings. So, what are these settings about and are there more of them?

First of all, let’s look at a particular example from the installation instructions so you see what I am talking about:

tmpfs  /tmp  tmpfs  defaults,noexec,nosuid,rootcontext=system_u:object_r:tmp_t  0 0

What the rootcontext= option does here is set the context of the “root” of that file system (meaning the context of /tmp in the example) to the specified context before the file system is made visible to userspace. Because this happens at mount time, the file system is known as tmp_t throughout its life cycle (not just after the mount or so).

Another option that you’ll frequently see on the Internet is the context= option. This option is most frequently used for file systems that do not support extended attributes, and as such cannot store the context of files on the file system. With the context= mount option set, all files on that file system get the specified context. For instance, context=system_u:object_r:removable_t.

If the file system does support extended attributes, you might find some benefit in using the defcontext= option. When set, the context of files and directories (and other resources on that file system) that do not have a SELinux context set yet will use this default context. However, once a context is set, it will use that context instead.

The last context-related mount option is fscontext=. With this option, you set the context of the “filesystem” class object of the file system rather than the mount itself (or the files). Within SELinux, “filesystem” is one of the resource classes that can get a context. Remember the /tmp mount example from before? Well, even though the files are labeled tmp_t, the file system context itself is still tmpfs_t.

It is important to know that, if you use one of these mount options, context= is mutually exclusive with the other options, as it “forces” the context on all resources (including the filesystem class).
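As an illustration, hypothetical fstab lines for the two most common cases (device names, mount points and contexts are examples, not prescriptions):

```
# vfat has no xattr support: force one context on everything
/dev/sdb1  /mnt/usb   vfat  defaults,context=system_u:object_r:removable_t  0 0

# ext4 supports xattrs: only unlabeled files fall back to the default context
/dev/sdb2  /mnt/data  ext4  defaults,defcontext=system_u:object_r:var_t     0 0
```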

April 30, 2013
Sven Vermeulen a.k.a. swift (homepage, bugs)
Qemu-KVM monitor tips and tricks (April 30, 2013, 01:50 UTC)

When running KVM guests, the Qemu/KVM monitor is a nice interface through which to interact with the VM and perform specific maintenance tasks. If you run the KVM guests with VNC, you can get to this monitor through Ctrl-Alt-2 (and Ctrl-Alt-1 to get back to the VM display). I personally run with the monitor on the standard input/output where the VM is launched, as its output is often large and scrolling inside the VNC session doesn’t seem to work well.

I decided to give you a few tricks that I use often on the monitor to handle the VMs.

When I do not start the VNC server associated with the VM by default, I can enable it on the monitor using change vnc while getting details is done using info vnc. To disable VNC again, use change vnc none.

(qemu) info vnc
Server: disabled
(qemu) change vnc
(qemu) change vnc password
Password: ******
(qemu) info vnc
        auth: vnc
Client: none

Similarly, if you need to enable remote debugging, you can use the gdbserver option.

Getting information using info is dead easy, and it supports a wide range of categories: balloon info, block devices, character devices, cpus, memory mappings, network information, etcetera… Just enter info to get an overview of all supported subcommands.

To easily manage block devices, you can see the current state of devices using info block and then use change <blockdevice> <path> to update it.

(qemu) info block
virtio0: removable=0 io-status=ok file=/srv/virt/gentoo/hardened2selinux/selinux-base.img ro=0 drv=qcow2 encrypted=0 bps=0 bps_rd=0 bps_wr=0 iops=0 iops_rd=0 iops_wr=0
ide1-cd0: removable=1 locked=0 tray-open=0 io-status=ok [not inserted]
floppy0: removable=1 locked=0 tray-open=0 [not inserted]
sd0: removable=1 locked=0 tray-open=0 [not inserted]
(qemu) change ide1-cd0 /srv/virt/media/systemrescuecd-x86-2.2.0.iso

To power down the system, use system_powerdown. If that fails, you can use quit to immediately shut down (terminate) the VM. To reset it, use system_reset. You can also hot-add PCI devices, manipulate CPU states, or even perform live migrations between systems.

When you use the qcow2 image format, you can take a full VM snapshot using savevm and, when you later want to return to this point, use loadvm. This is interesting when you want to make potentially harmful changes to the system and be able to easily revert if things break.

(qemu) savevm 20130419
(qemu) info snapshots
     ID        TAG                 VM SIZE                DATE       VM CLOCK
     1         20130419               224M 2013-04-19 12:05:16   00:00:17.294
(qemu) loadvm 20130419

April 29, 2013
Sven Vermeulen a.k.a. swift (homepage, bugs)
photorec to the rescue (April 29, 2013, 01:50 UTC)

Once again PhotoRec has been able to save files from a corrupt FAT USB drive. The application scans the partition, looking for known files (based on the file magic) and then restores those files. The files do not keep their original names though, so there is still some manual work left, but the recovery works pretty well:

PhotoRec 6.12, Data Recovery Utility, May 2011
Christophe GRENIER

Disk /dev/sdc1 - 1000 GB / 931 GiB (RO) - WD My Book
     Partition                  Start        End    Size in sectors
     No partition             0   0  1 121600 253 63 1953520002 [Whole disk]

Pass 1 - Reading sector  464342462/1953520002, 10738 files found
Elapsed time 2h46m01s - Estimated time to completion 8h52m25
jpg: 7429 recovered
txt: 961 recovered
mp3: 558 recovered
tx?: 373 recovered
riff: 297 recovered
gif: 218 recovered
exe: 151 recovered
ifo: 126 recovered
mpg: 91 recovered
pdf: 83 recovered
others: 451 recovered

In Gentoo, you can find the package as part of app-admin/testdisk. To recover the files, I ran the following command:

$ photorec /log /d /path/to/recovery/dest /dev/sdc1

While skimming through the recovered files, I found a few that I deleted a long time ago but that apparently never got overwritten (the data, that is). Scary to see how easy such recovery is… It reminds me that, if you really want to delete files in a less recoverable manner, you can use shred for that.
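As a quick illustration, shred overwrites a file in place before (optionally) unlinking it. A minimal run might look like this — the pass count is arbitrary, and keep in mind that on journaling or copy-on-write file systems, overwriting in place gives no hard guarantee that old blocks are gone:

```shell
# Create a throwaway file, overwrite it 3 times (-n 3), add a
# final zeroing pass (-z), and unlink it afterwards (-u).
tmpfile=$(mktemp)
echo "sensitive data" > "$tmpfile"
shred -u -z -n 3 "$tmpfile"
# The name is now gone and the blocks have been overwritten.
```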

And for those out there yelling that I should back up this data – you’re absolutely correct, but no. I back up my systems and important files daily, but this disk contained (mainly) raw picture images and video recordings. The manipulated, finished images and recordings are backed up (or at least on a disk and somewhere online), but the raw images and recordings are often too voluminous to introduce a backup for, and if I really lost them, I wouldn’t shed a tear (nor panic).

April 28, 2013
Raúl Porcel a.k.a. armin76 (homepage, bugs)
The new BeagleBone Black and Gentoo (April 28, 2013, 18:02 UTC)

Hi all, long time no see.

Some weeks ago I got an early version of the BeagleBone Black from the people at to write the documentation I always create for every device I get.

As always, I’d like to announce the guide for installing Gentoo on the BeagleBone Black. Have a look at: . Feel free to send any corrections my way.

This board is a new version of the original BeagleBone, known in the community as the BeagleBone white, about which I wrote a post:

This new version differs in some respects from the previous one:

  • Cheaper: $45 vs. $89 for the BeagleBone white
  • 512MB DDR3L RAM vs. 256MB DDR2 RAM on the BeagleBone white
  • 1GHz processor speed vs. 720MHz on the BeagleBone white, both when using an external PSU for power

It also has features the old BeagleBone didn’t have:

  • miniHDMI output
  • 2GB eMMC

However, the new version is missing something:

  • Serial port and JTAG through the miniUSB interface

This feature was dropped as a cost-cutting measure, as can be read in the reference manual.

The full specs of the BeagleBone Black are:
# ARMv7-A 1GHz TI AM3358/9 ARM Cortex-A8 processor
# SMSC LAN8710 Ethernet card
# 1x microSDHC slot
# 1x USB 2.0 Type-A port
# 1x mini-USB 2.0 OTG port
# 1x RJ45
# 1x 6 pin 3.3V TTL Header for serial
# Reset, power and user-defined button

More info about the specs in BeagleBone Black’s webpage.

For those as curious as me, here are the bootlog and the cpuinfo.

I’ve found two issues while working on it:

  1. The USB port doesn’t have working hotplug detection. That means a USB device plugged into the port is only detected once; if you remove the device, the port stops working. I’ve been told that they are working on it. I haven’t been able to find a workaround.
  2. The BeagleBone Black doesn’t detect a microSD card plugged in after it has booted from the eMMC. If you want to use a microSD card for additional storage, it must be inserted before boot.

I’d like to thank the people at for providing me with a BeagleBone Black to document this.

Have fun!

Sven Vermeulen a.k.a. swift (homepage, bugs)
Securely handling libffi (April 28, 2013, 01:50 UTC)

I’ve recently come across libffi again. No, not because it was mentioned during the Gentoo Hardened online meeting, but because my /var/tmp wasn’t mounted correctly, and emerge (actually Python) uses libffi. Most users won’t notice this, because libffi works behind the scenes. But when it fails, it fails badly. And SELinux actually helped me quickly identify what the problem was.

$ emerge --info
segmentation fault

The abbreviation “libffi” comes from Foreign Function Interface: it is a library that allows developers to dynamically call code from another application or library. But the way it approaches this concerns me a bit. Let’s look at some strace output:

8560  open("/var/tmp/ffiZ8gKPd", O_RDWR|O_CREAT|O_EXCL, 0600) = 11
8560  unlink("/var/tmp/ffiZ8gKPd")      = 0
8560  ftruncate(11, 4096)               = 0
8560  mmap(NULL, 4096, PROT_READ|PROT_EXEC, MAP_SHARED, 11, 0) = -1 EACCES (Permission denied)

Generally, what libffi does is create a file somewhere it can write (it checks the various mounts on the system to get a list of possible target file systems), add the data it wants to execute to it, unlink the file from the file system (while keeping the file descriptor open, so that the file cannot (easily) be modified anymore) and then map it into memory with executable access. If execution is allowed by the system (for instance because the mount point does not have noexec set), then SELinux will still trap it, because the domain (in our case, portage_t) is trying to execute an (unlinked) file for which it holds no execute rights:

type=AVC msg=audit(1366656205.201:2221): avc:  denied  { execute } for  
pid=8560 comm="emerge" path=2F7661722F66666962713154465A202864656C6574656429 
dev="dm-3" ino=6912 scontext=staff_u:sysadm_r:portage_t tcontext=staff_u:object_r:var_t

When you notice something like this (an execute on an unnamed file), then this is because the file descriptor points to a file already unlinked from the system. Finding out what it was about might be hard (but with strace it is easy as … well, whatever is easy for you).
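The open-then-unlink pattern that produces such “deleted” file descriptors can be reproduced from any shell on Linux. This is just a small sketch of the same trick (not libffi itself), showing how /proc exposes these descriptors:

```shell
# Open a file, unlink it, and keep using the descriptor:
# the inode only disappears when the last descriptor closes.
f=$(mktemp)
echo "still readable" > "$f"
exec 3<"$f"              # keep a read descriptor on the file
rm "$f"                  # remove the name; the data survives
readlink /proc/$$/fd/3   # the link target now ends in "(deleted)"
cat <&3                  # the content can still be read through fd 3
exec 3<&-                # closing the descriptor frees the inode
```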

Now what happened was that, because /var/tmp wasn’t mounted, files created inside it got the standard type (var_t), which the Portage domain isn’t allowed to execute. It is allowed to execute a lot of types, but not that one ;-) When /var/tmp is properly mounted, the file gets the portage_tmp_t type, for which it does hold execute rights.

Now generally, I don’t like having world-writable locations without noexec. For /tmp, noexec is enabled, but for /var/tmp I have (well, had ;-) ) to allow execution from the file system, mainly because some (many?) Gentoo package builds require it. So how about this dual requirement: allowing Portage to write (and execute) its own files, while still letting libffi do its magic? Certainly, from a security point of view, I might want to restrict this further…

Well, we need to make sure that the location Portage works with (the location pointed to by $PORTAGE_TMPDIR) is specifically made available for Portage: have the directory writable only by the Portage user. I keep it labeled as tmp_t so that the existing policies apply, but it might work with portage_tmp_t immediately set as well. Perhaps I’ll try that one later. With that set, we can give this mount point exec rights (so that libffi can place its file there) in a somewhat more secure manner than allowing exec on world-writable locations.

So now my /tmp and /var/tmp (and /run and /dev/shm and /lib64/rc/init.d) are tmpfs-mounts with the noexec (as well as nodev and nosuid) bits set, with the location pointed towards by $PORTAGE_TMPDIR being only really usable by the Portage user:

$ ls -ldZ /var/portage
drwxr-x---. 4 portage root system_u:object_r:tmp_t 4096 Apr 22 21:45 /var/portage/
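As a concrete sketch, the mount setup described above could look roughly like this in /etc/fstab — the exact option set (and the absence of size limits) is my assumption, not necessarily the author’s actual configuration:

```
# /tmp and /var/tmp: no execution, no device files, no setuid
tmpfs   /tmp          tmpfs  noexec,nosuid,nodev  0 0
tmpfs   /var/tmp      tmpfs  noexec,nosuid,nodev  0 0
# $PORTAGE_TMPDIR: exec allowed, but only reachable by the portage user
tmpfs   /var/portage  tmpfs  nosuid,nodev,uid=portage,gid=root,mode=750  0 0
```

The /var/portage entry deliberately lacks noexec so that Portage builds (and libffi underneath them) can execute files there, while the uid/gid/mode options reproduce the restrictive directory permissions shown above.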

And libffi? Well, allowing applications to create their own executables and execute them is something that should be carefully governed. I’m not aware of any existing or past vulnerabilities, but I can imagine that opening the ffi* file(s) the moment they come up (to make sure you have a file descriptor) allows you to overwrite the content after libffi has created it but before the application actually executes it. By limiting the locations where applications can write files (important step one) and the types they can execute (important step two), we can already manage this a bit more. Using regular DAC this is quite difficult to achieve, but with SELinux we can actually control it.

Let’s first see how many domains are allowed to create, write and execute files:

$ sesearch -c file -p write,create,execute -A | grep write | grep create \
  | grep execute | awk '{print $1}' | sort | uniq | wc -l
32

Okay, 32 target domains. Not that bad, and certainly doable to verify manually (hell, even in a scripted manner). You can now check which of those domains have rights to execute generic binaries (bin_t), possibly needed for command execution vulnerabilities or privilege escalation. Or that have specific capabilities. And if you want to know which of those domains use libffi, you can use revdep-rebuild to find out which files are linked to the libffi libraries.

It goes to show that trying to keep your box secure is a never-ending story (please, companies, allow your system administrators to do their job by giving them the ability to continuously increase security rather than have them ask for budget to investigate potential security mitigation directives based on the paradigm of business case and return on investment using pareto-analytics blaaaahhhh….), and that SELinux can certainly be an important method to help achieve it.

April 27, 2013
Sven Vermeulen a.k.a. swift (homepage, bugs)
How logins get their SELinux user context (April 27, 2013, 01:50 UTC)

Sometimes, especially when users are converting their systems to be SELinux-enabled, their user context is wrong. An example would be when, after logon (in permissive mode), the user is in the system_u:system_r:local_login_t domain instead of a user domain like staff_u:staff_r:staff_t.
So, how does a login get its SELinux user context?

Let’s look at the entire chain of SELinux context changes across a boot. At first, when the system boots, the kernel (and all processes invoked from it) run in the kernel_t domain (I’m going to ignore the other context fields for now until they become relevant). When the kernel initialization has been completed, the kernel executes the init binary. When you use an initramfs, then a script might be called. This actually doesn’t matter that much yet, since SELinux stays within the kernel_t domain until a SELinux-aware init is launched.

When the init binary is executed, init of course starts. But as mentioned, init is SELinux-aware, meaning it will invoke SELinux-related commands. One of these is that it will load the SELinux policy (as stored in /etc/selinux) and then reexecute itself. Because of that, its process context changes from kernel_t towards init_t. This is because the init binary itself is labeled as init_exec_t and a type transition is defined from kernel_t towards init_t when init_exec_t is executed.

Ok, so init now runs in init_t and it goes on with whatever it needs to do. This includes invoking init scripts (which, btw, run in initrc_t because the scripts are labeled initrc_exec_t or with a type that has the init_script_file_type attribute set, and a transition from init_t to initrc_t is defined when such files are executed). When the bootup is finally completed, init launches the getty processes. The commands are mentioned in /etc/inittab:

$ grep getty /etc/inittab
c1:12345:respawn:/sbin/agetty --noclear 38400 tty1 linux
c2:2345:respawn:/sbin/agetty 38400 tty2 linux

These binaries are also explicitly labeled getty_exec_t. As a result, the getty (or agetty) processes run in the getty_t domain (because a transition is defined from init_t to getty_t when getty_exec_t is executed). Ok, so gettys run in getty_t. But what happens when a user now logs on to the system?

Well, the gettys invoke the login binary which, you guessed it, is labeled as something: login_exec_t. As a result (because, again, a transition is defined in the policy), the login process runs as local_login_t. Now the login process invokes the various PAM subroutines, which follow the definitions in /etc/pam.d/login. On Gentoo systems, this by default points to the system-local-login definitions, which point to the system-login definitions. And in this definition, especially under the session section, we find a reference to pam_selinux:

session         required        pam_selinux.so close
session         required        pam_selinux.so multiple open

Now here is where some of the magic starts (see my post on Using pam_selinux to switch contexts for the gritty details). The methods inside this PAM module look up what the context should be for a user login. For instance, when the root user logs on, SELinux checks what SELinux user root is mapped to, equivalent to running semanage login -l:

$ semanage login -l | grep ^root
root                      root                     

In this case, the SELinux user for root is root, but this is not always the case (that login and user are the same). For instance, my regular administrative account maps to the staff_u SELinux user.

Next, it checks what the default context should be for this user. This is done by checking the default_contexts file (such as the one in /etc/selinux/strict/contexts although user-specific overrides can be (and are) placed in the users subdirectory) based on the context of the process that is asking SELinux what the default context should be. In our case, it is the login process running as local_login_t:

$ grep -HR local_login_t /etc/selinux/strict/contexts/*
default_contexts:system_r:local_login_t user_r:user_t staff_r:staff_t sysadm_r:sysadm_t unconfined_r:unconfined_t
users/unconfined_u:system_r:local_login_t               unconfined_r:unconfined_t
users/guest_u:system_r:local_login_t            guest_r:guest_t
users/user_u:system_r:local_login_t             user_r:user_t
users/staff_u:system_r:local_login_t            staff_r:staff_t sysadm_r:sysadm_t
users/root:system_r:local_login_t  unconfined_r:unconfined_t sysadm_r:sysadm_t staff_r:staff_t user_r:user_t
users/xguest_u:system_r:local_login_t   xguest_r:xguest_t

Since we are verifying this for the root SELinux user, the following line of the users/root file is what matters:

system_r:local_login_t  unconfined_r:unconfined_t sysadm_r:sysadm_t staff_r:staff_t user_r:user_t

Here, SELinux looks for the first match in that line that the user has access to. This is defined by the roles that the user is allowed to access:

$ semanage user -l | grep root
root            staff_r sysadm_r

As root is allowed both the staff_r and sysadm_r roles, the first one found in the default context file that matches will be used. So it is not the order in which the roles are displayed in the semanage user -l output that matters, but the order of the contexts in the default context file. In the example, this is sysadm_r:sysadm_t:

system_r:local_login_t  unconfined_r:unconfined_t sysadm_r:sysadm_t staff_r:staff_t user_r:user_t
                        <-----------+-----------> <-------+-------> <------+------> <-----+----->
                                    `- no matching role   `- first (!)     `- second      `- no match

Now that we know what the context should be, it is used for the first execution that the process (still login) performs. So login changes the Linux user (if applicable) and invokes that user’s shell. Because this is the first execution done by login, the new context (root:sysadm_r:sysadm_t) is set for the shell.

And that is why, if you run id -Z, it returns the user context (root:sysadm_r:sysadm_t) if everything works out fine ;-)

April 26, 2013
Alexys Jacob a.k.a. ultrabug (homepage, bugs)
mongoDB and Pacemaker recent bumps (April 26, 2013, 14:23 UTC)

mongoDB 2.4.3

Yet another bugfix release, this new stable branch is surely one of the most quickly iterated I’ve ever seen. I guess we’ll wait a bit longer at work before migrating to 2.4.x.

pacemaker 1.1.10_rc1

This is the release of pacemaker we’ve been waiting for, fixing among other things, the ACL problem which was introduced in 1.1.9. Andrew and others are working hard to get a proper 1.1.10 out soon, thanks guys.

Meanwhile, we (the Gentoo cluster herd) have been contacted by @Psi-Jack, who has offered his help to follow and keep some of our precious clustering packages up to date. I hope our work together will benefit everyone!

All of this is live on portage, enjoy.

Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)
My time abroad: loyalty cards (April 26, 2013, 12:56 UTC)

Compared to most people around me now, and probably most of the people who read my blog, my life is not that extraordinary in terms of travel and moving around. I’ve been, after all, scared of planes for years, and it wasn’t until last year that I got out of the continent — in a year, though, I more than doubled the number of flights I’ve been on, with 18 last year, and more than doubled the number of countries I’ve been to, counting Luxembourg even though I only landed there and got on a bus back to Brussels after Alitalia screwed up.

On the other hand, compared to most of the people I know in Italy, I’ve been going around quite a bit, as I spent a considerable amount of time last year in Los Angeles, and I’ve now moved to Dublin, Ireland. And there are quite a few differences between these places and Italy. I’ve already written a bit about the differences I found during my time in the USA, but this time I want to focus on something that is quite a triviality, yet still a remarkable difference between the three countries I’ve gotten to know up to now. As the title suggests, I’m referring to stores’ loyalty cards.

Interestingly enough, just this week there was an article in the Irish Times about the “privacy invasion” of loyalty cards. I honestly don’t see it as big a deal as many others do. Yes, they do profile your shopping habits. Yes, if you do not keep private the kind of offers they send you, they might tell others something about you as well — the newspaper actually brought up the example of a father who discovered his daughter’s pregnancy because of the kind of coupons the supermarket was sending, based on her change of spending habits; I’m sorry, but I cannot really feel bad about it. After all, absolute privacy and relevant offers are kinda at opposite ends of a spectrum… and I’m usually happy enough when companies are relevant to me.

So of course stores want to know the habits of a single person, or of a single household, and for that they give you loyalty cards… but for you to use them, they have to give you something in return, don’t they? This is where the big difference on this topic appears clearly, if you look at the three countries:

  • in both Italy and Ireland, you get “points” with your shopping; in the USA, instead, the card gives you immediate discounts; I’m pretty sure that this gives not-really-regular-shoppers a good reason to get the card as well: you can easily save a few dollars on a single grocery run by getting the loyalty card at the till;
  • in Italy you redeem the points to get prizes – this works not so differently than with airlines after all – sometimes by adding a contribution, sometimes for free; in my experience the contribution is never worth it, so either you get something for free or just forget about it;
  • in Ireland I still haven’t seen a single prize system; instead they work with coupons: you get a certain amount of points for each euro you spend (usually, one point per euro), and when you reach a certain amount of points, they acquire a value (usually, one cent per point), and a coupon redeemable for that value is sent to you.

Of course, the “European” method (only by contrast with the American one, since I don’t know what other countries do) is a real loyalty scheme: you need a critical mass of points for them to be useful, which means that you’ll try to shop at the same store as much as you can. This is true for airlines as well, after all. On the other hand, people who shop occasionally are less likely to request the card at all, so even if there is some kind of data to be found in their shopping trends, they will be completely ignored by this kind of scheme.

I’m honestly not sure which method I prefer. At this point I still have one or two loyalty cards from my time in Los Angeles, and I’m now collecting a number of loyalty cards here in Dublin. Some are definitely a good choice for me, like the Insomnia card (I love getting coffee at a decent place where I can spend time reading, on weekends); others, like Dunnes, make me wonder: the distance from the supermarket to where I’m going to live most likely offsets the usefulness of their coupons compared to the (otherwise quite more expensive) Spar at the corner.

At any rate, I just wanted to write down my take on the topic, which is definitely not of interest to most of you…

April 25, 2013
Bernard Cafarelli a.k.a. voyageur (homepage, bugs)

Recently, I have been toying around with GateOne, a web-based SSH client/terminal emulator. However, installing it on my server proved to be a bit challenging: it requires tornado as a webserver and uses websockets, while I already have an Apache 2.2 instance running a few sites (and my authentication system configured to my tastes).

So, I looked at how to configure a reverse proxy for GateOne, but websockets were not officially supported by Apache... until recently! Jim Jagielski added the proxy_wstunnel module to trunk a few weeks ago. From what I have seen on the mailing list, backporting to 2.4 is easy (and was suggested as an official backport), but 2.2 required a few additional changes to the original patch (and current upstream trunk).

A few fixes later, I got a working patch (based on Apache 2.2.24), available here:

Recompile with this patch, and you will get a nice and shiny module file!

Now just load it (in /etc/apache2/httpd.conf on Gentoo):

<IfDefine PROXY>
LoadModule proxy_wstunnel_module modules/
</IfDefine>

and add a location pointing to your GateOne installation:

<Location /gateone/ws>
    ProxyPass wss://
    ProxyPassReverse wss://
</Location>

<Location /gateone>
    Order deny,allow
    Deny from all
    Allow from #your favorite rule
</Location>
Reload Apache, and you now have GateOne running behind your Apache server :) If it does not work, first check GateOne's log and configuration, especially the "origins" variable.

For other websocket applications, Jim Jagielski comments here:

ProxyPass /whatever ws://websocket-srvr.example.com/

Basically, the new submodule adds the 'ws' and 'wss' scheme to the allowed protocols between the client and the backend, so you tell Apache that you'll be talking 'ws' with the backend (same as ajp://whatever sez that httpd will be talking ajp to the backend).

Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)
Tarsnap and backup strategies (April 25, 2013, 13:52 UTC)

After having had a quite traumatic experience with a customer’s service running on one of the virtual servers I run last November, I made sure to have a very thorough backup for all my systems. Unfortunately, it turns out to be a bit too thorough, so let me explore with you what was going on.

First of all, the software I use to run the backups is tarsnap — you might have heard of it or not, but it’s basically a very smart service: an open-source client based upon libarchive, plus a server system that stores the content (de-duplicated, compressed and encrypted with a very flexible key system). The author is a FreeBSD developer, and he’s charging an insanely small amount of money.

But the most important thing to know when you use tarsnap is that you just always create a new archive: it doesn’t really matter what you changed; just get everything together, and it will automatically de-duplicate the content that didn’t change, so why bother? My first, dumb method of backups, which is still running as of this time, is simply, every two hours, to dump a copy of the databases (one server runs PostgreSQL, the other MySQL — I no longer run MongoDB, but I’m starting to wonder about it, honestly), and then use tarsnap to generate an archive of the whole /etc, /var and a few more places where important stuff is. The archive is named after the date and time of the snapshot. And I haven’t deleted any snapshot since I started, for most servers.

It was a mistake.

The moment when I went to recover the data out of earhart (the host that still hosts this blog, a customer’s app, and a couple more sites, like the assets for the blog and even Autotools Mythbuster — but all the static content, as it’s managed by git, is now also mirrored and served active-active from another server called pasteur), the time it took to extract the backup was unsustainable. The reason was obvious when I thought about it: since it had been de-duplicating for almost a year, it would have to scan hundreds if not thousands of archives to get all the small bits and pieces.

I still haven’t replaced this backup system, which is very bad for me, especially since it takes a long time to delete the older archives even after extracting them. On the other hand, it’s also largely a matter of tradeoffs in the expenses, as going through all the older archives to remove the old crap drained my tarsnap credits quickly. Since the data is de-duplicated and encrypted, the archives’ data needs to be downloaded to be decrypted before it can be deleted.

My next plan is to set it up so that the script executes in different modes: 24 times in 48 hours (every two hours), 14 times in 14 days (daily), and 8 times in two months (weekly). The problem is actually doing the rotation properly with a script, but I’ll probably publish a Puppet module to take care of that, since it’s the easiest way for me to make sure it executes as intended.
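A shell sketch of such a three-tier scheme might look like the following. The tier boundaries and retention counts are my reading of the plan above, not the author's actual script, and the tarsnap invocations are left as comments since archive names and backed-up paths are illustrative:

```shell
# Pick a tier for this run: midnight on Sunday -> weekly,
# midnight on other days -> daily, any other hour -> bihourly.
tier_for() {    # $1 = hour of day (00-23), $2 = day of week (1-7, 7 = Sunday)
    if [ "$1" = "00" ] && [ "$2" = "7" ]; then
        echo weekly
    elif [ "$1" = "00" ]; then
        echo daily
    else
        echo bihourly
    fi
}

tier=$(tier_for "$(date +%H)" "$(date +%u)")
echo "this run would create a ${tier} archive"

# The actual backup and expiry would then look something like:
#   tarsnap -c -f "${tier}-$(date +%Y%m%d-%H%M)" /etc /var
#   tarsnap --list-archives | grep "^${tier}-" | sort | head -n -N \
#       | xargs -r -n1 tarsnap -d -f
# where N is the retention for the tier: 24 bihourly, 14 daily, 8 weekly.
```

Keeping the tier name in the archive name is what makes the pruning step a simple grep, instead of having to parse archive dates.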

The essence of this post is basically to warn you all that, no matter how cheap it is to keep around the whole set of backups since the start of time, it’s still a good idea to rotate them… especially for content that does not change that often! Think about it whenever you set up any kind of backup strategy…

April 24, 2013
Alexys Jacob a.k.a. ultrabug (homepage, bugs)
Hello Gentoo Planet (April 24, 2013, 08:51 UTC)

Hey Gentoo folks !

I finally followed a friend’s advice and stepped into the Gentoo Planet and Universe feeds. I hope my modest contributions will help and be of interest to some of you readers.

As you’ll see, I don’t talk only about Gentoo but also about photography and technology more generally. I also often post about the packages I maintain or have an interest in, to highlight their key features or bug fixes.

April 21, 2013

Those of you who don't live under a rock will have learned by now that AMD has published VDPAU code to use the Radeon UVD engine for accelerated video decode with the free/open source drivers.

In case you want to give it a try, mesa-9.2_pre20130404 has been added (under package.mask) to the portage tree for your convenience. Additionally you will need a patched kernel and new firmware.


For kernel 3.9, grab the 10 patches from the dri-devel mailing list thread (recommended). [UPDATE]I put the patches into a tarball and attached them to Gentoo bug 466042[/UPDATE]. For kernel 3.8 I have collected the necessary patches here, but be warned that kernel 3.8 is not officially supported. It works on my Radeon 6870; YMMV.


The firmware is part of radeon-ucode-20130402, but has not yet reached the linux-firmware tree. If you require other firmware from the linux-firmware package, remove the radeon files from the savedconfig file and build the package with USE="savedconfig" to allow installation together with radeon-ucode. [UPDATE]linux-firmware-20130421 now contains the UVD firmware, too.[/UPDATE]

The new firmware files are
radeon/RV710_uvd.bin: Radeon 4350-4670, 4770.
radeon/RV770_uvd.bin: Not useful at this time. Maybe later for 4200, 4730, 4830-4890.
radeon/CYPRESS_uvd.bin: Evergreen cards.
radeon/SUMO_uvd.bin: Northern Islands cards and Zacate/Llano APUs.
radeon/TAHITI_uvd.bin: Southern Islands cards and Trinity APUs.

Testing it

If your kernel is properly patched and finds the correct firmware, you will see this message at boot:
[drm] UVD initialized successfully.
If mesa was correctly built with VDPAU support, vdpauinfo will list the following codecs:
Decoder capabilities:

name level macbs width height
MPEG1 16 1048576 16384 16384
MPEG2_SIMPLE 16 1048576 16384 16384
MPEG2_MAIN 16 1048576 16384 16384
H264_BASELINE 16 9216 2048 1152
H264_MAIN 16 9216 2048 1152
H264_HIGH 16 9216 2048 1152
VC1_SIMPLE 16 9216 2048 1152
VC1_MAIN 16 9216 2048 1152
VC1_ADVANCED 16 9216 2048 1152
MPEG4_PART2_SP 16 9216 2048 1152
MPEG4_PART2_ASP 16 9216 2048 1152
If mplayer and its dependencies were correctly built with VDPAU support, running it with the "-vc ffh264vdpau," parameter will output something like the following when playing back an H.264 file:
VO: [vdpau] 1280x720 => 1280x720 H.264 VDPAU acceleration
To make mplayer use acceleration by default, uncomment the [vo.vdpau] section in /etc/mplayer/mplayer.conf.
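For reference, the uncommented section looks roughly like this — a sketch only, since the exact codec list in the shipped file depends on your mplayer version:

```
[vo.vdpau]
# Prefer the VDPAU-accelerated decoders; the trailing comma lets
# mplayer fall back to software decoding for unsupported streams.
vc=ffmpeg12vdpau,ffwmv3vdpau,ffvc1vdpau,ffh264vdpau,ffodivxvdpau,
```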

Gallium3D Head-up display

Another cool new feature is the Gallium3D HUD (link via Phoronix), which can be enabled with the GALLIUM_HUD environment variable. This supposedly works with all the Gallium drivers (i915g, radeon, nouveau, llvmpipe).

An example screenshot of Supertuxkart using GALLIUM_HUD="cpu0+cpu1+cpu2:100,cpu:100,fps;draw-calls,requested-VRAM+requested-GTT,pixels-rendered"

If you have any questions or problems setting up UVD on Gentoo, stop by #gentoo-desktop on freenode IRC.

Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)

This is a follow-up on my last post for autotools introduction. I’m trying to keep these posts bite sized both because it seems to work nicely, and because this way I can avoid leaving the posts rotting in the drafts set.

So after creating a simple autotools build system in the previous post, you might now want to know how to build a library — this is where the first part of the complexity kicks in. The complexity is not, though, in using libtool, but in making a proper library. So the question is: “do you really want to use libtool?”

Let’s start from a fundamental rule: if you’re not going to install a library, you don’t want to use libtool. Some projects that only ever deal with programs still use libtool because that way they can rely on .la files for static linking. My suggestion is (very simply) not to rely on them as much as you can. Doing it this way means that you no longer have to care about using libtool for non-library-providing projects.

But in the case that you are building said library, using libtool is important. Even if the library is internal-only, trying to build it without libtool is just going to be a big headache for the packager who looks into your project (trust me, I’ve seen such projects). Before going into the details of how to use libtool, though, let’s look at something else: what you need to think about in your library.

First of all, make sure to have a unique prefix for your public symbols, be they constants, variables or functions. You might also want one for symbols that you use within your library across different translation units — my convention in this example is going to be that symbols starting with foo_ are public, while symbols starting with foo__ are private to the library. You’ll soon see why this is important.

Reducing the number of symbols that you expose is not only a good performance consideration; it also means that you avoid the off-chance of symbol collisions, which are a big problem to debug. So do pay attention.

There is another thing that you should consider when building a shared library, and that’s the way the library’s ABI is versioned, but it’s a topic that, in and by itself, takes more time to discuss than I want to spend in this post. I’ll leave that up to my full guide.

Once you’ve got these details sorted out, you should start by slightly changing the configure.ac file from the previous post so that it initializes libtool as well:

AC_INIT([myproject], [123], [], [])
AM_INIT_AUTOMAKE([foreign no-dist-gz dist-xz])
LT_INIT


Now, it is possible to provide a few options to LT_INIT, for instance to disable the generation of static archives by default. My personal recommendation is not to touch those options in most cases. Packagers will disable static linking when it makes sense, and if the user does not know much about static and dynamic linking, they are better off getting everything by default on a manual install.

On the Makefile.am side, the changes are very simple. Libraries built with libtool have a different class than programs and static archives, so you declare them as lib_LTLIBRARIES with a .la extension (at build time this is unavoidable). The only real difference between _LTLIBRARIES and _PROGRAMS targets is that the former get their additional libraries to link from _LIBADD rather than _LDADD like the latter.

bin_PROGRAMS = fooutil1 fooutil2 fooutil3
lib_LTLIBRARIES = libfoo.la

libfoo_la_SOURCES = lib/foo1.c lib/foo2.c lib/foo3.c
libfoo_la_LIBADD = -lz
libfoo_la_LDFLAGS = -export-symbols-regex '^foo_[^_]'

fooutil1_LDADD = libfoo.la
fooutil2_LDADD = libfoo.la
fooutil3_LDADD = libfoo.la -ldl

pkginclude_HEADERS = lib/foo1.h lib/foo2.h lib/foo3.h

The _HEADERS variable is used to define which header files to install and where. In this case, they go into ${prefix}/include/${PACKAGE}, as I declared it a pkginclude install.

The use of -export-symbols-regex – further documented in the guide – ensures that only the symbols that we want to have publicly available are exported, and does so in an easy way.
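Since -export-symbols-regex takes an extended regular expression, you can sanity-check it against your symbol names with grep before building (the symbol names below are made up for illustration):

```shell
# Simulated symbol list: two public foo_ symbols, one private foo__
# helper, and one unprefixed symbol.
printf '%s\n' foo_init foo_free foo__helper bar_init > symbols.txt

# Apply the same ERE libtool will use; only foo_init and foo_free match:
# [^_] rejects the double-underscore private names.
grep -E '^foo_[^_]' symbols.txt

rm symbols.txt
```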

This is about it for now — one thing that I haven’t added in the previous post, but which I’ll expand on in the next iteration or the one after, is that the only command you need to regenerate the autotools files is autoreconf -fis, and that still applies after introducing libtool support.

April 18, 2013
Andreas K. Hüttel a.k.a. dilfridge (homepage, bugs)

Bitrot is accumulating, and while we've tried to keep kdpim-4.4 running in Gentoo as long as possible, the time is slowly coming to say goodbye. In effect this is triggered by annoying problems like these:

There are probably many more such bugs around, where incompatibilities between kdepim-4.4 and kdepimlibs of more recent releases occur or other software updates have led to problems. Slowly it's getting painful, and definitely more painful than running a recent kdepim-4.10 (which has in my opinion improved quite a lot over the last major releases).
Please be prepared for the following steps:
  • end of April 2013, all kdepim-4.4 packages in the Gentoo portage tree will be package.masked 
  • end of May 2013, all kdepim-4.4 packages in the Gentoo portage tree will be removed
  • afterwards, we will finally be able to simplify the eclasses a lot by removing the special handling
We still have the kdepim-4.7 upgrade guide around, and it also applies to the upgrade from kdepim-4.4 to any later version. Feel free to improve it or suggest improvements.

R.I.P. kmail1.

Alexys Jacob a.k.a. ultrabug (homepage, bugs)
py3status v0.9 (April 18, 2013, 17:41 UTC)

First of all, py3status is on PyPI! You can now install it with the simple and usual:

$ pip install py3status

This new version features my first pull request, from @Fandekasp, who kindly wrote a pomodoro module which helps adepts of that technique by putting a counter on their bar. I also fixed a few glitches in module injection and in the documentation.

Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)

I’ve been asked over on Twitter if I had any particular tutorial for an easy one-stop-shop tutorial for Autotools newbies… the answer was no, but I will try to make up for it by writing this post.

First of all, the name autotools covers quite a few different tools. If you have a very simple program (not hellow-simple, but still simple), you definitely want to use at the very least two of them: autoconf and automake. While you could use the former without the latter, you really don’t want to. This means that you need two files: configure.ac and Makefile.am.

The first of the two files (configure.ac) is processed to produce a configure script which the user will be executing at build time. It is also the bane of most people because, if you look at one for a complex project, you’ll see lots of content (and logic) and next to no comments on what things do. Lots of it is cargo-culting and I’m afraid I cannot help but just show you a possible basic configure.ac file:

AC_INIT([myproject], [123], [], [])
AM_INIT_AUTOMAKE([foreign no-dist-gzip dist-xz])
AC_PROG_CC
AC_OUTPUT([Makefile])

Let me explain. The first two lines are used to initialize autoconf and automake respectively. The former is being told the name and version of the project, the place to report bugs, and a URL for the package to use in documentation. The latter is told that we’re not a GNU project (seriously, this is important — you wouldn’t believe how many tarballs I find with 0-sized files just because they are mandatory in the default GNU layout; even though I found at least one crazy package lately that wanted to have a 0-sized NEWS file), and that we want a .tar.xz tarball and not a .tar.gz one (which is the default).

After initializing the tools, you need to, at the very least, ask for a C compiler. You could have asked for a C++ compiler as well, but I’ll leave that as an exercise for the reader. Finally, you have to tell it to output Makefile (it’ll use Makefile.in, but we’ll create a Makefile.am instead soon).

To build a program, you then need to create a Makefile.am similar to this:

bin_PROGRAMS = hellow

dist_doc_DATA = README

Here we’re telling automake that we have a program called hellow (whose sources are, by default, hellow.c) which has to be installed in the binary directory, and a README file that has to be distributed in the tarball and installed as a piece of documentation. Yes, this is really enough as a very basic Makefile.am.

If you were to have two programs, hellow and hellou, and a convenience library between the two you could do it this way:

bin_PROGRAMS = hellow hellou

hellow_SOURCES = src/hellow.c
hellow_LDADD = libhello.a

hellou_SOURCES = src/hellou.c
hellou_LDADD = libhello.a

noinst_LIBRARIES = libhello.a
libhello_a_SOURCES = lib/libhello.c lib/libhello.h

dist_doc_DATA = README

But then you’d have to add AC_PROG_RANLIB to the configure.ac calls. My suggestion is that if you want to link things statically and it’s just one or two files, just go for building them twice… it can actually make the build faster (one less serialization step), and with the new LTO options it should improve the optimization as well.
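The “build it twice” alternative simply lists the shared sources in each program, with no library target and no AC_PROG_RANLIB needed (same hypothetical layout as above):

```
bin_PROGRAMS = hellow hellou

hellow_SOURCES = src/hellow.c lib/libhello.c
hellou_SOURCES = src/hellou.c lib/libhello.c

dist_doc_DATA = README
```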

As you can see, this is really easy when done at the basic level… I’ll keep writing a few more posts with easy solutions, and probably next week I’ll integrate all of this into Autotools Mythbuster and update the ebook with an “easy how to” as an appendix.

Alexys Jacob a.k.a. ultrabug (homepage, bugs)
mongoDB v2.4.2 released (April 18, 2013, 10:53 UTC)

After the security-related bumps of the previous releases over the last few weeks, it was about time 10gen released a 2.4.x version fixing the following issues:

  • Fix for upgrading sharded clusters
  • TTL assertion on replica set secondaries
  • Several V8 memory leak and performance fixes
  • High volume connection crash

I guess everything listed above would have affected our cluster at work, so I’m glad we’ve been patient in following up on this release :) See the changelog for details.

Jeremy Olexa a.k.a. darkside (homepage, bugs)
I’ve been in Australia for two months (April 18, 2013, 08:05 UTC)

Well, the title says it. I’ve now been here for two months. I’m working at Skydive Maitland, which is 40 minutes from the coast and 2+ hours from Sydney. So far, I’ve broken even on my Australian travel/living expenses AND I’m skydiving 3-4 days a week — what could be better? I did 99 jumps in March; normally I do 400 per year. Australia is pretty nice: it is easy to live here and there is plenty to see, but it is hard to get places since the country is so big and I need a few days’ break to go someplace.

How did I end up here? I knew I would go to Australia at some point during my trip, since I would be passing by and it is a long way from home. (Sidenote: of all the travelers at hostels in Europe, about 40-50% that I met were Aussies.) In December, I bought my right to work in Australia by getting a working holiday visa. That required $270 and 10 minutes to fill out a form on the internet; overnight I had my approval. So, that was settled: I could now work for 12 months in Australia and show up there within a year. I knew I would be working in Australia because it is a rather expensive country to live/travel in. I thought about picking fruit in an orchard since they always hire backpackers, but skydiving sounded more fun in the end (of course!). So, in January, I emailed a few dropzones stating that I would be in Australia in the near future and looking for work. Crickets… I didn’t hear back from anyone. Fair enough, most businesses will have adequate staffing in the middle of the busy season. But, one place did get back to me some weeks later. Then, it took one Skype convo to come to a friendly agreement, and I was looking for flights right after. Due to some insane price scheming, there was one flight in two days that was half the price of the others (thank you!). That sealed my decision, and I was off…

Onward looking, full time instructor for March and April then become part time in May and June so I can see more of Australia. I have a few road trips in the works, I just need my own vehicle to make that happen. Working on it. After Australia, I’m probably going to Japan or SE Asia like I planned.

Since my sister already asked, Yes, I do see kangaroos nearly everyday..

Alexys Jacob a.k.a. ultrabug (homepage, bugs)
San Francisco : streets (April 18, 2013, 06:02 UTC)


April 17, 2013
Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)
Bundling libraries for trouble (April 17, 2013, 12:01 UTC)

You might remember that I’ve been very opinionated against bundling libraries and, to a point, against static linking of libraries for Gentoo. My reasons have been mostly geared toward security, but there have been a few more instances I wrote about of problems with bundled libraries and stability, for instance the moment when you get symbol collisions between a bundled library and a different version of said library used by one of the dependencies, like that one time in xine.

But there are other reasons why bundling is bad in most cases, especially for distributions, and it’s much worse than just statically linking everything. Unfortunately, while all the major distributions have, as far as I know, a policy against bundled (or even statically linked) libraries, there are very few people speaking against them outside your average distribution talk.

One such rare gem came out of Steve McIntyre a few weeks ago, and it actually makes two different topics I wrote about meet in a quite interesting way. Steve worked on finding which software packages make use of CPU-specific assembly code for performance-critical paths, which would have to be ported for the new 64-bit ARM architecture (AArch64). And this has mostly reminded me of x32.

In many ways, there are a lot of problems in common between AArch64 and x32, and they mostly come down to the fact that in both cases you have an architecture (or ABI) that is very similar to a known, well-understood architecture but is not identical. The biggest difference, apart from the implementations themselves, is in the way the two have been conceived: as I said before, Intel’s public documentation for the ABI’s inception noted explicitly that it was designed for closed systems, rather than open ones (the definition of open or closed system has nothing to do with open- or closed-source software, and has to be found more in the expectations of what the users will be able to add to the system). The recent stretching of x32 onto open system environments is, in my opinion, not really a positive thing, but if that’s what people want…

I think Steve’s report is worth a read, both for those who are interested to see what it takes to introduce a new architecture (or ABI), and in particular for those who maintained before that my complaint about x32 breaking assembly code all over the place was a moot point — people with a clue about how GCC works know that sometimes you cannot get away with its optimizations, and you actually need to handwrite code; at the same time, as Steve noted, sometimes the handwritten code is so bad that you should drop it and move back to plain compiled C.

There is also a visible amount of software where the handwritten assembly gets imported due to bundling and direct inclusion… this tends to be relatively common because handwritten assembly is usually tied to performance-critical code… which for many is the same code you bundle because a dynamic link is “not fast enough” — I disagree.

So anyway, give a read to Steve’s report, and then compare with some of the points made in my series of x32-related articles and tell me if I was completely wrong.

April 16, 2013
Alexys Jacob a.k.a. ultrabug (homepage, bugs)
San Francisco : chinatown (April 16, 2013, 20:41 UTC)


Jeremy Olexa a.k.a. darkside (homepage, bugs)
Sri Lanka in February (April 16, 2013, 06:16 UTC)

I wrote about how I ended up in Sri Lanka in my last post, here. I came down with a GI sickness during my second week, from a bad meal or water, and it spoiled the last week that I was there; but I had my own room, a bathroom, a good book, and a resort on the beach. Overall, the first week was fun: teaching English, living in a small village and being immersed in the culture while staying with a host family. Hats off to volunteers that can live there long term. I was craving “western culture” after a short time. I didn’t see as much as I wanted to, like the wild elephants, Buddhist temples or surf lessons. There will be other places or times to do that stuff though.

Sri Lanka pics

April 15, 2013
Paweł Hajdan, Jr. a.k.a. phajdan.jr (homepage, bugs)

It is now over a week since the announcement of Blink, a new rendering engine for the Chromium project.

I hope it will be useful to provide links to the best articles about it — the ones with good, technical content.

Thoughts on Blink from HTML5 Test is a good summary of the history of Chrome and WebKit, and puts this recent announcement in context. For even more context (nothing about Blink), you can read Paul Irish's excellent WebKit for Developers post.

Peter-Paul Koch (probably best known for has good articles about Blink: Blink and Blinkbait.

I also found it interesting to read Krzysztof Kowalczyk's Thoughts on Blink.

Highly recommended Google+ posts by Chromium developers:

If you're interested in the technical details or want to participate in the discussions, why not follow blink-dev, the mailing list of the project?

Gentoo at FOSSCOMM 2013 (April 15, 2013, 19:03 UTC)

What? FOSSCOMM 2013

Free and Open Source Software COMmunities Meeting (FOSSCOMM) 2013

When? April 20-21, 2013

Where? Harokopio University, Athens, Greece


FOSSCOMM 2013 is almost here, and Gentoo will be there!

We will have a booth with Gentoo promo stuff: stickers, flyers, badges, live DVDs and much more! Whether you're a developer, a user, or simply curious, be sure to stop by. We are also going to represent Gentoo in a round table with other FOSS communities. See you there!

Pavlos Ratis contributed the draft for this announcement.

Rolling out systemd (April 15, 2013, 10:43 UTC)


We started to roll out systemd today.
But don’t panic! Your system will still boot with openrc and everything is expected to be working without troubles.
We are aiming to support both init systems, at least for some time (a long time, I believe), and having systemd replace udev (note: systemd is a superset of udev) is a good way to make systemd users happy in Sabayon land. From my testing, the slowest part of the boot is now the genkernel initramfs, in particular the modules autoload code, which, as you may expect, I’m going to try to improve.

Please note that we are not willing to accept systemd bugs yet, because we’re still fixing up service units and adding the missing ones, the live media scripts haven’t been migrated and the installer is not systemd aware. So, please be patient ;-)

Having said this, if you are brave enough to test systemd out, you’re in luck: in Sabayon, it’s just two commands away, thanks to eselect-sysvinit and eselect-settingsd. And since I expect those brave people to know how to use eselect, I won’t waste more time on them now.

April 14, 2013
Alexys Jacob a.k.a. ultrabug (homepage, bugs)
py3status v0.8 (April 14, 2013, 20:33 UTC)

I went on a coding frenzy to implement most of the stuff I was not happy with in py3status so far. Here comes py3status, code name: San Francisco (more photos to come).
San Francisco


I always had the habit of using tabs to indent my code. @Lujeni pointed out that this is not the PEP8-recommended method and that we should start respecting more of it in the near future. Well, he’s right, and I guess it was time to move on, so I switched to using spaces and corrected a lot of other coding-style issues, which took my code’s pylint score from around -1/10 to around 9.5/10!

Threaded modules’ execution

This was the major thing I was not happy with: when a user-written module was executed for injection, the time it took to produce its response would cause py3status to stop updating the bar. This means that if you had to make a database call to get some stuff displayed on the bar, and it took 10 seconds, py3status slept for those 10 seconds instead of updating the bar! This behavior could cause some delays in the clock ticking, for example.

I decided to offload all of the modules’ detection and execution to a thread to solve this problem. To be frank, this also helped rationalize the code better. No more delays and cleaner handling is what you get: modules’ output gets appended to the bar whenever it’s ready, however long they take to execute!
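The idea can be sketched like this (a simplified illustration, not py3status’s actual code; Module, slow_db_check and the intervals are made up):

```python
import threading
import time

class Module(threading.Thread):
    """Run one user module in its own thread so a slow one
    cannot freeze the bar's update loop."""

    def __init__(self, name, interval, fn):
        super().__init__(daemon=True)
        self.name = name
        self.interval = interval
        self.fn = fn        # callable producing this module's bar text
        self.output = None  # latest result, read by the bar loop

    def run(self):
        while True:
            self.output = self.fn()  # only this thread waits on slow calls
            time.sleep(self.interval)

def slow_db_check():
    time.sleep(0.2)  # stand-in for a slow database call
    return "db: ok"

mod = Module("db", interval=1, fn=slow_db_check)
mod.start()
time.sleep(0.5)    # meanwhile the main loop keeps refreshing the bar
print(mod.output)  # the module's text appears once it is ready
```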


It was about time the examples available in py3status also worked with python3.

Andreas K. Hüttel a.k.a. dilfridge (homepage, bugs)

For a long time, I've been extraordinarily happy with both NVIDIA graphics hardware and the vendor-supplied binary drivers. Functionality, stability, speed. However, things are changing and I'm frustrated. Let me tell you why.

Part of my job is to do teaching and presentations. I have a trusty thinkpad with a VGA output which can in principle supply about every projector with a decent signal. Most of these projectors do not display the native 1920x1200 resolution of the built-in display. This means that if you configure the second display to clone the first, you will end up seeing only part of the screen. In the past, I solved this by using nvidia-settings, setting the display to a lower resolution supported by the projector (nvidia-settings told me which ones I could use), and then letting it clone things. Not so elegant, but everything worked fine — and this amount of fiddling is still something that can be done at the front of a seminar room while someone is introducing you and the audience gets impatient.

Now consider my surprise when, suddenly after a driver upgrade, the built-in display was completely glued to the native resolution. Only one setting possible: 1920x1200. The first time I saw that I was completely clueless what to do; starting the talk took a bit longer than expected. A simple, but completely crazy, solution exists: disable the built-in display and only enable the projector output. Then your X session is displayed there and resized accordingly. You'll have to look at the silver screen while talking, but that's not such a problem. A bigger pain actually is that you may have to leave the podium in a hurry and then have no video output at all...

Now, googling. Obviously a lot of other people have the same problem as well. Hacks like this one just don't work; I've ended up with nice random screen distortions. Here's a thread on the nvidia devtalk forum from which I can quote: "The way it works now is more 'correct' than the old behavior, but what the user sees is that the old way worked and the new does not." It seems like nVidia now expects each application to handle any mode switching internally. My usecase does not even exist from their point of view. Here's another thread, and in general users are not happy about it.

Finally, I found this link where the following reply is given: "The driver supports all of the scaling features that older drivers did, it's just that nvidia-settings hasn't yet been updated to make it easy to configure those scaling modes from the GUI." Just great.

Gentlemen, this is a serious annoyance. Please fix it. Soon. Not everyone is willing to read up on xrandr command line options and fiddle with ViewPortIn, ViewPortOut, MetaModes and other technical stuff. Especially while the audience is waiting.
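For reference, the xrandr incantation that replaces the old nvidia-settings workflow looks roughly like this — the output names and resolutions are assumptions for illustration, so check the real ones first:

```shell
# Example only: LVDS-0 is the assumed laptop panel, VGA-0 the projector.
# Query the real output names and supported modes first:
xrandr -q

# Drive the projector at 1024x768 and scale the 1920x1200 panel
# down so both show the same picture:
xrandr --output VGA-0 --mode 1024x768 \
       --output LVDS-0 --scale-from 1024x768 --same-as VGA-0
```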

April 13, 2013
Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)
So it starts, my time in Ireland (April 13, 2013, 19:58 UTC)

Today makes a full week that I’ve survived my move to Dublin. Word’s out on who my new employer is (but as usual, since this blog is personal and should not be tied to my employer, I’m not even going to name it), and I started the introductory courses. One thing I can be sure of: I will be eating healthily and compatibly with my tastes — thankfully chicken, especially spicy chicken, seems to be available everywhere in Ireland, yay!

I have spent almost all my life in Venice, never staying away from it for long periods of time, with the exception of last year, which, as you probably know, I mostly spent in Los Angeles. 2012 was a funny year like that: I had never partied for the new year, but on 31st December 2011 I was at a friend’s place with friends, after which some of us ended up leaving at around 3am… for the first time in my life I slept on a friend’s couch. Then it was time for my first ever week-long vacation, with the same group of friends, in the Venetian Alps.

With this premise, it’s obvious that Dublin is looking a bit alien to me. It helps that I’ve spent a few weeks in London over the past years, so I was already used to at least a few customs shared between the British and the Irish — they probably don’t like being reminded that they share some customs with the British, but there it is. Still, Dublin is definitely more similar to Italy than Los Angeles.

Funny episode of the day: I went to Boots and, after searching the aisles for a while, asked one of the workers if they kept hydrogen peroxide, which I used almost daily both in Italy and the US as a disinfectant – I cut or scrape very easily – and after being looked at in a very strange way, I was informed that it cannot be sold in Ireland anymore… I’d guess it has something to do with its use in the London bombings of ‘05. Luckily they didn’t call the police.

I have to confess though that I like the restaurants in the touristy, commercial areas better than those in the upscale, modern new districts — I love Nando’s for instance, which is nowhere near Irish, but I love its spiciness (and this time around I could buy the freaking salt!). But also, most pubs have very good chicken.

I still don’t have a permanent place though. I need to look for one soonish I suppose, but the job introduction took priority for the moment. Even so, if the guests in the next apartment are going to throw another party at 4.30am, I might decide to find something sooner rather than later.

Andreas K. Hüttel a.k.a. dilfridge (homepage, bugs)

GnuPG is an excellent tool for encryption and signing. However, while breaking encryption or forging signatures at large key sizes is likely somewhere between painful and impossible even for agencies on a significant budget, all of this is only ever as safe as your private key. Let's insert the obvious semi-relevant xkcd reference here; someone hacking your computer, installing a keylogger and grabbing the key file is more likely. While there are no preventive measures that work against all conceivable attacks, you can at least make things as hard as possible. Be smart, use a smartcard. You'll get a number of additional bonuses along the way. I'm writing up my personal experiences here, as a kind of guide. Also, I am picking a compromise between ultra-security and convenience, so please do not complain if you find guides on the web on how to do things "better".

The smart cards

Obviously, you will need one or more OpenPGP-compatible smart cards and a reader device. I ordered my cards from kernel concepts, since that shop is referred to in the GnuPG smartcard howto. These are the cards developed by g10code, which is Werner Koch's company (he is the principal author of GnuPG). The website says "2048bit RSA capable", the text printed on the card says "3072bit RSA capable", but at least the currently sold cards support 4096bit RSA keys just fine. (You will need at least app-crypt/gnupg-2.0.19-r2 for encryption keys bigger than 3072bit, see this link and this portage commit.)

The readers

While the GnuPG smartcard howto provides a list of supported reader devices, that list (and indeed the whole document) is a bit stale. The best source of information that I found was the page on the Debian Wiki; Yutaka Niibe, who edits that page regularly, is also one of the contributors to the smartcard code in GnuPG. In general there are two types of readers, those with a stand-alone pinpad and those without. The extra pinpad ensures that for normal operations like signing and encryption, the PIN for unlocking the keys never enters the computer itself — so without tampering with the reader hardware it is pretty hard to sniff. I bought an SCM SPG532 reader, one of the first devices supported by GnuPG; however, it's not produced anymore and you may have to resort to newer models soon.

Drivers and software

Now, you'll want to activate the USE flags "smartcard" and maybe "pkcs11", and rebuild app-crypt/gnupg. Afterwards, you may want to log out and back in again, since you may need the gpg-agent from the new build.
Several different standards for card reader access exist. One in particular is the USB standard for integrated circuit card interface devices, CCID for short; the driver for it is built directly into GnuPG, and the SCM SPG532 is such a device. Another set of drivers is provided by sys-apps/pcsc-lite; GnuPG will use it if the built-in support fails, but it requires a daemon to be running (pcscd; just add it to the default runlevel and start it). The page on the Debian Wiki also lists the required drivers.
These drivers do not need much (or any) configuration, but should work in principle out of the box. Testing is easy, plug in the reader, insert a card, and issue the command
gpg --card-status
If it works, you should see a message about (among other things) the manufacturer and serial number of your card. Otherwise, you'll just get an uninformative error. The first thing to check (especially for CCID) is whether the device permissions are OK; just repeat the above test as root. If you can now see your card, you know you have permission trouble.
Fiddling with the device file permissions was a serious pain, since all the online docs are hopelessly outdated. Please forget about the files linked in the GnuPG smartcard howto. (One cannot be found anymore, the other does not work alone and tries to do things in unnecessarily complicated ways.) At some point I just gave up on things like user groups and told udev to hardwire the device to my user account, creating the following file as /etc/udev/rules.d/gnupg-ccid.rules:
ACTION=="add", SUBSYSTEM=="usb", ENV{PRODUCT}=="4e6/e003/*", OWNER:="huettel", MODE:="600"
ACTION=="add", SUBSYSTEM=="usb", ENV{PRODUCT}=="4e6/5115/*", OWNER:="huettel", MODE:="600"
With similar settings it should in principle be possible to solve all permission problems. (You may want to change the USB IDs and the OWNER for your needs.) Then, a quick
udevadm control --reload-rules
followed by unplugging and re-plugging the reader. Now you should be able to check the contents of your card.
If you still have problems, check the following: to access the cards, GnuPG starts a background process, the smart card daemon (scdaemon). scdaemon tends to hang every now and then after a card is removed. Just kill it (you need SIGKILL):
killall -9 scdaemon
and try accessing the card again afterwards; the daemon is re-started by GnuPG. A lot of improvements to smart card handling are scheduled for gnupg-2.0.20; I hope this will be fixed as well.
Here's what a successful card-status command looks like on a blank card:
huettel@pinacolada ~ $ gpg --card-status
Application ID ...: D276000124010200000500000AFA0000
Version ..........: 2.0
Manufacturer .....: ZeitControl
Serial number ....: 00000AFA
Name of cardholder: [not set]
Language prefs ...: de
Sex ..............: unspecified
URL of public key : [not set]
Login data .......: [not set]
Signature PIN ....: forced
Key attributes ...: 2048R 2048R 2048R
Max. PIN lengths .: 32 32 32
PIN retry counter : 3 0 3
Signature counter : 0
Signature key ....: [none]
Encryption key....: [none]
Authentication key: [none]
General key info..: [none]
huettel@pinacolada ~ $

That's it for now, part 2 will be about setting up the basic card data and gnupg functions, then we'll eventually proceed to ssh and pam...

April 11, 2013
Arun Raghavan a.k.a. ford_prefect (homepage, bugs)
PulseAudio in GSoC 2013 (April 11, 2013, 11:34 UTC)

That’s right — PulseAudio will be participating in the Google Summer of Code again this year! We had a great set of students and projects last year, and you’ve already seen some of their work in the last release.

There are some more details on how to get involved on the mailing list. We’re looking forward to having another set of smart and enthusiastic new contributors this year!

p.s.: Mentors and students from organisations (GStreamer and BlueZ, for example), do feel free to get in touch with us if you have ideas for projects related to PulseAudio that overlap with those other projects.

April 10, 2013
Patrick Lauer a.k.a. bonsaikitten (homepage, bugs)
GCC 4.8 - building everything? (April 10, 2013, 13:49 UTC)

The last few days I've spent a few hundred CPU-hours building things with gcc 4.8. So far, alphabetically up to app-office/, it's been really boring.
The amount of failing packages is definitely lower than with 4.6 or 4.7. And most of the current troubles are unrelated - for example the whole info page generation madness.
At the current rate of filing and fixing bugs we should be able to unleash this new version on the masses really soon - maybe in about a month? (Or am I just too optimistic?)

Denis Dupeyron a.k.a. calchan (homepage, bugs)
Forking ebuilds (April 10, 2013, 00:14 UTC)

Here’s a response to an email thread I sent recently. This was on a private alias but I’m not exposing the context or quoting anybody, so I’m not leaking anything but my own opinion which has no reason to be secret.

GLEP39 explicitly states that projects can be competing. I don’t see how you can exclude competing ebuilds from that since nothing prevents anybody from starting a project dedicated to maintaining an ebuild.

So, if you want to prevent devs from pushing competing ebuilds to the tree you have to change GLEP 39 first. No arguing or “hey all, hear my opinion” emails on whatever list will be able to change that.

Some are against forking ebuilds, objecting to the duplication of effort and the lack of manpower. I will bluntly declare those people shortsighted. Territoriality is exactly what prevents us from getting more manpower. I’m interested in improving package X, but developer A who maintains it is an ass and won’t yield on anything. At best I’ll just fork it in an overlay (with all the issues that having a package in an overlay entails, i.e. no QA, it’ll die pretty quickly, etc…), at worst I’m moving to Arch, or Exherbo, or elsewhere… What have we gained by not duplicating effort? We have gained negative manpower.

As long as forked ebuilds can cohabit peacefully in the tree using say a virtual (note: not talking about the devs here but about the packages) we should see them as progress. Gentoo is about choice. Let consumers, i.e. users and devs depending on the ebuild in various ways, have that choice. They’ll quickly make it known which one is best, at which point the failing ebuild will just die by itself. Let me say it again: Gentoo is about choice.

If it ever happened that devs of forked ebuilds could not cohabit peacefully on our lists or channels, then I would consider that a deliberate intention of not cooperating. As with any deliberate transgression of our rules if I were devrel lead right now I would simply retire all involved developers on the spot without warning. Note the use of the word “deliberate” here. It is important we allow devs to make mistakes, even encourage it. But we are adults. If one of us knowingly chooses to not play by the rules he or she should not be allowed to play. “Do not be an ass” is one of those rules. We’ve been there before with great success and it looks like we are going to have to go there again soon.

There you have it. You can start sending me your hate mail in 3… 2… 1…

April 09, 2013
Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)
So there, I'm in Ireland (April 09, 2013, 21:50 UTC)

Just wanted to let everybody know that I’m in Ireland: I landed at Dublin Airport on Saturday and have been roaming around the city for a few days now. Time seems to be running faster than usual, so I haven’t had much time to work on Gentoo stuff.

My current plan is to work, by the end of the week, on a testing VM, as there’s an LVM2 bug that I owe Enrico a fix for, and possibly work on the Autotools Mythbuster guide as well; there’s work to do there.

But today, I’m a bit too tired to keep going, it’s 11pm… I’ll doze off!

April 08, 2013
What’s cookin’ on the BBQ (April 08, 2013, 16:27 UTC)

While Spring has yet to come here, the rainy days are giving me some time to think about the future of Sabayon and summarize what’s been done during the last months.


As far as I can see, donations are going surprisingly well. The foundation now has enough money (see the donation campaign page) to guarantee 24/7 operations, new hardware purchases and travel expenses for several months. Of course, the more the better (paranoia mode on), but I cannot really complain, given that’s our sole source of funds. Here is a list of stuff we’ve been able to buy during the last year (including prices; we’re in the EU, prices in the US are much lower, sigh):

  • one Odroid X2 (for Sabayon on ARM experiments) – 131€
  • one PandaBoard ES (for Sabayon on ARM experiments) – 160€
  • two 2TB Seagate Barracuda HDDs (one for Joost’s experiments, one for the Entropy tinderbox) – 185€
  • two 480GB Vertex3 OCZ SSDs for the Entropy tinderbox (running together with the Samsung 830 SSDs in a LVM setup) – 900€
  • one Asus PIKE 2008 SAS controller for the Entropy tinderbox – 300€
  • another 16GB of DDR3 for the Entropy tinderbox (now running with 64G) – 128€
  • @ maintenance (33€/mo for 1 year) – 396€
  • my personal FOSDEM 2013 travel expenses – 155€

Plus, travel expenses to data centers whenever there is a problem that cannot be fixed remotely. That’s more or less from 40€ to 60€ each depending on the physical distance.
As you may understand, this is just a part of the “costs”, because the time donated by individual developers is not accounted there, and I believe that it’s much more important than a piece of silicon.

monthly releases, entropy

Besides the money part, I spent the past months on Sabayon 11 (of course) and on advancing the automation agenda for 2013. Ideally, I would like to have stable releases automatically produced and tested monthly, and eventually pushed to mirrors. This required me to migrate to a different bittorrent tracker, one that scrapes a directory containing .torrents and publishes them automatically; you can see the outcome on the new tracker page. Furthermore, a first, not yet advertised, set of monthly ISO images is available on our mirrors in the iso/monthly/ sub-directory. You can read more about them here. This may (eheh) indicate that the next Sabayon release will be versioned something like 13.05, who knows…
On the Entropy front, nothing much has changed, besides the usual set of bug fixes, little improvements, and the migration to an .ini-like syntax for the repository configuration files of both the Entropy Server and Client modules, see here. You may start realizing that all the good things I do are communicated through the devel mailing list.
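To give an idea of what an .ini-like repository entry looks like, here is a rough sketch; the file path, section name and keys are illustrative, not the authoritative Entropy syntax:

```ini
# /etc/entropy/repositories.conf.d/entropy_sabayonlinux.org (illustrative)
[sabayonlinux.org]
desc = Sabayon Linux Official Repository
repo = http://pkg.sabayon.org
enabled = true
```

The point of the migration is exactly this kind of one-file-per-repository, human-editable layout.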

leh systemd

I spent a week working on a Sabayon systemd system to see how it works and performs compared to openrc. Long story short, I am about to arrange some ideas on making the systemd migration come true at some point in the (near) future. Joost and I are experimenting with a private Entropy repository (thus chroot) that’s been migrated from openrc to systemd. While I don’t want to start yet another flamewar about openrc vs systemd, I do believe in science, facts and benchmarks. Even though I don’t really like the vertical architecture of systemd, I am starting to appreciate its features and, most importantly, its performance. The first thing I would like to sort out is being able to switch between systemd and openrc at runtime; this may involve the creation of an eselect module (trivial) and patching some ebuilds. I think that’s the best thing to do if we really want to design and deploy a migration path for current openrc users (I would like to remind people that Gentoo is about choice, after all). If you’re a Gentoo developer who hasn’t been bugged by me yet, feel free to drop a line to lxnay@g.o (expand the domain, duh!) if you’re interested.

April 07, 2013
Michal Hrusecky a.k.a. miska (homepage, bugs)
FOSDEM 2013 & etc-update (April 07, 2013, 16:00 UTC)



I started writing this post after FOSDEM, but never actually managed to finish it. As I plan to blog about something again “soon”, I wanted to get this one out first. So let’s start with FOSDEM: it is an awesome event, and every open source hacker is there unless he has some really huge reason not to come (like being dead, in prison or locked down in psychiatric care). I was there together with a bunch of openSUSE/SUSE folks. It was a lot of fun and we even managed to get some work done during the event. So how was it?


We had a lot of fun on the way there already. You know, every year we rent a bus just for us and go from Nuremberg to Brussels and back all together. And we talk and drink and talk and drink some more… So although an 8-hour drive sounds crazy, it’s not as bad as it sounds.


What the heck is etc-update, and what does it have to do with me, openSUSE or FOSDEM? Isn’t it a Gentoo tool? Yes, it is. It is a Gentoo tool (actually part of Portage, the Gentoo package manager) that is used in Gentoo to merge updates to configuration files. When you install a package, Portage is not going to overwrite configuration files that you have spent days and nights tuning. It will create a new file with the new upstream configuration, and it is up to you to merge them. But you know, rpm does the same thing: in almost all cases rpm is not going to overwrite your configuration file, but will install the new one as config_file.rpmnew, and it is up to you to merge the changes. But that’s not fun: searching for all the files, comparing them manually, and choosing what to merge and how. And here comes etc-update to the rescue ;-)
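The manual hunt that etc-update automates can be sketched in a few lines of shell; the files below live in a throwaway demo directory, not a real /etc:

```shell
# Simulate what rpm leaves behind: the live config plus a .rpmnew candidate
demo=$(mktemp -d)
printf 'server 0.pool.ntp.org\n' > "$demo/ntp.conf"
printf 'server 0.pool.ntp.org\nserver 1.pool.ntp.org\n' > "$demo/ntp.conf.rpmnew"

# The tedious part: find every candidate and diff it against the file
# it wants to replace, so you can decide what to merge
find "$demo" -name '*.rpmnew' | while read -r new; do
    echo "== ${new%.rpmnew} =="
    diff -u "${new%.rpmnew}" "$new" || true
done
```

etc-update wraps exactly this loop in an interactive menu, per file.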

How does it work? Simple. You need to install it (I will speak about that later) and run it. It’s a command-line tool and it doesn’t need any special parameters. All you need to do is run etc-update as root (to actually be able to do something with these files). And the result?

# etc-update 
Scanning Configuration files...
The following is the list of files which need updating, each
configuration file is followed by a list of possible replacement files.
1) /etc/camsource.conf (1)
2) /etc/ntp.conf (1)
Please select a file to edit by entering the corresponding number.
              (don't use -3, -5, -7 or -9 if you're unsure what to do)
              (-1 to exit) (-3 to auto merge all files)
                           (-5 to auto-merge AND not use 'mv -i')
                           (-7 to discard all updates)
                           (-9 to discard all updates AND not use 'rm -i'):

What I usually do is select the configuration files I care about, review the changes and merge them somehow, and later just use -5 for everything else. It looks really simple, doesn’t it? And in fact it is!

Somebody visiting our openSUSE booth at FOSDEM asked how to merge updates of configuration files. When I learned that from Richard, we talked a little bit about how easy it would be to do something like that, and later, during one of the less interesting talks, I took this Gentoo tool, patched it to work on rpm distributions, and packaged it. It is now in Factory and will be part of openSUSE 13.1 ;-) If you want to try it, you can get it either from my home project – home:-miska-:arm (even for x86 ;-) ) – or from the utilities repository.

Hope you will like it and that it will make many sysadmins happy ;-)

April 06, 2013
Alexys Jacob a.k.a. ultrabug (homepage, bugs)
py3status v0.7 (April 06, 2013, 15:30 UTC)

Some cool bugfixes have happened since v0.5, and py3status broke the 20 GitHub stars mark. I hope people are enjoying it.


  • clear the user class cache when receiving SIGUSR1
  • specify default folder for user defined classes
  • fix time transformation thx to @Lujeni
  • add Pingdom checks latency example module
  • fix issue #2 reported by @Detegr which caused the clock to drift on some use cases
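The first bullet can be exercised from a shell; matching the process by the name py3status is my assumption about how it was launched:

```shell
# Ask a running py3status to drop its user-class cache (SIGUSR1);
# fall back to a message when no such process exists
pkill -USR1 -f py3status || echo "no py3status process found"
```

Handy after editing a user-defined class, to reload it without restarting the bar.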

April 04, 2013
Aaron W. Swenson a.k.a. titanofold (homepage, bugs)

If you’re using dev-db/postgresql-server, update now.

CVE-2013-1899 <dev-db/postgresql-server-{9.2.4,9.1.9,9.0.13}
A connection request containing a database name that begins
with "-" may be crafted to damage or destroy files within a server's data directory.

CVE-2013-1900 <dev-db/postgresql-server-{9.2.4,9.1.9,9.0.13,8.4.17}
Random numbers generated by contrib/pgcrypto functions may be easy for another
database user to guess

CVE-2013-1901 <dev-db/postgresql-server-{9.2.4,9.1.9}
An unprivileged user can run commands that could interfere with in-progress backups.
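All three advisories boil down to "is the installed version below the fixed one?"; a pure-shell check using sort -V (versions here taken from the atoms above):

```shell
# Succeeds when $1 is strictly older than the fixed version $2
is_affected() {
    have=$1; fixed=$2
    [ "$have" != "$fixed" ] &&
        [ "$(printf '%s\n%s\n' "$have" "$fixed" | sort -V | head -n1)" = "$have" ]
}

is_affected 9.2.3 9.2.4 && echo "9.2.3 is affected: update now"
is_affected 9.2.4 9.2.4 || echo "9.2.4 carries the fix"
```

The actual remedy is the usual one: sync the tree and re-emerge dev-db/postgresql-server, then restart the daemon.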

April 03, 2013
Matthew Thode a.k.a. prometheanfire (homepage, bugs)


  1. Keep in mind that ZFS on Linux is supported upstream, for differing values of support.
  2. I do not care much for hibernate; normal suspend works.
  3. This is for a laptop/desktop, so I chose multilib.
  4. If you patch the kernel to add ZFS support directly, you cannot share the binary; the CDDL and GPL-2 are not compatible in that way.


Make sure your installation media supports ZFS on Linux and can install whatever bootloader is required (UEFI needs media that supports it as well). I uploaded an ISO that works for me at this link. Live DVDs newer than 12.1 should also have support, but the previous link has the stable version of zfsonlinux. If you need to install the bootloader via UEFI, you can use one of the latest Fedora CDs, though the Gentoo media should be getting support 'soon'. You can install your system normally up until the formatting begins.


I will be assuming the following.

  1. /boot on /dev/sda1
  2. cryptroot on /dev/sda2
  3. swap inside cryptroot OR not used.

When using GPT for partitioning, create the first partition at 1M, just to make sure you are on a sector boundary. Most newer drives are 4k advanced-format drives, so you need ashift=12; some/most newer SSDs need ashift=13. Setting compression to lz4 will make your pool incompatible with upstream (Oracle) ZFS; if you want to stay compatible, just set compression=on.
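The 1M start and ashift=12 fit together; a quick arithmetic sanity check, assuming 512-byte logical sectors:

```shell
# A 1MiB start offset expressed in 512-byte logical sectors
start_sectors=$((1024 * 1024 / 512))
echo "first partition starts at sector $start_sectors"

# ashift=12 means 2^12 = 4096-byte blocks; the offset is a clean multiple,
# so ZFS blocks never straddle two physical 4k sectors
if [ $(( (start_sectors * 512) % (1 << 12) )) -eq 0 ]; then
    echo "1MiB offset is 4k-aligned"
fi
```

The same reasoning covers ashift=13 SSDs, since 1MiB is also a multiple of 8192.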

General Setup

#setup encrypted partition
cryptsetup luksFormat -s 512 -c aes-xts-plain64 -h sha512 /dev/sda2
cryptsetup luksOpen /dev/sda2 cryptroot

#setup ZFS
zpool create -f -o ashift=12 -o cachefile=/tmp/zpool.cache -O normalization=formD -m none -R /mnt/gentoo rpool /dev/mapper/cryptroot
zfs create -o mountpoint=none -o compression=lz4 rpool/ROOT
zfs create -o mountpoint=/ rpool/ROOT/rootfs
zfs create -o mountpoint=/opt rpool/ROOT/rootfs/OPT
zfs create -o mountpoint=/usr rpool/ROOT/rootfs/USR
zfs create -o mountpoint=/var rpool/ROOT/rootfs/VAR
zfs create -o mountpoint=none rpool/GENTOO
zfs create -o mountpoint=/usr/portage rpool/GENTOO/portage
zfs create -o mountpoint=/usr/portage/distfiles -o compression=off rpool/GENTOO/distfiles
zfs create -o mountpoint=/usr/portage/packages -o compression=off rpool/GENTOO/packages
zfs create -o mountpoint=/home rpool/HOME
zfs create -o mountpoint=/root rpool/HOME/root

cd /mnt/gentoo

#Download the latest stage3 and extract it.
tar -xf /mnt/gentoo/stage3-amd64-hardened-*.tar.bz2 -C /mnt/gentoo

#get the latest portage tree
emerge --sync

#copy the zfs cache from the live system to the chroot
mkdir -p /mnt/gentoo/etc/zfs
cp /tmp/zpool.cache /mnt/gentoo/etc/zfs/zpool.cache

Kernel Config

If you are compiling the modules into the kernel statically, keep these things in mind.

  • When configuring the kernel, make sure that CONFIG_SPL and CONFIG_ZFS are set to 'Y'.
  • Portage will want to install sys-kernel/spl when emerge sys-fs/zfs is run because of dependencies. Also, sys-kernel/spl is still necessary to make the sys-fs/zfs configure script happy.
  • You do not need to run or install module-rebuild.
  • There have been some updates to the kernel/userspace ioctl since 0.6.0-rc9 was tagged.
    • An issue occurs if newer userland utilities are used with older kernel modules.

Install as normal up until the kernel install.

echo "=sys-kernel/genkernel-3.4.40 ~amd64       #needed for zfs and encryption support" >> /etc/portage/package.accept_keywords
emerge sys-kernel/genkernel
emerge sys-kernel/gentoo-sources                #or hardened-sources

#patch the kernel

#If you want to build the modules into the kernel directly, you will need to patch the kernel directly.  Otherwise, skip the patch commands.
env EXTRA_ECONF='--enable-linux-builtin' ebuild /usr/portage/sys-kernel/spl/spl-0.6.1.ebuild clean configure
(cd /var/tmp/portage/sys-kernel/spl-0.6.1/work/spl-0.6.1 && ./copy-builtin /usr/src/linux)
env EXTRA_ECONF='--with-spl=/usr/src/linux --enable-linux-builtin' ebuild /usr/portage/sys-fs/zfs-kmod/zfs-kmod-0.6.1.ebuild clean configure
(cd /var/tmp/portage/sys-fs/zfs-kmod-0.6.1/work/zfs-zfs-0.6.1/ && ./copy-builtin /usr/src/linux)
mkdir -p /etc/portage/profile
echo 'sys-fs/zfs -kernel-builtin' >> /etc/portage/profile/package.use.mask
echo 'sys-fs/zfs kernel-builtin' >> /etc/portage/package.use

#finish configuring, building and installing the kernel making sure to enable dm-crypt support

#if not building zfs into the kernel, install module-rebuild
emerge module-rebuild

#install SPL and ZFS stuff; zfs pulls in spl automatically
mkdir -p /etc/portage/profile                                                   
echo 'sys-fs/zfs -kernel-builtin' >> /etc/portage/profile/package.use.mask      
echo 'sys-fs/zfs kernel-builtin' >> /etc/portage/package.use                    
emerge sys-fs/zfs

# Add zfs to the correct runlevels
rc-update add zfs boot
rc-update add zfs-shutdown shutdown

#initrd creation, add '--callback="module-rebuild rebuild"' to the options if not building the modules into the kernel
genkernel --luks --zfs --disklabel initramfs

Finish installing as normal; your kernel line should look like this, and you should also have the initrd defined.

#kernel line for grub2, libzfs support is not needed in grub2 because you are not mounting the filesystem directly.
linux  /kernel-3.5.0-gentoo real_root=ZFS=rpool/ROOT/rootfs crypt_root=/dev/sda2 dozfs=force ro
initrd /initramfs-genkernel-x86_64-3.5.0

In /etc/fstab, make sure BOOT, ROOT and SWAP lines are commented out and finish the install.

You should now have a working encrypted ZFS install.

April 02, 2013
Diego E. Pettenò a.k.a. flameeyes (homepage, bugs)
The WebP experiment (April 02, 2013, 17:58 UTC)

You might have noticed over the last few days that my blog underwent some surgery, and that even now, on some browsers, the home page does not look all that good. In particular, I’ve removed all but one of the background images and replaced them with CSS3 linear gradients. Users browsing the site with the latest version of Chrome, or with Firefox, will have no problem and will see a “shinier” and faster website; others will see something “flatter”. I’m debating whether I want to provide them with a better-looking fallback or not; for now, not.
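For the curious, the standardized gradient syntax in question looks like the following; the selector and colours are made up for illustration, and browsers that don’t understand linear-gradient simply stop at the flat background-color:

```css
/* Fallback first: older browsers render a flat colour and ignore the rest */
.sidebar {
    background-color: #3b6ea5;
    /* Standardized syntax, no vendor prefixes, by deliberate choice */
    background-image: linear-gradient(to bottom, #5a8fc4, #3b6ea5);
}
```

One declaration replaces a whole background image, which is where the bandwidth win comes from.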

But this was also a plan B — the original plan I had in mind was to leverage HTTP content negotiation to provide WebP variants of the images of the website. This was a win-win situation because, ludicrous as it was when WebP was announced, it turns out that with its dual-mode, lossy and lossless, it can in one case or the other outperform both PNG and JPEG without a substantial loss of quality. In particular, lossless behaves like a charm with “art” images, such as the CC logos, or my diagrams, while lossy works great for logos, like the Autotools Mythbuster one you see on the sidebar, or the (previous) gradient images you’d see on backgrounds.

So my obvious instinct was to set up content negotiation; I’ve used it before for multiple-language websites, and I expected it to work for multiple types as well, as it’s designed to… but after setting it all up, it turns out that most modern web browsers still do not support WebP at all… and they don’t handle content negotiation as intended either. For this to work we need either of two options.

The first, best option would be for browsers to only Accept the image formats they support, or at least prefer them; this is what Opera for Android does: Accept: text/html, application/xml;q=0.9, application/xhtml+xml, multipart/mixed, image/png, image/webp, image/jpeg, image/gif, image/x-xbitmap, */*;q=0.1, but that seems to be the only browser doing it properly. In particular, in this listing you’ll see that it supports PNG, WebP, JPEG, GIF and bitmap, and then accepts whatever else with a lower preference. If WebP were not in the list, even if it had a higher preference on the server, it would not be sent to the client. Unfortunately, this is not going to work, as most browsers send Accept: */* without explicitly providing the list of supported image formats. This includes Safari, Chrome, and MSIE.

Point of interest: Firefox does explicitly prefer one image format over the others: PNG.

The other alternative is for the server to default to the “classic” image formats (PNG, JPEG, GIF) and expect browsers supporting WebP to prioritize it over the other image formats. Again, this is not the case: as shown above, Opera lists it but does not prioritize it, and Firefox prioritizes PNG over anything else, making no special exception for WebP.
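Had either option held, the server side would have been simple; a sketch of the usual Apache rewrite approach, for the minority of clients (like the Opera example above) that do list image/webp in Accept. The file layout, a pre-generated foo.png.webp sitting next to foo.png, is my assumption:

```apache
# Serve a pre-generated .webp sibling when the client advertises support
RewriteEngine On
RewriteCond %{HTTP_ACCEPT} image/webp
RewriteCond %{REQUEST_FILENAME}.webp -f
RewriteRule ^(.+\.(?:png|jpe?g))$ $1.webp [T=image/webp]
```

The whole point of the rant is that the first RewriteCond almost never matches in practice.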

Issues are open at Chrome and Mozilla to improve the support, but the fixes haven’t reached mainstream yet. Google’s own suggested solution is to use mod_pagespeed instead, but this module – which I already named in passing in my post about unfriendly projects – is doing something else: it changes on the fly the content that is provided, based on the reported User-Agent.

Given that I’ve spent some time on user agents, I would say I have the experience to call this a huge Pandora’s box. If I have trouble with some low-development browsers reporting themselves as Chrome to fake their way into sites that check the user-agent field in JavaScript, you can guess how many of those are going to actually support the features that PageSpeed thinks they support.

I’m going to go back to PageSpeed in another post; for now I’ll stop at saying that WebP has the numbers to become the next-generation format out there, but unless browser developers, as well as web app developers, start to get their act straight, we’re going to have hacks over hacks over hacks for years to come… Currently, my blog is using a CSS3 feature with the standardized syntax; not all browsers understand it, and they’ll see a flat website without gradients. I don’t care and I won’t start adding workarounds for that just because (although I might use SCSS, which will fix it for Safari)… new browsers will fix the problem, so just upgrade, or use a sane browser.

I’m a content publisher, whether I like it or not. This blog is relatively well followed, and I write quite a lot in it. While my hosting provider does not give me grief for my bandwidth usage, optimizing it is something I’m always keen on, especially since I have been Slashdotted once before. This is one of the reasons why my ModSecurity Ruleset validates and filters crawlers as much as spammers.

Blogs’ feeds, be they RSS or Atom (this blog only supports the latter), are a very neat way to optimize bandwidth: they get you the content of the articles without styles, scripts or images. But they can also be quite big. The average feed for my blog’s articles is 100KiB, which is a fairly big page if you consider that feed readers are supposed to keep pinging the blog to check for new items. Luckily for everybody, the authors of HTTP did consider this problem, and solved it with two main features: conditional requests and compressed responses.

Okay there’s a sense of déjà-vu in all of this, because I already complained about software not using the features even when it’s designed to monitor web pages constantly.

By using conditional requests, even if you poke my blog every fifteen minutes, you won’t use more than 10KiB an hour, if no new article has been posted. By using compressed responses, instead of a 100KiB response you’ll just have to download 33KiB. With Google Reader, things were even better: instead of 113 requests for the feed, a single request was made by the FeedFetcher, and that was it.
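Assembled by hand, a polite polling request exercising both features looks like this on the wire; the validator values and hostname are invented for illustration:

```shell
# Validators remembered from the previous 200 response (invented values)
etag='"2a5f-4d9f0e3a"'
lastmod='Tue, 02 Apr 2013 17:58:00 GMT'

# A well-behaved reader's poll: offer gzip and send both validators back.
# An unchanged feed earns a bodyless "304 Not Modified" instead of 100KiB.
printf 'GET /articles.atom HTTP/1.1\r\n'
printf 'Host: blog.example.net\r\n'
printf 'Accept-Encoding: gzip\r\n'
printf 'If-None-Match: %s\r\n' "$etag"
printf 'If-Modified-Since: %s\r\n\r\n' "$lastmod"
```

Any HTTP library worth its salt sets these headers with one or two options; there is no excuse for a feed reader not to.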

But now Google Reader is no more (almost). What happens now? Well, of the 113 subscribers, a few will most likely not re-subscribe to my blog at all. Others have migrated to NewsBlur (35 subscribers); the rest seem to have installed their own feed reader or aggregator, including tt-rss, ownCloud, and so on. This was obvious looking at the statistics from both AWStats and Munin, which show a higher volume of requests and delivered content compared to last month.

I’ve then decided to look into improving the bandwidth a bit more than before, among other things by providing WebP alternatives for images, but that did not really work as intended; I have enough material for a rant post or two, so I won’t discuss it now. But while doing so I found out something else.

One of the changes I made while hoping to use WebP was to serve the image files from a different domain, which means that the access log for the blog, while still not perfect, is decidedly cleaner than before. From there I noticed that a new feed reader started requesting my blog’s feed every half an hour. Without compression. In full every time. That’s just shy of 5MiB of traffic per day, but that’s not the worst part. The worst part is that said 5MiB is for a single reader, as the requests come from a commercial, proprietary feed reader webapp.

And this is not the only one! Gwene also does the same, even though I sent a pull request to get it to use compressed responses, which hasn’t had a single reply. Even Yandex’s new product has the same issue.

While 5MiB/day is not too much taken singularly, my blog’s traffic averages 50-60 MiB/day, so it’s basically 10% of the traffic for less than 1% of the users, just because they do not follow best practices when writing web software. I’ve now added these crawlers to the list of stealth robots, which means they will receive a “406 Not Acceptable” unless they finally implement at least compressed-response support (which is the easy part in all this).
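A sketch of how such a stealth-robot block can be expressed in ModSecurity; the rule id and the user-agent string are invented here, this is not the actual ruleset’s content:

```apache
# Deny a known-bad feed fetcher with 406 until it learns Accept-Encoding
SecRule REQUEST_HEADERS:User-Agent "@contains HypotheticalFeedApp" \
    "id:430001,phase:1,deny,status:406,msg:'stealth robot: no compression support'"
```

Matching on User-Agent in phase 1 means the request is rejected before any content is generated, so the misbehaving client costs essentially nothing.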

This has an unfortunate implication for users of those services who were reading me: they won’t get any new updates. If I were a commercial entity, I couldn’t afford this at all. The big problem, to me, is that with Google Reader going away, I expect more and more of these kinds of issues to crop up. Even NewsBlur, which is now my feed reader of choice, hasn’t fixed its crawler yet, which I commented upon before; the code is open source, but I don’t want to deal with Python just yet.

Seriously, why are there so many people who expect to be able to deal with web software and yet have no idea how the web works at all? And I wonder if somebody expected this kind of fallout from the simple shut down of a relatively minor service like Google Reader.

March 31, 2013
David Abbott a.k.a. dabbott (homepage, bugs)
udev-200 interface names (March 31, 2013, 00:59 UTC)

Just updated to udev-200 and figured it was time to read the news item and deal with Predictable Network Interface Names. I only have one network card and connect with a static IP address. It looked to me like more trouble to keep net.eth0 than to just go with the flow, paddle downstream, and not fight it, so here is what I did.

First I read the news item :) then found out what my new name would be.

eselect news read
udevadm test-builtin net_id /sys/class/net/eth0 2> /dev/null

That returned enp0s25 ...
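The name itself is decodable: en is ethernet, p0 the PCI bus, s25 the device number (0x19 in the PCI address, since 0x19 is 25). A quick cross-check against sysfs; the fallback path printed when the interface doesn't exist is illustrative, from a typical Intel onboard NIC:

```shell
# The interface name maps back to the PCI address 0000:00:19.0 (0x19 == 25)
readlink /sys/class/net/enp0s25 2>/dev/null \
    || echo '../../devices/pci0000:00/0000:00:19.0/net/enp0s25'
```

So the new names are ugly, but at least they are derived from something stable across reboots and hardware reshuffles.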

Next remove the old symlink and create the new one.

cd /etc/init.d/
rm net.eth0
ln -s net.lo net.enp0s25

I removed all the files from /etc/udev/rules.d/

Next set up /etc/conf.d/net for my static address.

# Static
routes_enp0s25="default via"

That was it, rebooted, held my breath, and everything seems just fine, YES!

enp0s25: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet  netmask  broadcast
        inet6 fe80::21c:c0ff:fe91:5798  prefixlen 64  scopeid 0x20<link>
        ether 00:1c:c0:91:57:98  txqueuelen 1000  (Ethernet)
        RX packets 3604  bytes 1310220 (1.2 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 2229  bytes 406258 (396.7 KiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
        device interrupt 20  memory 0xd3400000-d3420000  
lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 16436
        inet  netmask
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 0  (Local Loopback)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

I had to edit /etc/vnstat.conf and change eth0 to enp0s25. I use vnstat with conky.

rm /var/lib/vnstat/*
vnstat -u -i enp0s25